Learning, Generative AI, and Maria von Trapp

Calin Drimbau, Renganathan Padmanabhan

Dec 30, 2022

Imagine this (not-so-fictional) scenario.

I am tasked with hunting down rogue AI-generated content that is indistinguishable from human-written content. This content is considered dangerous precisely because it can pass as human-written and can be used to deceive people. So my mission is to track down and retire these rogue content creators.

Why does this scenario seem oddly familiar?

Is it a job description from the (immediate/near-term) future?

Is this a script loosely inspired by Blade Runner, with replicants swapped out for content?

The answer to all three questions: Yes :)

The current excitement (and the subsequent hype) around OpenAI's ChatGPT is a delayed reaction to the possibilities of what AI, and in particular Large Language Models (LLMs), can do.

But that brings us to one of the most pressing questions: is this the future of learning? Or is this the beginning of the end of human curiosity, nudging us to stop questioning and merely accept responses as factual information? So we probe further into how our thinking and learning have evolved over the past few years, and how we think knowledge discovery will be affected by generative AI.

First, let's go down the rabbit hole of how we learned before GPT entered the scene.

Once upon a time, there existed…books

When we talk about knowledge, discovery, and learning, all of it is driven by our curiosity. And no one has summarized our curiosity for knowledge better than George Loewenstein, psychologist and behavioral economist at Carnegie Mellon University, who famously described curiosity in his information gap theory as follows:

It comes when we feel a gap "between what we know and what we want to know." This gap has emotional consequences: it feels like a mental itch, a mosquito bite on the brain. We seek new knowledge because that's how we scratch the itch.

When we read a fantasy epic or an intriguing crime thriller, we are hooked on turning the page because great authors manipulate that information gap around what will happen next. To close the gap, they bring two essential elements of their expertise to bear: Context and Intent.

Context

Where their knowledge and inspiration come from

Intent

How they interpret the knowledge to create expertise in their field

Let’s take a simple example, the flat vs. spherical earth debate: for the longest time, humans believed the earth was flat, until Aristotle, and later Eratosthenes, observed and proved otherwise. Their curiosity to understand why a specific phenomenon was occurring (in Aristotle’s case, observing the movement of the stars around the earth) led them to understand how it differed from people’s understanding of the earth in that era. They did not stop there. Eratosthenes ran experiments with sticks around the summer solstice in Alexandria and Syene to interpret these observations and form an intent to understand why this happened, accurately calculating the circumference of the earth in the process, more than two thousand years ago. Renowned scientist Carl Sagan explains these experiments beautifully in this Youtube video.
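For a sense of how simple the underlying arithmetic is, here is a rough sketch in Python using the figures most commonly cited today (a roughly 7.2 degree shadow angle at Alexandria and about 800 km between the two cities); Eratosthenes worked in stadia, and the exact values he used are still debated.

    # Eratosthenes' reasoning: if the noon sun casts no shadow in Syene but a
    # ~7.2 degree shadow in Alexandria at the same moment, then the arc between
    # the two cities spans 7.2/360 of the earth's full circumference.
    shadow_angle_deg = 7.2          # assumed shadow angle at Alexandria
    alexandria_to_syene_km = 800    # assumed distance between the two cities

    circumference_km = alexandria_to_syene_km * (360 / shadow_angle_deg)
    print(f"Estimated circumference: {circumference_km:,.0f} km")  # ~40,000 km

That back-of-the-envelope answer lands remarkably close to the modern value of about 40,075 km.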

When we see context and intent blended together, we can form our own opinion about the knowledge we gain from that book and commit it to memory. And each time we are compelled to recollect it, these two elements create the stickiness of that knowledge snippet, a small dopamine reward, the 'I already know this' signal. Each person assimilates this knowledge into insight at their own unique level, and evolves their curiosity by continuing to challenge or accept what they learned as a known fact. This format of learning carried us from elementary school to university, but was seldom challenged with a different source of expert knowledge.

If you feel you know this technique already, we recommend that you hold onto this line of thinking for the next few sections as we rein it back in. If you did not, you will relish what is coming next.

I feel the need, the need for… more knowledge

As we moved on to formats beyond books and novels to satisfy our thirst for knowledge, the world entered the online realm, with more people sharing what they knew in written blogs and, later, photos and videos. We also saw an explosion of content formats from authoritative sources - news, expert-published journals, and magazines. All of these formats started out as skeuomorphic replicas of their brick-and-mortar counterparts but quickly evolved into independent brand presences that looked native to the Internet.

As we started corroborating the knowledge we had established through formal learning - school, college, and the academic learning of books and journals - against the content sources we found online, a different group of seemingly invisible entities was learning alongside us: AI assistants. These entities quietly sprang up over the last decade to observe and learn from those same expert sources: going through reams of news, reading blog content, listening to audio, and watching thousands of videos to know and understand, just like an observant school student would.

Is this seat taken?

And all this silent learning slowly led to the next logical step: AI as a smart helper. If you felt you needed a nudge or a study/work partner in your quest for knowledge, your AI assistant sat next to you while you grappled with your unique set of challenges.

Trying to master the English language? 

Feeling bored of typing while chatting?

Want to sound suave and know how to pronounce 'croissant' correctly?

Your phone/laptop, with its AI-enabled smarts, helped you with nifty nudges to get you started: predicting the next word for you, polishing your language, and making your email sound more confident.

Did it enable you to learn something entirely new? Not quite. But it got you unstuck on a long day, when that email stood between you and going to bed. All you had to do was click a button and let the AI help.

Could it do your job? Not quite. But it allowed you to continue regardless by showing you how to ask for a croissant in a French cafe.

Did we start learning differently? Not quite. We still learned the traditional way, but we started getting new, assisted recommendations - podcasts, books, and movies - to corroborate that learning, based on what we had tried to learn in our previous interactions.

AI, take the wheel

And then the AI Big Bang happened in 2018, with OpenAI presenting the first GPT model and following up quickly in 2019 with the GPT-2 paper, "Language Models are Unsupervised Multitask Learners". GPT-2's multitask abilities showed that language models could perform reasonably well without being trained at length on specific topics. With GPT-3 launching in May 2020, AI finally came into the mainstream and started speaking.

That part was a bit dramatic, but you get the gist. The quality of GPT-3's results was markedly better than that of previous models, primarily because GPT-3 was able to learn from a problem as it solved it. Alberto Romero wrote a beautiful piece comparing GPT-3 to its predecessors and explaining these aspects in detail.

But the primary difference in AI-assisted interactions was that you could now prompt the AI assistant with questions. Did it respond well? Again, not quite: it could answer pointed questions well, but it could not hold a compelling conversation. So you had to remember to frame your prompts articulately to get the correct answer. But it learned on the go. As Rohin Shah elaborated in his whitepaper on meta-learning in GPT-3, you could effectively train it as you fed it more prompts; it picked up on how you prompted it and learned how to respond better.
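To make that concrete, here is a minimal sketch of few-shot prompting against the OpenAI completions API roughly as it looked in late 2022 (the model name, parameters, and prompt wording are illustrative assumptions, not a recommendation). The "training" happens entirely inside the prompt: a couple of examples teach the model the pattern it should continue, and no model weights are updated.

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder, set your own key

    # In-context ("meta") learning: the examples inside the prompt teach the
    # model the pattern; nothing about the model itself changes.
    prompt = (
        "Explain the term to a curious ten-year-old.\n\n"
        "Term: gravity\n"
        "Explanation: Gravity is the invisible pull that keeps your feet on the ground and the moon circling the earth.\n\n"
        "Term: inflation\n"
        "Explanation: Inflation means the same pocket money buys fewer sweets next year than it does today.\n\n"
        "Term: photosynthesis\n"
        "Explanation:"
    )

    response = openai.Completion.create(
        model="text-davinci-003",  # assumed model, current around late 2022
        prompt=prompt,
        max_tokens=60,
        temperature=0.7,
    )
    print(response.choices[0].text.strip())

The better the examples you feed it, the better it mimics the pattern you had in mind, which is exactly why framing your prompts well mattered so much.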

ChatGPT (technically GPT-3.5, if you are so inclined) brought prompt-assisted knowledge-seeking to the mainstream. What had been restricted to AI early adopters for the first two or three generations was now within everyone's (and their mum's) reach: the ability to prompt, and hold a conversation with, a large language model built around dialogue.

Has GPT-3 or ChatGPT changed the way you learn yet? Not entirely. But notice that your first step in knowledge discovery has changed, from reading an entire book, to browsing through Google, all the way to getting concise information from a single prompt. Can it help you in your day job? It just might: it can write code snippets, entire blog posts, or speeches (or poems, if you like), and it might also help you tailor how you communicate with people of different knowledge levels.

Now that you have had a quick crash course on how learning has evolved, what does learning in a post-ChatGPT world look like? Let's find out.

The hors d'oeuvre: first principles

Humans are tool users. What's really incredible about a book is that you can read what Aristotle wrote. You don't have to have some teacher's interpretation of Aristotle. You can certainly get that, but you can read exactly what Aristotle wrote. That direct transmission of thoughts and ideas is one of the key building blocks of why we are where we are as a society. But the problem with a book is that you can't ask Aristotle a question. I think one of the potentials of the computer is to somehow … capture the fundamental, underlying principles of an experience.

Steve Jobs, Playboy Interview, 1985

Remember how we talked about intent and context a while ago? Let's dabble with that in a ChatGPT scenario. When you prompt ChatGPT (or any GPT-based platform, for that matter) with an innocuous question, does the answer satisfy the urge to understand both intent and context? You probably get a summarized version of the fact without the underlying principle of why it is a proven fact or argument. And once we see through that, we quickly understand that what we hear as a response from ChatGPT, even as part of a long conversation, might not be adequate for our brains to internalize the topic to our complete understanding.

Why, you ask? For the most part, Large Language Models (LLMs) have not yet mastered both depth and accuracy across a wide range of topics, so you might find it difficult to debate and argue any topic in particular with them. And although the jury is still out on how LLMs will handle topics prone to bias and prejudice, ChatGPT can retract its responses to your prompts but cannot fully tell whether those responses are accurate.

So can you fully entrust an AI model with helping you learn about, say, the Big Bang theory? Not yet. You can have the model pertinently answer queries on what the Big Bang is about or why it is hard to prove, but it cannot present definitive alternative arguments for the origin and evolution of our universe.

Then the real issue presents itself: with so much information at your disposal, how do you take advantage of these new developments in AI and become a better learner?

The answer: layers!

Shrek: Layers. Onions have layers. Ogres have layers. Onions have layers. You get it? We both have layers.

What, you built up this long-form narrative only to lead us to a dialogue from Shrek?

Hear us out. Like onions and well-intentioned ogres, our AI-augmented knowledge discovery should have layers. We would start first and foremost with the following:

Layer 1: the art of inquiry

Simply put: ask the right set of questions to challenge your thinking. When you use an open-ended line of critical inquiry to learn about a topic and set aside your existing biases and prejudices, you let every opinion and point of view occupy an equal share of brain real estate for the following layers. That leads into the next layer:

Layer 2: welcome all points of view

A constructive argument that represents both (or all) sides of the debate authoritatively. As stated in the first layer, do not let bias, beliefs, and motives creep into your learning path yet. This layer then tees up the next one:

Layer 3: talk to me

Tailor the conversation to your individual ability to understand. Albert Einstein is often credited with saying, “If you can't explain it to a six-year-old, you don't understand it yourself.” Well, you may not be a six-year-old, but you have had unique learning experiences throughout your life. Companies might have several people like you in the same role and at the same level, yet each one might understand a given topic differently, based on those unique experiences. So if the arguments from Layer 2 are presented at our unique level of expertise and understanding, we can finally grasp the intent and context (well, hello again) behind them.

And then, the last layer is:

Layer 4: build a mental model and validate with new data

Package it nicely and store it in your knowledge bank as a mental model/framework/brain-muscle-memory/<throw-cringe-psychobabble-term-here> for thinking about this topic in semi-permanent form. Why semi-permanent, you ask? So that, when you are presented with new data, you can revisit the entire context and intent behind the topic and reinforce or revise the knowledge you possess.

Let’s say you learned to drive on a manual transmission early in your life. Maybe you moved up a bit and bought yourself a nice automatic Mercedes GLS. Then you make a surprise trip, where the car rental company decides to surprise you back with a manual SUV instead of the usual automatic, citing low inventory. Do you struggle coming back to manual? Maybe a bit, but you find your way around eventually, as your brain retrieves the dance moves to that uneasy clutch-and-gear tango and gets you back on the road again.

So think of this layer as a brain bookmark for your acquired knowledge.

Wow, quite a layer cake, isn’t it? 

Why go through this whole effort? 

Where does AI fit into all these layers? 

Keep calm and read on because we have bad news to share: We believe learning today sucks. 

Self-paced learning is an immense challenge, with too much content and too many courses available in every format. Trustworthy, reputable, knowledgeable sources of content on a given topic are rare. Cohort-based courses can help, but not everyone can afford one.

These issues only cloud our understanding of the real learning problem: every course today makes assumptions about your level of understanding and expertise. Self-paced courses (like Duolingo) usually start right from the beginning, while cohort-based courses assume a certain baseline of awareness of the topic. But courses are rarely personalized to each learner's current awareness level and desired outcomes.

Every learner usually begins with a unique understanding of, and pre-built mental models around, a given topic. Helping them learn, or even unlearn, these mental models, and then build a sustained understanding of the topic, is not something a course creator can feasibly do for every skill level. Most live or cohort-based courses rely on a time-bound cohort and peer group for accountability, but in reality, the time to complete a course may not equal the time it takes to learn a new skill or acquire deep knowledge in a particular field.

On the tutor/expert side of the spectrum, it is equally untenable to attend to every learner individually. So the success of self-paced or cohort-based learning is usually a function of how critical it is for the learner to upskill and how disciplined they are, despite the blind spots in their awareness. And catering to the whole gamut of experience levels, from an experienced corporate manager to a novice just joining the ranks of corporate life, has meant painstakingly creating entire curriculum paths.

But maybe, just maybe, Generative AI can help. Here's how.

Look ma - my own private school!

We have immense respect and appreciation for the course platforms we have today. We are ourselves the outcome of a beautiful blend of academic learning and self-driven learning, intertwined with occasional bursts of expert-driven insight, mostly from Youtube and various podcasts. Standing on the shoulders of these giants has helped us at broadn take a hard, critical look at what does not work today. We also understand that it has taken years for these course platforms and content creators to earn the trust and audience reach to get it right. So we want to help the next generation of learners avoid being overwhelmed by the deluge of content and instead employ a blend of 1. critical thinking, 2. curated expert content, and 3. a learning path that evolves with the learner.

Bits and pieces of this three-piece puzzle have manifested in some form over the years, but they have not helped us progress the art of inquiry. When you genuinely engage in inquiry-driven learning, you can satiate, engross, and then treat the mind to diverse perspectives and angles on a topic. Once you blend that with the right experts weighing in with their points of view, you get to understand the topic from many sides rather than being indoctrinated into a single school of thought. And when you are reminded and nudged, every time, to put these perspectives to work and understand the topic from a real-world standpoint, you start developing your own worldview. Generative AI gives us a real shot at this piece of the puzzle. It allows us to select the right sources of content and summarize them for you to interpret at your level. Think of it like this: the way you learn content marketing might look similar on the surface, but the way the content is presented, visualized, and taught could be very different depending on whether you are a financial wizard, a bricklayer, or a beautician.

As a result, we might have infinite variants of the courses available to us today. The difference: your personalized variant of a course on a topic like content marketing is going to feel, look, and interact very differently from your friend's sitting next to you, even though you are both enrolled in the same course.
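To make the idea concrete, here is a toy sketch (not broadn's actual implementation; the helper name, model, and prompt are all illustrative assumptions) of how a generative model could be asked to re-present the same source material at a given learner's level, again using the OpenAI completions API as it looked at the time of writing.

    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder, set your own key

    def personalize_lesson(source_text: str, learner_background: str, goal: str) -> str:
        """Ask a language model to re-present one lesson for one learner.

        A toy sketch: a real system would also curate the sources, verify the
        output, and track what the learner already knows.
        """
        prompt = (
            "You are a patient tutor. Rewrite the lesson below for a learner whose "
            f"background is: {learner_background}. Their goal is: {goal}. "
            "Use analogies from their world, and end with one question that checks "
            "their understanding.\n\nLesson:\n" + source_text
        )
        response = openai.Completion.create(
            model="text-davinci-003",  # assumed model, current around late 2022
            prompt=prompt,
            max_tokens=400,
            temperature=0.7,
        )
        return response.choices[0].text.strip()

    # The same source lesson, two very different learners:
    lesson = "Content marketing builds trust by publishing useful material before asking for a sale."
    print(personalize_lesson(lesson, "a bricklayer who runs a small masonry business", "win local clients"))
    print(personalize_lesson(lesson, "a financial analyst at a bank", "explain products without jargon"))

The point is not this particular prompt, but that the same expert-authored source can fan out into as many presentations as there are learners.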

You might ask: well, this sounds interesting, but I am an expert and a course creator. How does this play out in today's creator economy, where there is a 100,000:1 ratio of consumers to creators? And this is where our big reveal happens.

Drum roll! <Cue-20th-Century-Fox-intro-style-music>

The consumer becomes the co-creator

The consumer-creator playing field has been leveled. Each consumer gets a unique course experience that tweaks and evolves based on how they shape the flow of their own learning. Your choices, prompts, and questions shape the course content.

But does this mean there are as many courses as there are consumers? No, there could be far more, but Generative AI could also weave individual experiences into shared experiences for multiple learners, learning from the unique behaviors of each learner in the population.

Does this also mean that course experts are out of a job? No - their expertise is more essential than ever, but the way it gets packaged and delivered to each learner becomes far more personalized.

Is the creator economy dead? Hardly. It would help content creators reach audiences who might have been left out because of skill level or language. As a topic expert, you get to build as many flavors of your content as there are learners seeking knowledge.

But how about me, you ask? What does this mean for the consumer, first and foremost, who now has the uneasy crown of co-creator to bear?

You may have had secret musings about having your own Jarvis to investigate a technical topic, or your own Maria von Trapp to learn music from. Having a private tutor was long considered a luxury. With generative AI, you can now have an AI-enabled learning co-pilot/assistant for every subject you wish to learn, patient enough to keep up with your pace yet firm enough to keep nudging you to learn every day, without enrolling in an expensive cohort-based course.

Conclusion: no more unfinished (learning) business

The Internet is just a world passing around notes in a classroom.

Jon Stewart

Coming back to the Blade Runner analogy from the start: we no longer need to hunt down the rogue content creators. We can move our energy and effort from trusting or doubting the output of generative AI to trusting that AI will enable us to understand the context and intent of the topic we seek to learn.

If you tried to take in every bit of content created online today, chances are you would wade through a wave of unfiltered content long before you had the patience to reach the right expert-written material. With broadn, we intend to help you cut through the noise, bring you the right learning sources, and help you understand topics thoroughly enough to build your expertise.

We aspire that, through this platform, you will leave no stone unturned, or should we say, no topic unlearnt.

Happy learning!