What we mean by the term AI.
It's the AOAIs that we're most concerned about.
When we first unveiled the idea that our new mission would focus on addressing the challenges of Artificial Intelligence (AI), the response from many of our team members was quizzical. Aren't we an education company? AI isn't a problem educators are thinking about right now. Are you sure? What do you even mean by AI?
Given that we unveiled it amid the chaos and upheaval of COVID, this was a pretty understandable reaction.
Honestly, we were still getting our heads around what we meant by it too. Were we trying to solve the threats of Artificial General Intelligence (AGI)? Or was it a nearer-term, more practical problem? Was it a problem with the application of AI in teaching and learning? The future of jobs? Or was it something else?
The answers to these questions have slowly become clearer. And that clarity starts with defining what we mean when we say “AI” in our mission.
Generally speaking, AI is one of the most exciting frontiers of new technology: it is being used to cure disease, develop self-driving cars, process natural language, automate coding and design tools, and much more. The potential to solve deeply entrenched historical problems and compensate for human weaknesses is truly compelling.
We’ve tinkered with it ourselves—to automatically edit video (Intercut.ai) and to automatically clip audio files (experimental features in Synth). It’s fun, challenging and fascinating.
AI can essentially be thought of as machine (i.e., computer) intelligence. Most often these days, the term AI is used to describe machine learning: an approach in which computers learn from experience and data rather than following rules a programmer wrote by hand.
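To make that concrete, here is a deliberately tiny sketch of learning from data. The numbers and the straight-line model below are hypothetical, and real machine learning systems use far richer models and vastly more data, but the principle is the same: the program infers the rule from examples instead of being handed it.

```python
# A minimal sketch of "learning from data": instead of hand-coding a rule,
# the program fits a model's parameters to examples.

# Hypothetical training data: hours of study -> test score.
data = [(1, 52), (2, 54), (3, 56), (4, 58), (5, 60)]

w, b = 0.0, 0.0           # the model: predicted score = w * hours + b
learning_rate = 0.01

for _ in range(5000):     # "experience": repeatedly adjust w and b
    for hours, score in data:
        error = (w * hours + b) - score
        w -= learning_rate * error * hours  # nudge the parameters in the
        b -= learning_rate * error          # direction that shrinks the error

print(f"learned rule: score = {w:.1f} * hours + {b:.1f}")
# Prints roughly: learned rule: score = 2.0 * hours + 50.0
# No one typed that rule into the program; it was inferred from the data.
```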
What likely makes AI most challenging to comprehend is the wild divergence in application and goal. How can an algorithm made to drive a car, an algorithm to identify a tumor in a radiology scan, and an algorithm that personalizes math instruction be the same thing?
Most AI is relatively hidden from us, so its effectiveness isn't obvious either. The stuff that is identifiably AI (e.g., startups with .ai domain names) is often error-prone and rudimentary. This has led to the widespread perception that it's a technology for the future, not the present.
And then along comes OpenAI's GPT-3. When we first saw the demos, they challenged all of our preconceptions of what AI is, what it can do, and how soon it will have an impact on society. When Sam Altman, co-founder of OpenAI, was asked how he would monetize the technology in the coming years, he said “we will ask it to figure out a way to make an investment return for you.” Creativity was supposed to be a refuge from AI, not a strength of it.
What is the most sophisticated, ubiquitous AI in use today? Without question, it is the algorithms that optimize for your attention, or, as we will start calling them, AOAIs (Attention-Optimizing AIs; yes, the acronym was selected to be fun to say out loud. Try it!).
The emergence of the internet thirty years ago spawned what has now become known as the attention economy. It dramatically shifted the business models of many large industries toward capturing people’s attention (e.g., TV, news, phones, advertising, cameras, gaming, and more). The first wave of the attention economy was shaped by humans hand-designing effective attention-capturing features (i.e., feature engineering). The theory behind these early designs was popularized in the book Hooked by Nir Eyal.
Now, these theories have been refined, abstracted into machine learning algorithms (ones whose workings even their creators no longer fully understand), and brought to scale with ruthless machine efficiency.
They work phenomenally, almost outrageously, well.
There isn’t one AOAI of particular concern that stands apart (though Facebook tends to get the bad rap for them all). What is worth dwelling on is how every digital experience is now converging on the same method and goal: maximizing your attention within a device or platform using machine learning.
Your phone is designed to maximize your screen time, regardless of the apps you are using. Your TV is designed to maximize the movies and shows watched. Your speakers, headphones and streaming audio services are designed to maximize the amount of music and podcasts you listen to. Social media keeps you engaged in news articles and posts. YouTube keeps you watching more and more videos. News outlets keep you reading news on their sites and not others. Video games and VR keep you playing games for hours on end. Heck, even your banking app might try to extend your session the same way, for all we know.
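To make the mechanism concrete, here is a heavily simplified sketch of the loop these systems run. The item names and the running-average model below are our own illustrative stand-ins (a so-called epsilon-greedy bandit); production systems use deep learning over thousands of behavioral signals, but the objective is the same: serve whatever is predicted to hold your attention, observe how long it actually did, and learn from that.

```python
import random

# A toy attention-optimizing loop: recommend the item predicted to hold
# attention longest, observe actual engagement, and update the prediction.

items = ["news story", "cat video", "outrage thread", "tutorial"]
avg_watch_time = {item: 0.0 for item in items}  # predicted engagement
plays = {item: 0 for item in items}
EPSILON = 0.1                                   # how often to explore

def recommend():
    if random.random() < EPSILON:               # occasionally try something new...
        return random.choice(items)
    return max(items, key=lambda i: avg_watch_time[i])  # ...otherwise exploit

def record_engagement(item, seconds_watched):
    plays[item] += 1
    # Move the running average toward what actually held your attention.
    avg_watch_time[item] += (seconds_watched - avg_watch_time[item]) / plays[item]

# One turn of the loop: the platform serves, you watch, it learns.
choice = recommend()
record_engagement(choice, seconds_watched=42.0)
```

Notice what the loop measures: nothing in it asks whether the time was well spent, only whether it was spent.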
With all confidence, we can say that we now spend more time with AOAIs than with anything or anyone else. We spend more time with them than at work or school (or, at least, paying attention to work or school). We spend more time with them than with our parents and friends. And we spend more time with them than outdoors, reading, or on hobbies.
They have become the most influential experience shaping the consciousness of billions of people every day. There is no experience or institution that can compare at this point. Life is converging toward the experience of a video game, and AOAIs are the source code.
AOAI’s accomplish their goal, in part, by gratifying your impulses. But the question worth asking is, are there trade-offs to all this gratification?
The answer slowly emerging is yes.
While our mission uses the term AI, this is for familiarity's sake. Our true focus is on working with educators to address the emerging trade-offs of ubiquitous AOAIs.
In our next post, we’ll identify the trade-offs we see and start to connect the dots on why education should be thought of as part of the strategy for managing them.