
Education in a post-AI world

How our current system is broken and what will replace it.

Millions face a bleak reality: what they've learned is useless, and they are saddled with student loan debt they may never repay. Billions are in the pipeline to arrive at the same destination.

Artificial intelligence (AI) has accelerated the gap between how students are trained and what the world needs from them, and has also invalidated the business model that powers education: selling education as a product that students (as customers) pay for.

Fixing the education system isn't a matter of updating curricula, hiring better teachers, or even making all education free, because the underlying business model no longer works. We need an entirely new business model, and the set of incentives that comes with it, that works in a post-AI world. In this essay I explore how AI invalidates education's business model, what the new business model and education system will look like, and how to create it.

I believe that fixing education is the most important problem to work on. AI will not replace all human activity anytime soon, and there will be ample room for humans to create value in the world - amplified by AI. The only way we solve our greatest challenges is by unlocking every individual's potential. In a post-AI world, we will be the only thing holding us back.

Education is inextricably linked to what the world wants because people want to learn skills that will enable them to create value for the world. This desire to create value for others will continue to exist no matter how good AI becomes because it's a deeply rooted evolutionary conditioning. Therefore, to understand how education needs to change we need to first understand how AI changes what the world wants from us.

Extreme outcomes

AI makes outcomes very extreme: a small percentage of people will contribute the majority of economic value created in the world. The more powerful AI becomes, the more extreme outcomes become: even fewer people will contribute the majority of the economic value created. The more extreme outcomes become, the less room there is for the average performer to create value, and the greater the outcomes become for the best. ** the "shape" of extreme outcomes is a power law distribution. power laws are found throughout many natural phenomena.

This trend towards extreme outcomes isn't a new phenomenon; it's been going on for some time. Surprisingly, it's caused by progress: the better our technology, the better our education, and the easier it is to distribute ideas and products to the world, the more extreme the world becomes. But AI is poised to accelerate the rate at which the world becomes extreme to unfathomable levels that will make today's extreme outcomes seem quite un-extreme. ** while outcomes have become more extreme, the real median income has also risen. it is better to be average today than in the past - if you can curb envy, that is. i predict this trend will continue through universal basic income: it will continue to be better to be average in the future.

To understand why outcomes are becoming more extreme, we need to understand the relationship between complexity and outcomes. Simple tasks have more equal outcomes: 100 factory workers assembling the same product will not have much variance in their output. On the other hand, complex tasks have very extreme outcomes in which a few individuals account for the vast majority of the value created. For example, if we take 1 billion YouTube content creators, a tiny fraction of them will generate the majority of the revenue on the platform. We see such extreme outcomes in any complex task: building companies, making music, creating content, writing books, etc. Extreme outcomes don't have room for the average, which is why we don't see a "middle class" in such complex activities: the average startup, musician, or content creator cannot make a sustainable living through their work. They either become quite successful or make nothing.
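This contrast can be sketched with a quick simulation. The distributions and parameters below are illustrative assumptions, not measurements: near-equal outcomes are modeled as a narrow normal distribution, extreme outcomes as a heavy-tailed Pareto distribution.

```python
import random

random.seed(0)

# Simple task: 100 factory workers whose output varies a little around a mean.
factory = [random.gauss(100, 10) for _ in range(100)]

# Complex task: 100 creators whose outcomes follow a heavy-tailed
# (Pareto) distribution, where a few capture most of the value.
creators = [random.paretovariate(1.2) for _ in range(100)]

def top_10_share(values):
    """Fraction of total output produced by the top 10% of participants."""
    ordered = sorted(values, reverse=True)
    top = ordered[: len(ordered) // 10]
    return sum(top) / sum(ordered)

print(f"factory top-10% share:  {top_10_share(factory):.0%}")
print(f"creators top-10% share: {top_10_share(creators):.0%}")
```

In the narrow-variance case the top 10% capture barely more than their headcount share; in the heavy-tailed case they capture a wildly disproportionate share of the total.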

Complexity and outcomes

Artificial Intelligence (AI) makes our work more complex by doing the simpler work for us, forcing us to move up the "complexity ladder". For example, within a couple of years AI will be able to write the majority of code written. As that happens, programmers will be pushed towards more complex work: figuring out what to build, understanding users and their needs & emotions, design, high-level architecture, etc. In other words, as AI becomes more powerful, programmers will be pushed to operate more like startup founders than programmers. And we already know how complex that work is because of how extreme the outcomes for startups are. We will see a similar pressure towards more complex work in all domains: video editors will need to make unique contributions to the overall storytelling rather than just executing the director's idea, etc. Only highly complex work will remain, along with its extreme outcomes. ** my belief is that feeling is far more complex than logic. therefore, as AI becomes more powerful, we will find ourselves working on how to make people feel and experience, leaving the logical reasoning to AI.

The more powerful AI becomes, the more complex our work becomes, and thus the more extreme outcomes become. AI is accelerating exponentially fast. Even when we estimate progress conservatively, within the next few years outcomes for all knowledge work will be very, very extreme. Though AI is largely restricted to the digital realm today, it's only a matter of time before AI can directly manipulate the physical world and thus make outcomes for most human endeavors very extreme. There is nowhere to hide from extreme outcomes.

Another way to understand why the world is becoming more extreme is that AI scales each individual to the world. One of the best programmers in the world will soon be able to train an AI being to represent them (i.e. their intuition, judgement, taste, etc.) such that anyone can access it as a paid service that would be better than, and a fraction of the cost of, hiring a programmer. Even scaling 10 - 20% of the best's skill (which we are close to) will likely outcompete the majority of people on that skill. Similarly, the best designers will be able to make themselves accessible to millions of builders. Similarly, the best biryani chef will train AI to physically reproduce their biryani anywhere on the planet in minutes. In such a world, why would someone buy anything from anyone other than one of the best? Similar to how we have only two ride-sharing apps (in a region), two mobile platforms, a few top content creators for any category, etc., there will only be room for the best in every activity.

A common claim is that AI will create more jobs than it takes. While this is true, it ignores the fact that the work AI creates will be far more complex, making it extremely difficult to do well enough to consistently earn a living from - in the same way that it is very difficult to make a living from art, startups, or music. Those who do well will do exceptionally well, but most will not. AI enables more opportunities than ever before to create value, but each opportunity will be far more difficult to capitalize on. We will soon have individuals who can create trillions of dollars of value in the economy, while many will struggle to create any value at all.

Many incorrectly believe that extreme outcomes are a temporary phenomenon: a bug that we will eventually "fix". But I don't see this happening, because any complex activity, even with equal opportunity, will have extreme outcomes (just imagine the outcomes of 1,000 people with access to the same resources trying to make, say, music). Extreme outcomes are an inherent feature of complex skills. The actual problems to focus on are raising the median outcome (everyone must have basic needs met) and ensuring equal opportunity (an individual's background and wealth should have no bearing on their ability to succeed). But solving these problems will not lead to more equal outcomes; in fact, such a world will be far more extreme. To go back to more equal outcomes (and the middle class that comes with them) requires us to either not adopt new technologies such as AI that make our work more complex, or to tax extreme outcomes so aggressively as to disincentivize creation. Such societies will fail, and fail much faster as change accelerates. ** as the world becomes more extreme, the economic outcomes of countries too will become quite extreme. in a post-AI world, the most prosperous country will be orders of magnitude more prosperous than the 2nd and 3rd most prosperous countries, let alone the average one. whether we like it or not, economic might translates to control and influence. countries that don't sufficiently create will be gobbled up by ones that do.

Outcomes will only become more extreme.

What the world wants

In an extreme world with a shrinking room for the average performers, individuals have two levers to pull to find their way to creating value:

  1. Differentiation: find what you're uniquely capable of offering to the world.
  2. Excellence: become one of the best at it.

The more powerful AI becomes, the lower the barrier to create, and the more people compete in any particular direction - creating greater pressure to differentiate if you're not cut out to be one of the best at it. But no matter how much you differentiate, you will still face competition - proportional to how lucrative that skill is - and you will have no choice but to strive for excellence in that skill in order to create any value.

In an extreme world, individuals will feel a strong pressure to find what they are uniquely capable of doing, that no one else in the world is "wired" to be able to do as well as them. If there is someone better than them at something, it will become increasingly difficult to compete against them as they scale themselves through AI.

Though the world is a positive-sum game, and in theory everyone could create value by creating things that no one else can, it's far more likely that most will not discover their natural strengths in time and will be forced to compete on skills they weren't designed to excel at. Also, the world might not value an individual's unique skills, leaving them unable to make a living. Therefore, we will need some form of Universal Basic Income to raise the median outcome to a sufficient level.

The current education system fails to enable people to discover their unique differentiation and train themselves to excel at it, because it was designed for the industrial and information ages, which had ample room for the average to do undifferentiated work. Even today, most graduates go on to do undifferentiated work in Tech / Consulting / Finance, or whatever else is the flavor of the day. True differentiation and excellence are exceptionally rare, and an education system that optimizes for them will look quite different.

Cost of education will skyrocket

Many incorrectly believe that the cost of education will drop as everyone gains access to AI personal tutors that can guide them through humanity's knowledge according to their unique temperament, inclinations, interests, and pace. But, the opposite will happen: the cost of education will skyrocket because as the world becomes more extreme (and complex), the teachers who can meaningfully impact outcomes in ways that AI cannot will be unimaginably valuable.

"Average" and "one of the best" are relative terms. Once everyone has access to the knowledge of what to learn and how to get good at it, the bar for how good you have to be to create value will rise as competition increases. And since the outcomes are that much more extreme, the teachers who can meaningfully improve the chances of success, even by a little, will be incredibly expensive - far, far more than the most expensive teachers today. The average teacher will be useless because teaching a complex skill is itself a complex skill, and hence will also have extreme outcomes.

Teaching simple skills is relatively straightforward: by putting students through a predetermined curriculum, they will get good enough with sufficient time. For example: you can reliably train a hundred people to effectively assemble a product on a factory line in a few months or years. Similarly, teaching someone the syntax of a programming language, or how to use spreadsheets is easy. The simpler the skill, the more teachers can teach the skill.

Teaching complex skills is incredibly difficult. Even with great training, many will fail. Most graduates from the best music programs, filmmaking schools, content creator courses, and even top startup accelerators fail. The more complex the skill, the less you can become proficient by simply learning a methodology, rules, or a process. You must develop intuition (an innate sense for what to do), taste (an understanding of quality), and great judgement (making the right decisions) - and these come from doing (ideally with the right mentorship).

Those who are capable of effectively teaching highly complex skills are rare, and their time will be worth far more than what most students can pay. The converse is also true: in a post-AI world, a teacher willing to teach for a fee a student can afford is almost certainly not a good enough teacher. Average teachers won't just be useless; they'll negatively impact a student's outcomes. Complex skills are learned through experience, not by consuming information about them, and therefore teachers without experiential wisdom will almost certainly impart the wrong habits and an incorrect view of how things work, holding students back. This damage becomes more severe the more complex our work becomes.

The business model is broken

The business model of selling education as a product that students (customers) pay for is being attacked on two fronts:

  1. As outcomes become more extreme, the average student (i.e. most) won't be able to earn back what they spent on their degree in time, or at all - making paying for education financially too risky for students.
  2. The more complex our work becomes, the more expensive the right teachers become - making quality education unaffordable for almost everyone.

Making education free doesn't work because the teachers who can meaningfully impact a student's outcomes will become far too expensive to hire. The teachers that institutions can afford to hire will almost certainly be the wrong teachers who will harm more than help.

We need a new system that incentivizes the best teachers to teach, and to enable students to access the best teachers without having to pay (as they will not be able to afford them). Without such a system, opportunity will be concentrated in the hands of those with extreme wealth who can afford to pay the best teachers for their time, while those who don't have such wealth will be seriously disadvantaged.

There is only one business model for education that meets these requirements: venture capital. Venture capital is a form of financing popular in the startup world in which investors give early companies money in exchange for equity in the company. Venture capital works exceptionally well in domains with extreme outcomes because an investor only needs a few of their investments to become successful to more than compensate for the many failed investments they will inevitably have. Venture capital and extreme outcomes is a marriage made in heaven.
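To see why a few wins can carry a whole portfolio, here's a toy model with entirely hypothetical numbers: a fund making 20 equal investments, most of which return nothing, with a single outlier winner.

```python
# Hypothetical fund: 20 investments of $1M each ($20M deployed).
# Return multiples per investment; most fail, one wins big (power-law returns).
multiples = [0] * 14 + [0.5, 1, 2, 3, 5, 60]  # illustrative numbers only

invested_per_deal = 1_000_000
deployed = invested_per_deal * len(multiples)
returned = sum(m * invested_per_deal for m in multiples)

print(f"deployed: ${deployed:,.0f}")
print(f"returned: ${returned:,.0f}")
print(f"fund multiple: {returned / deployed:.2f}x")
```

Even with 14 of 20 investments going to zero, the single 60x outcome makes the fund roughly a 3.6x return overall - which is exactly why venture capital and extreme outcomes fit together.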

Personal tokens

But venture capital only applies to companies. How would it work for students?

Let's assume we have a new kind of financial instrument (let's call it a “personal token”) that represents an individual's equities in companies and other personal tokens. Let's say this personal token can be divided into shares and sold to investors such that the investors gain equity in the wealth that the owner of a personal token will create throughout their lives (through the equities in companies they will earn).

For example: Alice - a promising young designer - sells 3% of equity in her personal token to investors for $100k. As Alice accumulates equities in companies by working at them (or starting them), the value of Alice's personal token increases, thereby increasing the value of the equity that her investors hold in her personal token. In an extreme, post-AI world, if Alice goes on to become successful, she will likely generate great wealth, providing ample returns to her investors.
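The arithmetic of the Alice example can be made concrete. The later token value below is a purely hypothetical successful outcome, chosen only to illustrate the mechanics:

```python
# Alice sells 3% of her personal token for $100k.
equity_sold = 0.03
price_paid = 100_000

# Implied valuation of Alice's personal token at the time of investment.
implied_valuation = price_paid / equity_sold
print(f"implied token valuation: ${implied_valuation:,.0f}")

# Hypothetical outcome: the equity Alice accumulates in companies
# over her career ends up worth $50M.
token_value_later = 50_000_000
investor_stake_value = token_value_later * equity_sold

print(f"investors' 3% stake: ${investor_stake_value:,.0f}")
print(f"return multiple: {investor_stake_value / price_paid:.0f}x")
```

Selling 3% for $100k implies a token valuation of about $3.3M; if the token later becomes worth $50M, the investors' stake is worth $1.5M, a 15x return.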

In the context of education: instead of charging students money to train them, the best teachers would instead pay students to train them in exchange for equity in their personal tokens.

Personal tokens need to be grounded in their owner's ownership of companies, not future salaries, because only equity captures the extreme upside. Without some equity in the potential upside, great teacher-investors wouldn't have a strong incentive to invest their time and resources in students. In the early days, the kinds of students who would be investable would be those who either go on to start companies or work at them (with equity as part of their compensation). This excludes many kinds of researchers, artists, creators, those who want to work at companies for salaries alone, etc. But in the long term, I believe the range of who is investable will cover almost everyone, because almost all external human activity (i.e. activity outside of interactions and relationships with family and friends) will eventually be organized through companies. ** this has been an ongoing trend for a while. a decade ago it would be kind of weird for a content creator to have their own company, but today it's the norm for serious creators. similarly, more groundbreaking research is moving from academia to labs structured as companies, as the company structure is still the best way to align incentives between collaborating individuals.

For students, instead of paying money, they are paid to learn. It's not enough to make education free, because students need to pay to live in the right places for their crafts and to afford the expensive tools they will need to do great work. For great teachers, equity in the upside will be the only viable financial incentive to teach students. Of course there will be exceptions: great teachers who mentor students out of a desire to give back, sacrificing a lot of wealth in the process. But such already rare individuals will become even rarer as the opportunity cost rises.

A new education system

The education system that works for a post-AI world will form on top of personal tokens.

In my view, the functions of a great education are:

  • Discovery: helping a student to discover their unique talents and their natural strengths.
  • Training: helping a student to become excellent at their craft.
  • Community: providing students ambitious, motivated peers who work together and challenge each other. The most valuable learning happens through osmosis with the right peers.
  • Credentialing: helping a student stand out by signaling their competence.

These features of education go hand in hand, each irreplaceable.

Personal tokens enable a wider range of possible configurations of these functions through a marketplace of possibilities. Personal tokens allow for specializations on the teaching side, for example: there will be scouts who specialize in discovering what individuals' strengths are, and teachers all along the spectrum of training: from general skills (e.g. how to manage the mind) to specific (e.g. product design). Students will be able to mix and match teachers to help them achieve what they want. For example, a student interested in building products would probably want a teacher-investor who's built and scaled products, and perhaps another who is a master designer, and maybe even one to help them train their mind and body for optimal performance.

Teachers are held accountable. They have skin in the game and are directly penalized if their student does not succeed (by losing the money & time they invested in the student). Teachers cannot hide their incompetence for long because they'll go broke and lose credibility, making it easier for students to identify the right teachers because the ones that survive are more likely to be actually useful. Teachers will no longer be insulated from reality, which is good for students and the great teachers alike.

As discovering your innate strengths becomes a necessity in a more extreme world, more students will be encouraged by teachers to embark on exploratory journeys to discover their innate talents. In the same way that Charles Darwin took off on a long voyage to study nature, individuals could raise capital to tinker with a new artistic medium, hone design skills through product design explorations, or build nuclear reactors in their garages. Teachers would be willing to invest in students to enable such explorations because they know it's an invaluable learning process that improves the likelihood of success.

The list of shareholders in a personal token will be a much more powerful signal of competence than a degree from an institution. If a promising designer is invested in by, say, Jony Ive, the world will take notice. Such a signal will be much more powerful than graduating from, say, Stanford because Jony had to actually put his money on the line - which he's unlikely to do unless he actually believes in the student's potential. ** credentials still have an expiry date: an individual eventually has to create value in the world to continue to be held in high regard.

Some teachers will band together to form “guilds” that resemble startup accelerators. They invest together and build communities that enable students to learn with and from peers. There can be a vast range of such guilds and the kind of skills they train.

AI pushes us to find our own unique expressions which will almost certainly not fall neatly into the rigid, fixed learning paths of the world. To cater to the growing range of possible skills, we need a wide range of learning options financed by venture capital. In this new system, each individual will have far greater control over their education, will have access to a higher quality of education that they would not be able to afford otherwise, and can more easily identify the right teachers to learn from (as the bad ones perish more quickly).

There are concerns we'll need to address. We will need strong protections against harassment by investors, safeguards against fraud, and education on what people are agreeing to, as selling and buying shares in future outcomes will be a new behavior for both students and many teachers.

Some may argue that scaling grants and scholarships to everyone would be better than having students give up equity in their future outcomes. Grants and scholarships won't suffice because the amount of capital that will flow to promising young people through investments will be far greater (by many orders of magnitude) than the capital available through grants, since investments, unlike grants, come with an opportunity to profit. Also, students will actually want to raise capital (by giving up equity in upside) from great teachers: for the signal of competence it gives them (which is more powerful than any degree), to give their teachers skin in the game, and, importantly, because in an extreme world this will become the only way to train with the best teachers.

Some may also view such a market-based approach to human potential as reductive: pushing individuals to optimize for economic output instead of higher pursuits. But there is always room for artistry: rising above the pressure to optimize for economic output. In fact, many of the greatest outcomes in any field are products of such artistry, and it's usually the case that those who directly optimize for economic output can't achieve the greatest outcomes because they lack artistry. In my eyes, this paradox is one of the greatest wonders (and beauties) of the world. It's often those who hate "optimizing market forces", who hate the stifling "algorithm", that contribute most to the growth of the market, creating market forces that push others to be like them.

Similarly, while the world seems to be getting more competitive, those who do best will be the ones who escape competition by doing what they're meant to do - what no one else can compete with them on. Many would be surprised by how little the top founders and creators focus on what their competitors are doing. They are instead focused on creating the best products they can and on understanding themselves and their audiences. Obsession with competition is a habit of the average mind. And since there won't be room for the average as the world becomes more extreme, more individuals will be pushed to escape competition. Personal tokens, and the education system that forms around them, will encourage people to lean into their unique, innate, differentiated talents, and I believe will even help discourage the vicious, zero-sum competition we see all around us.

What about those that don't show potential and won't be invested in? Firstly, as I mentioned earlier, we will need some sort of universal basic income as a safety net. There's no way around it.

There are two kinds of students who will struggle to be invested in: those who are genuinely average and will likely never be able to create value in a post-AI world no matter how much work they do, and those who have great potential but haven't manifested it yet. For the person with undiscovered potential, there will be a strong market force pushing investors to try to discover them before others do, because that is when investing in them is cheapest (i.e. when no one else yet knows their potential). The world will evolve to pull signs of potential out of individuals (if they have it), making more people investable. Similarly, as investors become wealthy, their risk appetite increases, and they will make riskier bets on individuals who may not display obvious signs of potential, increasing the percentage of students invested in.

Unlike teachers today, who share some of their knowledge while keeping the rest secret (because they sell their knowledge as a product - through courses, etc.), teacher-investors (who invest in students' personal tokens) will be incentivized to share as much as possible, with as many people as possible, without holding back. The value a teacher-investor provides to the students they train isn't knowledge you can consume somewhere; it's the personalized feedback tailored to the student's situation that you can't get from static content - or even from the AIs the teacher trains. The more people teacher-investors reach through content and AI, the more students they will attract and be able to invest in. Therefore, the new education system will also positively impact students who struggle to be invested in, through better learning resources driven by better incentives for teachers to scale their teaching. Also, over time a greater percentage of students will become investable as it becomes clearer to everyone what it takes to create value (an understanding that today only a tiny percentage of students gain).

Fall of traditional education

Education today looks like the publishing industry pre-internet, pre-social-media. Similar to how everyone consumed news from a small number of publishers (like the New York Times), most young people aim to get into a small number of top colleges, and are considered "failures" if they can't get in.

Similar to how big publishers were not only slow to adapt to social media but still haven't figured out how to thrive in it, colleges will not be able to adapt to AI because:

  1. they don't have direct skin in the game: they continue to generate immediate revenue from students even if those students go on to fail, and therefore lack the forcing function to adapt.
  2. they won't be able to afford the right teachers as their cost will be too high for the "selling education as a product" business model.

Eventually, the value of their credentials will deteriorate as reality catches up and the world sees that their credential is no longer a signal of competence (and likely even a signal of lack of competence). Yet, like big publishers, the top colleges will continue to be around due to their large endowments & cultural significance. But their role in society will fall.

College admissions authorities are so insulated from reality (market forces) that they've literally gone insane: rejecting individuals with great potential for the weirdest, most absurd reasons. The long tail of mediocre colleges just copies what the top colleges do, because that's what students expect, spreading the insanity through the entire industry. I'm all for colleges having their stupid admissions criteria, but it hurts me to know that hundreds of millions of young people believe that the best (or worse: only) path to prosper in life is to get into one of these institutions. Winning admission at a top college often comes down to knowing the game of what to say and how to present yourself - a game that favors those from wealthy families who are already familiar with it, like an arcane cult ritual that only members know about.

Some bright young students are realizing that college isn't worth it anymore and are doing things like joining startups. While this may seem like a better path given the options today, it isn't a replacement for the more open-ended, exploratory learning that education provides. Startups often require an intense grind in a narrow direction, whereas some of the most ambitious ideas and learning emerge from open-ended tinkering and exploration (ideally with great mentorship). Therefore, it will be the new education system that personal tokens enable - not directly joining or starting companies - that replaces college. ** for certain individuals it will make sense to go straight to starting companies. it just won't be the best option for most imo.

Colleges are still where much research occurs, and they will continue to attract students who want access to the professors and resources to do that kind of research. But I strongly believe that even such research will increasingly be conducted through companies. We're seeing signs of this transition already. Within a decade, I predict that most top research will happen in companies, not academic institutions, which have largely been reduced to iterative research aimed at publishing papers at the next quarterly conference rather than actually impacting the world.

AI coupled with an education / research ecosystem built around personal tokens will usher in a new age for education and research that isn't insulated from reality. Outdated colleges, and by extension the entire rotting education system that leads to colleges, will become relics of the past. Good riddance.

Parting thoughts

Artificial intelligence is prying our education system open at its core, creating an incredible opportunity to transform how billions will learn. AI and extreme outcomes will hit us much faster than we expect. Now is the time to act.

I'm not a "writer". I don't want to just propose good ideas; I want to actually bring them to life. I want to work with people who are committed, obsessed, and are hungry to do the work of their lives. Even if we're right about the general direction, it's difficult to predict how long it will take to achieve something worthwhile. In the early days, there will be significant backlash because of how strange (and even wrong) it feels to finance learning through venture capital. But given the changes I see AI bringing, I can't imagine any other way the education system can evolve. ** if you see any other possibilities, please let me know! my biggest fear is that i might not be seeing reality as it is, but perhaps as i want it to be.

Yet, I see no endeavor more worthwhile, no battle more alluring, no achievement more fulfilling than to help enable better learning paths for the world.

If you resonate, reach out.

Appendix

The extremity of outcomes is proportional to complexity.

How extreme outcomes are is directly proportional to how complex the skill is. My favorite example of this is the outcomes of Chess vs. Go. Go is known to be a more complex game than Chess because there are far more possible board positions (a game of Chess can have in the ballpark of 10^120 different permutations, whereas a game of Go can have around 10^360). This difference in complexity translates to a more extreme distribution of relative skill (measured via Elo ratings) between the top professional players of each game:

[Figure: distribution of Elo ratings among top professional Chess vs. Go players]

And to be clear, this is just the difference in skill. Outcomes (wins / losses) will be far more extreme because small differences in skill lead to outsized differences in wins vs. losses (a player with a 400-point advantage is expected to win about 90% of the time).
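The win-rate figure follows from the standard Elo expected-score formula. The specific ratings below are arbitrary; only the 400-point gap between them matters:

```python
def expected_score(rating_a: float, rating_b: float) -> float:
    """Standard Elo expected score for player A against player B."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

# A 400-point rating advantage gives an expected score of 10/11, about 91%.
print(f"{expected_score(2800, 2400):.2%}")
```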

It's also difficult to argue that top professional Go players somehow had more resources relative to the average Go player than top Chess players had relative to the average Chess player, which could otherwise explain the more extreme distribution of skill in Go. The reality is that many of the world's greatest players of both games have had quite similar access to resources from a young age: great computers to train with and reputed training programs run by some of the best coaches.

Therefore, the more extreme distribution of ability in Go can only be attributed to the difference in game complexity.

Similarly, if we could compute Elo-style ratings for programming (e.g. coding puzzles) vs. content creation, we would find that the latter has a far more extreme distribution of skill, proportional to the underlying complexity of the task itself.
