Many people say that money won't matter post-AGI, or at least it will matter less. By default, this is exactly backwards.
First, some terms: labor means human mental and physical effort that produces something of value. Capital goods are things like factories, data centres, and software—things humans have built that are used in the production of goods and services. We’ll use "capital" to refer to both the stock of capital goods and to the money that can pay for them. We’ll say "money" when we want to exclude capital goods.
The key economic effect of AI is that it makes capital a more and more general substitute for labor. There's less need to pay humans for their time to perform work, because you can replace that with capital—a data centre running software replaces a human doing mental labor.
We will walk through consequences of this, and conclude that labor-replacing AI means:
- The ability to buy results in the real world will dramatically go up
- Human ability to wield power in the real world will dramatically go down, because:
  - The value of people's labor will go down, and labor is most people's main lever of power
  - It will be harder for humans to achieve outlier outcomes relative to their starting resources
  - Radical equalising measures are unlikely
Overall, this points to a neglected downside of transformative AI: that society might become permanently static, and that current power imbalances might be amplified and then turned immutable. A static society with a locked-in ruling caste does not seem dynamic or alive. We should not kill human ambition, if we can help it.
The default solution
Let's assume human mental and physical labor across the vast majority of tasks that humans are currently paid wages for no longer has non-trivial market value, because the tasks can be done better/faster/cheaper by AIs. Call this labor-replacing AI.
There are two levels of the standard solution to the resulting unemployment problem:
- Governments will adopt something like universal basic income (UBI).
- We will quickly hit superintelligence, and, assuming the superintelligence is aligned with human values, live in a post-scarcity technological wonderland where everything is possible.
Note, firstly, that money will continue being a thing, at least unless we have one single AI system doing all economic planning. Prices are largely about communicating information. If there are many actors and they trade with each other, the strong assumption should be that there are prices (even if humans do not see them or interact with them). Remember too that however sharp the singularity, abundance will still be finite, and must therefore be allocated.
Money currently struggles to buy talent
Money can buy you many things: capital goods, for example, can usually be bought quite straightforwardly, and cannot be bought without a lot of money¹. But it is surprisingly hard to convert raw money into labor in a way that is competitive with top labor.
Consider the stereotypical VC-and-founder interaction, or the acquirer-and-startup interaction. In both cases, holders of massive financial capital are willing to pay very high prices to bet on labor—and the bet is that the labor of the few people in the startup will beat extremely large amounts of capital.
If you want to convert money into results, the deepest problem you are likely to face is hiring the right talent. And that comes with several problems:
- It's often hard to judge talent, unless you yourself have considerable talent in the same domain. Therefore, if you try to find talent, you will often miss it.
- Talent is rare—credentialed talent even more so. Because of point 1, many actors can't afford to rely on anything but credentialed talent, so there's just not very much of it going around.
- Even if you can locate the top talent, the top talent tends to be less amenable to being bought out by money than others.
With labor-replacing AI, these problems go away:
First, the AIs can be copied. Currently, huge pools of money chase after a single star researcher who's made a breakthrough, and thus had their talent made legible to those who control money, who can judge the clout of the social reception to a paper but usually can't judge talent itself directly. But the star researcher that is an AI can just be copied. Everyone—or at least, everyone with enough money to burn on GPUs—gets the AI star researcher. No need to sort through the huge variety of unique humans with their unproven talents.
Second, the price of talent will go down massively: the AIs will be cheaper than the equivalent human labor, and competition will be fiercer because the AIs can be duplicated.
Third, lots of top talent has complicated human preferences that make them hard to buy out. The top artist has an artistic vision they're genuinely attached to. The top mathematician has a deep love of elegance and beauty. The top entrepreneur has deep conviction in what they're doing—and probably wouldn't function well as an employee anyway. Talent and performance in humans are surprisingly tied to a sacred bond to a discipline or mission. In contrast, AIs exist specifically so that they can be trivially bought out (at least within the bounds of their training). The genius AI mathematician, unlike the human one, will happily spend its limited time on Earth proving the correctness of schlep code.
Finally, the AIs will eventually be much more capable than any human employees at their tasks.
This means that the ability of money to buy results in the real world will dramatically go up once we have labor-replacing AI.
No more outlier outcomes?
We’ve already discussed how most labor will be obsoleted by AI and robotics. But eventually, even the most talented humans will be outmatched by AIs. What happens then?
Much change in the world is driven by people who start outside power, achieve outlier success, and end up with power. This makes sense: those with power rarely have the fervour to push for big changes, because they are exactly the people best served by the status quo.
Whatever your opinions on income inequality or on any particular group of outlier successes, the possibility of someone achieving outlier success and changing the world matters both for avoiding stasis and for continued social progress.
Let's consider the effects of labor-replacing AI on various routes to outlier success through labor:
Entrepreneurship is increasingly what Matt Clifford calls the "technology of ambition" of choice for ambitious young people. Right now, AI is making entrepreneurship easier: AI tools can already make small teams much more effective without needing to hire new employees, and they lower the barrier to entry for new skills and fields. However, labor-replacing AI makes the long-term tenability of entrepreneurship uncertain. There is a possible future in which AIs remain mostly tool-like and entrepreneurs can succeed long after most human labor is automated, because they provide agency and direction. However, it also seems likely that sufficiently strong AI will eventually obsolete human entrepreneurship. For example, VC funds might be able to directly convert money into hundreds of startup attempts all run by AIs, without having to go through the intermediate route of finding human entrepreneurs to manage the AIs for them.
The hard sciences. The era of human achievement in hard sciences may end within a few years because of the rate of AI progress in anything with crisp reward signals.
Intellectuals. Keynes, Friedman, and Hayek all did technical work in economics, but their outsize influence came from the worldviews they developed and sold, which made them more influential than people like Paul Samuelson, who dominated mathematical economics. John Stuart Mill, John Rawls, and Henry George were likewise influential by creating frames, worldviews, and philosophies. The key thing that separates such people from the hard scientists is that the outputs of their work are not judged by technical correctness alone, but require moral judgement as well. A core reason why intellectuals' ideologies can have so much power is that they're products of genius in a world where genius is rare. A flood of AI-created ideologies might mean that no individual ideology, and certainly no human one, can shine so bright anymore. The world-historic intellectual might go extinct.
Politics might be one of the least-affected routes, since we'd guess that most humans want a human in that job, and since politicians get to set the rules for what's allowed. However, the charisma of AI-generated avatars, and a general dislike of politicians, might throw a curveball here. It's also hard to say whether incumbents will be favoured. AI might bring down the cost of many parts of political campaigning, reducing the resource barrier to entry. On the other hand, if AI that only well-resourced actors can afford is meaningfully better than cheaper AI, that would favour actors with larger resources. We expect these direct effects to be smaller than the indirect effects of whatever changes AI makes to the memetic landscape.
Military success as a direct route to great power and disruption has—for the better—not really been a thing since Napoleon. Advancing technology increases the minimum industrial base for a state-of-the-art army, which benefits incumbents. AI looks set to be controlled by the most powerful countries. One exception is if coups of large countries become easier with AI. Control over future AI armies will likely be both (a) more centralized than before, since a large number of people no longer have to go along for the military to take an action, and (b) more tightly controllable than before, since permissions can be implemented in code rather than in human social norms. These two factors point in different directions, so it's uncertain what the net effect on the ease of coups will be. Another possible exception is if a combination of revolutionary tactics and cheap drones enables a Napoleon-of-the-drones to win against existing armies. Importantly, though, neither of these seems likely to promote the good kind of disruptive challenge to the status quo.
All this means that the ability to get and wield power in the real world without existing capital will dramatically go down once we have labor-replacing AI.
Enforced equality is unlikely
The Great Leveler is a good book on the history of inequality that (at least per the author) has survived its critiques fairly well. Its conclusion is that past large reductions in inequality have all been driven by one of the "Four Horsemen of Leveling": total war, violent revolution, state collapse, and pandemics. Leveling income differences has historically been hard enough to basically never happen through conscious political choice.
Imagine that labor-replacing AI is here. There's a massive scramble between countries and companies to make the best use of AI. This is all capital-intensive, so everyone needs to woo holders of capital. The top AI companies wield power on the level of states. The redistribution of wealth is unlikely to end up on top of the political agenda.²
Therefore, even if we end up in a very rich society, it is unlikely that people in the future will be starting in it on an equal footing. It is also unlikely that they will be able to greatly change their relative footing later on.
Consider also equality between states. Some states stand to benefit massively more than others from AI. Many equalising measures, like UBI, would be difficult for states to extend to non-citizens under anything like the current political system. This is true even of the United States, the most liberal and humanist great power in world history. By default, the world order might therefore look—even more than today—like a global caste system based on country of birth.³
The default outcome
By default, in the post-labor-replacing-AI world:
- Money will be able to buy results in the real world better than ever
- People's labor will give them less leverage than ever before
- Achieving outlier success through labor will be impossible in most or all areas
- There will have been no transformative leveling of capital, either within or between countries
This means that those with significant capital when labor-replacing AI started have a permanent advantage. They will wield more power than the rich of today. Upstarts will not defeat them, since capital now trivially converts into superhuman labor in any field.
In the best case, this is a world like a more unequal, unprecedentedly static, and much richer Norway: a massive pot of non-human-labor resources (in Norway’s case, oil) has benefits that flow through to everyone, and yes some are richer than others but everyone has a great standard of living. The only realistic forms of human ambition are playing local social and political games within your social network and class. If you don't have a lot of capital, you don't have a chance of affecting the broader world anymore. Remember: the AIs are better poets, artists, philosophers—everything; why would anyone care what some human does, unless that human is someone they personally know? In feudal societies the answer to "why is this person powerful?" would usually involve some long family history, perhaps ending in a distant ancestor who had fought in an important battle: "my great-great-grandfather fought at Bosworth Field!". In the future, the answer to “why is this person powerful?” would trace back to something they or someone they were close with did in the pre-AGI era: "oh, my uncle was technical staff at OpenAI". The children of the future will live their lives in the shadow of their parents, with social mobility extinct. This is far from the worst future we could imagine, but something important will have been lost.
In a worse case, AI trillionaires have near-unlimited and unchecked power, and there's a permanent aristocracy that was locked in based on how much capital they had at the time of labor-replacing AI. The power disparities between classes might make modern people shiver, much like modern people consider feudal status hierarchies grotesque. But don't worry—much like the feudal underclass mostly accepted their world order due to their culture even without superhumanly persuasive AIs around, the future underclass will too.
In the absolute worst case, humanity goes extinct, potentially through a slow-rolling optimization for AI power over human prosperity. Because that's what the power and money incentives will point towards.
Towards the intelligence curse
We’ve seen how advanced AI threatens both normal career paths and the ability of outlier talent to make a difference in the world. Most people’s power and leverage over the world comes from their ability to do useful work—from being economically relevant. Labor-replacing AI might delete that power and leverage. By default, that will have far-reaching consequences.
Over the past few centuries, there's been a big shift towards states caring more about humans. Why is this? We can examine the reasons to see how durable they seem:
- Moral and political changes downstream of the Enlightenment, in particular an increased centering of liberalism and individualism.
- Affluence & technology. Pre-industrial societies were mostly so poor that significant efforts to help the poor would've bankrupted them. Many types of help (such as effective medical care) are also only possible because of new technology.
- Incentives for states to care about freedom, prosperity, and education.
AI will help a lot with the second point. It will have some complicated effect on the first. But the third in particular is underappreciated. With labor-replacing AI, the incentives of states—in the sense of what actions states should take to maximize their competitiveness against other states and/or their own power—will no longer be aligned with humans in this way. Adam Smith could write that his dinner doesn't depend on the benevolence of the butcher, the brewer, or the baker. The classical liberal today can credibly claim that the arc of history really does bend towards freedom and plenty for all, not out of the benevolence of the state, but because of the incentives of capitalism and geopolitics. But after labor-replacing AI, this will no longer be true.
We call this the intelligence curse, and it’s what we’ll examine next.
1. Or other liquid assets, or non-liquid assets that others are willing to write contracts against, or special government powers. ↩
2. An exception might be if some new political movement or ideology gets a lot of support quickly, and is somehow boosted by some unprecedented effect of AI (such as: no one has jobs anymore so they can spend all their time on politics, or there's some new AI-powered coordination mechanism). ↩
3. This is especially true because there will likely be even fewer possibilities for immigration. The main incentive to allow immigration is its massive economic benefits, which only exist when humans perform economically meaningful work. ↩