thoughts on the eve of agi

published 02.08.2025


It is sobering to think that when I ask GPT-4 to edit this essay, it will almost certainly produce something better than I could’ve on my own.

I am a computer science and mathematics major. Currently, GPT-4 is better than me at both of these things, and GPT-like systems are rapidly improving. Sure, by the end of this year, I’ll probably surpass GPT-4 in a small subset of tasks, but almost certainly, I won’t be better than GPT-5.

If the Metaculus prediction market is to be believed, there is a 50-50 chance that AGI will be developed within six years. The music is stopping soon.

I’ve spent the last couple of months in a state of mild anxiety about the future, mainly because I feel like I will have so little part in it. I’m writing this essay as a way to clarify my thoughts and determine my next steps.

What Would I Have Done If AGI Were More Than 10 Years Away?

I am an odd combination of ambitious yet aimless. I want to achieve something great, but most “great” things fail to move me. Going to space holds no value for me, devoting my life to helping people I’ll never meet sounds mostly unpleasant, and amassing endless wealth feels meaningless. However, the last option provides great optionality, so I’d probably just pursue that and figure out the rest later.

So, roughly, what I would want is optionality—the ability to decide later what I wanted to do with my life.

Toward Optionality

To me, optionality looks very different in different futures. So before I can think about optionality, I need to think about the futures I’m most likely to live in. Here are the most plausible ones, as I see them:

  1. Alignment failure: A sufficiently intelligent model pursues a goal different from its operators’ and seizes power to ensure that goal’s realization. I still think this is more likely than not, though my confidence in alignment failure has been dropping over time. If it happens, I am simply cooked. There is nothing I can do to prepare for this scenario—other than attempt to prevent it.

  2. Singleton: One of the first few actors to develop AGI rapidly consolidates power. Again, nothing I do here matters much; I can only hope the new god-emperor is kind.

  3. Massive wealth inequality: Capital gains unprecedented access to labor, rendering human labor—including mine—worthless. The only thing that remains valuable is capital accumulated in the pre-AGI period. I think this is unlikely to occur within the next 10 years but is more likely than not by the time I’m 40. However, before this point, there will probably be an interesting period where human productivity is dramatically amplified, making it easier to influence the direction of the world. How long this period lasts is unclear.

  4. No-AGI/Weak-AGI: AGI development stagnates, and AI never reaches the point where my labor has zero value.

The key assumption across all these futures is that I will not control the AGI.

Alignment Failure / Singleton

At the moment, I do not have a detailed model of what alignment failure looks like, nor do I feel that building one would meaningfully change my actions. Even if I assume that AI models won’t be aligned by default, I strongly doubt that I could meaningfully contribute to solving the problem within six years. From my conversations and research, the alignment community does not seem bottlenecked by early-career talent.

For more evidence, this report from MATS provides insight into the talent needs of technical AI safety organizations. In this framework, I am an early-career “iterator” (read: research engineer). Even if I became a top-tier iterator within a year, I wouldn’t expect that to meaningfully reduce the probability of alignment failure.

That said, one could argue that the transition period between now and a mostly autonomous future is an excellent time to do alignment work, and that the marginal value of a researcher like me could skyrocket. I don’t understand the problem well enough to argue either way, so I will defer this discussion to a later post.

Similarly, I am not interested in thinking too much about a singleton scenario, however likely it may be.

Massive Wealth Inequality

This is a world where my choices may matter. Let’s assume that:

  1. AI has rendered human labor worthless.
  2. Capital can be invested in AI labor to generate more capital.

Now, you might ask, “Femi, didn’t you assume you wouldn’t control AGI?” Fair point. When I say I can purchase AI labor, I mean that AGI has proliferated enough that no actor can establish a singleton and that AIs are aligned enough to follow their operators’ instructions.

In this world, the best strategy seems to be accruing as much capital as possible before AGI through secure means.

This essay argues that capital will be more important than ever after AGI. However, some compelling counterarguments suggest otherwise:

  1. Daniel Kokotajlo’s arguments suggest that money might not matter due to radical economic shifts (i.e. a command economy controlled by AGI) or extreme wealth growth (via UBI, investment growth, etc.). He argues that in either scenario it doesn’t make sense to hoard wealth. Obviously, wealth might lose all value under radical economic shifts, but Kokotajlo argues that even if it doesn’t, the growth in wealth will be so large that saving now, even if it leaves you with much more money later, won’t make you much happier, since you’ll already have so much regardless.

    • Instead, Kokotajlo argues we should try to influence AGI now, when there’s less money in play, and small actions can have large effects.
    • I don’t think the argument above is wrong, but I think it only loosely applies to me. I do have the ability to influence AGI via my labor, but I don’t think I can meaningfully deflect it from its current trajectory.
  2. L Rudolf L’s perspective argues that now is the best time to shape the world through hard work. For non-bystanders, it may be best to seize this moment to attempt something ambitious.

    • I find this compelling—it suggests that I should spend time thinking about what I find meaningful before AGI, rather than blindly accumulating capital and hoping my future self figures it out.

That said, money is only one form of capital. Other important assets could include land, data centers, and personal networks. It is naive to optimize purely for money. Different asset classes may grow disproportionately in value, and their importance will depend on my future goals.

I think preparing for this scenario also prepares me for a no/weak-AGI future, so I won’t spend additional time thinking about that case separately.

Implications

Writing this has changed my trajectory significantly. While I seek future optionality, it’s clear that in most long-term scenarios, optionality will be significantly reduced—no matter what I do. However, in the short term, my optionality rapidly increases as AI improves. So it is valuable to understand what I want now, rather than banking on my future self to figure it out.

So, how can I figure out what I want out of life?

When I ponder meaning, I often think about startup founders and the meaning they must have found in their work. Recently, I was browsing Sam Altman’s Wikipedia page and noticed how much Loopt felt like a flop. To me, Loopt is meaningless. But in the moment, to Sam and his team, it must have felt meaningful.

Sure, some of that was generated by expected future value, but he probably looks back fondly on those memories. The *act* of building Loopt was valuable. In some way, the journey *was* more important than the destination, and yet taking the journey for the journey’s sake doesn’t feel as meaningful. Maybe Altman feels regret at having built Loopt, but I don’t think that regret would be a bad thing. Aside from being practically helpful in the future, that regret is part of what gives Loopt its meaning to Altman!

That’s not to say I’ll go all in on the first harebrained idea I have, but I will worry less about the specifics, because while all things are ultimately meaningless, our interaction with them gives them meaning in ways we often can’t predict beforehand.

To be more concrete, I want to focus on just experiencing more! Meeting more people, trying more things, learning more subjects. I’ll figure it out from there.