February 11, 2026
How Would You Behave?
Among the general public, there is a wide range of opinions on AI. What it means for the future of work, our social order, politics, and a range of other things is all up for debate.
In the sphere of people who spend a lot of time thinking about AI, the range of outcomes people think are likely is narrower. There is certainly a camp of AI thinkers who think it will just be a regular technology, like the internal combustion engine or the telephone. Impactful things that changed the world in many ways, certainly, but not inventions that fundamentally reorganized society or posed a serious threat to humanity, at least directly.
My non-scientific impression is that outside of this 'regular technology crew,' the majority of AI thinkers believe the technology will fundamentally reshape the world in ways that are hard to imagine. In this category I'm including both people who think AI will lead to untold prosperity and advancement, as well as people who think AI will lead to the downfall of humanity through some means.
This is an alarming thought, but not something that can be reckoned with in a blog post. And seemingly not something being reckoned with very effectively at all in most frontier labs as AI development accelerates. But it got me thinking about this question: How would you behave now if you were convinced AI would lead to a complete upheaval in society over the next ~20 years, either for good or bad?
I want to clarify that I don't count myself in this category, most of all because I have not formed a strong opinion yet on where this is all headed. Though I'm closer to an opinion on it than I was maybe 2 months ago. But let's say I was convinced society would be turned upside down by AI by the time I'm in my mid-40s in a couple decades, what actions would I take now?
I would stop saving for retirement
Sort of obvious, but if the two options for the world by the time I'm 65 are untold prosperity for all or my destruction, I'm not sure what good my 401k would be. Doesn't mean I'd cash it out and buy a sports car, but hey, I could pay the 10% early-withdrawal penalty (plus income tax) and use that money for a down payment on a house in the next few years.
I would change my career path to work in the AI safety field
This is one where I will give credit to the doomers and preachers on AI: Most people who I see (at least on the internet) banging the drum most forcefully about AI being great or terrible are working in the field, or at least adjacent to it. Some are pushing the frontier forward, others are trying to restrain it, both of which make sense as vocations if you really think this way. It's tricky to untangle causation vs. correlation here, but I would be highly skeptical of someone who went around saying "AI will destroy the world in 10 years" and was a junior partner at a law firm (to pick a random corporate job that doesn't really impact AI development).
I'd re-orient my politics around AI safety issues
I find it strange that the frontier AI labs that talk so much about the upside and downside potential of AI are so resistant to regulation. I get it in the sense that they are companies, but they also don't seem to operate under the typical Silicon Valley premise of making money first and foremost (see OpenAI the cash incinerator, or Anthropic the 'public benefit' company). If people building this technology at the frontier labs really believe what they say in public, I am completely baffled by their decisions to fight regulation. They would essentially be able to write the regulation themselves, in a way that theoretically should enable AI development while better prioritizing safety and alignment. The big counter-argument is the "race with China" piece, but as many people have written, that only serves as evidence that we need better international cooperation to decrease the massive downside risk.
If I were a random person who thought AI would remake the world for good or bad, how candidates plan to regulate AI would immediately become my top political consideration. A voter who used to prioritize social justice and the environment, but who came to believe AI would remake the world, would logically replace those old issues with AI as their most important one. Same for a conservative voter who prioritizes, say, traditional values and personal liberties. It would not be rational to make huge, confident upside or downside claims about AI and then not have it be your primary political issue. If you really believed this, coming from either side of the political aisle, you would be doing a lot more than voting to influence policy. You'd likely be organizing, volunteering, showing up at hearings, donating a significant portion of your income, etc.
Honorable mentions: things you might think to do, but that upon reflection don't make sense:
Become a prepper
Won't work: if AI takes over the world or delivers unlimited prosperity and health, a log cabin and 200 cans of beans will not save you.
Get an AI degree
Too slow: by the time you finish your degree in three years, the cutting edge will be way past you.
Have kids/don't have kids
Not sure which way this one really points. If the world will be amazing, I guess have kids, and if the world will end, maybe don't? Or maybe do, and enjoy having a family for a decade before the end? To each their own.
Upon further reflection I think the right answer if you're an AI doomer is get a dog. Timelines should line up well!
Well that's all for today's thought experiment. I would love to hear your thoughts, please reach out via the contact page if you have any!
And remember to cultivate your garden.