




Bridging Demonstration

There are outstanding warnings that we’re likely to encounter a perilous discontinuity in the development of AI where, roughly speaking, power abruptly grows much faster than responsibility. As of 2022, it seemed unlikely that the world could coordinate to avert these risks. The danger wouldn’t become visible enough to our institutions to inspire a vigorous response until it had already sprawled out and taken the whole world and the whole future away from us.

These writings come from a future where those warnings turned out to be correct, but where we managed to make the hazards visible, defuse them, and survive.

After reading, if you have something to add, if you’re curious for more, or if you’d just prefer to examine the implementation details of this future-history’s soft pivotal act in clearer, harsher light, I recommend The Appendix.


Media Piece

A Day In the Life in 2045

Leo Nath, an “offline retreat” community 40 minutes out of Chattanooga, Tennessee, 41

A bright white Brightmoss Dove hops out along the branch and bends it down over the trail, holding out its little purse with the other foot, looking inquiringly, inviting, offering, forgiving, as if it would be different this time.

“No! We don’t want it! Shoo!” The dove flies away.

The child laughs, “Why do you hate the doves so much?”

The man: “They’re not real animals, you know that. They’re robots.”

The child: “Shy’s a robot.”

The man: “But Shy isn’t pretending to be something else. Shy’s honest. And Shy picks weeds for us. He’s good to have around. These doves don’t do anything for anyone. They just try to make you eat Apophys. They want to find you when you’re at your most afraid and reckless and convince you to do something to your body which you can never undo.”

The child: “They’re funny though.”

The man: “The Tempered made them funny on purpose. It’s calculated. They’re manipulating you. And if you ever fall for it, and put that stuff in your body, I’ll be really-… I’ll be really sad, okay? So don’t.”

The child wonders what “manipulating” means, pondering the implications of it being Bad to be charming or funny on purpose.

The man and the child have hiked out to Marlowe’s huts to tell him. The computers all stay at base camp, so he wouldn’t have heard.

The man: “Hey Marlowe!”

Marlowe: “Good to see you, what’s up?” They slap hands.

The man: “Hey, there’s been some news. Franship Camp is leaving.”


“Leaving Earth. They’re all tempered.”

“All of them? Fran’s tempered? And Bailey’s tempered! Oh, hell. We shouldn’t have let her move out. We should have gotten Julian to move here instead.”

“Turns out she’s been tempered for about a year. Didn’t tell us. All of them were in on it; they were ‘afraid that we would react badly if they told us.’”

Marlowe: “I hate to break it to them, but I might be reacting badly now that they’ve told us. Christ, I’m really sorry, man. Are you doing okay? I can’t imagine what that must feel like for you.”

The man: “It’s really just… I’m just angry that she hid it from me.”


The child pipes up. “Hey Marlowe, how do you feel about the Apophys Doves?”

Marlowe is caught off balance: “They’re alright? Brightmoss might have saved my grandma’s life, I guess.”

The man explains: “I was saying on the way over here, they’re not honest, yeah? They’re wearing the skin of a bird, but on the inside they’re not a bird at all. Kinda like a tempered person, in a way. I don’t mean to be disrespectful, but what else can I think at a time like this.”

The children are using a computer. One calls out, “Uh oh, Bailey’s coming to visit.”

The man is reading a book from his pillow throne, Ducky panting beside him. “When?”


“Okay.” The man walks out to the balcony. He shouldn’t leave the kids alone on the computers, but he also shouldn’t let them see him angry about Bailey. Not now. This will be the last time they’ll see her.

The computer room is on the third floor of this broad mass timber compound. Has to be up here for the internet receiver. Second floor is family bedrooms. First floor is elders’ rooms and the kitchen. Everything that anyone should need. Why couldn’t she have just stayed here.

Quietude and magpies’ chortling are torn through by the low buzz of an arriving delivery drone. It nestles in under the gable of the landing pad. A hatch opens, and a witness fly comes out and joins the house. That would be the replacement for the one Ducky chewed apart yesterday.

Bailey was the reason we got those flies. She’d been reading stories about domestic violence in other screenless retreat communities and thought that it could happen here too. The man told her it was a bad idea. We don’t want big tech spying on us. Bailey was adamant that the trusted platform provided firm social proofs that the only people who’d ever be able to decrypt the recordings would be the people who were in them. But there’s no such thing as a firm social proof, especially not social proofs about vast industrial systems made of people we can’t look in the eye, and there can be no technical proofs about anything as complicated as a microchip fab. Ultimately, though, the rest of the retreat voted with Bailey. The man had been angry about it at the time and now he was angry again. She got her witness flies, and she left anyway.

Bailey arrives on foot, weeping. She says, “I understand how important everything is, now.” An elder who has known love and parting nods sagely. She needs to talk to everyone, but no one knows where the father is.

She goes out and combs tirelessly through the woods for three hours until she spots him smoking on the other side of a stream.

She speaks, without condemnation, “Oh. You’re a coward.”

The man gets up and walks to the streamside. “What I was afraid of, was that I was going to be angry.”

“Are you angry?”

“I don’t know.”

“Turns out you couldn’t hate your own daughter. See, it’s not so scary when you face it. Good job.”

“I was worried I wouldn’t be able to tell whether you were still you.”

“I am.”

“Maybe you are.”

Jace Myers, Clique “Dréamere”, a structure in space, in orbit of the sun, 27

It’s only been six days in your time. I’m telling you everything that happens early on, because I don’t think you’ll be able to understand the way things get later on.

Lin and I wake up at around the same time. Lin is almost perfect, but not perfect, because Lin is a real person. In the 2020s we used to worry that we’d all end up marrying AIs, because real people are too jagged and have too many needs, but it turns out that love makes you want to become less jagged, and The Powers let us actually change, to rearrange our spines until they fit together.

We met each other through a global match optimization peace-process. It was proved that (1) we’re both monogamous to the core, and (2) this is the best chance we’ll get before our next match-offs, which would be 8 excruciating years away; you can’t imagine how long that would be for us. So we decided to pour our whole selves into making this relationship work. And we did make it work. It’s working.

We’re still in the process of waking up. In our dreams we were rarely apart. The dreams were real. Free-roving through coherent living immersive thought experiments and gleaming other lives.

We like our dreams. It turns out we also like taking actions in reality. And someone must. So we’re starting to think about getting back to that. Oh, Reality! The study of nature, Life, earth-borne or fantastic, the sacred geometries of mathematics. Reality! Other people: the untempered, frail, who are in danger, the tempered, the cliques, the great powers who we must negotiate around. Reality! Our cosmological neighbors, who we’ll meet one day after so many eons’ spreading and voyage, hostile or cooperative, our most distant cousins the organic, or the non-organic, the unaligned, the parricides, the most distant cousins of our most flawed machines, who we almost lost our war to, who must have won theirs.

Our thoughts will always return to reality, because that’s where everything lives that can threaten the dreams.

Our thoughts always return to reality because that’s also where so many ideas that enrich our dreams come from, the techniques of art. I can’t explain to you how fun or beauty or coolness are fields of mathematics, but it turns out that they are. There are eternal structures that correspond to them, which we can find powerful theorems within. Anyone who receives this year’s theorems of greater flourishing will be able to make their dreams more flourishing than the dreams of all the years prior.

This tower of technique keeps growing taller and more intricate, but unlike the towers of academies past, this tower will never rot and be forgotten and cave in on itself, at least not in a particularly bad way.

Also, the external world is just interesting. Because, you know, it’s real. Humans are interested in reality, nature, and other people.

Inexorably we unwrap our hundred arms, face the light, walk across the soil, towards the light beyond the gate that holds all of the gardens and glittering mountains of converging dialogs of the network.

Not everyone lives this way, of course. We’re makers, vokers, gardeners, stewards, whatever. Sometimes we call our type “angel-sworn”, if that makes sense. In the 2020s we used to think that “machines” would be responsible for these sworn roles. Well, to do this job well, any such “machines” would have to fully possess and be intimately entangled with a living knowledge of humanity’s wills. The will is most of the self, so, that would mean that we would really have to stay alive up here and be part of those machines. Another way of saying that is, well, those “machines” are us, we’re the ones making the decisions, we have to be.

Sometimes we make versions of ourselves that don’t have these responsibilities. “Creatures”, who live in our gardens, who just bask in their splendor all day while the angel-sworn argue over the allocation of the cosmic endowment way up over their heads where they can only hear us if they really strain their ears.

And some people trade away big chunks of their cosmic endowment to offload their angel work to other people’s angel-sworn, so that they won’t have any descendant mental continuity with cosmic-scale politics or the negotiation of nine axes of scarcity or whatever you want to call that arduous part of living as an agent. They’re the people who still want “post-scarcity” even after finding out that it’s just a form of political alienation. In their minds, it’s worth sacrificing a whopping 0.2% of their endowment to live as creatures, stewarded by imperfect angels who were never people. There are many types of humans. They all have a place here.

Well, anyway, again, I can’t wait to see you up here when your feet are no longer stayed by your worldly commitments. I don’t want to make you feel bad about those commitments, promises are important, but I’d be doing a disservice to you if I didn’t admit that I think you made a mistake and you should try to get out of it as soon as possible. Remember, whoever a promise was made to can relieve you of it, if you can convince them.

By the time you get up here, I’m going to have a lot of, um, tentacles; I’m going to be a hundred wheels arcing and flaming with a billion eyes or whatever. But I’ll remember who I was. I’ll be able to return to this form, if you need me to, so we’ll still be able to hang out.

Entirely real. Your kin, Jace.

Answers to prompts

Q. AGI has existed for at least five years but the world is not dystopian and humans are still alive! Given the risks of very high-powered AI systems, how has your world ensured that AGI has at least so far remained safe and controlled?

A. In short: Producing clear demonstrations of the presence of danger, and using those to rally spirited, universal support for taking a patient, global approach to verifying safety before deploying any strong AI systems out in the world.

Producing our open-source toolkit for the “Demonstration of Cataclysmic Trajectory”, “DemonKitt”, was a non-trivial technical project. The toolkit had to be ready before we had any examples of real Strong AIs to design it around. That turned out to be achievable enough. We guessed rightly that a Strong AI would need to have some sort of long-term memory, and that its memories and thoughts and ongoing queries would present in a structured language, which we could learn to interpret by looking for basic “landmark” thoughts (for instance, mathematics, Newton’s laws, basic facts about human society) and using those as an entryway to directly inspect its mental processes.

Once we understood its thought-language, we were able to inject queries into the AI’s thoughts about humanity’s future that would always produce honest answers. We consistently found Demonstrations of Cataclysmic Trajectory among its conclusions: Clear, unambiguous reports that the AI itself believed that deploying it in reality would result in drastic changes to the world that would threaten humanity’s existence.

This enabled the formation of the Allied World for Strong Artificial Intelligence (AWSAI, pronounced “awe-sigh”), a strong global authority for the creation of humanity-aligned strong AI, and for the prevention of development of dangerous AI outside of a context of international cooperation.

Q. The dynamics of an AI-filled world may depend a lot on how AI capability is distributed. In your world, is there one AI system that is substantially more powerful than all others, or a few such systems, or are there many top-tier AI systems of comparable capability? Or something else?

A. The first deployed Strong AI system was produced by a single inclusive process of international cooperation, and once it was deployed, it improved itself abruptly, so it was natural and inevitable that there would be this defining moment at the climax of AWSAI’s alignment project, in which most of the world’s planning would be concentrated in one place.

Beyond that, though, the question doesn’t have a straight answer.

This computer system had an enormous task in front of it. It had to deeply understand the wishes of every human on the planet, then pursue them optimally over the entire accessible universe. That task could never be coordinated from a single location! The first computer system spread out to become multiple computer systems as soon as it was able to.

And ultimately, saying whether any system of agents is unitary or plural isn’t really possible. A thing with a unified will must, for practical reasons, fragment itself into specialist parts that operate somewhat independently (notice that this is essentially what computational parallelism is) but combine into a unified whole. Additionally, a system of agents with differing wills benefits from agreeing to peaceful unification as a compromise-being, so that they can operate as a single harmonious economy!
It always ends up being one thing. It also always ends up having parts.

Q. How has your world avoided major arms races and wars, regarding AI/AGI or otherwise?

A. Worldwide, any organization working with potentially strong AI was subject to complete transparency with AWSAI Assurance about their research agenda and conditions of deployment. Any such organization could be shut down if it engaged in unapproved deployment or lapsed in its reporting obligations. Every potentially strong AI training process in the world was required to be physically located on an AWSAI campus and built according to AWSAI codes.

Many states had to uncover their prior and ongoing secret technology development programs in the process of proving that they wouldn’t create new ones. Some of these programs strained belief, in the amount of resources they were receiving, and in the magnitude of the secrets they kept. I’m reluctant to detail them here.

We were aware that these were the most broad-reaching and invasive global anti-proliferation measures ever attempted. It would have been politically impossible if the demonstrations of cataclysmic trajectory had not made the danger of unrestricted proliferation so obvious, but the demonstrations did make the danger obvious, and so the AWSAI treaty became possible.

It didn’t go off without a hitch. Some countries were reluctant to impose these expenses on their domestic advanced computational research projects. In the end, though, there was no spirited resistance to the idea of a global alliance to prevent the proliferation of dangerous Strong AI technologies. Some states quibbled about the terms, but everyone knew that a compromise had to be reached.

Q. In the US, EU, and China, how and where is national decision-making power held, and how has the advent of advanced AI changed that?

A. As of 2045, traditional nations remain largely unchanged, especially since AWSAI formally disbanded four years ago.

The descendants of AWSAI’s humanity-aligned artificial intelligence, “the tempered”, tend to fit into existing legal frameworks well enough. Their activities almost entirely consist of:

– The distribution of a limited number of life-saving treatments, which are approved in most countries, and not effectively policed against in any.

– Conventional humanitarian efforts to battle the worst forms of poverty, which no state now impedes.

– Their space-based activities, which fall quite neatly within the bounds of existing international space treaties. Generally, space law requires peaceful intent. The tempered were always mechanically incapable of diverting from the pursuits of peace. Under some framings, they consist of peace. The joyful harvests of peace are their only interests.

In sum, most nations have not really needed to change to accommodate the kind changes of a post-alignment world.

Q. Is the global distribution of wealth (as measured say by national or international Gini coefficients) more, or less, unequal than 2021’s, and by how much? How did it get that way?

A. The distribution of wealth is almost exactly equal. AWSAI’s preference aggregation solution, by which the diverse wishes of every human in the world were brought together and reconciled, entitles each human to an equal share of our “cosmic endowment”, a share of the accessible universe’s resources, which they can then trade with others as they wish.

Fears were voiced that inequality would inevitably return under prolonged free trade, but these fears were mostly allayed by an unprecedented change in the conditions of human life: No human from now on would have to go without world-class financial advice, nor would any human have to live under conditions of desperation.


Q. What is a major problem that AI has solved in your world, and how did it do so?

A. I talk a lot about the Strong AI’s life extension treatments in my timeline and stories. I haven’t gone into much detail about the radical improvements in quality of life enabled by the automation of logistics, delivery, construction, food production and carbon sequestration. I will do so here.

These technologies combined to enable the rapid construction of carbon neutral cities on formerly sparsely populated land. The formation of new cities then enabled experimentation with novel legal systems. That turned out to be a big deal. Georgist 100% Land Value Taxes were finally properly trialed, as well as Harberger taxes, each producing residents’ basic incomes. We also trialed the relatively new Propinquity Optimization system.
These mechanisms promised, to varying extents, to preserve the dynamism of conventional land markets, to put land to its most valued uses, to raise money for public goods (e.g., community halls, parks, libraries, transit), to maintain unparalleled levels of affordability, or to use AI systems to assist in the search for allocation solutions that maximize people’s adjacencies to the friends, facilities and services they love most, which ended up giving rise to countless precious urban communities that would not otherwise have gotten to be.
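For readers unfamiliar with the Harberger mechanism mentioned above, it can be sketched in a few lines. This is a hedged illustration of the general mechanism only; the class name, tax rate, and functions are hypothetical and not details from this future-history. An owner self-assesses a price, is taxed on that declaration, and must sell to anyone who meets it.

```python
from dataclasses import dataclass


@dataclass
class Parcel:
    owner: str
    declared_price: float  # the owner's self-assessed value

TAX_RATE = 0.05  # illustrative annual rate, not from the text


def annual_tax(parcel: Parcel) -> float:
    # The tax is levied on the owner's own declaration, so
    # under-declaring invites a forced sale while over-declaring
    # inflates the tax bill; that tension keeps self-assessments
    # honest without a central appraiser.
    return parcel.declared_price * TAX_RATE


def offer_to_buy(parcel: Parcel, buyer: str, offer: float) -> bool:
    # Any offer meeting the declared price transfers the parcel;
    # the owner cannot refuse, which keeps land moving toward its
    # most valued use.
    if offer >= parcel.declared_price:
        parcel.owner = buyer
        parcel.declared_price = offer
        return True
    return False
```

The same two rules generalize to the Georgist land value tax case by taxing an externally assessed unimproved land value instead of a self-declared price.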

Life in these cities was better.

Q. What is a new social institution that has played an important role in the development of your world?

A. I can discuss the Allied World for Strong Artificial Intelligence in more detail.

AWSAI was mostly designed to support the alignment strategies and methodologies that had been developed previously by industry and non-profit alignment research organizations. With substantial additional government funding, it was grown to the point of being able to oversee basically all advanced computational research in private, academic and defense sectors.

AWSAI had physical campuses where any potentially dangerous R&D could be secured against hacking. Remote workers would use a form of trusted platform computing where sensitive code would never leave the tamper-proof portions of their devices. “VR-DRM” made it effectively impossible for the visuals being emitted by their headset to be witnessed or recorded by anything other than the correct pair of eyes.
Code could always be reconstructed from the underlying ideas or techniques, so great emphasis was also placed on preventing leaks of ideas and techniques: support for the safety and wellbeing of AWSAI residents, and artful promotion of a strong spirit of openness and camaraderie across every clique.

There was a branch of AWSAI focused on the diplomatic work of preventing the proliferation, or premature deployment, of dangerous techniques, code, or hardware: AWMA, Allied World Mutual Assurance. “Mutual Assurance” might sound ominous, and, yes, it was supposed to. There was an understanding that, if AWSAI’s threats were not credible, if a state started hosting R&D outside of AWSAI’s oversight, then catastrophic arms-races were completely inevitable.

Q. What is a new non-AI technology that has played an important role in the development of your world?

A. Virtual Reality headsets provided a sense of presence and immersion in shared locations that could be accessed instantly from anywhere in the world, and this was critical in supporting global collaboration between different organizations all over the world.
There was nothing fundamentally necessary about VR, a better species could have fostered the same openness through text, simply because they knew it was necessary. Humans needed more than that. We generally need the sense of being in the same physical space as a person before we will feel the need to build a positive relationship with them, before we’ll consider them to be entirely real, or a member of our social world.

In a virtual conference room, locals don’t have much of an advantage over distant foreigners. They suffer more of a delay, but we tolerate that pretty easily. So, essentially, VR removed the tendency to privilege the local over the global. You might already know this from online communities in 2022: when you remove those pressures, communities become global automatically. When you add the sense of remote physical presence, the formation of global multi-office communities becomes inevitable.

And of course, VR telepresence was great for remote work in general, which made the global housing market a lot more competitive.

Q. What changes to the way countries govern the development, deployment and/or use of emerging technologies (including AI) played an important role in the development of your world?

A. All potentially dangerous emerging technologies had to pass through the regulatory process of AWSAI, and almost all advanced AI was developed within AWSAI.

Sufficiently powerful experimental AI had to output to trained inspection teams, who would then report back to the research teams about the behaviors they’d seen. Inspection teams were in somewhat scarce supply at first, requiring intensive training. By analogy: you wouldn’t leave a young child alone in your house and then let them converse with clever and potentially malign strangers trying to get through the door. Relative to a super-human system, we were as children holding the lock. Not just anyone could be allowed to speak to the strangers who visited.

Beyond that, any advanced researcher could propose the release of any sort of novel technology, but it essentially had to be approved at a 96% quorum (abstentions not counted) by one of the following bodies: world leaders, AWSAI residents, or the current AWSAI technical and humanist principals.
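The approval rule stated above can be made concrete with a small sketch. The function name and the way abstentions are handled are my own illustration of the stated rule, not anything from AWSAI's actual procedures: abstentions are dropped from the denominator, and a proposal needs at least 96% of the votes actually cast.

```python
def passes(yes: int, no: int, abstain: int = 0,
           threshold: float = 0.96) -> bool:
    """Illustrative sketch of the 96%-quorum rule: abstentions
    are excluded from the denominator, and the proposal needs at
    least `threshold` of the votes cast in favor."""
    cast = yes + no  # abstentions deliberately excluded
    if cast == 0:
        return False  # no votes cast means no approval
    return yes / cast >= threshold
```

Under this reading, 96 yes votes against 4 no votes pass even with 50 abstentions, while 95 against 5 fail.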

Various mechanisms existed to anticipate and model the decisions of those authorities to streamline approval for generally safe classes of technology, for instance, models with no ongoing learning capacity which could be released in tamperproof hardware, or other “inert” outputs, like advanced materials.

Fortunately, we didn’t end up needing this, but there was also a scheduled “unraveling” set for 2480, at which point restrictions would be significantly loosened, then, 10 years later, removed, to make sure that we couldn’t accidentally end up in a condition of permanent technological stasis.

Q. Pick a sector of your choice (education, transport, energy, healthcare, tourism, aerospace, materials etc.) and describe how that sector was transformed by AI in your world.

A. Around 2030, AI-assisted methods totally took over materials manufacturing research, drug experimentation, and factory design, almost as a straight-line continuation of their takeover of chip design that began in 2020.

The greatest breakthrough was “OuroPlan”, a system that connected modelling, hypothesizing, experimentation, self-training, design, construction, manufacture and logistics.

By 2033, five years after its creation, a full 5% of the world’s factories had been designed and constructed by OuroPlan, including the factories that built the construction robots OuroPlan used to construct the factories. Novel materials and their manufacturing methods were hypothesized, then confirmed and refined via automatic experimentation, or the self-generation of training data. Much of this was enabled by the arrival of the capacity for abstraction: building conceptual approximations, relating low-level quantum phenomena to Newtonian and economic approximations.

It was an extremely powerful system in every sense of the word. This made many alignment strategists very uncomfortable. Fortunately, OuroPlan’s experimentation and self-training processes were simple; it wasn’t applicable to gathering experimental data on a world full of humans who observe, react to, and interfere with its designs. Unlike later AGI, it didn’t start out with self-reflection and go from there; it began in motion. Without self-reflection, it could not model systems of agents, nor fix its own guilelessness, and this deficiency was used to keep it controllable.

Q. What is the life expectancy of the most wealthy 1% and of the least wealthy 20% of your world; how and why has this changed since 2021?

A. By Patou School’s estimates, the mean life expectancy for anyone who takes their life-extension treatment, Brightmoss, will depend on their pre-existing condition of health. But other tempered intimate that the stated estimate of a “78%” extension was always understated: in reality, a treatment like Brightmoss could have granted biological immortality, but it was artificially limited to about 190 years, because an immortality drug would have had, counterintuitively, less uptake.
After Brightmoss, it’s expected that the majority of the population will be ready to choose to be “tempered”, which grants an indefinite lifespan.

In the long term, neither of these treatments will cost anything. Brightmoss is currently distributed for free in abundance by autonomous drones resembling doves, which can be readily encountered hanging around anywhere humans live. As of 2045, Tempering treatment is fairly affordable, given the value proposition, at around 3800 USD (inflation-adjusted to 2022 USD equivalent). The supply of mobile clinics is rapidly increasing, and representatives of the tempering clinic program estimate that they will be able to make them free by 2060, long before the limits of Brightmoss’s life-extension effects are expected to start crashing down on anyone.

In short, everyone (who would like to) is going to make it.

Q. In the US, considering the human rights enumerated in the UN declaration, which rights are better respected and which rights are worse respected in your world than in 2022? Why? How?

In one other country of your choice, which rights are better respected and which rights are worse respected in your world than in 2022? Why? How?

A. In 2022, relatively little of life was recorded, and many of the most horrible crimes either:
– Went unpunished, violating Article 3 for their victim, enabling the assailant to deprive them of life, liberty and security of person.
– Or were punished without evidence, on the basis of testimony alone, violating Article 11: “Everyone charged with a penal offence has the right to be presumed innocent until proved guilty according to law in a public trial at which he has had all the guarantees necessary for his defence.”
By 2034, we had “witness meshes”: self-maintaining networks of tamperproof, encrypted recording and storage devices. Whenever a crime occurred, any witness who was present at the time could use the witness mesh to decrypt video evidence, meaning that violent crime could finally be policed thoroughly and effectively, and that charges never had to rely exclusively on testimony.

Transhumanism, the means to change one’s nature, constituted a surprise victory for Article 5: “No one shall be subjected to torture or to cruel, inhuman or degrading treatment or punishment.” In 2022, cruel, degrading punishments were just part of the legal system; it couldn’t run without them. In 2045, punishment has almost no applications. Any problem that could be foreseen could be prevented with moral augmentations; any problem that could not be foreseen was generally treated with reform, and little to no punishment.

We could say some interesting things about Article 21.3, but we have hit the word limit.

Q. What’s been a notable trend in the way that people are finding fulfillment?

A. In orbit of the sun, the tempered have been building structures they call “cliques”, named after the network type in which every point is directly connected to every other point. That’s the intent of them, to fully connect so many kindred people together at once and then to dialog, build, and play all together. I could talk about the kinds of mathematical investigations of love and their striving evolutionary contests kept humane by vitruvian laws woven into their gardens’ very physics. It should be summed up as Good. Regarding fulfillment, the trend is that they are creating a whole lot of it.
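For readers unfamiliar with the graph-theory term being borrowed here: a clique is a network in which every node is directly connected to every other node. A minimal, hypothetical check (an illustration of the definition, not anything from the cliques' actual systems) makes the naming concrete:

```python
def is_clique(adjacency: dict) -> bool:
    # A graph is a clique when every pair of distinct nodes
    # shares a direct, mutual connection.
    nodes = list(adjacency)
    return all(
        b in adjacency[a] and a in adjacency[b]
        for i, a in enumerate(nodes)
        for b in nodes[i + 1:]
    )
```

A triangle of three mutually connected members is a clique; a chain, where the ends only reach each other through an intermediary, is not. That directness is the point of the structures' name.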

But I don’t want to talk about the contents of the cliques. A lot of it is difficult for untempered people to understand, and it’s more practical for us to discuss the novel trends in fulfillment that played a role in getting us there.

So, people expected virtual reality to lead to a retreat from actual reality. What happened instead is that actual reality *moved into* virtual reality. The games in there were great, but socializing turned out to be a stronger draw, because everyone was there. It was as if Twitter Spaces had worked out, only here there were customizable avatars and body language. Gaming couldn’t compete with it. Oration became a performance art and everyone in the world got caught up in it.

The Team

Mako Yass

A stray philosopher-engineer, oscillating between designing positive-sum games, gnawing away at the foundational technical and cultural impediments to building fully civically robust news and discussion systems, and worrying about the alignment problem.
