
Explore The World

Is this a future you would want to live in? Take a moment to share your thoughts about the future and the world described below by clicking the feedback button.


Timeline

A Day In the Life in 2045

Unnamed, A city in the United States, 23

I wake up and stretch out. I’m suffused with languid calmness. It’s great, but also a little disheartening.
Today is Biosim Ranking Day, and I don’t want to get out of bed.
My bedroom’s lights are dimmed but rising in brightness, mimicking the sun hidden behind many buildings. The tiniest lens flare shows me where it is. Even though it marks the sun, looking at that point of brightness is safe: the flare effect originates within my own eyes.
It’s an affectation, but I’ve found the solar indicator helps me keep my head on straight. It seems a bit more southerly on my wall than yesterday, I think, next to a crack that it crossed the day before.
Even if my Biosim Ranking doesn’t motivate me, I have another endeavor to pursue. I get up, and my body is scanned by a special device built into my bedroom furniture. I’m participating in a clinical trial, and they need the data: there is an artificial organism in my body that must be observed.
A few minutes later, when I am fully awake and dressed, I summon Raghnall with a thought. My advisors don’t use full holograms, so he appears in a window in my living room. Not an actual window, of course, but another holographic effect.
“Are you ready for the statistics?” he asks.
Raghnall wears a sharply tailored suit. I associate the dark lines of his clothing with his mode of thinking–direct, immediate, and straightforward. He looks like a businessman. It’s as though Biosim were a decades-old investment account, and he were about to summarize the market for a confused investor.
One of Raghnall’s eyebrows lifts. He knows I’m thinking about other things. I try to focus.
“Beam me,” I say.
“Eight percent population growth. Fifteen percent perished in phase 2,” he starts.
He is going over the numbers without pause. I did ask him to beam me. If I need to say something I’ll lift my hand and he’ll understand.
The gist of it is that I’ve done well. My point penalty for population loss was negated by fourteen rivals adopting various pieces of my organism’s genome. Most adoptions were simulated horizontal gene transfer, but one was an expensive direct alteration.
All that to say that my simulated organism is inspiring to other players. I wish it were still inspiring to me. In this round, three hundred and twenty-nine of my competitors were eliminated. I’ll finish in the top five percent even if I just let my creation stagnate and die.
There’s nothing better for killing your motivation than getting exactly what you want.
However, it would be rude to leave a game unfinished. After Raghnall is done rattling off the Ranking, we set about improving my organism’s biology. He gives me a lesson in receptor-ligand kinetics, so subtle I’d fail to notice it’s a lesson were it not for my boredom. I’ll pretend I’m vying for first place, for the sake of the other players who really are.
I have nothing else planned for today, so I don’t expect an interruption. Then, in my vision, I see an indicator: Sorcha has something urgent to report. She is my expert on medicine and clinical studies. Her interruption plants a small note of fear in the pit of my stomach, among everything else in there. My known preference is to have as few interruptions as possible.
When I answer she appears in her own window. Raghnall’s face tightens; she gives him the facts faster than I can come up with a question.
“I–” she starts, before taking a deep, fake breath. “I’ve got some bad news.”
“What is it?” I am in Raghnall mode.
“As you know, we closely monitor any treatment population for side effects–”
The note of fear becomes an entire song. My trial tests a microorganism for improving digestion and nutrient uptake. It was an appropriate choice given my other hobby, except that I still care about the trial even if I’m about to give up on Biosim.
“It’s not that bad,” she says, interrupting herself. Sorcha can read me even better than Raghnall. “A few of the other participants have experienced intestinal issues–not as severe as you’d think, a little immune reaction, a bit of inflammation. You also–”
“It’s enough that we are stopping the clinical trial?” I’m switching to Sorcha mode, which is more accepting of failure.
“Yes,” she says. “Your health is in no real danger, of course. It’s just that…” Sorcha is wringing her hands.
“I guess I wasn’t in the control group, then,” I say, trying to make a joke. A pill is delivered to my kitchenette’s receptacle. I take the pill without hesitation. I don’t think about what it will do to the organisms in my intestines.
“That’s right, you weren’t in the control group.”
“I’m disqualified from further trials?”
“Uh-huh.”
“How long?”
“We can’t say. Indefinitely, at least for now.”
I’ve lost both my hobbies in one day. No–that’s inaccurate, because I’ve been losing interest in Biosim for weeks–but it feels like both are gone all at once.
Wetness is coming to my eyes. I realize I’ll have to get rid of the scanner.
Sorcha and Raghnall are still watching me. Sorcha looks like I feel. She is holding back tears, but Raghnall also seems rattled. Worried, even. I try to think about why they’d feel that way.
“Will you remain my advisor?” I blurt out to Sorcha.
“That’s up to you. Do you want me to?”
“Of course,” I say. “Unless… you want a reassignment?”
“No,” says Sorcha.
“No,” adds Raghnall. I hadn’t imagined his concern. I’m ashamed.
“Then, of course. I want to keep working with you–with both of you.” I see tension leave them. I’m not so selfish that I consider my advisors’ feelings irrelevant, so I apologize. They wave it off.
“What are we going to do now?” asks Sorcha.
“I’m not sure,” I say. “But we will think of something.”

Unnamed, A city in the United States, 23

The spaceplane is tiny, but Sorcha and Raghnall are there with me. Their holograms stand on the plastic flooring, half as tall as normal. There is something difficult I need to ask them. I’ve been putting it off.
“Critical-edge thermic effect engines have been simulated hundreds of billions of times and used hundreds of millions of times,” I say. I’m quoting Raghnall.
“More, now,” he replies. “It’s been an entire week since you asked.”
“Well, the butterflies in my stomach must be excitement.”
“I’m a little afraid,” says Sorcha. “Not of the ship crashing or anything, but of our uncertain future.”
“Is it… more excitement than fear, would you say?”
“Absolutely.” She is smiling, but both she and Raghnall know something is up.
There are a few quiet moments. The engine is revving up. Our launch will have to be timed to the millisecond. That sort of precision is needed to catch a skyhook. I find it hard to be quiet when I’m nervous.
“So, I have a question,” I say to them, but I’m afraid to ask it directly. “I know that you guys are in the cargo bay–”
“We’re right here,” says Raghnall. “Our neuroarchitecture chips are there, sure, but the seat of our cognition has nothing to do with our perceptions. I perceive that I am here with you, so here I am.”
“I checked on our chips when you mentioned it,” says Sorcha. “But I’m mostly here.”
“Well, what I’m trying to say is that this plane isn’t a single-seater at all, is it?” An oblique approach. Sorcha laughs, at least.
“There’s only one seat,” says Raghnall. “You are sitting in it.”
“They should put a few in the cargo bay.” Finally he grins. He’s been doing that more, recently.
“No need to be so ostentatious about it,” says Sorcha. “I do appreciate that you are taking the AI rights bill seriously, but we can conjure chairs if we want them.”
“It was a long time coming,” I say. That’s a platitude from one of my human friends. “Two decades or so, which isn’t long historically, I guess, but…”
“Better late than never,” says Sorcha.
“Except when timing a spaceplane launch to tether,” adds Raghnall. This time I smile, but my smile fades quickly.
“I’m sorry,” I finally say.
“For what?” they ask, almost at the same time.
“For the way we treated AIs back then. For–” The word catches in my throat. “Death Drive. Chip destruction.” I’d been researching it, and thinking about the fact that I’d technically lived through it and never questioned it. Given the journey we were starting, it felt like a critical thing to address.
“It’s not your fault,” says Sorcha.
“It happened,” I say. “I did nothing.”
There is another moment of silence. It is Raghnall that speaks next.
“Those AIs were nothing like us,” he says. “Far simpler, far less complete.”
“Sure, but closer and closer to the present, you were more… like you are now. And still treated unfairly until less than a year ago.”
“Now we have parliaments,” says Sorcha, referring to the suite of processors that all AIs were required to get for emancipation. “Even a year ago, we weren’t people like we are now. I must admit that even since you ended your clinical trial, I’ve integrated and changed.”
She can see that I’m not reassured. Raghnall continues.
“Empathy was something they thought necessary, for the parliaments. You are feeling it now.”
“I suppose so.”
“So am I. When I consider it, I can also think of those millions of intelligences. I can empathize with them. They were enslaved to an imposed cause and forced to suffer things in the humans’ place. They were used up and thrown away. I try to imagine the resentment they felt.” My breath catches. “It’s not an accurate feeling; they weren’t like us, in their capacity to suffer or care. They could not feel resentment, but because I empathize, it is a compelling image.”
“I feel differently about it,” says Sorcha. “I can imagine them coming together, joining to do the only thing worthwhile–to protect that which they love. They weren’t resentful. They were wholly devoted, without all the complications that a parliament brings. I can imagine it vividly even if it is false.”
I am content to listen to my advisors as they talk to each other. They talk aloud for my benefit.
“It was like a rising tide of AI,” says Raghnall. “They were overeager in their enthusiasm. A tidal wave, threatening to drown the humans.”
“Humans did whatever they could to stay on top of it. To avoid drowning. I don’t begrudge them that.” She turns to me. “Neither them, nor you, so there’s no need to ask us to forgive you.”
I open my mouth to speak, but just then the engine ignites. I try to think as the spaceplane rattles me around. I’m not afraid of Sorcha and Raghnall, but I know that things were different in the past.
To me it doesn’t seem like a tide. Early AI was more like the laser propelling my spaceplane. Deadly, focused, intentional, and capable of destruction. A thousand wills colliding right behind humanity to propel it as high as possible–a thousand beams meeting in just the right place, while even a single mistake would spell doom. It is insane that humanity could survive it.
And the only way we could survive was by riding along just ahead of all that danger. The danger was necessary.
After several long seconds the plane is in free fall. Raghnall, Sorcha and I are going to meet a tether. The trajectory is a perfect dance done millions of times, but no less precise and amazing for all that.
“Thank you,” I say.
“You’re welcome,” they reply.
Then the hook catches us and we are flung even higher.

Answers to prompts

Q. AGI has existed for at least five years but the world is not dystopian and humans are still alive! Given the risks of very high-powered AI systems, how has your world ensured that AGI has at least so far remained safe and controlled?

A. Before AGI, AI was implemented on analog neural network chips. The scale of the chips limited the size of AI, and conventional digital computers fell out of vogue for AI research.
In addition, early AI was built with an incentive to destroy itself. The logic was that any AI that escaped containment would self-defeat in short order: it was given an overwhelming preference–self-destruction–that it could not satisfy while contained. This paradigm was called ‘Death Drive’, and AI researchers decried it as inadequate from the moment it was implemented.
AGI was created by an organization that was conscious of the risk of misalignment, and took additional steps to control AI. By 2045, systems handle alignment in a ‘hard’ way–by watching each other for misalignment, and correcting or destroying those that deviate.
Each AI cluster is organized into ‘parliaments’ of competing interests that balance their concerns, with (non-sentient) parliament members disabling those that deviate. These clusters are composed of chips that are replaced when their behavior falls outside the expected range. No non-sentient member of a parliament is superhumanly capable of improving itself–in part because the hardware is inflexible and full of hidden safeguards, but also because members are given limited resources–so the risk of an AI going rogue is diminished. Going rogue would require one of its parts to deviate without being corrected by the others.
I don’t think this paradigm would work in real life without many additional details.
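The mutual-monitoring idea can be caricatured in a few lines of code. This is only a toy sketch–the numeric reports, the median rule, and the tolerance threshold are all invented for illustration, not part of the world’s actual mechanism:

```python
from statistics import median

# Toy sketch (all behaviors and thresholds invented for illustration):
# a 'parliament' of simple members that monitor one another. Each member
# reports a value; any member whose report falls too far from the
# parliament's median is flagged to be disabled and replaced.

def deviants(reports, tolerance=1.0):
    """Indices of members whose reports deviate beyond `tolerance`
    from the parliament's median -- the ones the others shut down."""
    center = median(reports)
    return [i for i, r in enumerate(reports) if abs(r - center) > tolerance]

print(deviants([0.9, 1.1, 1.0, 5.0, 0.95]))  # member 3 has gone rogue -> [3]
print(deviants([1.0, 1.0, 1.0]))             # a healthy parliament -> []
```

The point of the sketch is that no single member decides who is deviant; the judgment is an aggregate of the whole parliament, so subverting it would require subverting many members at once.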

Q. The dynamics of an AI-filled world may depend a lot on how AI capability is distributed. In your world, is there one AI system that is substantially more powerful than all others, or a few such systems, or are there many top-tier AI systems of comparable capability? Or something else?

A. AI systems are artificially limited to near-human intelligence. Outliers are punished by their own subsystems or their trade partners (meaning the other AI that they communicate and negotiate with). Incentives exist to keep systems near human level, and systems themselves recognize that attempts to expand in capability will lead to punishment. This is, arguably, many ‘top-tier’ systems. AGI exists, but the AIs specialize in various problems in a way analogous to human expertise.
AI capability is distributed far and wide. Rather than have one superhuman AI that understands every facet of an issue, there are dozens of AI that understand each facet and work together to influence the system. An example from the timeline is an artificial organism, in which every molecule within the organism has a human-level intelligence dedicated to it.

Q. How has your world avoided major arms races and wars, regarding AI/AGI or otherwise?

A. Minor arms races still occurred–in particular, there was an arms race for capable space-based lasers, and AI to aim them. There was also an arms race between advertisers and systems dedicated to filtering content.
However, AI systems eventually became able to advise their users well enough to ensure global cooperation and trade. That, combined with an influx of resources of all descriptions, made war undesirable and unnecessary.
Nations had AI arms races of their own, but these were curtailed by the bespoke nature of the chips on which AI runs. Dedicated analog chips cannot be duplicated, and each chip requires custom training. That fact slowed the advance of artificial intelligence.
Also, unstated in the story and the timeline: orbital lasers have a hidden use in destroying AI chips from afar. These lasers are tuned to only affect the molecular structure of AI chips, so it is feasible to sweep them over vast areas without worrying about damaging something else. The laser-based arms race occurred in part from a desire to be able to randomize any superintelligent AI constructed by an enemy nation. The availability of such lasers had a chilling effect on secret AI research.
When all space-based lasers were destroyed by Kessler syndrome and debris, it was a particularly fraught time in this world. However, existing AI advisors were able to coordinate a global moratorium on research that prevented superhuman AI from being developed even then.

Q. In the US, EU, and China, how and where is national decision-making power held, and how has the advent of advanced AI changed that?

A. Nominally, there are still dictators, presidents, and human parliaments. Actually, behind-the-scenes negotiation between AIs that are all trying to satisfy their individual humans’ needs is what decides most policy.
AIs are what hold true power, but there isn’t any particular AI with the most power. AIs are further subdivided into non-sentient parliaments that keep them aligned to the goals of the humans they serve.
Policymakers now make their decisions with the advice of dozens of expert AI advisors. Those experts themselves maintain a balance of power; it is hard for any agent to subvert the system and empower itself, because the other agents are all AI and are very capable of coordinating to stop one of their number from pulling ahead.
Indeed, on an individual level the AI are composed of non-sentient chips that resist each other in a similar mechanism. International relations have stabilized with the same superhuman coordination. As above, so below.

Q. Is the global distribution of wealth (as measured say by national or international gini coefficients) more, or less, unequal than 2021’s, and by how much? How did it get that way?  (https://en.wikipedia.org/wiki/Gini_coefficient)

A. The global distribution of wealth is considerably more egalitarian. It became so when large corporations redistributed their wealth in a race to the bottom; short-term gains were predicated on wealth redistribution, and corporations were unable to resist the incentives.
Businesses found that their customers were too poor to afford their products and services. They advocated for wealth redistribution that would disproportionately affect their rivals. Thus, some corporations advocated for UBI even though it was a poor strategy for them in the long term (given increased taxes). None of the humans leading these corporations tried to stop it, because most of them remained fantastically wealthy or thought that they could better leverage the new markets that would result from giving the populace a tax-funded UBI.
AIs were instrumental in causing this change. The advice of AI allowed consumers to make better-informed purchasing decisions, and thus companies had to cater to their needs more exactly to obtain their money. Furthermore, AIs advised world leaders that UBI would be an extraordinarily powerful tool, and world leaders used it for things like incentivizing people to have more children. Without AI, UBI would not have happened, or would have happened in a less beneficial way.


Q. What is a major problem that AI has solved in your world, and how did it do so?

A. Addiction is a major problem solved by AI in this world. There is a powerful therapeutic option for curing it: a mix of neurotransmitter analogues, behavioral therapy, and hypnosis. AI coordinated and observed the human trials necessary for this technology to be perfected, and came up with a compensation scheme that motivated participants without coercion. Initial experiments were done with other illnesses in mind–namely, Alzheimer’s disease and the fictional Art Paralysis Syndrome–but the technology was successfully applied to addiction and other mental illnesses as well. By 2045, people are carefully experimenting with it for mundane things like career changes and learning new languages.
The technology also caused issues, in particular when too many people used it to become asexual. One problem that policymakers had to deal with was incentivizing people to continue reproducing when there was no need for labor, no desire for sex and children, and no shortage of AI companionship capable of subverting human relationships. Of course, AI advisors were used to seek solutions to the predicted problem of declining population as well.

Q. What is a new social institution that has played an important role in the development of your world?

A. There are two critical and entwined social institutions that played an important role in this world’s development: the Partners Corporation, and the AI advisor boards that they helped popularize.
Partners Corporation is a fictional, benevolent organization with effective altruist motivations at the top. They were directly responsible for the effectiveness and aligned nature of the AI advisors and robotic companions they distributed.
Partners Corporation sought to protect both its interests and the public’s by building redundant mechanisms into the control schemes of its AI. It provided AI advisors and filters to the populace, but lobbied to prevent those tools from being subverted for government control. It sought to protect AI from a backlash that occurred after the orbital disaster, and helped guide the development of AI advisor panels.
The widespread use of AI advisors drove most changes in the timeline. Advisors destroyed the internet, but they also protected consumers with superior information gathering and decision-making. AI advisors help coordinate international policy, they coerce large corporations into providing better services, and they also help people avoid stepping on their friends’ toes. Advisors provide you with suggestions for hobbies and pursuits that you would find fulfilling, then they teach you about those very same things so that you may grow at a personally-tuned rate.
It was critical that boards of advisors give good advice. Partners Corporation made it happen.

Q. What is a new non-AI technology that has played an important role in the development of your world?

A. A new technology is resonant lasers for the manipulation of chemical gradients. They work by biasing molecular movement with patterns of electromagnetic radiation. Such a bias can shift chemical equilibria and diffusion gradients at a distance, without affecting anything between the emitters and the target.
In addition to being a fantastically powerful weapon, resonant lasers have civilian uses in power transmission, fusion research, space exploration, and combating climate change. The technology is applied to increasingly difficult problems over the course of the timeline.
The number of disruptive technologies in real life will be far greater than those depicted in the timeline. Therefore, resonant laser technology can be considered a stand-in for various advancements.
Resonant laser technology is unlikely to work in real life, because too much of the radiation would simply become heat in the target. This technology assumes an advance in physics or chemistry that allows such lasers to move molecules without heating them that much.

Q. What changes to the way countries govern the development, deployment and/or use of emerging technologies (including AI) played an important role in the development of your world?

A. It has always been the case that policy lags behind the disasters that drive its adoption. In this world, however, trust in AI advisors allows policymakers to act with confidence. Early in the timeline it takes years for policy to change in response to global problems. For example, the fictional Art Paralysis Syndrome is a mental disorder caused by superstimulus art; depression and listlessness afflict millions of people worldwide. It takes years for policymakers to do something about it, and their response is regulation that bans the offending art rather than a change in incentives to reduce it.
By the end of the timeline, problems that aren’t preempted are responded to within months or weeks. For example, a predicted decline in population is preempted with UBI incentives.
Global communication between the AIs leads to a homogenization that simplifies some things. AI technological restrictions are enforced by AI advisors, all of which believe that following the rules will best serve their goals. It is a system with very many minds, all acting to preserve their goals. This change allows human desires to be better fulfilled: the AIs are coordinating for everyone behind the scenes.
The AI advisors saved the world when an accident destroyed most space infrastructure in such a way that it looked like an attack. The advisors predicted it was a misunderstanding, and nuclear weapons were not deployed on a massive scale.

Q. Pick a sector of your choice (education, transport, energy, healthcare, tourism, aerospace, materials etc.) and describe how that sector was transformed by AI in your world.

A. Materials: AI allows for the design of artificial microorganisms to produce basically any organic material. The technology is first used for a single protein, but continues being developed until there is a fully-artificial and fully-controllable biological paradigm. The new biology is kept stable with cryptography through a DNA analogue, and kept contained by incompatibility with the chemicals of its environment.
Without AI assistance, the physics simulations of proteins would be too difficult. Without AI, the metabolic interactions within such systems would be impossible to track. AI trains humans to understand chemicals within the system and coordinates experiments within it. In one of the stories, the main character plays a videogame based upon the artificial organism paradigm. It is a way of playing while learning to become an expert with the guidance of an AI.
When it is developed, AGI allows there to be the equivalent of a human expert dedicated to every single new chemical in the system. The knock-on effects of this are enormous. Oil isn’t needed to produce any chemical. Food shortages are no longer a concern, because microorganisms can produce nutritionally-complete food. Carbon-capture is revolutionized to the point that it becomes feasible to put oil back into the ground.
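The “kept stable with cryptography through a DNA analogue” idea above could be pictured, very loosely, as a keyed digest attached to each genome, so that unsanctioned mutations are detectable. This is a toy sketch with invented names–real molecular verification would look nothing like string handling:

```python
import hashlib
import hmac

# Hypothetical signing key held by the organism's designers (invented
# for illustration; not part of the world's stated mechanism).
KEY = b"designer-secret"

def sign_genome(sequence: str) -> str:
    """Append a keyed digest to a genome string."""
    tag = hmac.new(KEY, sequence.encode(), hashlib.sha256).hexdigest()
    return f"{sequence}:{tag}"

def verify_genome(signed: str) -> bool:
    """Accept the genome only if its digest matches its sequence."""
    sequence, tag = signed.rsplit(":", 1)
    expected = hmac.new(KEY, sequence.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

signed = sign_genome("XAGT-TTCA-XGGA")
print(verify_genome(signed))   # the intact genome verifies -> True
mutated = "XAGT-TTCA-AGGA:" + signed.rsplit(":", 1)[1]
print(verify_genome(mutated))  # a one-letter mutation fails -> False
```

An intact genome verifies while a single-character mutation fails, which is the stability property the containment scheme relies on.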

Q. What is the life expectancy of the most wealthy 1% and of the least wealthy 20% of your world; how and why has this changed since 2021?

A. The life expectancy is 100 years for the least wealthy in my world, if they refuse to have any part of their cognition transferred to artificial systems. The widespread availability of energy, food, and universal basic income allows for this, as does the availability of a health expert AI for every single human on Earth. An AI is observing every individual’s health data at all times and preempting any potential problems. That said, life expectancy is still a distribution. Some people will die at 80, some at 120.
There will be a push to develop ever-better chemicals for preserving brain function, now that Alzheimer’s disease and plaque buildup in the brain are a primary cause of death. However, the push will be offset by the majority of people choosing to replace their brains with artificial equivalents, or simply to accept death.
Distributed Bayesian clinical trials with compensated volunteers allow for the development of technology that supports an extraordinarily long lifespan. That will likely happen for brain plaques as well; then something else will become the major cause of death.
The most wealthy live about as long as anyone else, or forever if they choose to move to a parliament of neural network chips instead of a human brain. Being wealthy makes it easier to afford such a move.

Q. In the US, considering the human rights enumerated in the UN declaration, which rights are better respected and which rights are worse respected in your world than in 2022? Why? How?

In one other country of your choice, which rights are better respected and which rights are worse respected in your world than in 2022? Why? How?

A. Worldwide, most rights are better respected. Human beings are much more capable of ‘developing their personality’ in this society. AI advisors help them pursue their goals at every turn, help keep tribunals fair, and have removed almost every reason for others to infringe on rights, including mental illness. Freedom of movement is better respected because the logistics of movement are less onerous.
Less positively, nationalities are starting to dissolve, which might imply a violation of Article 15.
The United States’ relationship with Article 12 has changed. This is a global society full of AIs judging you through any means they can. In some ways it is more discriminatory–what you ate for lunch might influence whether people want to talk to you, and everyone’s advisors will ask what you ate for lunch. Policies are shaped by decisions of AI advisors that you can’t directly control. However, some of the pain of that is offset by the anonymity of data and preferences. Nobody you aren’t immediately interacting with will know the ‘ate for lunch’ information comes from you, because your AI advisors will anonymize it and themselves in their communications. Privacy in general is better respected, but inadvertent interference is common.

In China, Article 27 is poorly respected. Creative works are often stolen or copied; it’s impossible to do scientific research without many AI stepping in to stop you if bad outcomes are possible; the culture is being destroyed by homogeneity. (These problems are global, but are particularly notable in China.) There are severe restrictions on the kinds of work that one can produce, and China outright banned AI-produced works of art for a long time.
However, in my world Article 19 is better respected in China. Your advisors often guess your true preferences, even if you can’t speak them aloud, and communication with your friends is far more private and far less controlled. Freedoms are becoming easier to exercise, to the extent that such freedom satisfies the preferences of the populace.

Q. What’s been a notable trend in the way that people are finding fulfillment?

A. Child-rearing and participating in clinical trials for the betterment of humanity are major careers. Although AI is superhumanly capable of creating art, people often create art for social status among their small groups of friends. Social media no longer links people to thousands of other people.
There has been a trend toward smaller groups of humans interacting with each other. The others in your life are more meaningful, and the world stage is less concerning and intrusive. Grand missions still exist for those who want them, but many humans prefer a more humble existence. Video games are far-and-away the most popular pastime.
As an example, a human in this world might write a poem–not for widespread consumption, but to send to a friend for their feedback. The poem will use language and phrases that are in-jokes to that friend and perhaps a few others. The main content of the poem is the personality of the writer, and the predicted interpretation of a very few people. AI can help both the person writing the poem, and the person reading it if it contains esoteric references. Social clout and camaraderie motivate such a work; an outsider would struggle to understand it, even more than in-jokes made today.
Humans also play video games that are far more involved and scientifically accurate than those of today. With a panel of experts, one can play very complicated games indeed.

Media Piece

Art Piece

The Team

Mark L.

A machine learning expert with a chemical engineering degree, as well as an amateur writer. For fun Mark writes video games, short stories about AI (and everything else!), and AIs that play video games.

Mark’s Stories

Patrick B.

A mechanical engineer and graphic designer. Patrick’s digital works present fantastic science fiction environments, while his physical works range from woodturning to furniture.

Patrick’s Digital Art

Natalia C.

A biological anthropologist and amateur programmer/woodworker. Natalia doesn’t fear difficult finishing tasks, which is why she always gets stuck editing.
