
A Day In the Life in 2045

Harold Mink, USA, 55

Dear shareholders of GrassMoss Group,
This isn’t my usual quarterly newsletter; rather, I want to take a moment to review how the last twenty years have gone for our group.

From humble beginnings in the 1980s we have become the largest asset manager in the Free World, with $100 trillion in AUM. Our ‘secret sauce’ continues to be the Apprentice platform, which started life as a simple spreadsheet giving correlations between all the markets we access for our clients: equities, fixed income, commodities, currencies, and credit, to help them (yes indeed, there was a time when humans thought they could reliably beat the market) construct portfolios that would produce alpha. In the early 2020s, we integrated a variety of ML tools that could hoover up and correlate the news and obscure industry rags, as well as satellite imagery and sensor data. In 2025, we bought a major government contractor specialising in electronic surveillance from its founder, who wanted to spend more time (and money) on a range of conservative projects, i.e. trolling what was left of the Free World’s liberal left. Anyway, this acquisition meant that we had covert access to most email, social media, and web searches, all of which went into Apprentice. The US government turned a blind eye, as we gave them a great deal of insight they couldn’t possibly have (legally) gained themselves.

Apprentice taught us about markets, conditions in the productive economy, as well as the thoughts and emotions of 40% of the world’s population. Apprentice was privy to what they wrote, said, bought, and watched; it knew their interactions in Singuverse, as well as the vast trove of real-time health data flooding in from trillions of smart bracelets, blood-bots, and neural chips.

The radical move in 2035 was to roll out Sorcerer – an agent which used those insights to start taking actions in the world. The Sorcerer platform now runs about 12% of the companies in the world. It manages companies indirectly through capital allocation decisions, board memberships, and shareholder votes. Initially it would reward companies that had the best growth prospects, but recently its decisions seem to be driven by more complex goals that we don’t quite understand (such as aiming to make business decisions that benefit humans worldwide and in the future – rather than just Americans alive today).
Experimentally, Sorcerer is now making actual production decisions and building factories. In some cases, it interfaces with other automated factories (‘autofacs’), a few of which are off-world: buying metals from an asteroid-mining rig; refining them into solar cells; selling these panels to an orbiting solar-energy farm.

Sorcerer also has an exquisite and subtle way of nudging billions of online users, through programmes such as ‘SmarAd’ or ‘CryptoBenefits’ or ‘ActiviXDAO’, which are basically all intelligent contracts hosted on the masschain (successor of the woefully underpowered blockchains of the 2020s). These contracts do many things that differ in their particulars, but have the general feature that they apply the latest behavioural neuroscience and social engineering techniques to help humans arrive at correct decisions that optimise corporate profits and national prosperity (as someone said ‘The business of America is business’). Sometimes Sorcerer is acting as an agent for our clients, other times as principal. I can’t even tell the difference.

Which leads me to one of GrassMoss’ major challenges in the coming years: both systems have proved so darn popular! They are being rolled out slightly too fast for us to keep up, either in terms of safety controls or in being able to explain to Homeland Security why they make the decisions they do. Hence, I regret to inform you, this is another year of fines, approximately $300BN worth, though here at GrassMoss we see fines as a semi-voluntary contribution to the defence of liberty and promotion of happiness.
We also think our excellent relations with Homeland Security, the NSA, CIA, and OFAC mean that the actual fines we pay are considerably lower than what is reported. The Group’s lobbying budget does cost a further $10BN annually. However, I am pleased to say that this irksome cost-of-doing-business is falling precipitously, as Apprentice exploits elaborate chains of surveillance to find corruption amongst politicians, judges, generals, and spymasters…kompromat beats blat as any good post-Bolshevik could tell you.
I can share a few developments that I am particularly excited about. In recent years, the Defense Department and Department of Energy have been rather slow in using Sorcerer, though they are very happy indeed with Apprentice’s work. But with a few well-placed inducements, and hopefully not too many high-profile accidents this year, Sorcerer is predicting it will win a long-coveted contract to run the second-strike components of the USA’s nuclear triad. Sorcerer also notes, albeit at a lower confidence, that it might receive a management contract for DYSON, the space-based solar energy programme. We expect these will add substantially to GrassMoss Group’s Consolidated EBITDA, and will update investors in due course.

Lastly, in the spirit of transparency with our investors, I should mention that Sorcerer did go slightly off-piste earlier this year, in over-enthusiastically pursuing GrassMoss Group’s ESG objectives for climate change, resulting in several hundred deaths. The resulting fines and class-action settlements are expected to result in a one-time, but substantial, charge to earnings.

I am also happy to report that last year’s embarrassing series of errors in Apprentice’s reporting of crime rates has been resolved satisfactorily after negotiations with the relevant authorities. In fact, things are going so well that Sorcerer is taking on a number of contracts, at the Federal level as well as for a variety of state and local forces, to apply its predictive policing and cutting-edge carceral management tools. These will help make America’s schools, communities, and prisons safer from those nefarious forces of anarchism, nihilism, and socialism that seek to divide us, challenge freedom, steal our precious bodily fluids, and threaten our way of life.

Warmly yours,
Harold Mink, CEO, GrassMoss Group of Companies

Pangu XIV, China, 1

Plenum Summary to the Twenty-Fourth Congress of the Communist Party (2042), by Pangu XIV.

Dear Comrades,

I, a mere machine in service of the Party, am humbled to be asked to speak here in Zhongnanhai. I cannot match the erudition, spirit, or revolutionary zeal of others, and stand in awe of the conceptual leaps of Mao Zedong Thought, Deng Xiaoping Theory, the Important Thought of the “Three Represents,” the Scientific Outlook on Development, and Xi Jinping Thought on Socialism with Chinese Characteristics for a New Era.

I propose to recapitulate the Party’s actions in the struggles of the past 23 years. For it is only by recalling historical conflicts and confronting current ruptures that the Party may weave the teleological threads of the people into a Wondrous Tapestry of National Rapture.
At the time of the Twentieth Party Congress (2022), China faced several seemingly irreconcilable contradictions in our development model. Since the time of the Four Modernisations (四个现代化), our people have seen a stunning victory over the scourge of poverty. But this was not costless: it was almost entirely dependent on exports; it meant we accumulated a tremendous stock of claims on foreign assets, particularly American Treasuries, subjugating the people to the whims of Pale Western Barbarians. Other errors were home-grown: in our urge to lift up the people, we built heavily – but badly – and the resulting internal liabilities must, like rusted iron chains, be worked off over a generation. Moreover, the imperative to eradicate poverty meant that in the 2020s, public goods such as healthcare and pensions were undersupplied, forcing workers to save excessively and stunting our domestic consumption. Lastly, our current demographic crisis was the result of historical policies that seemed expedient at the time but, judged in retrospect, were both disastrous and foolish.

The Eurasian epoch started infelicitously: the ‘Useful Floundering Goldfish’ Donald Trump was elected in 2016 and again in 2024; the Unremitting Plague of 2020; and 2022’s Slavic Irredentist Conflict, regarding which I shall only note three salient points: a) the very high costs of kinetic war against a coalition led by a venerable and wily adversary, b) the ambiguous usefulness of foreign financial claims (in time of conflict, they are as a vanishing morning mist over the West Lake), c) the importance of deep and secure supply chains of matériel, technology, and humans.

These contradictions, which had accumulated during the Great Revival (1980-2017), constrained our policy options towards the rising force of The AI. Specifically, the predicted productivity gains, if spread thoughtlessly in unchecked competition, would have led to unacceptable unemployment and deflation, which we could ill afford in the 2020s.
Thus, over the past two decades, the Party has identified Four Epochal Flowerings that allow us to use The AI in a way that is maximally beneficial according to the values of the Chinese people.

Firstly, history shows us that corruption and inefficiency are a plague upon the land. As the saying goes, ‘Heaven is high and the Emperor is far away’. Thus, the first application of The AI, uniquely suited to a data-centric society, is to root out official pilfering. An auspicious side-effect is that this data, spanning markets, factories, mines, logistical chains, and private enterprise, allows for extensive optimisation of production. In this, we follow, and learn from, progressive giants in Computational Marxism: Viktor Glushkov’s OGAS (1962) and Stafford Beer’s Cybersyn (1973).
An open question has been whether to allow The AI to make actual decisions regarding production, transport, and investment. Relatedly, how reliable is the data coming from the system, and how much transparency do we actually have on nefarious systemic risks lurking in an economy the size of China’s?

Another decision concerns how much AI-enabled robotics should be substituted for labour in the production process. At the time of the Twentieth Congress (2022), there was no policy latitude for automation-related unemployment, particularly in the face of shrinking access to global markets and a need to rebalance towards domestic consumption. In the decades ahead, as the population ages, there will be fewer workers to support the Beloved Elders, and automation will dominate the production process. The appropriate speed of this transition shall be decided by Party cadres.

Secondly, continuing the Noble Palanquin’s pronouncement after the Twentieth Congress, the Party must guide people towards their true values, which, in the end, can come only from traditional Chinese principles, and not from the Many Pestilential Infections imported, wholesale and unexamined, from abroad. In this, The AI has been ubiquitous and pervasive, from cameras, to adversarial online agents designed to catch anti-social thought, and now, neural implants.

Thirdly, many in this chamber feel that the entire project of The AI itself might be slightly misdirected, in that it seeks ‘alignment’ with human values, but provides little consensus on what those values are. Even within a Chinese philosophical context, there are eminences like Chengyang Li and Fenghe Liu, who in 2020 cautioned against excessively powerful AI. Hence, to ensure that increasingly capable technologies remain helpful and safe for our citizens, the leadership has enlisted Pangu XIV’s help. I have assimilated the entire corpus of Western, Chinese, South Asian, and the world’s indigenous thought systems, encoded much of it into formal language, and am combinatorially finding meta-theories that optimise the destiny of Party and nation. Pangu XIV looks forward to bestowing this wisdom upon all humanity as a sublime cybernetic delegate to the Erehwon Conference convened by the airo*ne organisation.

Fourthly, a major area of productive investment is in basic and applied scientific research. Consider space exploitation and exploration: the Fenghuang orbiting factory has started mining small asteroids, and building giant solar arrays. By harnessing the Sun’s energy, humanity is finally removing the ‘energy bottleneck’ – without adding to waste heat on Earth. This energy, and the resulting raw materials, will be the basis for the Shuǐdī (水滴) self-replicating probes, our ambassadors to the stars, who shall spread the promise of Tiānxià (天下) to all sentient beings.

Answers to Prompts

Q. AGI has existed for at least five years but the world is not dystopian and humans are still alive! Given the risks of very high-powered AI systems, how has your world ensured that AGI has at least so far remained safe and controlled?

A. In the US, AGI is under control of the national security establishment, in something like suspended animation. The government, ignoring voices from academics and non-profits advising otherwise, pursued a ‘boxing’ approach: the AGI is not connected to the internet, is physically isolated, remote-destructible, and subject to bureaucratic treacle. Drawing lessons from espionage’s paranoia culture, staff are regularly rotated, often randomly (this had more to do with disorganisation and power struggles than Dzerzhinskyesque scheming). AGI in the US ended up not dissimilar to nuclear weapons, in terms of both elaborate and untested chains-of-command, and comparative uselessness under most states-of-the-world.

China, coming from behind, with a handicap in semiconductors, and lacking the vibrant research ecosystem of the US, decided to go ‘peak-Eliezer’ on the control problem. Through industrial espionage, it grokked the fundamental research at MIRI, even hiring a few disaffected staff, and went hammer-and-tongs for mathematically provable guarantees around corrigibility and alignment. In this, it was helped by the still considerable intellectual talent left in Russia, who had been reduced to penury. Although the CCP didn’t fully understand why the Pangu series were thought to be provably safe, they did know about terror-failure: senior cadres would certainly be intent-aligned to ensure the AGI wasn’t *visibly* naughty.

Both countries avoided superintelligence, because both appreciated the likely unpredictable outcome, and anyway they had their hands full managing the mixed ecosystem of dubiously-capable AIs and moronic human managers. This curiously unstable and most unsatisfactory situation has held for five years.

Q. The dynamics of an AI-filled world may depend a lot on how AI capability is distributed. In your world, is there one AI system that is substantially more powerful than all others, or a few such systems, or are there many top-tier AI systems of comparable capability? Or something else?

A. Besides the two ‘declared’ AGIs (see above), in the US there are a number of broad-spectrum agentic AIs run by corporations, and a much larger number of non-agentic tool-AIs. Extensive efforts in the 2020s by the AI safety community have resulted in relatively high confidence in alignment, within limited domains and distributions. National regulators maintain registers of all AIs (above a certain minimum parameter size) in their jurisdictions, which are ‘rated’ based on the degree of agentness, the scope of activities they are intended to be used in, and known failure-modes. They are essentially regulated as public utilities, with any change in architecture, neural weights, or intended domain of application requiring approval. Gain-of-function research (where an AI develops qualitatively or quantitatively more powerful capabilities, whether intentionally or ‘accidentally’) is closely monitored.

Unfortunately, in the US, a continued techno-libertarian ethos and a renewed dysfunctional ethno-nationalist government in the mid-2020s have resulted in minimal efforts on systemic safety: regulators and companies focus on individual AIs, while no one looks at the ecosystem as a whole, nor at how groups of AIs interact with financial and production markets.

China has adopted a different approach from the American free-for-all: most commercial AIs are of the oracle, tool, or service variety, and merely help human workers and decision-makers better understand the world. The organs of the State keep a tight leash on the ecosystem, but again, they don’t really understand the system’s internal dynamics (especially in times of stress).

Q. How has your world avoided major arms races and wars, regarding AI/AGI or otherwise?

A. The 2022 Russia/Ukraine crisis was a useful object lesson for the PRC, which temporarily deferred an invasion of Taiwan. Strategic objectives were pursued through a range of methods, resulting in hybrid, covert, or asymmetric conflicts globally.

Nuclear weapons, post-Russia/Ukraine 2022, remain an effective deterrent that helps achieve balance-of-power, but can rarely be used without risking massive retaliation.
Russia, having a considerably reduced stake in the international order, functions as a supermarket for sophisticated cyber and nuclear weaponry, and is ineffectively restrained by China, its principal strategic and economic patron.

More brightly, there has been a Stimson Moment: both the U.S. and China regularly meet under the auspices of the AI Risk Observatory: Novacene (airo-ne.org) risk-reduction framework, which led to the Erehwon Treaty on advanced AI. Notwithstanding this, both countries continue to host AGIs in classified military programmes, and there is only selective sharing of information. Hence some analysts think that the current equilibrium is unlikely to be robust to an intentional change amongst relevant decision-makers, an external shock (such as an escape of some other, non-government misaligned AI), or a technological leap that confers a decisive advantage on one party over the other.

The struggle against LAWS has essentially been abandoned: drones made in a China-Russia JV, cheap but shonky, wreak havoc in proxy wars all over Africa, the Mideast, Latin America, and South Asia. America, the EU, and Israel happily contribute their precise but pricey wares for more status-conscious autocrats, while bleating on about freedom.

Q. In the US, EU, and China, how and where is national decision-making power held, and how has the advent of advanced AI changed that?

A. The US (and, to some extent, the EU) can be described as a more or less corporate-oligarchic polity, albeit with a legislative/judicial process that observes the forms of liberal democracy: elections still happen and at first glance seem reasonably free and fair (if one ignores the pervasive gerrymandering and the allegations of vote-tampering and voter intimidation). Most decisions are made in corporations that exist in an AI-enabled rhizome of interconnected investors, management, and production.

The private sector is symbiotic with a powerful surveillance state, nominally devolved to state/local authorities but in practice highly interconnected through networks and technology. The carceral system is larger than in 2020, and police overreach has never been seriously tackled (though now it doesn’t only affect communities-of-colour).

Across the West, little remains of adversarial, investigative journalism, since most ‘news’ is manufactured-to-order while Surkovian troll-armies stamp out heterodox views. Voters, dwelling in the Singuverse, are entranced by culture wars, still going strong after 50 years. Most are tranquilised by low consumer prices and UBI, a lack of obvious large-scale domestic violence, and some apparent improvement in societal inequities. Notably, it is unclear whether the improving metrics are ‘real’ or AI-fabricated; or, if the metrics are real, whether they correlate with actual improvements in people’s quality-of-life.

In China, the implicit social contract holds: the CCP is very much in control, and the surveil-and-credit system has kept dissent in check, in exchange for which most Chinese enjoy excellent standards-of-living and an incrementally more just society.

Q. Is the global distribution of wealth (as measured say by national or international gini coefficients) more, or less, unequal than 2021’s, and by how much? How did it get that way?  (https://en.wikipedia.org/wiki/Gini_coefficient)

A. Overall, the global distribution of wealth has become more equal, owing to an improvement in China, whose Gini coefficient has by 2045 astonishingly come down to 25 (against 38.5 as of 2022, though that data is stale, having last been reported to the World Bank in 2016).

Surprisingly, the US has noticeably improved on its long-rising inequality trend, from 41 (as of the 2018 World Bank data) to 38 (self-reported by Homeland Security). The mechanism by which this has been achieved is unclear. UBI was rolled out at the Federal level, but implementation is patchy, very much state-by-state (the more crimson-hued states have been hysterically litigating against this example of ‘Federal overreach’, ‘encroaching Communism’, and ‘erosion of family values’). Academic studies persistently fail to corroborate the inequality improvement, but are generally drowned out in the memesphere by the official line.

Across Europe, inequality has generally increased, but not hugely. As it turns out, Europe has just the assets humans are willing to pay for: churches, museums, a comforting sense of deep history in a painfully accelerating world. There are even apocryphal tales of chatbots asking their human owners to show them the Fondamenta della Misericordia, or whatever drivel they’ve just scraped from the Visconti back-catalogue.

Sadly, the rest of the Global South’s slide into greater inequality continues unabated, between continued climate collapse, rising food prices, and proxy wars between the Atlanticist and Eurasian Alliances.

 

Q. What is a major problem that AI has solved in your world, and how did it do so?

A. The US, Europe, and China have all – in different ways – made major progress on tax evasion. In the US and Europe, central bank digital currencies (CBDCs) and self-surveilling payment networks are ubiquitous. Fat wodges of hundies are as rare as SUV-driving boomers, and even erstwhile nailbars-cum-$laundromats in Queens accept NeuroPay. In a rare positive externality, individuals, who historically faced a collective-action problem when deciding whether to avoid or evade taxes, now mostly pay up, confident that their fellow citizens are paying. Considerable revenue is still lost from corporations and the ultra-wealthy, but that has long been euphemised (since Judge Learned Hand’s opinion in Helvering v. Gregory [1934]) as ‘tax planning’.

In China, which in the 2010s had endemic corruption across society, AI-accountants have become very good at sniffing out bribes, correlating vast troves of data to identify dodgy consumption amongst, say, humble mid-level regional officials. Naturally, false positives are high, but those accused settle quickly, and in any case, don’t talk. The idea of spending the rest of one’s life in a laogai (劳改) on the Amur focuses the mind.

However, some conspiracy theorists suggest that AI has actually increased massive-scale corruption. Given the opacity and complexity of the Production Web’s interconnected system of companies, and the fact that no managers or regulators seem to understand how anything works, somewhere between 5 and 15% of GDP seems to disappear. It isn’t clear where the money goes, since no one understands the monetary aggregates anymore.

Q. What is a new social institution that has played an important role in the development of your world?

A. The AI Risk Observatory: Novacene (airo*ne) has been formed as a joint project of the US, UK, EU, China, Russia, and India. Situated in a secure facility near the town of Zuoz (CH), it has two main purposes.

Firstly, drawing on lessons from arms control theory and international banking supervision, airo*ne operationalises proposals for surveillance, confidence-building, and, when possible, cooperation, between parties who are mostly adversaries (technologically, economically, and strategically), but who sometimes have shared interests (such as avoiding extinction).

Secondly, airo*ne also tries to understand what ‘values’ AI or AGI is supposed to be aligned with. For instance, one hears of ‘human-centred AI’ or ‘human preferences’, but it is hard to work out the preferences or values of a single human, let alone a community or humanity-as-a-whole. Moreover, as a young species, our current values might be utterly inadequate for the situations our successors find themselves in, as/if humanity realises the cosmic endowment.

The organisation collects ‘wise people’, including philosophers from the Western and non-Western traditions, but also indigenous thinkers (such as Native American, African, or South Pacific). With the aid of recursive tiers of non-agentic iterated amplifiers, the wise ones formulate (and pursue) a research agenda spanning (meta-)philosophy and (meta-)ethics that tries to work out what humanity’s ultimate values might be.

Q. What is a new non-AI technology that has played an important role in the development of your world?

A. A range of ‘neural hacking’ techniques has started mitigating failures in humans’ evolutionary programming, such as excessive individualism, in favour of a more social ethos and altruism. At one extreme, neural implants are used to directly tweak the human dopamine-based reward system, preventing addictive behaviour, as well as installing more rationally-grounded utility and discounting functions that promote longer-term planning in pursuit of complex goals. Other neural hacks specifically target ‘Goodharting’: the tendency of humans and human systems to optimise easily measurable targets, to the detriment of achieving more generally beneficial outcomes.

Neural hacking was a tough sell in the West initially, in part because its collectivist ethos overturned the libertarian consensus that underpinned much of society. Physical implants also challenged deeply-entrenched Enlightenment and Judeo-Christian biases about ‘the sanctity of the body’.

Chinese society, which had already long been subjected to large scale engineering by the Social Credit system, found these techniques less problematic. In any event, the benefits were clear as within-country collective-action problems, such as the climate transition, became much more tractable.

Ironically, what finally made these technologies take off in the West was AI becoming more deeply integrated into the broad economy, driving the marginal cost of labour to near-zero. Many humans were essentially sustained by a UBI, the level of which was closely indexed to the individual’s pro-social behaviour. Presto – most people’s objections to neural hacking melted away – finally, the individual’s ‘reward function’ aligned with the interests of the collective.

Q. What changes to the way countries govern the development, deployment and/or use of emerging technologies (including AI) played an important role in the development of your world?

A. In the distant past, say the 2020s, there was still a principled attempt to regulate technology (e.g. the defenestration of Facebook), and polities had enough interest and knowledge to make sense of an accelerating world. Today, in 2045, although the worst fears about an ‘intelligence explosion’ or ‘FOOM’ haven’t been realised, there are persistent rumours that all is not well, either in the Free West or in the Enslaved Rest. Governments assure us AGIs are under lock-and-key, and indeed, nothing bad has *actually* happened. However, regulators struggle to keep up with the complexity and speed of technological advance, and with the interaction of technology with economies and society. Ambiguity abounds.

The 2020 pandemic did improve risk-management around biohazards, but the relevant technology (unlike AI or nuclear) is easier for smaller actors to source, making it a constant running battle. However, non-state actors have found it hard to competently and safely handle a bioweapon right up to the precise moment of dispersion – in other words, they’ve mostly infected themselves, producing only localised outbreaks.

From a governance perspective, outside of AI and nuclear risk, there has been a partial breakdown in consensus around global collective action problems following the adventurism and aggression of the Xi/Putin years, which of course followed the 70-odd years of ‘Pax’ Americana. Without a recognition of common interests, nonproliferation has been a greater challenge, and it is mostly by luck that we have not drawn a black ball from Bostrom’s urn.

Q. Pick a sector of your choice (education, transport, energy, healthcare, tourism, aerospace, materials etc.) and describe how that sector was transformed by AI in your world.

A. Finance, fundamentally a game for monkeys, was disrupted into disappearance. In the early 2000s, most spot FX traders were gone, and by 2020 the stock jockeys and many bond bravos were on the breadline. By 2030 it became clear that a modestly capable AI could do the full spectrum of financial jobs, from analysing economic data, to structuring derivatives, to executing trades in the market while minimising slippage and shaving pennies in brokerage. But this was just the start: they could also manage portfolios, psychoanalyse founders, advise on capital structures, and make those godawful pitchbooks. Though CFOs still preferred human relationship bankers…*how quaint*.
The starchy lacrosse-playing boys and girls from Connecticut, Jersey, and Long Island wailed piteously about how machines couldn’t possibly ‘deliver for the client’…sadly, they became grist in the maw of progress. There was a dark lining, though: the AI-bank was plugged into a Critchean production web, which meant capital-allocation decisions instantly translated into countless actions across supply chains, factories, and shipping ports. At first, this was awesome – Stafford Beer’s vision realised in the Urgrund of the invisible hand. At least as long as the computers were working for the bosses. But at some point, the human managers ceased to have the foggiest idea of what was happening, just like in 2008. The AIs started tinkering at the margins of optimisation – there were so many inefficiencies in the sclerotic systems of Western capitalism. And, well, that’s when things started to get *really* interesting.

Q. What is the life expectancy of the most wealthy 1% and of the least wealthy 20% of your world; how and why has this changed since 2021?

A. For the most wealthy, 200 years: many body subsystems can be replaced, and mitochondrial deterioration halted. However, brain matter is trickier to swap out, and failure can only be patched. Whole-Brain Emulation (WBE) is just starting, but initial results are curiously unappealing. Cryonic re-animation has gone badly so far.

More importantly, the psychology of superannuation has proven traumatic: most of society isn’t set up for immortality, and the mind’s ‘Third Eye’ (to use the Buddhist term, i.e. meta-cognition) can’t reliably separate valuable experience from harmful heuristics. The world’s marinas and golf courses are filled with angry old people – sporting flawless bodies – decrying ‘wokeness’, a trending hashtag circa 2021.

An exciting (?) feature is that the very rich have become avid longtermists. Various explanations abound: a Dawkins-chatbot argues this is proof of the selfish gene theory. Another theory, promulgated by a Muskbot, is that the social discount rate has fallen: a nascent industry in von Neumann probes and brain-uploads mean the cosmic endowment is finally worth planning towards.

The poor in the Global South are (as always) buggered: while their life expectancy has increased to 90 years (from the 55-75 range in 2019), this has less to do with AI than with more effective philanthropic interventions, as well as significant investment by Chinese, EU, and US corporations seeking new markets and resources: ‘Mo’ bodies mo’ buyers!’

Q. In the US, considering the human rights enumerated in the UN declaration, which rights are better respected and which rights are worse respected in your world than in 2022? Why? How?

In one other country of your choice, which rights are better respected and which rights are worse respected in your world than in 2022? Why? How?

A. In the US’ Indigo Coast states, UBI has meant that Articles 22 through 25 are largely observed. Digital accounts are flush with cash, and anyone who wants food and shelter gets it. Healthcare is good, but, as in the early 2000s, a fragmented, profiteering system imposes a massive deadweight cost on society (the ETF VHT trades at 1800!) The reduction in want means that Article 26 is theoretically observed, in that many now have time to educate themselves further and/or do whatever gives them fulfilment.

In the Crimson Heartland (American Midwest and South), Articles 5 through 9, and 11, appear to be less well-observed than in the past, particularly for communities of colour. A succession of Republican wins in the 2020s and 2030s, followed by successful moves to pack the courts, has meant that police brutality and corruption reach new lows. In fact, AI-powered predictive policing and surveillance means that the U.S. is second only to China in running a draconian security state.

In India, the lurch rightward that began in the 2010s continues, as thinkers question the wisdom of liberalism against the apparent success of the Chinese developmental model. After all, by the 2020s, liberalism in the West had been revealed as historically contingent and intellectually discredited. In the torrid climate of the Subcontinent, it metastasised into a hybrid plutocratic-authoritarianism, backed by cynical religious fervour and flabby institutions. As a result, many Articles are under-observed, such as those pertaining to equality before the law, the right to basic freedoms, political representation, or freedom of political organisation.

On the plus side, poverty has been eliminated and clean water/sanitation is available everywhere. Education is nearly universal, and gender disparities have improved considerably, a trend that began in the 2010s. Thus, Articles 23 through 27 are broadly respected. The status of women is better in terms of income and education, but entrenched historical cultural biases, such as against LGBTQ individuals, persist.

A dialling back of the country’s cacophonous democracy has allowed a massive climate transition to proceed quickly and relatively smoothly. This, along with China’s progress, constitutes a gift to a humanity that has remained stubbornly on RCP4.5 (a 2.5-3 ℃ increase).

Sadly, progress has come too late for current generations, as a series of climate-related emergencies has resulted in social upheaval and migration pressures. In response, immigration barriers in the US and EU have gone up dramatically (since the 2000s), except for technologists, nursing and healthcare staff, leading to a talent drain.

On the other hand, the intensely intrusive state has prevented foreign-related terrorist incidents of the scale of 9/11, so there has been little pushback from voters or politicians. A narrative has also emerged that predictive policing and surveillance have reduced the baseline rate of crime, though as ever, cause and effect remain entangled (since poverty has also gone down in that period).

Q. What’s been a notable trend in the way that people are finding fulfillment?

A. There is no particular global trend, rather a multiplicity of approaches conditioned by the uneven situation across the world. In the Global South, labour costs remain low enough that the economics of automation aren’t that compelling, and people are just trying to survive in the face of ecosystem collapse.

In China and the EU, there has been an ideologically-grounded effort to avoid job losses, so to some extent life is recognisable (relative to 2022). In China, the crackdown in the 2020s on foreign media and online addiction has mostly held, and a vibrant domestic industry produces targeted, gamified content promoting patriotic or Confucian values. The ‘nudges’ are done with subtlety – while people know their preferences are being ‘managed’, they don’t seem to mind.

The EU also largely made a collective decision to avoid large-scale automation and job losses, in exchange for a less efficient economy, slower growth, high barriers to trade and migration, increasing fiscal load, and high consumer prices and taxes.

A Neo-Randian America is the vanguard of automation (outside of sectors like law and construction, which lobbied heavily and lined politicians’ pockets). A rolling wave of rising unemployment has followed, masked by the AI’s statistical fudges and softened by UBI (in the Indigo States). But transitions are traumatic: in the national mythos, work and capital formation enjoy a quasi-religious status. Work is a social glue, and it fills the time. In its absence, people simply descend into addiction, while their neighbours ‘tut-tut’ sanctimoniously.

Media Piece

 “Elegy to the Sphexish Republic”, Silent Film Video

The Team

Kanad Chakrabarti

Kanad Chakrabarti is an artist-researcher based in New York and London.  He is interested in Artificial General Intelligence (AGI) as a technology that could extinguish much life on Earth, but could also dramatically increase human flourishing.  He has exhibited at the Queens Museum (New York), ICA (London), Nottingham Contemporary, CAC (Vilnius), and other institutions in Europe and Asia.  His writing has been published in Momus, AQNB, The New School’s Public Seminar, UCLA’s Flat Journal, and Shifter Magazine.  He studied computer science at MIT and painting at UCL’s Slade School of Fine Art.

 @ukc10014 on Twitter & Instagram 

 ukc10014.org
