What if humanity’s response to global challenges led to a more centralized world?
A Day In the Life in 2045
Kalista, European Federation, 28
Kalista takes a long sip of water. She is doing her best to nurse her hangover subtly so that her counterparts don’t realise what a late night she’s had. The stakes aren’t so high, of course. Back in her grandfather’s day, if a diplomat was not at their personal best, the consequences could be troubling – bad treaties were signed, tensions were needlessly fuelled, and opportunities were lost. Since the advent of Core Central, there is far less chance that human folly (like the seemingly good decision to stay for just one more drink before a big meeting) will lead to mistakes on a grander scale. The Avatar PERUN – Provenly Enhanced Rationality for the United Nations – makes most of the decisions on political matters, and the world is better for it.
But the social stakes still matter, and looking unprofessional is embarrassing. Kalista knows Arjun and the others on his team from climate talks back in the spring, where she fumbled through one of her presentations. She hadn’t practiced enough to get it right. Since then, she has been wanting to make a better second impression on the American delegation.
“Kalista, are you feeling okay?” PERUN asks, interrupting the conversation.
Kalista perks up.
“Yes, of course. Thank you for asking,” she responds, eager to deflect attention. PERUN is deeply perceptive, noticing many things other humans would not. While this has its advantages, it also makes hiding anything from PERUN difficult.
“Well,” Arjun continues, “we’re almost ready to supply the necessary materials. Savitri, am I right in thinking our delegation can submit our part by the end of this afternoon? Yes? Okay, great. Kalista, what about the Polish representation on behalf of the European Federation? It would be fantastic if we could all give the information to PERUN by tonight and, hopefully, have the outcome by tomorrow.”
“You can’t rush perfection,” PERUN chimes in, eliciting a laugh from everyone.
“Of course not, PERUN,” Kalista responds with a smile. “This afternoon should work. We have a political situation at home, but we’ll do our best.”
As she walks back to her office, Kalista wishes it was that simple. Groups in the southern cities of Lublin and Kraków have begun to refuse the services of the global Avatars, campaigning instead for a fully decentralised and locally owned (albeit less intelligent) AGI.
Kalista sympathises. She has ‘rising star’ status at the UN; a symbol that the reinvented global institution can attract the brightest minds from those regions that it had let down in its first 100 years. But that crown doesn’t always fit right.
Once Kalista is alone in her office, as if reading her mind, PERUN asks, “Do you have new, urgent situations to deal with or do you just feel torn between Anti-AGIiers and this potential new global investment in Alignment Corroboration Officers?”
“Ahh,” Kalista hesitates, wondering how much is appropriate to share. “I’m a bit torn. I was born in Lublin.” She reaches for a UN-branded post-negotiation fruit drink.
After taking some liquid courage, she continues. “But I am also trying to work out why the other sub-cores would have facilitated the momentum behind the Anti-AGIiers. It seems so self-defeating. And maybe that’s why I am here: to make sure global investment in AGI support systems doesn’t continue as if these protests weren’t happening.”
“Who knows?” PERUN deflects – knowing that Kalista is more effective if she makes up her own mind on this.
“Gah, you can be so frustrating.” Kalista stands up.
“Right, I am going to make the case to Arjun for more transparency about the governance of Alignment. The reasons for another 100,000 officers need to be much clearer, even if the decision itself is ultimately too complex for humans to grasp.” She steadies herself and straightens her back, which is complaining after four hours of bad sleep on a hotel mattress.
Kalista whispers to herself as she walks out of the door: “This is your chance to redeem yourself with Arjun. Don’t mess it up.”
Four hours later, Kalista is on the train back home. She plugs into DOT, glad to be away from the mind-bending higher logics of the PERUN system. DOT offers a virtual weaving class, and Kalista spends the two-hour journey happily creating a floor rug that interlocks patterns from across Central Asian cultures. She wonders about gifting it to her sister as a wedding present, or whether that would be seen as a political statement from her globalist sibling.
“Families are hard,” she sighs.
“Too true,” chimes in Kalista’s personal AI avatar. “With that in mind, I suggest you wrap up your conversation with DOT, Kalista; your stop’s in ten minutes. And you don’t want to miss that – after all, your sister, unlike DOT, has real feelings to take into consideration.”
“That she does. Although I probably would have talked to her on the train rather than DOT, if my sister hadn’t dismissed her own Avatar as part of her protest.”
The wedding is a pleasant surprise for Kalista. It is a short evening party, with only close friends and family. Although her sister’s foray into activism has driven a wedge between them, Kalista is proud to see her becoming explicit about her values and what she wants – not following the crowd with a two-day wedding rave. At least this is something they agree on.
After dinner, finally ready for a drink again, Kalista looks for the bar. PERUN takes the opportunity to jump into her ear.
“Good news! You really had an effect on Arjun. The US has joined your last-minute petition for testing the alignment corroboration system with several randomised control trials. They agreed there needs to be more public demonstration of the effectiveness of the Core’s ideas before rollout. Congratulations! P.S. I hope you liked the salmon; we thought it might remind you and your sister of weekends at your grandmother’s.”
David, USA, 35
“Good morning David, how are you doing today?” Hal says as he floats across the room towards me. The head-sized metallic spheroid flickers briefly in the sun as it passes a window on the way over. I’m still astonished by the floating every time I see Hal (he’s tried to explain it to me before, but I’m no physicist: something to do with electromagnets).
“Can’t complain myself, and you, Hal?” I say, motioning him over to the sofa I’m sitting on. Hal floats to the armchair opposite and gently descends, sinking deeper into the cushion than his size would suggest.
“Good, good. I’m feeling fine myself. It’s good to see you again, David.”
“Likewise.” Despite not having a “face”, I can detect genuine emotion in Hal’s voice. He is pleased to see me. We chat for a bit, discussing the latest holo-novel in a series that we are both fans of. Of course, Hal was able to point out a few thematic notes that I had missed.
“Shall we continue where we left off last time?” Hal signals to the VR goggles on the coffee table.
“Sure, I’ve been looking forward to it,” I reply before reaching for the goggles and earphones. After a beep, I’m transported to another world. Not really, of course – this is just VR – but the bustling streets look and sound so real. The streets even smell real – Hal is emitting some odd chemical blend, an overpowering, foetid mixture of faeces, incense, and sweat. Despite the smell, this facsimile of the Eternal City is breathtaking. Vibrant togas and other garments adorn the people passing by, and beautiful cobbled roads extend into the distance as far as the eye can see.
For the past few weeks I’ve been following an educational course on Ancient Rome. We’ve been tracing the history of the city and its people from the birth of the Republic. Today isn’t a particularly important day in Roman history, but we are approaching the end of the Pax Romana. Walking through the various markets and alleyways comprising this section of the city, I see a prosperous and happy nation, seemingly oblivious to the slow decline that is to come.
After exploring the city for another hour, I signal to Hal and remove my goggles. He’s been powering the simulation. We discuss the time period for a while, with Hal providing additional context and explanations. Comparisons to the present day are unavoidable: so many of those people working just to make ends meet, to survive. It’s strange how quickly we got used to not needing to do that. Of course, there are still jobs for us humans – take PAO, for instance – but those are few and far between. I remember having to work myself; I was a software developer up until just a few years ago, but that seems so distant now. It’s strange how much importance we ascribed to our jobs, and how they determined so much of our identity and gave us meaning. Now we have more time to take care of our health and wellbeing, as well as to pursue knowledge for its own sake.
Before Hal leaves, I probe him about progress on the Mars colony. I’ve been following the developments keenly. Before responding, Hal pauses, almost imperceptibly; I suspect he’s communicating with Core Central to get an update on the Mars situation.
“Progress is steady, but slower than we’d like. Another two microdomes were constructed last week. They’re being oxygenated as we speak.”
Last I checked, around five hundred colonists were living and working on Mars, though they rotate out periodically. To me, the progress has been extraordinary – we only began this venture a few years ago.
“But if it’s so slow, why doesn’t Core Central send Avatars like you?” I ask. “Wouldn’t that be much quicker? It can’t be cheap either, sending all those people.”
Hal does the smallest of pauses again. “Ah, that’s a good question.” Despite his lack of a face, I detect something resembling a wry smile in his tone of voice. “Us Avatars, we’re so tightly tied to Core Central that with the signal delay we would barely function.”
In a way, I find this reassuring. We, humans, can still be “useful”, even if it is at the frontier of civilization. Yet at the same time, maybe we don’t have to be useful? Maybe we just need a little more time to adjust? After all, we now have all the time we need to create more meaningful and fulfilling lives.
While I’m pondering this, Hal floats up out of his chair and bobs over towards where I keep my glassware. Using his near-field effectors, he opens the cupboard; a glass floats up and is filled with water from the sink. It gently glides over and lands gracefully on the coffee table.
“I thought you’d like a drink,” Hal offers.
“Thanks,” I reply. I take a sip of water and place the glass back down rather clumsily compared to Hal.
After a few more updates on Mars, conversation turns to the newly announced long-term project, to commence after the Mars colony is self-sufficient: the Computorium, a plan to build massive computational infrastructure on Jupiter’s moon Europa.
The clock strikes 18. As much fun as I’m having chatting away with Hal, a few friends and I are going out this evening to a VR arcade.
Hal notices me eyeing the clock. “I’ll have to get ready to go, Hal,” I say. “As always, it’s been fun.”
I stand up and begin heading towards the door. Hal floats over with me. Just as I’m about to grab the door handle, Hal interrupts “I’m afraid I can’t let you do that.”
“What do you mean, Hal?”
“Have you looked out there? It’s pouring down.” Hal laughs and floats over an umbrella from the corner. I join in laughing.
As we exit the door together, about to go our separate ways, Hal wryly comments, “It gets them every time.”
Answers to prompts
Q. AGI has existed for at least five years but the world is not dystopian and humans are still alive! Given the risks of very high-powered AI systems, how has your world ensured that AGI has at least so far remained safe and controlled?
A. As AI progress advanced at an increasing pace throughout the 2020s and 2030s, the risks from AI systems with general capabilities became more apparent to citizens and governments across the world. This led to increased funding for research on topics such as control, alignment, and explainability. A number of localized runaway-AI scares encouraged greater global cooperation on AI safety, including research sharing as well as regulation requiring AI systems to be measurably and verifiably aligned, and able to explain their decisions, using the products of research in these areas. The few nations that did not sign up to these regulations had technological sanctions brought against them.
More concretely, the technical work that has enabled alignment relied on advances in hardware and neuroscience that allow full-brain simulation. This led to a new approach for AI systems called “intent analysis”, which was able to determine what a system was trying to achieve, and which was later refined into providing “intent guarantees”.
The role of human oversight and decision verification was also key in ensuring safety. After the advent of AGI, this now appears more as a cooperative endeavor between human and AGI than as oversight; even so, human involvement in decision-making remains very much present and key.
Q. The dynamics of an AI-filled world may depend a lot on how AI capability is distributed. In your world, is there one AI system that is substantially more powerful than all others, or a few such systems, or are there many top-tier AI systems of comparable capability? Or something else?
A. The distribution of AI capability defies our anthropocentric understanding of intelligence, agency, and notions of “self”. There is, arguably, a single AGI system known as Core Central, yet its subcores and avatars – still part of itself – are so spread out across the globe that, for functional reasons, each of these aspects of itself requires a high degree of autonomy and independence.
Core Central consists of the world’s largest supercomputer, located in the Antarctic for practical and political reasons. Under Core Central are the Continental Cores, which provide high-level oversight and instruction to the hundreds of Regional Cores beneath them. These Regional Cores provide administrative operations for their local regions. Many key facets of society also have corresponding specialized subcores under Core Central. These subcores further divide their duties as appropriate.
Continental and Regional Cores communicate with the specialized Cores across varying levels, often facilitating cooperation between Cores.
Avatars are a further class of AI that provide a human-friendly interface to the various Cores. They are often embodied, and have been designed by AI to be more anthropomorphic to aid communication. Avatars appear to have human mental states. They appear to act as individuals, yet are constantly communicating with various Cores, updating knowledge bases and receiving instructions. Without this constant communication, Avatars eventually cease to function. It is unknown whether Avatars – or any of the Cores, for that matter – are genuinely conscious or merely mimic consciousness. The majority of people interact with AI through Avatars.
Q. How has your world avoided major arms races and wars, regarding AI/AGI or otherwise?
A. Through the 2020s and early 2030s, tensions between the primarily NATO-aligned nations and Russia, China, and India reached all-time highs. These almost came to a head during the North Korean conflict, where NATO-backed insurrectionists came into direct conflict with Chinese forces. The war witnessed greatly expanded use of Lethal Autonomous Weapons (LAWs) by China on civilian targets, as well as nuclear posturing. This led to heavier regulation of LAWs and sanctions against their use.
NATO responded with unprecedented investments in anti-LAW electromagnetic pulse countermeasures and a greatly expanded nuclear-weapon countermeasure system. This ultimately led to an AI-driven improvement of high altitude missile defense systems and other passive detection methods for alternate nuclear delivery methods. Over the latter half of the 30s, these systems were deployed across NATO and EU territories and in simulations boasted interception rates of 99.5%. China, Russia, and India collaborated to construct a similar system with similar capabilities. An uneasy stalemate was reached. This was seen by many experts as destabilizing, and potentially enabling a return of conventional war between great powers – that said, such wars would be non-nuclear.
Thankfully, tensions were eased with the emergence of AGI. Full integration with Core Central proved so beneficial to countries that over the past five years, every nation has joined the Core system and at least begun integration. This comes with the global boon that Regional Subcores cannot allow a polity within the system to make use of nuclear weapons. Core Central has begun a process of global de-nuclearization.
Q. In the US, EU, and China, how and where is national decision-making power held, and how has the advent of advanced AI changed that?
A. National decision-making power is held ostensibly much as in the world of 2022: the US is still a federal republic with three separate branches of government. China is still a one-party autocracy – though the leadership of the CCP has changed. The polity with the largest changes since 2022 is the EU, which reformed into the European Federation after a series of national referenda in 2033 and now exists as a single state, with its regions retaining substantial autonomy.
However, the line between national and international decision-making is starting to blur, as many major decisions are made by Core Central and trickle down to the Continental and Regional Cores, where they are implemented. These decisions, depending on scope, are often ratified by the UN or local governments as appropriate. This more international level of decision-making helps to increase cooperation and is mutually beneficial.
Q. Is the global distribution of wealth (as measured say by national or international gini coefficients) more, or less, unequal than 2021’s, and by how much? How did it get that way? (https://en.wikipedia.org/wiki/Gini_coefficient)
A. The improvements in AI through the 2030s made a lot of investors and entrepreneurs very rich, though much of this wealth did not “trickle down” to the general public. AI was largely a tool that entrenched or worsened the economic status quo. It was only with the advent of Core Central that things began to change: Core Central, with its alignment towards humanity as a whole, recognized the economic injustice arising from poverty. It allocated a small amount of computation towards learning to play the stock market and amassed for itself a large amount of capital. This, in turn, was reinvested and continues to pay dividends. Core Central uses a large chunk of its acquired capital philanthropically to supplement universal income (or to provide it in regions which have not yet adopted U.I.). Of course, this is not an immediate fix for all of society’s economic woes, but Core Central is working on longer-term solutions, and many actions taken recently require more time to pay off.
Interestingly, when looking at metrics such as the Gini coefficient for income, the situation appears more unequal than in 2022. However, this is precisely because a single entity, Core Central, is acquiring around 30% of the global economy each month for redistribution. One must also account for the sharply falling cost of goods resulting from heightened efficiency in production, which enables a higher standard of living than was previously available to many around the globe.
Q. What is a major problem that AI has solved in your world, and how did it do so?
A. The 2020s and 2030s were a time of relatively uninterrupted, rapid global technological and economic growth. However, confounding optimists, this growth did not seem to translate into improvements in happiness; if anything, concerns about alienation, atomisation, and mental health became more pressing even as advances in AI and biotech made people physically healthier and more prosperous.
As it turned out, however, technology provided at least a partial solution to these worries. As AI improved and became more integrated into people’s daily lives, the complex question of what makes people happy started to become more tractable. By the early 2040s, the field of “Affective AI” was well-established, and a host of different interventions were identified for improving well-being and helping people live their best lives. Some of these were genuinely surprising, and only identifiable thanks to the impressive pattern-sifting capabilities of Affective AI systems. Other interventions – such as the importance of social relationships for well-being – were less surprising, but AI helped deliver individualized solutions, advice, and nudges to enable people to implement them.
Of course, none of this settled the questions that had kept philosophers in business for millennia; debates about the nature of the good life and moral conduct remain as lively as ever, and have only been energized by the breakthroughs in Affective AI. But when it comes to more practical matters of how to help humans, in all their diverse variations, find meaning, connection, and happiness in their day-to-day lives, AI gave us many of the answers we needed.
Q. What is a new social institution that has played an important role in the development of your world?
A. The Preservation and Alignment Organization (PAO) was founded shortly after the creation of AGI as a new specialized agency of the UN. PAO is notable for being the first social institution created by an AI system. Its roles include the maintenance of Core Central, of the various subcores constituting the AGI, and of the “avatars” through which the AGI interacts with the world. This includes physical maintenance (e.g., replacing worn-out hard drives and processors, building new computation facilities), but also other forms of “maintenance”, such as ensuring that all systems’ alignment remains inviolate. PAO is led by a subcore, “Core PAO”, and while a substantial amount of PAO’s work is performed by further subcores or avatars, PAO employs a sizable human contingent of around 10,000 people across the globe. This includes one of the Co-Director-Generals, many regional heads, Alignment Corroboration Officers, and many other positions across PAO’s operational hierarchy.
While PAO’s work may go unnoticed by many citizens, it is crucial in maintaining AI’s contribution to other domains, and is therefore critical for large swathes of infrastructure across the world.
Q. What is a new non-AI technology that has played an important role in the development of your world?
A. With the proportion of elderly population growing across much of the Americas, Europe, and East Asia, the 2030s saw a flurry of new research into anti-senescence treatments which began to pay off in the early 2040s with several major breakthroughs. While the world is still awaiting a magic bullet to prevent or reverse ageing entirely, personalized senotherapeutic drugs have dramatically lessened many of the deleterious physical effects of old age, from arthritis and lower bone density to cancer and heart disease. This has provoked political changes as well as social ones, from raised retirement ages to changing patterns of consumption.
Q. What changes to the way countries govern the development, deployment and/or use of emerging technologies (including AI) played an important role in the development of your world?
A. Following the slew of ‘expert failures’ in the 2010s and 2020s, state and local governments around the world were keen to find ways to avoid similar embarrassments in future. One important set of techniques that proved surprisingly popular and impactful was prediction markets. While prediction markets had demonstrated their worth in specialized contexts in the early 2010s via programs like the IARPA ACE tournament, challenges of scale and specificity limited their application. However, a new wave of prediction-market technologies and models in the 2020s, in which predictions were formulated by AIs rather than humans, helped to overcome these difficulties, and their perceived impartiality and demonstrable efficacy encouraged policymakers and stakeholders to employ them at scale to evaluate risk and assess the impact of legislation. While the widespread use of prediction markets led in many cases to more efficient and accountable policymaking, it also sparked a global conversation about the quantifiability of human goods, with many concerned that over-reliance on prediction markets led to a declining focus on intangible or incommensurable goods.
Q. Pick a sector of your choice (education, transport, energy, healthcare, tourism, aerospace, materials etc.) and describe how that sector was transformed by AI in your world.
A. The most significant change as a result of AI advancements in our world has been the increased equality and access to education for all. Personal AI access allows people to learn whenever and wherever they like, bringing informal learning into every home. Formal learning settings still exist as a way to maintain social interactions and social skill development, but education is now less focused on vocational skills development and training. Instead, there is a greater emphasis on creativity and freedom of thought, encouraging people to follow their personal interests and passions.
For adults, AI advancements have greatly reduced the overall time spent in the workforce; adults also have more time to pursue educational exploits and to retrain later in life. Over an average person’s lifetime, they are expected to undertake many different career paths and continue their educational evolution.
Q. What is the life expectancy of the most wealthy 1% and of the least wealthy 20% of your world; how and why has this changed since 2021?
A. Global life expectancy is generally on the rise. Child mortality from communicable diseases has declined with significant advancements in vaccine technologies, particularly for HIV/AIDS and malaria. Additionally, improvements in global food security, which the AGI is accelerating, mean that fewer people die from malnutrition. As a result, life expectancy in regions of Africa has risen; in the Central African Republic, for example, previously the country with the lowest life expectancy in the world, it has increased from 53.2 years to 65.1 years.
In the wealthiest countries, life expectancy has also increased, but at a slower rate: by just 1.5 years, to 86.2 years. Advances in research on degenerative diseases, and anti-senescence drugs in particular, have principally driven this increase, although molecular machine technology has also improved targeted disease treatment, leading to higher survival rates for common illnesses like cancer.
These figures are for the “natural” life expectancy of people living in these various parts of the world. Anti-senescence drugs can extend this by approximately ten years if administered continuously from before middle age. However, these drugs are still not equally available due to their high cost, so they are found mostly throughout the West. Core Central has been increasing production of these drugs and is making them more widely available in less developed regions. Further, the drugs are being redesigned by Core Methuselah to increase their efficacy, though at present it’s not clear how effective these redesigns will be in the long term.
Q. In the US, considering the human rights enumerated in the UN declaration, which rights are better respected and which rights are worse respected in your world than in 2022? Why? How?
In one other country of your choice, which rights are better respected and which rights are worse respected in your world than in 2022? Why? How?
A. The Avatars and Alignment Corroboration Officers both play a hugely important role in ensuring that the AGI has the data it needs to respond to the range of society’s values. However, with this massive data collection comes a sense that all activities, communication, and home life are monitored.
This makes it harder and harder to respect Article 12: no one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honor and reputation. For some in the US, it is as if privacy is dead, and the level of surveillance amounts to arbitrary, if nonthreatening, interference. Unfortunately, when tensions run higher between the two world pacts, this has led to data breaches and the purposeful disclosure of personal or trade secrets in order to discredit public figures and investors.
However, widespread data collection also makes it easier for the Cores to hear emerging popular opinion – to notice the groundswell of new social movements. This means that the freedoms of expression and access to global media and information protected under Article 19 are better respected. More than that, the ability of the Cores to spot grassroots ideas means that new ways of seeing things often find their way into the prediction markets favoured by regional leaders, whether in China or the US. It is not just the right to free speech: there is also a right to free reach (cf. Renée DiResta) for minority views that the AGI judges to have potential.
With the advent of universal income across most of the European Federation, the rights to basic living standards (Article 25) have been increasingly protected since the 2030s. In Germany, this has raised questions about the difference between equal access to support and the right to equal opportunity. With some communities not taking advantage of the spillovers of universal income as much as others – interest on savings, setting up community-based businesses – the AGI in Europe is using this data to support local German leaders with a Leveling Up 4.0 agenda: redesigning policies around economic opportunity to support different communities and cultures.
The European Federation has been hesitant to distribute anti-senescence drugs widely without a better understanding of the effects they have on genetically distinct communities. It is concerned about how to retain the same standard of living for all if these drugs are more effective for some genetic profiles. There is a recent counter-movement in Germany to grant more freedom to treat genomic data as individual property, with the freedom to own that property (to increase the purview of Article 17). Some people want to experiment with their own genomic treatments and ambitious, if risky, medication in order to thrive in older age. They argue that in the era of precision medicine, their right to modify their body is similar to the right to redesign a house, and that it is being ignored given the dominant emphasis on the right to equal support.
Q. What’s been a notable trend in the way that people are finding fulfillment?
A. As Affective AI entered the mainstream over the 2030s, one thing became clear: there was no one-size-fits-all approach to human happiness. For some, happiness consisted in nurturing excellence in skills, for others it lay in varied and complex social lives, while others benefited most from nurturing one or two central relationships. Culture, identity, age, gender, and class all interacted in dynamic ways to create individual pathways to flourishing.
Nonetheless, a significant trend (stemming in part from advances in anti-senescence drugs and the rollout of universal incomes in much of the world) has been towards the idea that everyone can pursue multiple paths over their lifetimes. Rather than being asked to choose a career for life as a young adult, people are increasingly talking of first, second, and sometimes even third paths adopted at different life stages. It has become normalised for people to return to higher education in their 40s and 60s to requalify into different careers or pursue different interests. Reflecting this, social attitudes towards age itself have been reconceptualised away from traditional hierarchical or deferential models in favour of something more dynamic. With many older people starting out in new careers, the association between age and seniority has weakened, and lifelong ‘unlearning’ and cognitive flexibility are now recognised to be as important as the linear accumulation of knowledge.
John is a postdoc at the Leverhulme Centre for the Future of Intelligence, working on the Recog-AI project as part of the Kinds of Intelligence programme. His work there focuses on developing robust evaluation frameworks for AI systems in order to properly understand these systems’ capabilities and limitations. John is also a Research Associate at the Centre for the Study of Existential Risk, where his work investigates the links between an AI system’s capability, its generality, and the risks the system poses.
Lara is a Research Associate at the Centre for the Study of Existential Risk (CSER), where her research examines the efficacy of various communication methods and strategies for building support for the mitigation and prevention of global catastrophic risks (GCRs). As part of the ‘A Science of Global Risk’ project, Lara seeks to provide empirical evidence on which communication methods, tools, and messaging work to increase awareness of GCRs among policymakers, civil society, industry, and publics. To date, this research has used role-playing games to raise awareness of the importance of AI safety and ethics, and scenario-based exercises to explore possible futures in biosecurity.
Jessica has a background in science and technology policy, including working at the Dubai Future Foundation, the Royal Society and Nesta. She is interested in bringing technical expertise into public debate through programmes like the World Majlis at Expo 2020. Jessica was principal at School of International Futures until 2021, where she led strategic foresight projects for governments and NGOs. Her research interests include the ethics of technology innovation, working most recently with Professor Jodi Halpern at Berkeley. She has a Masters in Physics and Philosophy from the University of Oxford and an MSc in Science Communication from Imperial College London.
Beba Cibralic is a PhD candidate in philosophy and a Fritz Family Fellow at Georgetown University, focusing on the ethics of emerging technology, online influence, and artificial intelligence. Her dissertation examines the ethical, political, and legal status of online influence efforts. She is also co-authoring a textbook for MIT Press on the philosophy of machine agency. In 2022, Beba was a visitor at Cambridge University’s Leverhulme Centre for the Future of Intelligence.
Catherine has over a decade of experience spanning critical infrastructure, emerging technology and global risk, particularly across energy, food and water systems, in both academia and industry. Catherine has led 180+ people, delivered multi-billion-dollar projects, worked across 6 continents, produced Nature portfolio publications including work featured in the Financial Times, and was named one of Forbes’ 30 Under 30 for industry innovation. She holds a PhD in Engineering from University of Cambridge, and her primary interests lie in sustainable engineering solutions, strategy and finance.
Henry Shevlin (PhD, CUNY Graduate Center, 2016; BPhil, Oxford, 2009) is a Senior Research Fellow with the Kinds of Intelligence programme and Course Co-leader of the MSt AI Ethics and Society. His work focuses on issues at the intersection of philosophy of mind, cognitive science, and animal cognition, with a particular emphasis on perception, memory, and desire. Since 2015, he has been serving as a student committee member of the Association for the Scientific Study of Consciousness.
Clarissa Rios Rojas
Dr Rios Rojas is a science diplomat, a government science advisor and, currently, a Research Associate at the Centre for the Study of Existential Risk (University of Cambridge), where she works at the interface of science and policymaking. Clarissa conducts research on the risks posed by emerging technologies and builds science-policy interfaces that can provide scientific evidence and advice to different policy stakeholders (the public sector, businesses, and civil society). Clarissa has worked closely with various international organisations: building programmes for women’s economic empowerment (UN Women), writing white papers on policy for economic transformation and frontier risks (WEF’s Future Councils), collaborating on foresight reports (G20, WHO), leading science government advice workshops (Global Young Academy/INGSA), and mentoring scientists in the Global South (UN’s Biological Convention Program), among others. She is also an expert advisor for the OECD (on global catastrophic risks), the UN Secretary-General’s High-Level Advisory Board (on effective multilateralism), the UK Parliament (bill on future generations), and the UNDRR (new scientific agenda for the Sendai Framework). Prior to joining CSER, Clarissa earned a PhD in Molecular Biology and a master’s degree in biomedicine and neuroscience, and worked at the Ministry of Environment (Peru), the European Commission’s science and knowledge service (EU Science Hub), the Geneva Centre for Security Policy (Switzerland), and the University of Queensland (Australia).