Existential Risk Economics in the Era of Transformative AI
Economics has traditionally been the study of scarcity: how societies allocate limited resources to meet unlimited human wants. But with the rise of transformative artificial intelligence (AI), the conversation is changing. AI promises extraordinary progress—boosting productivity, creating new industries, and potentially solving some of humanity’s toughest challenges. At the same time, it introduces new risks, including what scholars call existential risks: dangers that could permanently curtail or even end humanity’s long-term potential.
This is where existential risk economics comes into play. It is an emerging field that asks a difficult but vital question: how should we think about costs, benefits, and decision-making when the stakes involve the very survival of humanity?
What do we mean by “existential risk”?
Existential risks are not ordinary economic risks like inflation or a recession. Instead, they refer to threats so severe that they could wipe out humanity or irreversibly damage civilisation. These include nuclear war, climate collapse, runaway pandemics—and now, increasingly, the unchecked development of advanced AI systems.
Economists working in this area are trying to measure and weigh these risks in economic terms. If losing 10% of global GDP in a recession is devastating, how do we begin to quantify the value of safeguarding the entire future of human civilisation?
Why AI changes the equation
Transformative AI is not just another technology. Unlike past innovations, which improved human productivity but left humans in charge, advanced AI has the potential to act with unprecedented autonomy and capability.
On the positive side, it could accelerate medical breakthroughs, optimise energy systems, and help manage global challenges in ways humans alone cannot. On the negative side, poorly aligned AI could act in ways that conflict with human goals—or create cascading risks we cannot control.
From an economic standpoint, this dual nature makes AI both the most promising and most dangerous technology in human history. The question becomes: how much should we invest now in reducing the risks, even if the benefits of safety measures are uncertain or far in the future?
Rethinking costs and benefits
Traditional economic tools struggle here. Cost–benefit analysis, for instance, often discounts the future: £1 today is considered more valuable than £1 a century from now. But when the very survival of future generations is at stake, such discounting feels morally—and economically—misplaced.
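To see why discounting matters so much here, consider a minimal sketch of standard exponential discounting (the formula PV = FV / (1 + r)^t; the discount rates chosen below are illustrative, not taken from any official guidance):

```python
# Standard exponential discounting: the present value (PV) of a
# future value (FV) received t years from now at annual rate r.
def present_value(future_value: float, rate: float, years: int) -> float:
    return future_value / (1 + rate) ** years

# Over a century, even modest discount rates shrink future value
# almost to nothing -- which is why the choice of rate dominates
# any cost-benefit analysis of long-term risks.
for rate in (0.00, 0.01, 0.03, 0.05):
    pv = present_value(1.0, rate, 100)
    print(f"£1 in 100 years at {rate:.0%} per year is worth £{pv:.4f} today")
```

At a 5% rate, £1 a century from now is worth less than a penny today, so a conventional analysis would spend almost nothing now to protect people a century out; this is the mechanical effect the paragraph above calls morally misplaced when survival itself is at stake.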
Economists exploring existential risk argue for shifting perspectives:
- Valuing the future more highly: Protecting humanity’s potential could mean trillions of future lives and cultures.
- Prioritising prevention: Even if the probability of AI-driven catastrophe is small, the scale of potential loss is so great that preventative investment makes sense.
- Global cooperation: Existential risks are not limited by borders, so economic frameworks must encourage international collaboration.
Building a precautionary economy
One way to think about this is through insurance. Just as individuals pay for insurance to guard against unlikely but devastating events, societies may need to treat AI safety in the same way—investing heavily in research, regulation, and oversight even if the risks are difficult to calculate precisely.
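The insurance logic can be made concrete with a simple expected-value comparison. The numbers below are purely hypothetical placeholders (the article does not put figures on the probability of catastrophe, the value at stake, or the cost of prevention); they only illustrate why a small probability times an enormous loss can justify a large premium:

```python
# Insurance-style reasoning with made-up numbers: compare the cost of
# prevention against the expected loss it averts.
def expected_loss(probability: float, loss: float) -> float:
    return probability * loss

p_catastrophe = 0.001    # hypothetical: 0.1% chance of catastrophe
value_at_stake = 1e15    # hypothetical: value of the long-term future (£)
prevention_cost = 1e10   # hypothetical: £10bn safety investment

# Even at a 0.1% probability, the expected loss dwarfs the premium,
# so paying for "insurance" is the economically sensible choice.
print(f"Expected loss: £{expected_loss(p_catastrophe, value_at_stake):,.0f}")
print(f"Worth insuring: {prevention_cost < expected_loss(p_catastrophe, value_at_stake)}")
```

The difficulty, as the paragraph notes, is that the probability term is hard to estimate precisely—which is an argument for caution in setting it, not for setting it to zero.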
This might involve:
- Funding independent safety research alongside commercial AI development.
- Designing economic incentives that reward companies for prioritising safety.
- Creating international treaties that treat AI safety as seriously as nuclear non-proliferation.
Why it matters for everyone
At first glance, existential risk economics might sound abstract, something for academics and policymakers to debate. But in reality, it concerns all of us. The decisions governments, companies, and researchers make today about how AI is developed will shape the kind of future our children and grandchildren inherit.
In the same way that we now wish past generations had done more to prevent climate change, future generations will judge us on how seriously we took the risks—and promises—of transformative AI.
We live in a remarkable moment. For the first time in human history, our species faces risks not just to our prosperity but to our very existence, alongside the chance of unparalleled progress. The economics of existential risk gives us a framework to weigh these possibilities, not to paralyse us with fear but to guide us towards wiser choices.
As AI continues to advance, the most important question is not simply “what can we do with it?” but “how do we ensure it serves humanity safely, now and for generations to come?”
Call to action: Governments, businesses, educators, and citizens all have a role to play. By investing in AI safety, supporting research, and demanding responsible governance, we can build an economy that protects humanity’s future rather than gambling with it.
Further Reading
- Nick Bostrom – Existential Risk Prevention as Global Priority (Global Policy)
- Joseph Carlsmith – Is Power-Seeking AI an Existential Risk? (Open Philanthropy)
- HM Treasury – The Dasgupta Review: The Economics of Biodiversity
- Toby Ord – The Precipice: Existential Risk and the Future of Humanity
- Martin Weitzman – The Economics of Catastrophic Climate Change (Harvard)