Science + Technology

Scholars assess threats to civilization, life on Earth

A serious look at existential and catastrophic risks hosted by UCLA's Garrick Institute for the Risk Sciences

[Image: illustration of global catastrophe | iStock.com/robertsrob]

Some 40 scholars from around the world gathered last spring at UCLA to discuss catastrophic risks that may threaten the entire future of humanity — climate change, an asteroid strike, global pandemics, artificial general intelligence, and nuclear and biological terror, among them.

Scenarios for a possible sci-fi movie about a dystopian future? 

Actually, the event that brought them to Westwood was the first colloquium on catastrophic and existential risk hosted by the B. John Garrick Institute for the Risk Sciences at UCLA, housed in the UCLA Henry Samueli School of Engineering and Applied Science. The institute was launched in 2014 with a $9 million gift from UCLA Engineering alumnus and Distinguished Adjunct Professor B. John Garrick and his wife, Amelia.

UCLA materials science engineer Ali Mosleh, the Evelyn Knight Chair in Engineering and the inaugural director of the institute, and his staff are focused on reliability engineering, preventing failures of complex systems, and managing disruptions to society and the environment caused by such threats as major industrial accidents, natural disasters and climate change.

In light of a global ransomware attack that disrupted computer systems around the world Tuesday, is world catastrophe really just science fiction? Consider this: June 30 has been designated by the United Nations as International Asteroid Day, when we are asked to think about how we can protect the Earth from an asteroid strike.

In this edited Q&A, three experts talked to UCLA Engineering about how to think realistically about these risks and how to provide sound information to policy makers. Responding to questions were UCLA Chancellor Emeritus Albert Carnesale, a professor of mechanical and aerospace engineering and public policy; Seán Ó hÉigeartaigh, executive director of the Centre for the Study of Existential Risk at the University of Cambridge; and Christine Peterson, co-founder and past president of the Foresight Institute, a think tank and public interest organization based in the San Francisco Bay Area. All were featured speakers at the conference.

What is existential risk and what is catastrophic risk?

Carnesale: Existential risk, in the extreme case, means eliminating the human species on Earth — literally, the very existence of life on this planet. Everything from that on down to something that would perhaps kill tens of thousands of people all qualifies in different people’s minds as catastrophic risk.

And some [of these catastrophes] don’t have to kill tens of thousands of people immediately. You can imagine a financial disaster that, over time, would affect many people.

UCLA Chancellor Emeritus Albert Carnesale and Christine Peterson of the Foresight Institute were featured speakers at the first colloquium at UCLA on catastrophic and existential risk.

Why is it important to study these types of risks?

Carnesale: The reason it’s important to study this is so that hopefully we can find ways to reduce these risks and make it less likely that these risks will occur, and, if they do occur, that their consequences would be reduced. So people are studying them in order to offer better advice on how you can reduce those risks.

Ó hÉigeartaigh: When we’re talking particularly about risks at the extreme scale, we’re often talking about uncertain, low-probability or speculative scenarios. I think it’s tremendously important to look at which of these are scientifically plausible, which ones we have good reasons to think might come about on a timescale that we might care about, and then which ones we can say are sufficiently unlikely that we don’t need to think about them that much or [they] just simply don’t look plausible and … we can relegate to the realms of science fiction for the moment.

By doing this kind of analysis, we can better focus our attention on what we really should care about and what we should invest our resources in, in trying to mitigate or prevent to the extent that we can.

What are the technologies under development that also may pose a concern?

Peterson: Probably the technology change that’s gotten the most attention at this meeting is the prospect of artificial general intelligence. This creates tremendous opportunity — think of the diseases that could be cured, the economic advances, and the tough problems, like environmental issues, that could be tackled.

But there’s also a concern: Is this something we can control? Is this something that would take over the world in some way?

What we’ve been trying to do at this conference is come up with numbers: the timeframe, the likelihood, the cost and the payoff of preventing problems. These things are matters of tremendous debate, and they’re very unclear. But we have to try. Without numbers, you can’t make decisions.

What do you think poses the biggest risk to civilization?

Carnesale: The most easily identifiable is climate change. But there is a lot of uncertainty there. It is the classic example for the insurance analogy.

There’s no question that carbon dioxide in the atmosphere increases global warming. You don’t have to have much more than a high school education with a chemistry course to know that. Now you start to ask questions: How much of an effect does it have? Over what timescale? … How much of the CO2 will be absorbed in the oceans? Where might there be a tipping point? Where might there be uncertainties?

But the fact that you don’t know everything doesn’t mean you don’t know anything. We do know more carbon dioxide in the atmosphere is going to go in the wrong direction.

I think that’s the best example of where the scientific community is fairly well agreed that if we don’t do anything, we’re going to have a big problem. You can argue about what we should do and how fast we should do it. … But the idea of “Let’s wait — it’s only a theory” doesn’t work.

Peterson: I divide these catastrophic risks into natural and man-made.

Of the natural ones, the ones that make me the most nervous are pandemics. Looking at history, we can say with some confidence that these will come. Hopefully, we all remember the so-called Spanish Flu of 1918. There are more people now. We live in bigger cities. Are we well prepared? I don’t think we are well prepared for this. Of course, it will hit developing nations harder. But it could be very bad all over.

Looking at man-made problems — in the near-term — the cyberattack issue. You can think about a cyberattack in terms of the Internet of Things. For example, there was a hotel where guests were locked out of their rooms. There was a hospital that was threatened. Those are real concerns, but they are not so much, I would say, catastrophic risks currently.

But I think the electrical grid is a catastrophic risk. A cyberattack on the grid could cause a lot of damage and large numbers of fatalities. From what I’ve been able to find out, it’s not being addressed quickly enough. It could take decades to fix this, and that’s not acceptable because the threat is real right now.

To read the entire Q&A, go to the UCLA Engineering website.
