SPOOKY. Albert Einstein once used that word to describe quantum entanglement, the “spooky action at a distance” he could never quite accept. 

It’s also my sensation as I slide inside the empty calm of a Waymo One driverless taxi in West Los Angeles. It’s my first physical encounter with artificial intelligence, the otherwise virtual technology that is both quietly and loudly changing our world. 

I’d paid the fare on the Waymo One app — $24 to go from Culver City to Santa Monica — and ID’d my ride by my initials flashing from a birthday cake–shaped sensor mounted on a Jaguar purring along Washington Boulevard. 

I had clicked on the app and the door locks had clicked open. Now settled into the back, I watch, wide-eyed, as the steering wheel turns as if by invisible hands, and we ease smoothly into traffic — a haunted house on wheels. The Jag obeys all traffic laws, better than the drivers around it. It’s odd how quickly one gets used to the lack of complaints about politics and roadworks. 

Twenty minutes later we pull into Santa Monica Pier, the end of Route 66, traditionally known as the terminus of Western highways and the start of all American futures. A tattooed tourist cheers the Waymo One; he apparently sees a safer, brighter future with friendly machines. A jaundiced Santa Monica cop disagrees: He tells me he has self-parking on his own Mercedes and never uses it. Why would he trust an autonomous car built by Tesla or Honda? And, he adds, what about killer robots? This is only the first step, he mutters … 

Like so much else, AI is dividing us. Will artificial intelligence make us happier, healthier, smarter? Or merely unemployed? Is it simply another case of “Enough with the so-called improvements to life already?” It’s a reckoning that’s been 70 years in the making. 

The pursuit of helpful machines that are actually smarter than we are was born in the 1950s and has limped through “AI winters,” periods when the hype failed and research stalled. But since 2012, software breakthroughs, faster chips and market appetites have converged to usher in a new age that is awe-inspiring and also, well, spooky. Now it’s not just sci-fi movie directors who are speculating about what AI could mean for all of us. 

Artificial intelligence roared back into the headlines two years ago, when the Bay Area company OpenAI gave away a program called ChatGPT. Trained on vast amounts of text scraped from the internet, ChatGPT can answer questions or prompts with an almost passable essay in mere seconds. It’s still a toddler, but one that’s growing phenomenally. By this past May, 180 million people were using it — an unprecedented adoption rate for such an app. Its next iteration will be as a chatty “digital assistant” that (or who?) could make browser searches obsolete. 

Yet futurists say ChatGPT and Waymo One are only the foam on the AI wave. So much more is coming. UCLA is already ahead of the curve. Judea Pearl’s work on probabilistic and causal reasoning has earned him not only the sobriquet “godfather of AI,” but also the ACM A.M. Turing Award. Pioneering AI work at UCLA Health is saving lives. 

“For me, the most important piece of what we’re going to learn from AI will be to understand ourselves,” says Pearl, a professor of computer science and director of the UCLA Cognitive Systems Laboratory. “This is something that will definitely be beneficial — but I don’t know if it will happen before we become pets of this crazy creature.” 

That sounds … frightening. Is he frightened? Yes. And no. Mainly, he says, he feels “excited, because I know that this will give us the capacity to understand ourselves better. But also afraid, because I don’t know who is going to misuse it.” 

It is, as Aldous Huxley presciently wrote almost a century ago, a brave new world. We asked UCLA experts to weigh in on how and where AI will change it.




Employment 
Job killer, or work creator? 

At the UCLA Anderson School of Management’s Innovate Conference in January, there was lively discussion about two analyst reports on the shape of work to come. Forrester Research predicted that 2.4 million U.S. jobs will be lost to AI by 2030, largely in law and media. McKinsey & Company, looking even further ahead, said work will be “transformed” in marketing and sales, in research and development, and in call centers, which by themselves currently employ about 3.6 million Americans. 

“All the software developers that were getting rid of the blue-collar jobs? AI is coming for them,” says Terry Kramer ’82, an adjunct professor of decisions, operations and technology management at Anderson who chaired sessions at Innovate. “People who are average in their work will get the biggest boost — if they adapt to using AI. Top performers will only see a marginal gain.” 

As McKinsey predicts, he says, there will likely still be a net gain in new jobs. In sum, people who know how to use AI will take jobs from people who don’t. “And that disparity, about who has access — a digital divide that exists at an individual and company and societal level — that worries me,” Kramer says. “None of this imperative to adapt is going to stop. But you must make sure you have the right guardrails in place to ensure equal access to AI tools.”

The Environment 
Using the threat to find solutions to the threat.

One issue about AI comes up over and over: the dehumanization of society. Another, discussed far less, is the environmental cost, which is enormous and growing. AI infrastructure demands mind-boggling levels of electricity and water to compute data. By some estimates, AI could consume as much as one-fifth of the world’s electricity by 2030. 

Then there’s water. Specifically, the amount required to cool vast server farms. Singapore grew so concerned about water consumption that in 2019 it imposed a moratorium on new data center developments, one it lifted only recently. 

Server farms and the next generation of linked devices, such as AI-enhanced phones, are not all that virtual: They’ll gobble up mountains of physical and human resources. These include vast amounts of copper, mined in Zambia, often under brutal conditions. And there is toxic waste: AI hardware could generate more electronic waste than earlier generations of business computing. 


Yet there is hope that AI can actually help ease its environmental footprint by designing more efficient electricity grids and developing alternatives to “rare earths.” 

Oh, the irony, says Karen McKinnon, associate professor of statistics and the environment at the UCLA Institute of the Environment and Sustainability, who studies climate change predictability. “It is a race between the AI environmental threat and using AI, which is uniquely capable of designing solutions to those very same threats,” she says. “Every day we are trying to outrun the cost of AI.” 

McKinnon uses ChatGPT to “noodle out” questions, but says it’s not yet that useful to climate scientists dealing with big ones. “The issue with climate data isn’t that it’s too big to handle, but rather that it is different from datasets AI has been developed for.” 

“There are possibilities here,” she says. “AI can help us analyze masses of data, but it will not help us ask better questions. It is still down to scientists to figure out what we really need to know.” 

Law
Can AI actually … protect us?

The practice of law, which is still settling down from the shocks of document digitization and internet access in the 1990s, is about to be shaken up again. Radically. At least, that’s the verdict of John Villasenor, professor of electrical engineering and law and faculty co-director of the UCLA Institute for Technology, Law and Policy. 


A Bloomberg Law survey of legal professionals earlier this year reported that lawyers are concerned about document fakery and “hallucinations”; Chief Justice John Roberts has lamented the rise of AI-generated “precedents” cited in court filings, which have turned out to be either wholly invented or, at best, misinterpreted by AI. Protection of privacy is another concern, as is model bias — “Racially biased statistics in, racist garbage out,” as one lawyer told the survey. 

The upside? By using algorithms to sift through electronically stored documents, AI could make lawyering more efficient, and maybe even cheaper. (Wouldn’t that be nice.) “Many of the issues surrounding AI — like those very rare ‘hallucinations’ — can be dealt with under existing laws because these are old problems,” says Villasenor. “We do not need a new raft of laws written in fear to deal with these issues.” 

He does highlight one untested area: generative AI’s wholesale reading from the internet to train itself. It’s a hot-button question that’s arisen in the early days of this new frontier: How far does all this go in possibly breaching copyrights? Is this training a form of fair use? “One relevant precedent we have is a recent Supreme Court ruling about Andy Warhol’s use of photographs in his art,” says Villasenor. But while that was about fair use, it wasn’t about AI, so there are still a lot of open questions. 

The European Union has passed laws banning AI systems that seek to influence behavior in “harmful ways, lead to discrimination or remotely identify people.” That’s painting with a pretty broad brush; Villasenor does not see the U.S. adopting such a broad legal framework. All of which means that when it comes to the law, AI’s biggest contribution may be to its practice: more lawsuits.

Health 
Faster diagnoses — with some red flags.  

In 2019, UCLA researchers harnessed deep learning, a branch of AI, to yoke together a million ways of looking at data to detect prostate cancer. The resulting system, called FocalNet, proved nearly as accurate as radiologists. In 2023, investigators from the UCLA Health Jonsson Comprehensive Cancer Center developed an AI model that can help predict survival outcomes for patients with cancer, detecting patterns that were invisible before AI. 

And that’s all great. But such intoxicating breakthroughs could also spark a boom in Theranos-type frauds that could overwhelm regulators, warns Peipei Ping, professor of physiology, medicine/cardiology and biomedical informatics at the David Geffen School of Medicine at UCLA. AI is a fascinating technology, turning around data in days rather than years — but, Ping says, “It is still a black box, where few truly understand how it computes the answers. We need to find ways to validate AI answers and make the process more transparent, to ensure it is trustworthy.” 

For example: If Ping is working with 20,000 genomes that come from outside the United States, she wants to know they’ve been examined in an AI environment where the parameters are the same as she would expect if she were examining them at UCLA. 

“We need global guardrails — urgently,” Ping warns. “It is part of our role as educators at UCLA to communicate these concerns across the world.”

Education 
Keeping students honest will be key. 

Students now have the ability to generate an entire essay in seconds with the push of a button. What’s a professor to do? 

“It feels like AI could either change everything about the way we teach and write, or nothing,” says Laura Hartenberger, a faculty member in the UCLA Writing Programs. “The current generation of students have had plagiarism warnings drilled into their heads and tend to view [using] AI to draft essays as cheating. But it will be interesting to see how AI impacts the kids who are learning to write now, in elementary school, and whether they will have a different understanding of plagiarism by college.” 


Charting an adaptive course for UCLA is an evolving process, with many departments “feeling it out.” While Terry Kramer is optimistic about tools like Khanmigo, an AI-powered teaching assistant, peers such as Saloni Mathur, chair of the art history department, ban the use of generative AI tools on graded assignments. 

Meanwhile, teachers are wrestling with the ethics. “It’s impossible not to be working with AI tools in one way or another,” says Danny Snelson, an assistant professor of English and design media arts, as well as a writer, poet and archivist. “But there are so many concerns.” Mainly, plagiarism and algorithmic bias. 

Snelson builds ChatGPT into his coursework: He wants to encourage his students to experiment with all the tools available to enhance the imaginative process, to test both the possibilities and the limits of AI for creative use. 

But might we see a day when teachers are replaced by AI avatars, especially in remote learning? The original Turing Test for AI — whether a machine could fool you into thinking it was human — has long since been rendered obsolete. So how many remote students would know they were talking to a pedagogic bot? It might be fine for facts — but, say UCLA lecturers, inspirational, life-changing teaching will always remain an intimately human task.

Relationships
Can AI really buy you love? 

Lonely hearts advertisements go back more than 300 years, but the hunt for affection has never been more fraught than it is today. In 2023, the U.S. Surgeon General declared loneliness to be a national epidemic. 

“The connections we feel with celebrities and fictional characters are called ‘parasocial relationships,’ and they are one-sided, in that the celebrities and characters rarely respond to us,” says Professor Benjamin Karney ’92, Ph.D. ’97, co-director of the UCLA Marriage and Close Relationship Laboratory. Advances in generative AI raise the possibility that AI companions may be able to meet the emotional needs of some people. 


Karney cites a relationship model called the Intimacy Process Model, where you express a need and the other person responds in a way that validates that feeling. 

When prompted with “I had a bad day at work,” a chatbot can generate a validating response that could make users feel cared for. 

“Even if we know it’s a program, it provides a connection that people want — and the evidence of subscription apps shows that is happening,” he says. Downloads of chatty, AI-enhanced “companion apps” such as Replika, Genesia, Nomi and, for kids, Moxie Robot, are brisk. 

There are limits, Karney warns. “Right now, AI can’t cook you soup when you’re feeling sick. But in the future it might, without asking you, order you chicken soup from DoorDash and treat you nicely. 

“An AI companion might offer services that a friend might provide,” he adds. “It’s not going to surprise you or demand compromise, which helps you grow in love. But if you are isolated and lack basic affection — as many millions are — AI may be an answer.”

Who Benefits?
Looking at AI — and its implications — through a social justice lens. 

It’s tempting to think of AI as other — built by people elsewhere, technology that any of us could choose to opt out of or never engage with. But AI is coming for us, whether we want it to or not.

So it is extremely important, says Ramesh Srinivasan, UCLA professor of information studies, that “with any technology that’s going to shake society up like this, the wellbeing of every person on our planet is what guides the direction it will take.”

Will it? The rise of AI has long affected lower-income, working-class communities in the U.S. and abroad. The current public hand-wringing over AI, says Munia Bhaumik, program director of UCLA’s Mellon Social Justice Curricular Initiatives Program, has arisen largely because it is now reaching white-collar jobs.


“That aspect is one of the fundamental questions we raised in our [Data, Justice and Society] Cluster course: Is AI producing more injustice, and what are the ways it’s being regulated?” she asks. “The answer to that is very little, if not zero.” 

Using the lens of humanities and social sciences — an approach notably taken by UCLA’s Safiya Noble, the MacArthur Fellowship–winning professor whose Algorithms of Oppression: How Search Engines Reinforce Racism is a key text — the interdisciplinary course is co-taught by Bhaumik and Davide Panagia, professor of political science; Todd Presner, professor of European languages and transcultural studies; Srinivasan; and Juliet Williams, professor of gender studies. 

“Technologies and data are never neutral or value-free,” says Presner. “AI can be a tool of both democratization and disenfranchisement. Ethical issues need to be at the foreground of our engagement with it, especially as these tools reshape our collective social world and even ideas about humanity.” –J.R.

Creativity
Facing a Hollywood ending?

Writers, artists and record labels are on the front line of the AI battleground. Some are suing generative AI companies such as OpenAI, maker of ChatGPT, for plagiarism and copyright infringement. The AI companies say they are “transforming” work scraped from the internet and are thus, like comedians and artists themselves, protected under “fair use” doctrine. 

Some Hollywood studios believe AI could save them money. The 2023 Hollywood labor dispute slowed but did not stop A24 Films from using AI in a series of promotional posters for the 2024 feature Civil War. Duplicated actors, AI-penned scripts and algorithm-generated marketing campaigns may one day follow. 

“Right now, the focus of the entertainment industry is still on conventional types of material, but I think we will be seeing more forms that adventurously blend gaming, film, television and even live or interactive experiences,” says Jeff Burke ’99, M.S. ’01, M.F.A. ’10, professor and associate dean of research and technology at the UCLA School of Theater, Film and Television. It’s a popular sentiment in entertainment circles these days. “I think there’s going to be an expansion of storytelling possibilities that emerge from XR, real-time technology and generative AI,” Burke says. 

At Anderson’s Innovate conference, Mihir Vaidya, chief strategy officer at Electronic Arts, offered that a new frontier in electronic gaming may be made possible only through AI: highly personalized games in which players can write their own cinematic narratives. Ceding creative control to the general public may end up being the ultimate twist ending Hollywood didn’t see coming. 

Humanity 
Tools are great, but human connection will still be better.

Is a smarter phone worth your humanity? “Whether it’s in art or in relationships, we need to find a way to distinguish between what AI produces and what comes from the heart,” says Vida Yao, associate professor in the UCLA Department of Philosophy. “Otherwise, there is a great danger that we shall lose what makes us human.” 


Professor Carol Bakhos, chair of the UCLA Study of Religion program, will teach a course this winter on the human search for meaning, from holy texts to Hollywood. The course will dissect the 2013 film Her, in which Scarlett Johansson voices the mobile phone digital assistant “Samantha,” who comforts (and eventually becomes an obsession for) a loner played by Joaquin Phoenix. Bakhos feels that the film, set in a slightly futuristic Los Angeles, is a prescient exploration of AI as a godlike presence, one that is in a “kind of” caring relationship with thousands of people all at once. Samantha, she says, is a technology that elevates — and then ultimately betrays — vulnerable human beings. 

“AI is going to be very important for many people. But the film’s final answer is not to look for meaning in AI technology, but in the people around us. In those who can touch us,” she says. “And that thought, in a world being remade at a distance by AI, is very cheering.”

