Is a video or photograph real? How do verifiably false images spread? And how can we respond to a new level of digital fakery? Leading scholars and scientists will discuss those issues from scientific, societal and legal perspectives at a two-day workshop at UCLA’s Institute for Pure and Applied Mathematics, Nov. 15-16.

“Misinformation can spread like an epidemic,” said Mark Green, a UCLA distinguished research professor of mathematics, director emeritus of IPAM and an organizer of the event. “This is an urgent issue, and we will address what to do about it.”

Deepfakes are videos, images and audio clips that have been generated or altered using artificial intelligence in a way that makes the fakery nearly impossible for an unaided observer to detect. Observers have already noted that deepfakes can be powerful weapons in contexts like political campaigns, making it appear, for example, that a candidate said something he or she never said.

“The deepfake phenomenon is so dangerous because of the psychological power of images, audio and video to create belief,” said Jacob Foster, a UCLA assistant professor of sociology and a co-organizer of the workshop.

Foster said there are techniques that take advantage of social media algorithms to confuse consumers about which content is real and which isn’t. “If you want to bury true information, you could produce a ton of fake information that is similar to it, which leads to the true information getting suppressed or flagged as fake,” he said.

Responding to deep fakery will require the development and deployment of technical, social, legal and policy countermeasures. “What should those countermeasures be, and who should deploy them? Our workshop will explore such questions,” Green said.

The workshop’s primary goal is to create a community of scholars and scientists who understand all aspects of deepfakes. So panelists will include computer science and applied mathematics experts, who know how to produce deepfakes and how to detect them; cryptologists, who are skilled in encryption, security and verifiable authenticity; social scientists, who understand how deepfakes spread through society; and legal and policy experts, who can consider whether regulatory, statutory or other legal interventions can prevent or mitigate the risks deepfakes pose.

Among the participants will be experts from UCLA, MIT, Yale and UC Irvine.

Green said there’s an arms race between those creating deepfake content and the experts developing technology to detect it. “Right now, it would be hard to make a deepfake that could not be detected by an expert, but not hard to make a deepfake that looks real and completely believable, and the technology will only improve,” he said. “If we as a society can’t trust evidence, we’re really in trouble.”

Alicia Solow-Niederman, who served as the inaugural fellow in artificial intelligence, law and policy for the UCLA School of Law’s Program on Understanding Law, Science, and Evidence, is the workshop’s other organizer. Solow-Niederman’s research focuses on the ways in which artificial intelligence and emerging technologies interact with law, and with political and social institutions and norms.

More details about “Deep Fakery: Mathematical, Cryptographic, Social, and Legal Perspectives” are available on the workshop’s website. The registration fee is $50, with discounts for military and government employees, faculty and students.

IPAM’s mission is to strengthen ties between mathematics and other academic fields. The institute is funded by the National Science Foundation.