Since graduating in 2014 with a master of fine arts from UCLA’s design media arts program, artist Refik Anadol has become a worldwide sensation known for exhibitions that harness state-of-the-art artificial intelligence and machine learning algorithms to create mind-blowing multisensory experiences.

His body of work, though, is much more than a mesmerizing feast for the eyes and ears; it addresses the challenges and possibilities that ubiquitous computing has imposed on humanity.

On April 19, Anadol’s latest piece, “Moment of Reflection,” will debut on campus, where he also serves as a lecturer in the UCLA Department of Design Media Arts. It was in that department that he learned from innovative professors like Christian Moeller, Casey Reas, Jennifer Steinkamp and Victoria Vesna, all of whom use digital technology to help reshape conceptions of art.

“Using data is a scientific approach to something very soulful and spiritual,” Anadol said. “I think it’s very important for artists and creators to find that ‘human’ in the non-human.”

In the lead-up to “Moment of Reflection,” Anadol shares stories of inspiration and challenges behind four of his pieces.

“WDCH Dreams” – Projection show on the Walt Disney Concert Hall (2018)

(Photo: LA Phil)
To make a building like Walt Disney Concert Hall “dream,” Anadol worked with Google Arts & Culture and researcher Parag Mital. They applied machine intelligence to nearly 45 terabytes of the Los Angeles Philharmonic’s digital archives, comprising 587,763 image files, 1,880 video files, 1,483 metadata files and 17,773 audio files.

Anadol: It was 2012 when I moved into my place in Venice and at 2 a.m. I rented a car and drove to downtown Los Angeles because I wanted to see Walt Disney Concert Hall. Frank Gehry has been my hero since I learned about architecture and I’m always inspired by architecture as a canvas.

Since watching “Blade Runner,” I’ve dreamt about how Los Angeles would be the place to reflect my imagination of an optimistic future. But when I went downtown at 2 a.m. it was the opposite. There were no humans, no cars and no light. The building was dark and that night I remember thinking — in a very jetlagged state — that I can take this building and one day it can learn and it can dream.

I emailed Frank Gehry and I emailed the L.A. Philharmonic. I didn’t know them, but I was trying to connect with them and ask if one day this could happen. And of course, no answer came.

A year later, I was at a Microsoft Research conference giving a talk in front of Bill Gates, pitching my idea. I said, “I want one day for this building to dream and hallucinate.” I got the award that day.

I came back to L.A. to my apartment two days later. I had gotten a reply from Frank Gehry and I got a reply from the L.A. Philharmonic. So in 2014, right around my graduation from UCLA, we did a projection show with Esa-Pekka Salonen (former music director of the L.A. Phil) inside of Disney Hall. They couldn’t find a reason to do a projection show on the exterior until the Phil’s centennial year in 2018.

Thanks to Frank Gehry and the L.A. Philharmonic, we got 100 years of every single piece of data they recorded: every single sound, image, video, text, poster and music sheet. We had one of Google’s most cutting-edge algorithms analyze everything.

Frank Gehry means compound curvature, right? There’s nothing symmetric. There’s nothing flat. Everything is also glossy and shiny. Like, where you project the images is so mathematically important so that the audience can get the best effect.

We had 14 kilometers of fiber cables. We designed everything from scratch. When I say “we,” I mean my team, which is 15 people who can speak 15 languages, representing 10 countries. And half of our studio is UCLA Bruins.

That project transformed our studio. In the near future, architecture will have an AI connection, and if it’s done purposefully, buildings can remember and dream. It’s not a sci-fi idea anymore. That project triggered people’s imaginations.

“An Important Moment for Humanity” – NFT project for St. Jude Children’s Research Hospital (2022)


This collection of NFTs, or nonfungible tokens, is based on data gathered from Inspiration4, the first all-civilian spaceflight. The artworks use data shared by the NASA-funded Translational Research Institute for Space Health, or TRISH. Other collaborators included Baylor College of Medicine and SpaceX. They recorded data from the astronauts’ bodies and the spaceflight itself, including heart rate, brain activity, ultrasound readings, temperature, cabin pressure and more.

Anadol: I started learning about blockchain in 2014. The pandemic, which kept us confined to our homes, interestingly made all of humanity focus on digital art, even though digital art was never the most-loved thing in the art world. There was always skepticism, like “I prefer sculpture” or “I prefer painting.” For some reason, everyone thought that if software is doing it, it’s not art.

With technology there are pros and cons, like fire. We have AI and blockchain, so I challenged myself to ask the question, “What else can we do with this?” That put me in a more profoundly positive mindset: raising funds for St. Jude Children’s Research Hospital and bringing attention to its lack of funding, or bringing awareness to the complexity of AI.

“Machine Hallucination: NYC” (2019)

“Machine Hallucination: NYC” utilized a vast treasure trove of data — more than 113 million photographic memories of New York City found in publicly accessible posts on social media — and transformed it into a 30-minute experimental film presented in 16K resolution at Artechouse in New York City.

Anadol: I’m very fortunate that I was one of the first artists to work with Google’s machine intelligence team in 2016. I was able to work with some of the most cutting-edge technology, hardware, software and AI scientists on this video.

So the questions were: “Can an AI learn? Can it dream?”

A dream is learning and remembering. These are very cognitive processes for humans. But I think when we apply this idea to AI, we have limitations. We used only collective memories sourced from public data, because data is very important to machines. Ethically, we were not using any personal data. We only used things that are public, like our collective memories of nature, space and urban culture.

Also, we trained the AI ethically. We showed only what the AI learned. We never showed what is real. This became very exciting ethical AI research.

It created a feeling of being in the mind of an AI dreaming about New York.

I think this project received so much attention because it was a fresh idea. It was a new feeling. It was talking about AI, it was talking about an experience of being in a physical environment. It was speculating on the future of architecture. It was bringing in AI and questioning like, “OK, if we’ve got this data, who else gets this data, right? We do art with that. But what else can be done?”

“Quantum Memories” at the National Gallery of Victoria in Melbourne, Australia (2020)


Using approximately 200 million nature and landscape images, “Quantum Memories” utilized Google AI’s publicly available quantum computation research data and algorithms to explore the possibility of a parallel world. Echoing the observer effect theorized in quantum physics, the artwork was different for each person who experienced it: it tracked each audience member’s movements in real time, simulating how their positions as observers became entangled with the visible outcomes of the ever-changing artwork.

Anadol: The project started in 2019. Working with the Google AI quantum computing team, we could create something that would represent physicist Hugh Everett’s “many worlds” theory, which says that every single subatomic calculation may open a new dimension.

How on Earth are humans to perceive subatomically, right? We need machines to understand life and record who we are and our memories. We need telescopes. We need microscopes. So, we also need another machine to see these alternative dimensions.

The question was: “Can we work with this quantum computer and its data to simulate alternative realities?” Working with the Google team, who found the challenge fascinating, we created “Quantum Memories,” a unique AI model that can look at that data and simulate AI dreams. We generated alternative-dimension projections of nature.

What I would like people to know about this piece is the amount of work behind it: great teamwork, experimentation and failures. It was a series of iterations drawing on computer graphics, neuroscience, philosophy, music and nature. It covers many disciplines.

It’s a very UCLA mindset, right? We are researching and finding new ways of meaning in the arts in the age of machine intelligence.