The perils of Big Tech and government strategies for mitigating the risks of artificial intelligence are in the spotlight. From White House executive orders to Congressional hearings to a recent FTC probe, the highest levels of government are homing in on the potential dangers of technology and looking for policies and solutions that can harness technology for good while minimizing its negative impacts.
In a presidential election year, voters will also have to reckon with an onslaught of information, including disinformation and deepfakes, in order to make informed decisions at the ballot box.
UCLA scholars with deep expertise in technology’s impacts on society are asking the right questions to help us navigate the moment.
Government probes can protect consumers from the unchecked influence of tech, AI
Safiya Noble
Noble is a scholar in internet/information studies, African American studies and gender studies. She co-founded the Center for Critical Internet Inquiry, or C2i2, and is director of the UCLA Center on Race and Digital Justice. Her book “Algorithms of Oppression” was one of the first to identify the deep and damaging gender and racial biases in online searches and artificial intelligence.
Noble can comment on government regulation of Big Tech, AI, predictive analytics and scoring systems, online bias, social media, ChatGPT, deepfakes and non-consensual pornography, algorithmic discrimination and more. She is co-director of the Minderoo Initiative on Technology & Power and a research associate of the Oxford Internet Institute.
Email: snoble@g.ucla.edu
Big Tech’s lucrative business model is predicated on the circulation of hateful and abusive content, and the financial gain from that content makes change unlikely
Sarah T. Roberts
Roberts is a UCLA scholar in internet culture, politics and society, and digital labor studies. She can comment on content moderation of social media, AI and work/workers, online bias, the internet economy, social media, Big Tech and regulation, and more.
Her book, “Behind the Screen: Content Moderation in the Shadows of Social Media,” is a study of the global workforce of commercial content moderators, from Silicon Valley to the Philippines, whose job it is to shield against hateful language, violent videos and online cruelty uploaded by social media users.
Roberts is faculty director and co-founder of the UCLA Center for Critical Internet Inquiry, co-director of the Minderoo Initiative on Technology & Power and a research associate of the Oxford Internet Institute.
Email: sarah.roberts@ucla.edu
Personal website: https://drsarahtroberts.com/
AI is making access to secure work and wages more precarious globally
Ramesh Srinivasan
Srinivasan, an internet studies scholar at UCLA, is director of the UC Center for Global Digital Cultures. He can comment on misinformation and AI's impact on democracy, Big Tech, data breaches, the future of work, online bias, social media and politics, and the 2024 election, among other topics.
He is the author of “Beyond the Valley: How Innovators Around the World Are Overcoming Inequality and Creating the Technologies of Tomorrow” and “Whose Global Village? Rethinking How Technology Impacts Our World,” among other books.
Email: ramesh@rameshsrinivasan.org
Mandated media literacy in schools and training for tech developers can help kids stay safe online
Yalda Uhls
Uhls specializes in the digital lives of adolescents — the content they want to view, their habits and their preferences — and can comment on how parents can help their children lead healthy digital lives.
An assistant adjunct professor of psychology at UCLA and CEO and founder of the Center for Scholars & Storytellers, Uhls has developed calls to action for tech companies, entertainment companies and other content creators.
She is the author of the book “Media Moms and Digital Dads: A Fact-Not-Fear Approach to Parenting in the Digital Age.”