Facebook. YouTube. Twitter. Every day billions of strangers have access to a global megaphone that allows them to say whatever they want, seemingly with impunity. But there are thousands of people around the world whose sole job is to tame the internet by preventing videos of the most obscene, dangerous and sometimes even criminal behavior from ever being posted. In fact, Facebook announced May 3 that it would be adding 3,000 screeners to its existing group of 4,500.

Sarah Roberts, assistant professor of information studies at UCLA, coined the phrase “commercial content moderation” to describe the work of these unheralded workers who risk their mental health to make sure the rest of us aren’t subjected to all of humanity’s worst impulses. She is in her first year at the UCLA Graduate School of Education and Information Studies.

Roberts became interested in studying the effects of this work on people after reading an article in the New York Times in 2010, while she was on a fellowship at the University of Illinois Urbana-Champaign that focused on the intersection of information, technology, society, media studies and sociology.

What is commercial content moderation?

Commercial content moderation is the series of professional activities that people undertake for pay to adjudicate and evaluate user-generated content that has been uploaded to the social media platforms of the world — from Snapchat to Facebook to Instagram and everything in between.

I’ve been to Germany a number of times to give talks, and on several occasions, this term has been translated to the “garbage collectors of the internet.” It has about as much prestige as janitorial work. At the same time, many of these workers have said to me, “You wouldn’t want an internet without me. You wouldn’t be able to handle it, trust me.”

How do these workers decide what to allow or remove?

These platforms have user-facing community guidelines — terms of use. CCM workers take a given piece of content and evaluate it against those rules. There are also internal policies that the user base is not privy to. CCM workers are looking for things that run the gamut from the merely gross or disturbing all the way to illegal activity — murder, child abuse, sexual abuse or material coming from a war zone.

How many are doing this type of work professionally?

I’m not confident that anyone really has the numbers. Let’s take a platform like YouTube, which, as of 2014, I believe was receiving 100 hours of uploaded video every minute. And that stat is a couple of years old. Then take Facebook, Snapchat, Instagram, Twitter and all of the second- and third-tier platforms — never mind those outside of the North American, English-language context — and we realize that this is a massive amount of content.

Certainly, firms do not have people looking at every single thing. But even to catch a small portion of material that may be objectionable, problematic or illegal, you have to have a lot of people working for you.

What are the typical working conditions?

I started out focusing on workers who were in the Silicon Valley headquarters of a major social media firm, yet they were contractors … and they didn’t have health care. When you think about the fact that a lot of this work puts people into a psychological firing line and that they might need mental health services, not having health care becomes a thing.

You also have call centers, and not just in India or the Philippines. There are call centers in Iowa and Florida doing CCM work. There are people who are saying yay or nay to images, without ever knowing which platform they will be used on.

How on earth would anyone track the well-being of those workers over the long term?

How has this work affected them?

The first example comes from my research with some of the younger, college-educated, debt-addled workers in Silicon Valley. One worker was quick to point out, “I do this work because other people can’t. I can handle it.”

Just a few moments later that same worker said, “Since I’ve taken this job, I’ve really been doing a lot of drinking. I’m under a non-disclosure agreement, so not only do I not want to tell people about the work because it’s gross and my friends wouldn’t understand, I’m not even allowed to do that. I could lose my job if I were out there talking about the content I see at work.”

It was clearly creating social barriers for the workers who were doing this — not wanting to engage in the ways they had previously, feeling burdened by the material after work hours, having nightmares, for example. One guy reported to me that he was having an intimate moment with his partner at home, and he pushed her away because an image flashed before his eyes. And when she asked what was wrong, he couldn’t bring himself to share it.

There’s been a lawsuit about this that made headlines recently.

Last year a lawsuit was filed in Washington State on behalf of two Microsoft employees who were doing CCM work starting in 2007. These workers are now on total disability, and they’re on leave from Microsoft because of the impact of what they were seeing in the course of their work.

It’s going to be very interesting to follow that lawsuit to see how it goes, not only in terms of what is made public and what can be learned, but also in terms of how the firms will respond.

How can we make it better for workers right now?

Many firms are taking this very seriously — to their credit — and are members of organizations that promote worker wellness programs. These steps range from making sure workers take adequate breaks, to letting them step away from the work when it becomes too much to bear, to mandating engagement with trained psychologists.

What are some of the challenges to developing better policies?

Beyond banning content that’s blatantly illegal everywhere, it becomes very complicated because so many of these platforms transcend national boundaries. Something that is totally acceptable in one part of the world may be completely unacceptable in another. This is why CCM is so complex and often can’t simply be ceded to an algorithm or a computer. It takes human intelligence, sensibility and judgment.

What roles do algorithms and artificial intelligence play?

Take copyright infringement. Media companies can work with YouTube, for example, to maintain a database of their material that is under copyright. If something is uploaded that matches that material, it can be automatically removed or flagged for review. Typically this relies on what is known as a hash value, essentially a digital fingerprint used to identify known content.

That kind of technology is being used to deal with some of the most heinous kinds of material circulating online — child pornography. For better or for worse, a lot of the imagery of that kind of abuse is material that was created years ago. There are databases, created in conjunction with law enforcement, against which that material can be matched, quickly identified and taken down.
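To make the hash-matching idea concrete, here is a minimal sketch in Python of how an exact-match lookup against a database of known hashes might work. The hash set, function names and routing labels are illustrative assumptions rather than any platform’s actual implementation; production systems such as YouTube’s Content ID or PhotoDNA rely on perceptual matching that tolerates re-encoding and cropping, not exact cryptographic hashes.

```python
import hashlib

# Hypothetical database of hash values for previously identified material.
# Real platforms maintain far larger databases, often built with rights
# holders or law enforcement, and use perceptual rather than exact hashes.
KNOWN_CONTENT_HASHES = {
    # SHA-256 of the bytes b"test", included so the demo below matches something
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}


def hash_upload(data: bytes) -> str:
    """Compute a SHA-256 digest of an uploaded file's bytes."""
    return hashlib.sha256(data).hexdigest()


def screen_upload(data: bytes) -> str:
    """Route an upload: automatic action on a known match, otherwise human review."""
    if hash_upload(data) in KNOWN_CONTENT_HASHES:
        return "auto-flag"      # exact match to known material
    return "human-review"       # novel content still needs a person to look at it


if __name__ == "__main__":
    print(screen_upload(b"test"))           # -> auto-flag (matches the sample hash)
    print(screen_upload(b"something new"))  # -> human-review
```

Anything that falls through the automated match still lands in front of a person, which is exactly the gap CCM workers fill.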

The problem is that as soon as someone creates new content and uploads it, or uses something like livestreaming technology, such assists can no longer apply. And while some computational tools can be applied to images and videos, all are limited to a certain extent. They are also limited by how much of a platform’s resources are allocated to them.

The bottom line is that, so far, nothing does it quite as well as a human being. And nothing does it as fast or, I daresay, as cheaply. Developing and deploying this kind of computation at production scale is incredibly expensive and resource-intensive.

What happens if things continue as they are?

I see two outcomes: At some point these workers hit the wall and burn out, or, more disturbingly, they become so desensitized that they are no longer effective in their jobs.

What do we do with those folks? Once you’re done with this work, there’s no mandatory therapy. You can’t un-see it. In the Washington State case, we have two workers who are now functionally disabled: ruined marriages, an inability to continue working, trouble parenting.

I’d like to see the industry be more honest, open and engaged around these issues because in this equation to date, I see industry reaping the reward and taking almost none of the responsibility, and that seems to be an imbalance in favor of the firms and their platforms.