Reading some of ChatGPT’s answers, one seems to glimpse a hint of humanity. Our natural empathy senses anger, passion, irony and a thousand other emotions behind the bot’s words, when in reality it merely exploits its algorithms and billions of data points to determine the best sequence of words. But even if there is no humanity behind artificial intelligence, AI has a human cost that we often ignore – or pretend not to see. Because besides the engineers who created the algorithms, there are a great many moderators around the world who check and evaluate the data fed into the system and the answers given by the bots. They analyze content that is sometimes traumatizing – while being paid a pittance.
The human cost of artificial intelligence: Moderators paid less than two dollars an hour
The idea we have of artificial intelligence is of an entity that somehow transcends the human. Something superior to us, made of pure mathematics and rationality. But like any human artifact, it must be built. To do this, American and European companies hire the best computer engineers around to write the code of the algorithms. But that alone is not enough: you need data to train the AI.
For this, technology companies need many workers who contribute to the development of artificial intelligence, carrying out tasks that consist of analyzing large amounts of data to classify images, filter inappropriate content and identify objects in images and videos. These processes are fundamental to teaching AI how to recognize our words and how to produce coherent outputs.
These tasks, considered repetitive and uninspiring by many internal developers, are often delegated to independent workers and external companies. Outsourcing usually reaches workers who mainly reside in South Asia and Africa. And as Vice explains, these workers are usually employed by companies we’ve never heard of, such as iMerit, Sama and Alegion.
These people become the moderators not only of generative AI solutions like ChatGPT by OpenAI, Bing by Microsoft and Bard by Google. Meta, too, has one of the most sophisticated algorithmic content moderation systems on the internet for Instagram and Facebook. Yet that system’s AI is based on “thousands of decisions” made by human moderators.
So the artificial intelligences of giants such as Meta and Amazon, or of explosive startups such as OpenAI, are born in development teams of engineers who graduated from leading Western universities. But they can only work thanks to the “unskilled labor” of those who evaluate the decisions made and correct the mistakes. These corrections are an integral part of development – that’s how AI gets trained. But this work is often traumatic – and paid very little.
The trauma of training ChatGPT
Tech companies have long known that outsourcing moderation brings economic benefits. It reduces the cost of operations, because companies buy a service instead of internalizing the expense by directly hiring thousands of employees. But the companies to which they entrust this process do not always treat workers humanely.
In January of this year, Time revealed that OpenAI, the company behind ChatGPT, had paid Kenyan workers less than $2 an hour to filter tens of thousands of lines of text to make its chatbot safer to use.
The workers were tasked with labeling and purging toxic data from ChatGPT’s training dataset. This meant reading content describing child sexual abuse, bestiality, murder, suicide, torture, self-harm and incest.
ChatGPT now seems very eloquent. But its predecessor, GPT-3, often produced sexist, violent and racist text, because the model was trained on a dataset pulled from billions of internet pages. To launch ChatGPT, OpenAI needed a way to filter all the toxic language out of its dataset.
OpenAI’s partnership with Sama
OpenAI therefore worked with Sama, a data labeling partner based in San Francisco that claims to provide developing countries with “ethical” and “decent” digital work, to detect and label toxic content. Sama recruited data labelers in Kenya to work on behalf of OpenAI, playing an essential role in making the chatbot safe for public use.
The psychological burden of the work made itself felt. A Kenyan worker who helped build ChatGPT, tagging and parsing texts for OpenAI, told of having nightmares after reading the details of a horrific scene: a man having sex with a dog in front of a child. How much was he paid to bear this? The workers earned between $1.32 and $2 an hour, depending on experience and quality of work.
According to Vice, big tech companies often hire tens of thousands of precarious workers to moderate the data of their AI tools. Sama ended its work with OpenAI in February 2022, eight months before the end of the contract, partly due to the trauma its workers suffered, and partly because on February 14 Time had published an investigation into its work with Meta. The investigation revealed that the workers Sama hired moderated images and videos of executions, rapes and abuse of minors for $1.50 an hour.
The exploitation of moderators: the human cost of artificial intelligence
Investigations such as those by Time and Vice underline that these AI tools have a real human cost. Moderating the worst of the internet shouldn’t be a job paid a couple of dollars an hour, especially when these tools are enjoying huge success around the world. The contribution of these people must be recognized – including in terms of salary (and perhaps with psychological assistance). Without moderators, artificial intelligence would be unpresentable: they filter the worst of the web so that we don’t have to see it. But while they do so, they don’t deserve to be invisible.
OpenAI no longer works with Sama, but given this history it is worth paying attention to how large companies develop AI. AI is not becoming human, but it wouldn’t exist without the work of so many moderators.