On job boards, some listings seem almost too good to be true. Among them, ads for “online content moderators” are everywhere, and at first glance they read like an all-inclusive holiday package.
The setting? Lisbon.
The perks? Free housing, a paid plane ticket, welcome bonuses, company-sponsored parties, even free medical check-ups.
The message is clear: come live the sun-drenched Lisbon dream while helping keep the Internet safe.
But read between the lines and a different picture begins to emerge. Hidden among the cheerful promises are keywords that hint at the reality: 24/7 rotating shifts, mandatory weekend work, relentless targets. What is advertised as a gateway to an international career often turns out to be a grueling, low-paid job with little social protection — a form of digital labour that some compare to invisible servitude.
The Hidden Face of Moderation
Content moderators are the unseen workforce of the digital age — the people who sit for hours in office chairs, eyes fixed on screens, tasked with filtering the worst of what humanity produces online. Every day, they are confronted with a torrent of images and videos depicting child exploitation, terrorism, stalking, hate speech, animal cruelty — a raw, unfiltered stream of human violence that few are psychologically prepared to handle.
The pace is unforgiving. Moderators are measured not just on their accuracy but also on their speed and “resilience.” Missing performance targets can lead to formal warnings, salary deductions, or termination. And because most moderation work is outsourced to third-party vendors, many moderators have no direct employment relationship with the tech giants whose platforms they safeguard.
While some of these jobs are located in European countries — where labour laws offer stronger protection — many are carried out in places such as Kenya or Colombia, where employment regulations are weakly enforced and social protections are minimal. In these hubs, low pay, unstable contracts, and high turnover are the norm, creating conditions that critics say verge on exploitation.
The Legal Issues at Stake
The human cost of this work is severe: cases of PTSD, anxiety, and depression are well documented. The victims of these so-called “careers” are real, and some are no longer willing to remain silent.
In 2022, a Spanish court ruled that a former Facebook moderator’s mental health had been seriously damaged by his work reviewing graphic content — including beheadings — setting an important precedent. It was the first time a Spanish court explicitly recognized psychological harm caused by this type of working environment, a decision that could reshape how social media companies manage and support their moderation teams.
Elsewhere, legal action has gone even further. In 2020, Facebook agreed to pay a $52 million settlement in a U.S. class action lawsuit, after moderators alleged that the company had failed to protect them from the severe mental-health consequences of their job. This case marked one of the first large-scale acknowledgements that exposure to extreme content in digital moderation can have lasting, disabling effects — and that tech giants may bear legal responsibility.
Although still largely overlooked, these job offers deserve far more serious legal and policy scrutiny. They tend to describe the work in vague, sanitized terms, concealing tasks that can inflict psychological trauma and working conditions that verge on exploitation.
Human rights frameworks must adapt to better address these realities, both to prevent harm and to ensure proper remedies when harm occurs. The challenge for legislators and courts is to extend protections across borders and throughout the global supply chains of tech giants.
When humans are asked to protect other users from the worst of the Internet, they too have a right to dignity, to psychological safety, and to fair working conditions. Anything less risks turning the invisible guardians of our online spaces into the digital age’s new victims.
FOOS Justine – Master Cyberjustice Promo 2024/2025
Sources
https://www.bbc.com/news/articles/crr9q2jz7y0o
https://fr.indeed.com/q-mod%C3%A9rateur-de-contenu-emplois.html?vjk=b318cb2e101c2299
https://www.theguardian.com/technology/2020/may/12/facebook-settlement-mental-health-moderators
