Full Automation, Full Fantasy

Installation view of Silent Works at Haus der Statistik. Photo: Andi Weiland | berlinergazette.de | CC BY-NC-SA 2.0

 

 

On content moderators and the illusion of AI. Notes on the Winter School Silent Works: The Hidden Labor in AI-Capitalism

By Jess Henderson

 

‘The panic attacks started after Chloe watched a man die.
She spent the past three and a half weeks in training, trying to harden herself against the daily onslaught of disturbing posts: the hate speech, the violent attacks, the graphic pornography. In a few more days, she will become a full-time Facebook content moderator, or what the company she works for, a professional services vendor named Cognizant, opaquely calls a “process executive.”
For this portion of her education, Chloe will have to moderate a Facebook post in front of her fellow trainees. When it’s her turn, she walks to the front of the room, where a monitor displays a video that has been posted to the world’s largest social network. None of the trainees have seen it before, Chloe included. She presses play.
The video depicts a man being murdered. Someone is stabbing him, dozens of times, while he screams and begs for his life. Chloe’s job is to tell the room whether this post should be removed. She knows that section 13 of the Facebook community standards prohibits videos that depict the murder of one or more people. When Chloe explains this to the class, she hears her voice shaking.
Returning to her seat, Chloe feels an overpowering urge to sob. Another trainee has gone up to review the next post, but Chloe cannot concentrate. She leaves the room, and begins to cry so hard that she has trouble breathing.
No one tries to comfort her. This is the job she was hired to do.’
– The Trauma Floor (2019), by reporter Casey Newton.
Reference thanks to Dr Phoebe Moore in her talk at Silent Works (2020).

 

The Shift to Reactive Moderation

This is the life of but one of the hundreds of thousands of moderators (as last measured, in 2017) who form a body of ‘front-line workers,’ exposing themselves to traumatic content so that platform users do not come into contact with such horrors.

Sarah T. Roberts, author of Behind the Screen: Content Moderation in the Shadows of Social Media (2019), defines content moderation as the organised practice of screening user-generated content posted to internet sites, social media, and other online outlets, in order to determine the appropriateness of that content for a given site, locality, or jurisdiction.

From Arizona to Manila, Hyderabad to Dublin, content moderators are everywhere. Social platforms, search engines, e-commerce sites, and the many blurrings in between (think: Google and its YouTube, Facebook and its Instagram, Twitter, Amazon, Baidu – to name only a few of the most prominent handful) all engage in this dark, invisibilized labour market. The work is rarely done in-house; content moderation is typically farmed out to third-party vendors offering ‘business operation services’, spread throughout the world (Facebook alone has over 20 sites worldwide), who employ temporary workers on precarious contracts. Regional policies, and nuances in what is appropriate and what is not, are one reason. Price is the obvious other.

Sana Ahmad, a doctoral researcher at the Weizenbaum Institute in Berlin, has spent two years investigating India’s IT sweatshops, where so-called content moderators, described by Ahmad as ‘the invisibilized labor force holding web services together,’ struggle to do a hard job under precarious conditions. During her talk as part of the Berliner Gazette’s Winter School Silent Works, Ahmad explained the transition that moderation has undergone over the past two decades:

‘Content moderation has transitioned from the open and voluntary moderation of text-based social communities to the large-scale moderation of the twenty-first century, using professional moderators and basic algorithms […] However, my own research has found content moderation to be a backend, non-voice business process, which can be situated within the operations departments of the services sector of information technology (or IT) offices. These service companies offer business process outsourcing (BPO) services, as well as services related to application development, maintenance, and consultancy, to their clients […] One of these services is content moderation.
The content moderators are bound by quality and quantity targets and look at the flagged content, which could be text, images, videos, GIFs, and others, and then make decisions according to the guidelines provided by the clients, which are the social media companies.’

The transition Ahmad describes is complemented by James Grimmelmann, Professor of Digital and Information Law at Cornell Tech and Cornell Law School, who observes that the content moderation market is segmented into different moderation types: primarily ‘proactive moderation’, which takes place before content is published online, and ‘reactive moderation’, which takes place after. Over time, content moderation has thus shifted from the open, voluntary moderation of text-based online communities to the large-scale moderation we see today, carried out by hired moderators and basic algorithms.

Many denizens of the online world are familiar with the now old-school task of ‘reactive moderation’ – that of the classic internet forum, and still common practice on platforms such as Reddit. But when it comes to ‘proactive moderation’, most users remain largely unaware that artificial intelligence is still insufficiently capable of the task, and that humans therefore make up the invisibilized workforce screening traumatic content before it is published. Each platform has differing and ever-changing policies (nonsensical ones too, as Facebook moderator Sania in India tells it: ‘Some policies are very stupid. In many cases, we’re supposed to tag something as a violation even when it’s evidently not offending. For instance, any edible item that resembles human genitalia must be tagged as adult content’); different locations have different cultural norms of appropriateness; and machines take an inherently problematic ‘binary’ approach to censorship (when, for example, is a nipple art, and when is it pornography?). It seems humans remain required for the judgement calls a machine cannot (yet) make.

 

Through The (AI) Looking Glass

The case, and mere existence, of content moderators provides a looking glass into AI capitalism – one revealing the hidden labour that lies at the heart of an illusion central to this new iteration of capitalism. The illusion is at once easy and difficult to encounter: we live our lives through these platforms (from Facebook and Twitter to Amazon and Airbnb – all rife with their own problems, complications, and nuances in dealing with the issue of content moderation), at once immersed in and, generally, unconscious of those facilitating our ‘frictionless’ and ‘safeguarded’ experiences whilst on them. The fact that there are real people having to witness and manually assess content (from banal fodder to ‘the pornographic, the obscene, the violent, the illegal, the abusive, and the hateful’ – in the words of Tarleton Gillespie) illustrates how AI capitalism is not just a matter of real-existing AI at work, but also a matter of the fantasy and appearance of AI at work.

This ‘dual nature’ and illusory element was examined by the Berliner Gazette’s aforementioned Winter School Silent Works, which interrogated how ‘AI-capitalism denies the labor that sustains it – more aggressively and systematically, but also more desperately than previous iterations of capitalism have.’ By including this frequently overlooked element, Silent Works significantly expanded the conversation beyond what Nick Dyer-Witheford and his co-authors Atle Mikkola Kjøsen and James Steinhoff discuss in their book Inhuman Power (2019), where only the former aspect, what they call ‘real-existing’ AI, plays a role. In doing so, Silent Works has begun the work of forming a counter-politics within AI capitalism – one with plenty of work still to come, particularly in the conversations, opened up during the Winter School’s interventions and workshops, on forming alternative spaces and networks of care both for and by these invisibilized workers.

‘AI-capitalism is also in the process of establishing its regime of hidden labor where AI is only projected to play an important role in the future. This happens when the mere appearance or fantasy of full automation is successfully promoted, for instance, by naturalizing infrastructure: as long as its appeal of frictionless functioning can be upheld, infrastructure can remain practically invisible, while – in the course of this – the (waged and unwaged) labor that it requires becomes almost imperceptible.’
– Silent Works

 

Marx in the Age of AI

Although he never lived to see even one of the earliest mechanical computers, Marx had a couple of prophetic flashes (still contentious within the Marxist tradition) about how excessive automation might ultimately help to marginalise workers. Witnessing the rise of infrastructures that became prerequisites for the expansion of commercial activity – such as the railways and steam engines of the 19th century and, later, the provision of electricity and mass transportation in the 20th – Marx conceived of such conditions as ‘general conditions of production’, which Steinhoff speculates ‘AI may well become.’ AI’s rise to a general condition of production provides what Nick Dyer-Witheford has called the primary contemporary example of ‘profit-driven and revolt-suppressing appropriation and direction of techno-scientific knowledge,’ setting a potential event on the horizon (judging by current AI research) in which the machinic ‘supplement’ of labour becomes the main game. This event would flip another Marxist concept on its head: that of ‘dead’ versus ‘living’ labour. Where Marx saw the machine as a supplement to and by-product of human effort, the former was ‘dead labor’ that derived from, and needed animation by, the latter – ‘living labor.’ AI’s move into the main game would mean the total collapse of the border between what is living and what is dead.

It is on this precarious, convergent boundary that content moderators sit today. They embody the folding of dead into living and living into dead – the point where the fantasy of technology and the technology of capital come to a head. A point, like a zit on the face of the platforms, made over so well that most people do not even know there are content moderators maintaining the veneer of the infinite scrolls they immerse themselves in every day. Notions of ‘regulation’ and ‘control’ meet the long-venerated dream of automation, and the work is naturalised as that of algorithms, not people.

“These people (content moderators) are just like our first responders. They’re protecting us. They’re protecting us from being exposed to this very dark side of humanity.”
– From the Silent Works audio essay by Kalulé and Kanngieser;
New York City clinical psychologist Dr Ali Mattu.

 

The Invisibilized and Deep Transformation of Labour

The case of content moderators highlights how dominant narratives and power structures conceal the fact that labour is undergoing deep transformations, by pretending that labour is becoming extinct due to the rise of full automation. An audio essay presented by two artists working on the excavation of labour as a buried reality, Petero Kalulé and AM Kanngieser, addresses the ‘police function’ inherent in AI and its fantasy. At once, ‘AI becomes an operation of the force of the law, perpetuated intentionally, yet imperceptibly,’ and content moderators’ knowledge as subjects of labour is ‘programmed, classed, and un-humaned on an ongoing basis as a part of a global capitalist imaginary… The hidden labor A.I. is programmed to perform is one of archiving, apprehending and identifying traumatic human events. Some of what we call crime […] A.I. is tasked to apprehend and master the unknowable. This is a police function.’ The human content moderator is expected to teach, train, and supervise AI, conscripting them into AI design and ethics programmes. Kalulé and Kanngieser’s critical exploration of AI labour practices highlights the layers of violence inherent in this invisibilized work:

‘The hidden human labour behind A.I. is easily encapsulated by the figure of the content moderator. Content moderation means patrolling online content for evidence of things deemed impermissible. These things often constitute hateful and abusive action, fake news, or fraudulent material. Content moderation is one of the fastest growing jobs in tech.
Content moderators are the police of social media. Following the guidelines established by social media platforms, they monitor content for violations. The data collected from the identification, categorization, and archiving of violations communicates to AI the parameters of order. Over billions of clicks, AI learns what human violence is.
Content moderation is outsourced and offshored to places far from corporate headquarters. Workers are required to make hundreds of decisions daily about what stays online, and are required to decide – within seconds – if the content breaches company policies and guidelines. Even in heavily resourced workplaces this work takes an extreme toll on workers’ health, as these workers are exposed to multiple forms of violence under conditions of tight workplace surveillance.’

Content moderators are at once policing and being policed. They manually decide whether disturbing content (frequently viewed in greyscale to ‘reduce’ the vividness of the trauma witnessed) is too disturbing to be made public, whilst being allowed minimal breaks and being closely monitored by supervisors – not for their mental health, but for the number of content pieces they get through per day (100-300 is the average for YouTube moderators) and for the regulation of their breaks, from ‘toilet time’ to lunch and, if they’re lucky, brief rest.

‘[…] looking at children being raped all the time and people getting their heads chopped off, it was like there was no escape. I finally snapped. They took that as, “Oh she needs to take a second. She just needs to breathe.”’
– From the Silent Works audio essay by Kalulé and Kanngieser; a content moderator from Google speaks to Verge reporter Casey Newton about her experiences.

 

Risks, Wounds, and Long-Lasting Scars

Paranoia, obsessive ruminations, insomnia, PTSD, mental breakdowns behind their desks, and ongoing suffering from these workplace violences (long after they leave the job) make up just a short list of the ramifications of content moderation work. Many commit suicide. Rectification gestures range from placing caps on how long somebody can do the job to pay-outs for the abuse and its ongoing effects (with their unremunerated, mounting medical bills). Phones, pens, and pencils are not allowed on the work floor (they must be locked in lockers at the beginning of a shift), and workers are contractually prohibited from talking to friends and family about what they see on the job. These policies add stress to already beyond-stressful tasks: workers frequently need to complete two-step verification via SMS (racing to and from their lockers to retrieve a code in time), and any means of divulging the emotionally and mentally intense material of the day is disallowed (pens and phones are forbidden precisely so that no one can make notes or record what they see).

In the context of the pandemic, the pressure on content moderation, and on the role of AI within it, was exacerbated by both the surge in content on platforms and the regulations for social distancing (offices had to close). Every social platform saw a rise in both users and usage. News and blog posts soared. One in ten people were creating and uploading videos. Where AI had previously been seen as inadequate for the task of content moderation, platforms were forced to turn to it as the human labour force (already insufficient for the quantity of content) became increasingly scant compared to the rate of upload and circulation.

In an essay published in the Silent Works text series, titled Everything in Moderation: COVID-19, a media portrait study, researcher Darija Medić dissects what happened when content moderators were sent home – since the aforementioned ‘security policies’ do not allow work from home – and platforms were left relying on automated software instead. Two things happened: ‘the removal of problematic, as well as legitimate content, in a wave of algorithmic protection of Internet users,’ and a successive line of (obscured) questioning picked up by Medić – ‘What happened to the many people who were conducting the excruciating job of watching hours of disturbing content, developing PTSD while doing the work […]? […] Did they relocate to other services? Did they get fired?’

To address what surfaced as a result of this test of AI’s content moderation competences, Medić reports how ‘Fake news, misinformation, and the media pandemic around the COVID-19 crisis appeared in a moment of rising restrictions in content policy within social media and other online platforms. It appears as though the pandemic arrived at a perfect moment to test a real world scenario of the content moderation performance of algorithms. (Ironically) Facebook is motivating users to take information only from the WHO, while in parallel asking its users for health data, for research purposes.’ The effect? A sharp, in-or-out binary-ing:

‘Conspiracy theories, politician tweets, crypto-content, news and science articles have all of a sudden, equally, disappeared, blending together in the algorithmic realm of data over-policing. What this builds is a system in which the most important binary oppositions have become: approved and disapproved, up to the point where every piece of content online needs to fall into one of the two categories.’

The fantasy of AI has been perpetuated by its emergency call-in during the pandemic. YouTube, owned by Google, released a report on its Community Guidelines Enforcement at the end of August 2020, stating that ‘Human review is not only necessary to train our machine learning systems, it also serves as a check, providing feedback that improves the accuracy of our systems over time.’ Medić’s critique highlights that ‘What this means is that they put higher priority on the accuracy and development of their systems than the issue of content, which they openly say in this report. Additionally, in their report, they offer YouTube viewers to help in flagging content for free – the work that would otherwise be done badly by algorithms, and only at this point are viewers referred to as a community. Not only do the services improve by community-driven content, but also from community-driven, in other words free, content verification.’ So, are we to take it that we are all content moderators now?

 

A Full Circle of Absurdity

With the removal of the human workforce moderating content, advertisers (the real customers of these platforms) grew increasingly dissatisfied as their products began showing up next to ‘inappropriate’ and ‘unsavoury’ content. This pressure matters more to the likes of YouTube and Facebook than user dissatisfaction does. From the end of September 2020, YouTube ‘slowly created a full circle of absurdity for clear reasons of efficiency and economic interest by returning to human content moderation.’ And thus, where we might have seen greater attention and investment in developing AI for the traumatic work of content moderation, we are back at the necessity of human evaluation, and at the confusion and oppression surrounding censorship. In the closing words of Medić:

‘As a portrait of the current moment in time, what we are left with is the message of pervasive and total censorship of what is allowed to be visible, both in terms of underlying mechanisms, working conditions, and the actual content posted on platforms. It is polarization into true and false, appropriate and not appropriate, in which these services act in the position of social and ethical arbiter, flattening ontology at the same time, towards a glossy, two-dimensional image. But an absurd one, almost like […] the conspiracy theories that flourish in repressive conditions.’

Topsy-turvy fantasy is not the by-product of AI capitalism. It is its essence. Marx called it, unknowingly, when he said: ‘[I]n the end, an inhuman power rules over everything.’ And inhuman has shown itself to be just one letter away from inhumane. In this new phase of capitalist mystification, techniques for resistance and revolution will need to take on a multi-pronged advance. As the conclusion of Dyer-Witheford’s Cyber-Marx (1999) already forewarned, ‘Demystification, practiced alone, leads to a dead end.’ Rather, we might take up Geert Lovink’s recommendation regarding algorithm-driven poverty traps, made via a tweet on 28th December, and apply the same attitude to the reality of content moderators within the AI illusion: ‘“The coming war on the hidden algorithms that trap people in poverty. A group of lawyers are uncovering, navigating and fighting the automated systems that deny the poor housing, jobs, and basic services.” Lawyers alone won’t work plz, involve hackers, whistleblowers, journalists.’

Jess Henderson is a writer, researcher, theorist, and author of Offline Matters: The Less-Digital Guide to Creativity (Amsterdam: BIS Publishers, 2020). Her work traces the effects of technology on our everyday lives and addresses notions of boredom and addiction, whilst documenting conditions of precarious labour. Jess is Senior Researcher at the Institute of Network Cultures and is currently based in Zürich, undertaking the first transdisciplinary study of burnout. For more information on her work visit No Fun (https://nofun.tips) and its online magazine (https://nofunmag.substack.com).
You can find more Silent Works video talks, artworks, texts, workshop projects, and audio documents tackling AI-capitalism’s hidden labor on the Silent Works website. Have a look here: https://silentworks.info

 

January 25th, 2021 — Rosa Mercedes / 02