On a bright Monday morning, Jane, a 24-year-old Kenyan worker, reported to work as usual at a local AI startup. She had recently secured a job as a content moderator, helping to train an AI chatbot called ChatGPT. Her job was to monitor and review conversations between ChatGPT and its users to ensure that the bot was responsive and delivered accurate responses.
Six weeks into her job, Jane's life changed forever. She came across some of the most horrific content she had ever seen while reviewing ChatGPT logs. The violent and graphic nature of the content shook her to the core, leaving her traumatized and struggling to cope.
Jane reviewed hundreds of inappropriate and sometimes sinister conversations in various languages every day. Some, however, stood out for their horrific content: graphic descriptions of rape, child abuse, sexual violence, domestic violence, murder, and suicide. Some users also used explicit and vile language in their exchanges with ChatGPT, leaving Jane feeling disgusted and ashamed.
Even when a conversation was in a language she did not fully understand, Jane could still grasp the gravity of what was being discussed. The cumulative trauma took a toll on her mental health, and she was eventually unable to continue in the job.
Jane's experience is not an isolated case, as many workers in the content moderation industry have reported similar trauma. The exposure to violent and disturbing content has been linked to depression, anxiety, PTSD, and sometimes suicidal ideation.
In Jane's case, the experience had severe consequences. She suffered from flashbacks, panic attacks, insomnia, and nightmares. She felt unable to communicate or socialize with her colleagues, and she became isolated and withdrawn. The experience affected her daily life, and she was forced to seek medical help.
The content moderation industry needs to take proactive steps to protect the mental health of its workers. As such, here are three recommendations:

1. Provide moderators with professional psychological support mechanisms throughout their employment, not only after harm has occurred.
2. Recognize, in policy and in practice, the impact that repeated exposure to traumatic material has on a person's mental health.
3. Implement concrete workplace measures that protect the well-being of the moderation workforce and create a safer working environment.
In conclusion, the role of content moderators in AI-powered services such as ChatGPT is critical. However, this role comes at a cost, with employees at high risk of being exposed to traumatic material. Companies must provide their workers with psychological support mechanisms while they work and recognize the impact that exposure to traumatic material has on a person's mental health. By implementing measures that protect the well-being of its workforce, the content moderation industry can create a safer working environment for its employees.