Facebook has admitted that it failed to detect images of child nudity because its content moderators were working from home.

The tech giant said moderation levels dropped when moderators were sent to work from home in March, during the height of the COVID-19 outbreak.

Harmful material on Facebook and Instagram involving child nudity and sexual exploitation, as well as suicide and self-harm, was not caught by the company's automated systems.
“While our technology for identifying and removing violating content is improving, there will continue to be areas where we rely on people to both review content and train our technology,” Facebook said.
“For example, we rely heavily on people to review suicide and self-injury and child exploitative content, and help improve the technology that proactively finds and removes identical or near-identical content that violates these policies.
“With fewer content reviewers, we took action on fewer pieces of content on both Facebook and Instagram for suicide and self-injury, and child nudity and sexual exploitation on Instagram.”
Investigations into Facebook’s content moderation have found that moderators are being exposed to traumatic material without receiving adequate support.
Although 15,000 people around the world are employed to check content uploaded to Facebook, the conditions they work in are being kept hidden by third-party contracts and non-disclosure agreements, according to reports.
The company said its automated systems had improved at identifying hate speech, with 95% of offending posts now detected automatically, up from 89% previously.
“This is because we expanded some of our automation technology in Spanish, Arabic and Indonesian and made improvements to our English detection technology,” it added.
The social network took action against more than 22 million pieces of content – including posts, pictures and videos – in the second quarter of this year.
Facebook also announced it was banning racist depictions of Black and Jewish people, including controversial cultural images such as Black Pete in the Netherlands.
The company also announced that it will undergo an independent third-party audit to verify that the figures it reports on harmful content are accurate.
It comes as the UK government prepares to make social networks liable for the content on their platforms, a move that could make content moderation more widespread.
MPs recently called for a code of ethics to ensure social media platforms remove harmful content from their sites and branded Facebook “digital gangsters” in a parliamentary report.
The committee wrote: “Social media companies cannot hide behind the claim of being merely a ‘platform’ and maintain that they have no responsibility themselves in regulating the content of their sites.”
Mark Zuckerberg has regularly responded to criticism of content on Facebook by pledging to hire additional content moderators.
At the time of the report into content moderators’ working conditions, Facebook said: “We know there are a lot of questions, misunderstandings and accusations around Facebook’s content review practices – including how we as a company care for and compensate the people behind this important work.
“We are committed to working with our partners to demand a high level of support for their employees; that’s our responsibility and we take it seriously.”
The blog added: “Given the size at which we operate and how quickly we’ve grown over the past couple of years, we will inevitably encounter issues we need to address on an ongoing basis.”