Facebook blames coronavirus after failing to remove suicide messages

Joshua Hoehne / Unsplash

Facebook has blamed the coronavirus pandemic after failing to remove a number of suicide posts from its platforms.

The social network admitted it had removed far fewer pieces of harmful content between April and June owing to staff shortages caused by the Covid-19 crisis.

Facebook sent its moderators home in March to prevent the spread of the virus, reducing its ability to monitor content.

The firm says it has since brought “many reviewers back online from home” and, where it is safe, a “smaller number into the office”.

Facebook's founder and CEO Mark Zuckerberg (AFP via Getty Images)

Facebook’s latest community standards report shows that action was taken on 911,000 pieces of material related to suicide and self-injury within the three-month period, down from 1.7 million pieces in the previous quarter.

Meanwhile on Instagram, action was taken against 275,000 posts, compared with 1.3 million in the previous quarter.

Action on media featuring child nudity and sexual exploitation also fell on Instagram, from one million posts to 479,400.

Facebook estimates that less than 0.05 per cent of views were of content that violated its standards against suicide and self-injury.

“Today’s report shows the impact of Covid-19 on our content moderation and demonstrates that, while our technology for identifying and removing violating content is improving, there will continue to be areas where we rely on people to both review content and train our technology,” the company said.

“With fewer content reviewers, we took action on fewer pieces of content on both Facebook and Instagram for suicide and self-injury, and child nudity and sexual exploitation on Instagram.

“Despite these decreases, we prioritised and took action on the most harmful content within these categories.

“Our focus remains on finding and removing this content while increasing reviewer capacity as quickly and as safely as possible.”

The tech giant’s sixth report does suggest the automated technology is working to remove other violating posts, such as hate speech: the number of such posts actioned on Facebook rose from 9.6 million in the previous quarter to 22.5 million.

Much of that material – 94.5 per cent – was detected by artificial intelligence before a user had a chance to report it.
