YouTube, Facebook and Twitter executives have been grilled by members of the British Parliament at a committee hearing over how the social networks handle online abuse levelled at parliamentarians, the BBC reports.
Members of Parliament (MPs) are said to have argued that such hostility undermined democratic principles, with Twitter representative Katy Minshall admitting that it was “unacceptable” that the site had relied wholly on users to flag abuse in the past.
She insisted that the social network’s response to abuse had improved but acknowledged that there was more to be done.
Harriet Harman, chair of the Human Rights Committee, said there was “a strong view amongst MPs generally that what is happening with social media is a threat to democracy”. SNP MP Joanna Cherry cited specific abusive tweets that Twitter had been slow to remove, one of which was taken down only on the evening before the committee hearing.
“I think that’s absolutely an undesirable situation,” Minshall, Twitter’s head of UK government, public policy and philanthropy, said.
In response, Cherry argued it was in fact part of a pattern in which Twitter only reviewed its decisions when pressed by people in public life.
When MPs questioned how useful automated algorithms are for identifying abusive content, Facebook’s UK head of public policy, Rebecca Stimson, admitted that their application is limited: the platform’s algorithms correctly identify only around 15% of offensive content as being in breach of the site’s rules.
“For the rest you need a human being to have a look at it at the moment to make that judgement,” she explained.
Labour MP Karen Buck suggested that algorithms might not identify messages such as “you’re going to get what Jo Cox got” as hostile, referring to the MP Jo Cox, who was murdered in June 2016.
“The machines can’t understand what that means at the moment,” Stimson agreed.
Both Stimson and Minshall said that their respective social networks were working to gradually improve their systems, and to implement tools to proactively flag and block abusive content, even before it’s posted.
The committee said it was shocked to learn that none of the companies had a policy of reporting criminal material to law enforcement, except in rare cases where there was an immediate threat.
Committee chair Yvette Cooper pressed Facebook’s public policy director, Neil Potts, on whether the company was reporting identities of those trying to upload footage of the Christchurch shooting to the authorities in New Zealand.
Potts said the decisions were made on a “case by case” basis but that Facebook does not “report all crimes to the police”, and that “these are tough decisions to make on our own . . . where government can give us more guidance and scrutiny”.
Representatives of both Twitter and Google, YouTube’s parent company, admitted that neither company would necessarily report instances of criminal material they had taken down to the police.