YouTube, Facebook and Twitter grilled over abuse faced by British MPs

YouTube, Facebook and Twitter executives have been grilled by members of the British Parliament at a committee hearing over how the social networks handle online abuse levelled at parliamentarians, the BBC reports.

Members of Parliament (MPs) are said to have argued that such hostility undermined democratic principles, with Twitter representative Katy Minshall admitting that it was “unacceptable” that the site had relied wholly on users to flag abuse in the past.

She insisted that the social network’s response to abuse had improved but acknowledged that there was more to be done.

Harriet Harman, chair of the Human Rights Committee, said there was “a strong view amongst MPs generally that what is happening with social media is a threat to democracy”, and SNP MP Joanna Cherry cited specific tweets containing abusive content that were not removed swiftly by Twitter, one of which was only taken down on the evening before the committee hearing.

“I think that’s absolutely an undesirable situation,” Minshall, Twitter’s head of UK government, public policy and philanthropy, said.

In response, Cherry argued it was in fact part of a pattern in which Twitter only reviewed its decisions when pressed by people in public life.

When MPs questioned how useful automated algorithms are for identifying abusive content, Facebook’s UK head of public policy, Rebecca Stimson, admitted that their usefulness is limited, with the platform’s algorithms correctly identifying only around 15% of offensive content as being in breach of the site’s rules.

“For the rest you need a human being to have a look at it at the moment to make that judgement,” she explained.

Labour MP Karen Buck suggested that algorithms might not identify messages such as, “you’re going to get what Jo Cox got” as hostile, referring to the MP Jo Cox who was murdered in June 2016.

“The machines can’t understand what that means at the moment,” Stimson agreed.

Both Stimson and Minshall said that their respective social networks were working to gradually improve their systems, and to implement tools to proactively flag and block abusive content, even before it’s posted.

The committee said it was shocked to learn that none of the companies had policies of reporting criminal material to law enforcement, except in rare cases when there was an immediate threat.

Committee chair Yvette Cooper pressed Facebook’s public policy director, Neil Potts, on whether the company was reporting identities of those trying to upload footage of the Christchurch shooting to the authorities in New Zealand.

Potts said the decisions were made on a “case by case” basis but that Facebook does not “report all crimes to the police”, and that “these are tough decisions to make on our own . . . where government can give us more guidance and scrutiny”.

Representatives of both Twitter and Google, YouTube’s parent company, admitted neither of their companies would necessarily report to the police instances of criminal material they had taken down.

Facebook in trouble over insecure Instagram passwords

Facebook’s mastery of the “news dump” is only getting more and more impressive as the social network attempts to weather a series of scandals that would most likely have brought other companies to their knees some time ago.

On the Thursday before Easter – aka a major holiday weekend in the USA – and a mere hour before the hotly anticipated Mueller report was released to the public, Facebook updated a months-old blogpost entitled “Keeping Passwords Secure” with a couple of lines of italicized text.

The update read: “Since this post was published, we discovered additional logs of Instagram passwords being stored in a readable format. We now estimate that this issue impacted millions of Instagram users”.

You may remember that the original post revealed to the public in March that Facebook had stored passwords for hundreds of millions of its users and “tens of thousands” of Instagram users (Instagram is owned by Facebook) as plain text in a database that could be accessed by over 20,000 of the company’s staff.

Passwords are normally hashed before storage – transformed irreversibly so that no one can read them, even if the file they are kept in is compromised (e.g. by hackers). Storing them in plain text, however, meant that the 20,000-plus Facebook staff with access to the database could hypothetically read the passwords for the affected accounts.
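Facebook has not published the details of its own password-storage scheme, but the safe practice the paragraph above describes can be sketched in a few lines. This is a minimal illustration using Python's standard-library PBKDF2; the function names and the iteration count are illustrative choices, not anything Facebook or Instagram actually uses.

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative work factor; real deployments tune this upward


def hash_password(password, salt=None):
    """Derive a one-way, salted hash of the password for storage."""
    if salt is None:
        salt = os.urandom(16)  # a unique random salt per user defeats precomputed tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest  # store both; neither reveals the password


def verify_password(password, salt, stored_digest):
    """Re-derive the hash from the login attempt and compare in constant time."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, stored_digest)
```

The point of the design is that the server never needs to keep the password itself: at login it re-runs the same derivation and compares digests, so even an employee with full database access sees only salted hashes.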

Back in March, Facebook claimed it was a system issue that had subsequently been fixed. However, the updated blogpost informed readers of a significant increase in affected users, although a company investigation reportedly found no evidence that the information had been abused.

Facebook said that it would notify all users who had been affected by the lapse in security, and experts have recommended that all users change their passwords and set up two-factor authentication for their accounts.

A year of scandal

The company is already under investigation by US government agencies – such as the Department of Justice and the Federal Trade Commission – for its data-collection and privacy practices in the wake of the Cambridge Analytica scandal.

Last March, journalists revealed that Facebook had shared users’ data with an outside app developer without their permission back in 2014. The developer then sold the information to Cambridge Analytica, a data analytics firm that would go on to work with Donald Trump’s 2016 US presidential campaign.

At the time, it was not against Facebook’s rules for the app developer to collect the information but they were not allowed to sell the data. The incident raised serious data privacy concerns and left Facebook trying to explain (and, in some cases, justify) its data collection practices – which have since been changed.

Facebook and the art of the “news dump”

The social media network didn’t just wait for one of the biggest events in recent political history – and one that was certain to divert journalists’ attention in a big way – to alert users to the widened scope of the security breach; it also chose to bury the news in an old press release.

This kind of timing trick is known as a “news dump”, a tactic typically employed by communications departments in both private sector firms and governments, to bury negative news about topics such as hacks, mishandling of customer data, and bad behavior of executives, politicians, or other high profile individuals.

There are a few versions of this tactic, the most common of which is releasing news on a Friday afternoon after the markets close, so that investors have time to digest the news without stock taking a hit, and journalists are heading out for the day.

Sometimes, companies will also hold on to bad news, only sharing it when an unrelated, massive story breaks and has the public’s – and journalists’ – attention. Facebook’s favored method is releasing bad news just before a holiday – as they did in this case.

For example, the company released its tool allowing users to see if they have been exposed to Russian propaganda on the Friday before Christmas 2017, and on the night before the US midterm elections last October, it put out a report saying it had failed to do enough to prevent its use to fuel bloodshed and political division in Myanmar.

For an organization fighting claims of impropriety and careless handling of information, the use of news-burying techniques seems risky at best. Instead of protecting Facebook’s reputation, the tactic risks making the company look even more suspicious, especially if it stops working.