Twitter acquires deep-learning start-up Fabula AI

Social media giant Twitter announced on 3 June that it had acquired London-based deep learning start-up Fabula AI in an attempt to boost its machine learning expertise, feeding into an internal research group led by the company’s senior director of engineering Sandeep Pandey.

The research group’s stated aim is to “continually advance the state of machine learning, inside and outside Twitter”, focusing on “a few key strategic areas such as natural language processing, reinforcement learning, [machine learning] ethics, recommendation systems, and graph deep learning”.

Fabula AI’s researchers specialise in using graph deep learning to detect network manipulation. The approach applies machine learning techniques to network-structured data, making it possible to analyse very large and complex datasets describing relations and interactions, and to extract signals that traditional machine learning techniques cannot.
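At its core, graph deep learning updates each node’s representation by aggregating the features of its neighbours, so the structure of the network itself feeds into the learned signal. The sketch below is a toy illustration of one common formulation, a graph convolutional layer in the style of Kipf and Welling; it is our own example, not Fabula AI’s model, and the graph, features and weights are made up.

```python
# Minimal sketch of a graph-convolution (message-passing) layer in NumPy.
# Illustrative only: the graph, features and weights are toys.
import numpy as np

# Toy interaction graph: 4 accounts, an edge where one interacts with another.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)

X = np.random.rand(4, 8)   # one 8-dimensional feature vector per account
W = np.random.rand(8, 4)   # learned weights (random here for illustration)

# Symmetrically normalised adjacency with self-loops, as in a standard GCN:
# H = ReLU(D^-1/2 (A + I) D^-1/2 X W)
A_hat = A + np.eye(4)
d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
H = np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

print(H.shape)  # (4, 4): each account's new embedding mixes in its neighbours'
```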

Twitter described the acquisition as a “strategic investment” and a “key driver” as the company works to “help people feel safe on Twitter and help them see relevant information”. Financial terms of the deal were not disclosed.

“Specifically, by studying and understanding the Twitter graph, comprised of the millions of Tweets, Retweets and Likes shared on Twitter every day, we will be able to improve the health of the conversation, as well as products including the timeline, recommendations, the explore tab and the onboarding experience,” the social network said.
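Concretely, an interaction graph of this kind treats accounts as nodes and each Tweet, Retweet or Like as an edge between them. The snippet below is a toy illustration of ours, with made-up account names, showing how such a graph might be assembled with the networkx library:

```python
# Toy illustration (ours, not Twitter's) of an interaction graph:
# accounts are nodes, and each retweet or like adds a directed edge.
import networkx as nx

G = nx.DiGraph()
interactions = [
    ("alice", "bob",   "retweet"),
    ("alice", "carol", "like"),
    ("dave",  "bob",   "retweet"),
]
for src, dst, kind in interactions:
    G.add_edge(src, dst, kind=kind)

print(G.number_of_nodes(), G.number_of_edges())  # 4 nodes, 3 edges
```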

Fabula was founded by Michael Bronstein, Damon Mannion, Federico Monti and Ernesto Schmitt. It is led today by Bronstein, who serves as chief scientist, and Monti, now the company’s chief technologist; the two began collaborating while at the University of Lugano, Switzerland.

“We are really excited to join the ML research team at Twitter, and work together to grow their team and capabilities,” Bronstein said in a post on Twitter’s blog. “Specifically, we are looking forward to applying our graph deep learning techniques to improving the health of the conversation across the service.”

Bronstein is currently the Chair in Machine Learning & Pattern Recognition at Imperial College, and will remain in that position while leading graph deep learning research at Twitter. He will be joined by long-time collaborators from academia (including current or former students) who research advances in geometric deep learning.

Twitter – along with other social media platforms and internet search engines – has recently come under fire from the media, academics and politicians for its perceived failure to deal properly with abuse and hate on its platform. It has previously been criticised for failing to take action against accounts that spread hate speech, and it still does not have a clear policy in place for dealing with white supremacist accounts.

YouTube, Facebook and Twitter grilled over abuse faced by British MPs

YouTube, Facebook and Twitter executives have been grilled by members of the British Parliament at a committee hearing over how the social networks handle online abuse levelled at parliamentarians, the BBC reports.

Members of Parliament (MPs) are said to have argued that such hostility undermined democratic principles, with Twitter representative Katy Minshall admitting that it was “unacceptable” that the site had relied wholly on users to flag abuse in the past.

She insisted that the social network’s response to abuse had improved but acknowledged that there was more to be done.

Harriet Harman, chair of the Joint Committee on Human Rights, said there was “a strong view amongst MPs generally that what is happening with social media is a threat to democracy”. SNP MP Joanna Cherry cited specific abusive tweets that Twitter had been slow to remove, one of which was taken down only on the evening before the committee hearing.

“I think that’s absolutely an undesirable situation,” Minshall, Twitter’s head of UK government, public policy and philanthropy, said.

In response, Cherry argued it was in fact part of a pattern in which Twitter only reviewed its decisions when pressed by people in public life.

When MPs questioned how useful automated algorithms are for identifying abusive content, Facebook’s UK head of public policy, Rebecca Stimson, admitted that their usefulness is limited: the platform’s algorithms correctly identify only around 15% of offensive content as being in breach of the site’s rules.

“For the rest you need a human being to have a look at it at the moment to make that judgement,” she explained.

Labour MP Karen Buck suggested that algorithms might not identify messages such as “you’re going to get what Jo Cox got” as hostile, referring to the MP Jo Cox, who was murdered in June 2016.

“The machines can’t understand what that means at the moment,” Stimson agreed.

Both Stimson and Minshall said that their respective social networks were working to gradually improve their systems, and to implement tools to proactively flag and block abusive content, even before it’s posted.

The committee said it was shocked to learn that none of the companies had a policy of reporting criminal material to law enforcement, except in rare cases where there was an immediate threat.

Committee chair Yvette Cooper pressed Facebook’s public policy director, Neil Potts, on whether the company was reporting identities of those trying to upload footage of the Christchurch shooting to the authorities in New Zealand.

Potts said the decisions were made on a “case by case” basis but that Facebook does not “report all crimes to the police”, and that “these are tough decisions to make on our own … where government can give us more guidance and scrutiny”.

Representatives of both Twitter and Google, YouTube’s parent company, admitted neither of their companies would necessarily report to the police instances of criminal material they had taken down.