In a report published on 17 May, the European Commission said that while Google, Facebook and Twitter had all improved their efforts to fight disinformation online, the ubiquitous search engine was still lacking in transparency regarding its political advertising.
The three online platforms are signatories to the Code of Practice against disinformation and have committed to report monthly on measures taken ahead of the European Parliament elections in May 2019.
This was the fourth of these reports, the last of which will be published at the end of June when the European elections are over, at which point the Commission will carry out a “comprehensive assessment” of the effectiveness of the Code. Should the results prove unsatisfactory, the EC may “propose further measures, including of a regulatory nature”.
According to the EC, Google reported that it had taken “additional measures” to improve scrutiny of ad placements in the EU, noting that it had created a publicly accessible political ad library and enabled searches through its API.
The search engine also detailed its ongoing efforts to “provide transparency around issue-based advertising” but said that a solution would not be in place before the European elections. The Commission noted that Google “again” provided data on “the removal of a significant number of YouTube channels for violation of its policies on spam, deceptive practices and scams, and impersonation”.
For its part, Facebook reported on measures it had taken in the EU against ads that violated its policies for containing “low quality, disruptive, misleading or false content or trying to circumvent its systems”, and the opening of its new elections operations center in Dublin, Ireland.
The social media giant said it had taken down a “coordinated inauthentic behavior network originating from Russia and focusing on Ukraine” but did not mention whether this network had affected users in the EU.
Twitter reported on ads that had been “rejected for not complying with its policies on unacceptable business practices and quality ads”, and provided information on ads that were not served because of the uncompleted certification process that is obligatory for political campaign advertisers.
It also detailed a new “election integrity policy” and provided figures on measures against “spam and fake accounts” but did not provide any further insight into these measures, such as how they might relate specifically to activity in the EU.
In a joint statement, the EU’s Vice-President for the Digital Single Market, Andrus Ansip, and three EU Commissioners (Věra Jourová, Julian King and Mariya Gabriel) said they recognized the companies’ continued progress on “their commitments to increase transparency and protect the integrity of the upcoming elections”.
They welcomed the “robust measures that all three platforms have taken against manipulative behavior on their services, including coordinated disinformation operations”, such as the Russian government’s alleged attempts to influence elections in the US and the United Kingdom. They categorized the companies’ efforts as a “clear improvement”.
However, they found that the companies needed to do more to “strengthen the integrity” of their services and suggested that the data provided lacked sufficient detail for “an independent and accurate assessment” of how their policies had actually contributed to reducing the spread of disinformation in the EU.
“We regret . . . that Google and Twitter were not able to develop and implement policies for the identification and public disclosure of issue-based ads, which can be sources of divisive public debate during elections, hence prone to disinformation,” they added.
They called for the companies to “step up” efforts to broaden cooperation with fact checkers in the EU’s member states and to “empower users and the research community” in the wake of the recent European elections.
The companies need to engage with “traditional media” to develop “transparency and trustworthiness indicators” for information sources so that users are offered “a fair choice of relevant, verified information”, they added.
Finally, they suggested that the companies would also benefit from closer cooperation with the research community to identify and access relevant datasets to enable “better detection and analysis” of disinformation, “better monitoring” of the implementation and impact of the Code, and independent oversight of algorithms.