European Commission says Google not doing enough to fight disinformation

In a report published on 17 May, the European Commission said that while Google, Facebook and Twitter had all improved their attempts to fight disinformation online, the ubiquitous search engine was still lacking in transparency regarding its political advertising.

The three online platforms are signatories to the Code of Practice against disinformation and have committed to report monthly on measures taken ahead of the European Parliament elections in May 2019.

This was the fourth of these reports; the final one will be published at the end of June, after the European elections, at which point the Commission will carry out a “comprehensive assessment” of the effectiveness of the Code. Should the results prove unsatisfactory, the EC may “propose further measures, including of a regulatory nature”.

According to the EC, Google reported that it had taken “additional measures” to improve scrutiny of ad placements in the EU, and noted that it had created a publicly accessible political ad library and enabled searches through its API.

The search engine also detailed its ongoing efforts to “provide transparency around issue-based advertising” but said that a solution would not be in place before the European elections. The Commission noted that Google “again” provided data on “the removal of a significant number of YouTube channels for violation of its policies on spam, deceptive practices and scams, and impersonation”.

For its part, Facebook reported on measures it had taken in the EU against ads that violated its policies for containing “low quality, disruptive, misleading or false content or trying to circumvent its systems”, and on the opening of its new elections operations centre in Dublin, Ireland.

The social media giant said it had taken down a “coordinated inauthentic behavior network originating from Russia and focusing on Ukraine” but did not mention whether this network had affected users in the EU.

Twitter reported on ads that had been “rejected for not complying with its policies on unacceptable business practices and quality ads” and “provided information on ads not served because of uncompleted certification process that is obligatory for political campaign advertisers”.

It also detailed a new “election integrity policy” and provided figures on measures against “spam and fake accounts” but did not provide any further insight on these measures, such as how they might relate specifically to activity in the EU.

In a joint statement, the EU’s Vice President for the Digital Single Market Andrus Ansip and three EU Commissioners (Věra Jourová, Julian King and Mariya Gabriel) said they recognized the companies’ continued progress on “their commitments to increase transparency and protect the integrity of the upcoming elections”.

They welcomed the “robust measures that all three platforms have taken against manipulative behavior on their services, including coordinated disinformation operations”, such as the Russian government’s alleged attempts to influence elections in the US and the UK. They characterized the companies’ efforts as a “clear improvement”.

However, they found that the companies needed to do more to “strengthen the integrity” of their services and suggested that the data provided lacked enough detail for “an independent and accurate assessment” of how their policies had actually contributed to reducing the spread of disinformation in the EU.

“We regret . . . that Google and Twitter were not able to develop and implement policies for the identification and public disclosure of issue-based ads, which can be sources of divisive public debate during elections, hence prone to disinformation,” they added.

They called for the companies to “step up” efforts to broaden cooperation with fact checkers in the EU’s member states and to “empower users and the research community” in the wake of the European elections.

The companies need to engage with “traditional media” to develop “transparency and trustworthiness indicators” for information sources so that users are offered “a fair choice of relevant, verified information”, they added.

Finally, they suggested that the companies would also benefit from closer cooperation with the research community to identify and access relevant datasets to enable “better detection and analysis” of disinformation, “better monitoring” of the implementation and impact of the Code, and independent oversight of algorithms.

European Commission announces pilot program for AI ethics guidelines

The European Commission (EC) announced on 8 April that it would launch a pilot program to ensure that ethical guidelines for the development and use of artificial intelligence (AI) can be implemented in practice.

This is the second step in the Commission’s three-part approach to the question of ethical AI, following the development by the High-Level Expert Group of seven key requirements for creating “trustworthy” AI.

These include: human agency and oversight; robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. The Commission added that any AI that can be considered trustworthy should also respect “all applicable law and regulations”.

Industry, research institutes and public authorities have been invited to test an assessment list drafted by the group to complement the guidelines. The 52-strong panel of independent experts was appointed by the Commission in June 2018 and comprises representatives from industry, academia and civil society.

According to the Commission, the third and final step in its plan will be to work on building an “international consensus” on human-centric AI as “technologies, data and algorithms know no borders”.

These plans are a component of the Commission’s overarching “AI strategy”, which aims to increase public and private investments to at least €20 billion annually over the next decade in order to make more data available, foster talent and “ensure trust”.

Members of the group will present their work in detail at the third “Digital Day” in Brussels on 9 April. Following the conclusion of the pilot phase in early 2020, they will review the assessment lists for the key requirements, building on the feedback they receive, after which the Commission plans to evaluate the outcome of the project so far and propose next steps.

The Commission has also pledged that, before autumn 2019, it will launch a set of networks of AI research excellence centres; begin setting up networks of digital innovation hubs; and, together with Member States and stakeholders, start discussions to develop and implement a model for data sharing and for making best use of common data spaces.

“I welcome the work undertaken by our independent experts,” Vice-President for the Digital Single Market Andrus Ansip said in a statement. “The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies.”

For Ansip, ethical AI is a “win-win proposition” that could create a “competitive advantage for Europe” should it become “a leader of human-centric AI that people can trust”.

“Today, we are taking an important step towards ethical and secure AI in the EU,” Commissioner for Digital Economy and Society Mariya Gabriel added. “We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia and civil society.”

The Commission is looking to put these requirements into practice while simultaneously fostering “an international discussion on human-centric AI,” she said.

AI refers to digital systems that show intelligent, human-like behaviour. By analysing their environment they can perform various tasks with some degree of autonomy to achieve specific goals, learning from data to make predictions and deliver useful insights.

The Commission estimates that the economic impact of the automation of knowledge work, robots and autonomous vehicles on the EU will reach between €6.5 and €12 trillion annually by 2025. The body has already invested what it describes as “significant amounts” in the development of AI, cognitive systems, robotics, big data, and future and emerging technologies in a bid to make Europe more competitive in this area.

This includes around €2.6 billion on AI-related areas and €700 million on research programs studying smart robots. The Commission intends to invest further in research and innovation up to and after 2020, including €20 billion per year in combined public and private investment.

However, Europe currently lags behind in private investment in AI, having spent €2.4 to €3.2 billion on development in 2016, compared with €6.5 to €9.7 billion in Asia and €12.1 to €18.6 billion in North America.

In a press release, the Commission acknowledged that while AI has the potential to benefit a wide range of sectors – such as healthcare, climate change, law enforcement and security, and financial risk management, among others – it brings new challenges for the future of work, and raises significant legal and ethical questions.