The European Commission has announced the launch of a pilot project intended to test draft ethical rules for developing and applying artificial intelligence technologies to ensure they can be implemented in practice.
It’s also aiming to garner feedback and encourage international consensus-building around what it dubs “human-centric AI” — targeting, among other talking shops, the forthcoming G7 and G20 meetings to drive discussion on the topic.
The Commission’s High Level Expert Group on AI — a body of 52 experts from across industry, academia and civil society, announced last summer — published its draft ethics guidelines for trustworthy AI in December.
A revised version of the document was submitted to the Commission in March. It boils the expert input down to a set of seven “key requirements” for trustworthy AI, over and above the need for machine learning technologies to respect existing laws and regulations, namely:
- Human agency and oversight: “AI systems should enable equitable societies by supporting human agency and fundamental rights, and not decrease, limit or misguide human autonomy.”
- Robustness and safety: “Trustworthy AI requires algorithms to be secure, reliable and robust enough to deal with errors or inconsistencies during all life cycle phases of AI systems.”
- Privacy and data governance: “Citizens should have full control over their own data, while data concerning them will not be used to harm or discriminate against them.”
- Transparency: “The traceability of AI systems should be ensured.”
- Diversity, non-discrimination and fairness: “AI systems should consider the whole range of human abilities, skills and requirements, and ensure accessibility.”
- Societal and environmental well-being: “AI systems should be used to enhance positive social change and enhance sustainability and ecological responsibility.”
- Accountability: “Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.”
The next stage of the Commission’s strategy to foster ethical AI is to see how the draft guidelines operate in a large-scale pilot with a wide range of stakeholders, including international organizations and companies from outside the bloc itself.
The Commission says the pilot phase will launch this summer. It’s asking companies, public administrations and organisations to sign up to its forum on AI, the European AI Alliance, and receive notification when the pilot starts.
Members of its AI high-level expert group will also present and explain the guidelines to relevant stakeholders in Member States. Members of the group are due to present their work in detail during the third Digital Day in Brussels tomorrow.
Here’s how the Commission explains its plan for the pilot in an official communication on “building trust in human-centric artificial intelligence”:
This work will have two strands: (i) a piloting phase for the guidelines involving stakeholders who develop or use AI, including public administrations, and (ii) a continued stakeholder consultation and awareness-raising process across Member States and different groups of stakeholders, including industry and service sectors:
- (i) Starting in June 2019, all stakeholders and individuals will be invited to test the assessment list and provide feedback on how to improve it. In addition, the AI high-level expert group will set up an in-depth review with stakeholders from the private and the public sector to gather more detailed feedback on how the guidelines can be implemented in a wide range of application domains. All feedback on the guidelines’ workability and feasibility will be evaluated by the end of 2019.
- (ii) In parallel, the Commission will organise further outreach activities, giving representatives of the AI high-level expert group the opportunity to present the guidelines to relevant stakeholders in the Member States, including industry and service sectors, and providing these stakeholders with an additional opportunity to comment on and contribute to the AI guidelines.
Commenting in a statement, VP for the Digital Single Market, Andrus Ansip, said: “The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies. Ethical AI is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust.”
“Today, we are taking an important step towards ethical and secure AI in the EU,” added Mariya Gabriel, commissioner for Digital Economy and Society, in another supporting statement. “We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia and civil society. We will now put these requirements to practice and at the same time foster an international discussion on human-centric AI.”
Following the pilot phase, in early 2020, the Commission says the expert AI group will review the assessment lists for the key requirements, building on the feedback received. The Commission says it will then evaluate the outcome of that review and propose any next steps.
By fall 2019, it also intends to launch a set of networks of AI research excellence centres. Networks of digital innovation hubs are also slated to be set up, with a goal of fostering discussions between Member States and stakeholders to develop and implement a model for data sharing and to make best use of common data spaces.
The plans fall under the Commission’s AI strategy of April 2018, which aims to increase public and private AI investment to at least €20BN annually over the next decade, make more data available, foster talent and ensure trust.