The European Commission has taken further steps to build trust in artificial intelligence. A high-level group of experts has drawn up key requirements: not legislation, but non-binding guidelines.
The plans presented on Tuesday stem from the AI strategy the European Commission presented in April 2018, which is intended, among other things, to foster talent and increase confidence in the technology.
Many sectors benefit from Artificial Intelligence (AI) in areas such as health care, energy consumption, car safety, agriculture, climate change and financial risk management. AI can also help detect fraud and cyber threats, and enables law enforcement agencies to fight crime more effectively. But AI also brings new challenges for the future of work, and it raises legal and ethical questions.
Andrus Ansip, Vice-President for the Digital Single Market, says these ethical guidelines are neither a luxury nor an optional extra. Only on the basis of trust can our society fully benefit from the technology. Everyone benefits from ethical AI, and that can give Europe a competitive advantage: Europe will become a leader in people-focused AI that can be relied upon.
This summer, the European Commission will launch a pilot phase involving a wide range of stakeholders. Businesses, public authorities and organisations can already join the European AI Alliance.
In addition, to ensure the ethical development of AI, the European Commission will set up a series of networks of top-class AI research centres in autumn 2019, starting with networks of digital innovation hubs.
Seven main issues
- Human agency and oversight: AI systems should enable equitable societies by supporting human agency and fundamental rights. Human autonomy must not be reduced, limited or misled.
- Robustness and safety: Trustworthy AI requires algorithms that are secure, reliable and robust enough to detect and address errors or inconsistencies throughout the life cycle of AI systems.
- Privacy and data governance: Citizens must have full control over their own data, and data relating to them must not be used to harm or discriminate against them.
- Transparency: The traceability of AI systems must be ensured.
- Diversity, non-discrimination and fairness: AI systems must take into account the full range of human abilities, skills and requirements, and they must be accessible.
- Societal and environmental well-being: AI systems should be used to promote positive social change and to enhance sustainability and environmental responsibility.
- Accountability: Mechanisms should be put in place to ensure responsibility and accountability for AI systems and their outcomes.