EU Unveils Ethics Guidelines For Artificial Intelligence
The European Union presented ethics guidelines Monday as it seeks to promote its own artificial intelligence sector, which has fallen behind developments in China and the United States.
The European Commission, the bloc’s executive arm, unveiled a framework aimed at boosting trust in AI by ensuring, for example, data about EU citizens are not used to harm them.
“Ethical AI is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric AI that people can trust,” Commission Vice President Andrus Ansip said.
The guidelines list seven key requirements for “trustworthy AI” established by independent experts consulted by the Commission.
Among them is a requirement that data about citizens not be used to harm or discriminate against them.
The measures also call for mechanisms to ensure accountability for AI systems and for AI algorithms to be secure and reliable enough to deal with errors or inconsistencies.
The Commission now aims to launch a pilot phase in which industry, research, and public authorities test the list of key requirements.
The pilot phase will also involve companies from other countries and international organisations.
The Commission aims to deepen cooperation with "like-minded partners" such as Japan, Canada and Singapore, and to continue working with the G7 and G20 groups of leading economies.
The updated guidelines flow from the Commission’s AI strategy unveiled in April last year, which aimed to bring public and private investment in the sector to at least 20 billion euros annually over the next decade.
Europe is trying to catch up with both the US and China.
A study published last month showed that China is poised to overtake the United States in artificial intelligence with a surge in academic research on the key technology.
A burgeoning sector, AI is already used to recognise people in photos, filter unwanted content from online platforms and enable cars to drive themselves.