The European Commission has announced the names of 52 experts from across industry, business and civil society that it has appointed to a new High Level Group on AI, which will feed into its strategy and policymaking around artificial intelligence.
In April the EU’s executive body outlined its approach to AI technology, setting out measures intended to increase public and private investment; prepare for socio-economic changes; and ensure an appropriate ethical and legal framework.
The High Level Group is a key part of the Commission’s AI strategy, as the experts will feed into its policymaking by making detailed recommendations on ethical, legal and societal issues.
The EC put out a call for experts for this “broad multi-stakeholder forum” back in March.
The group announced today comprises 30 men and 22 women, and includes industry representatives from AXA, Bayer, Bosch, BMW, Element AI, Google, IBM, Nokia Bell Labs, Orange, Santander, SAP, Sigfox, STMicroelectronics, Telenor and Zalando.
Google is represented by Jakob Uszkoreit, an AI researcher in the Google Brain team.
Also in the group: Jaan Tallinn, a founding engineer of Skype and Kazaa, and a former investor in and director of the Google-acquired AI company DeepMind.
European civil society bodies represented in the forum include consumer rights group BEUC; digital rights group Access Now; algorithmic transparency advocacy group AlgorithmWatch; the EESC civil society association; the ETUC, which advocates for workers’ rights and well-being; and an Austrian association that supports the blind and visually impaired.
The list also includes representatives from several technology associations, along with political advisers and policy wonks, and academics and legal experts of various stripes.
The full list is here.
Towards a comprehensive AI strategy
Back in April the Commission said it hoped to be able to announce a “coordinated plan on AI” by the end of the year — after saying, in March, that a “comprehensive European strategy on AI” was on the way “in the coming months”.
“As any technology that has a direct impact on people’s lives and work, the emergence of AI also raises legitimate concerns that should be addressed to build trust and raise awareness,” it wrote then. “Given the broad impact AI is expected to have, the full participation of all actors including businesses, academics, policy makers, consumer organisations, trade unions, and other representatives of the civil society is essential.”
The multi-stakeholder forum is also intended to serve as the steering group for the work of another, even broader multi-stakeholder forum — also announced in April, and called the European AI Alliance — which the Commission said will include an online platform to allow for anyone who wants to participate to sign up and join in the discussion.
So the High Level Group is basically an AI expert talking shop intended to support this more public AI talking shop — to try to achieve some kind of pan-EU consensus on how to respond to the myriad socio-economic and ethical challenges that flow from the increasing use and capabilities of autonomous technologies.
In terms of specific tasks for the group, the Commission says it will be tasked to:
- advise it on next steps addressing “AI-related mid to long-term challenges and opportunities”, feeding policy development, legislative evaluation and next-gen digital strategy;
- propose draft AI ethics guidelines — covering issues such as “fairness, safety, transparency, the future of work, democracy and more broadly the impact on the application of the Charter of Fundamental Rights, including privacy and personal data protection, dignity, consumer protection and non-discrimination”;
- and help with “further engagement and outreach mechanisms to interact with a broader set of stakeholders in the context of the AI Alliance, share information and gather their input on the group’s and the Commission’s work”.
Also today, the London-based Chatham House international policy think tank published a report looking at the short- to medium-term policy challenges posed by AI, focusing on military, human security and economic perspectives.
The report warns generally of the need for a framework for better managing the rise of AI to ensure it does not simply serve to reinforce existing inequalities.
Among its specific recommendations are:

- that clear codes of practice be developed for policymakers and states to utilize AI for decision-making purposes;
- that funding be allocated for deploying and developing AI systems with humanitarian goals;
- better education and training for policymakers and technical experts in each other’s respective domains;
- that governments invest in developing and retaining homegrown AI talent and expertise to “become independent of the dominant AI expertise now typically concentrated in the US and China”;
- and that strong relationships be developed between public and private AI developers to ensure innovation driven by the commercial sector permeates to the use of AI in the public sector.