Artificial intelligence: shaping Europe’s digital future
Expert group publishes ethics guidelines for the implementation of human-centered and trustworthy artificial intelligence (AI) in Europe
To further advance the excellent and trustworthy use of artificial intelligence (AI) across Europe, the “High-Level Expert Group on Artificial Intelligence” has refined the findings of its past year of work, developed them further and made them practically usable. As a member of the expert group, institute director Prof. Wilhelm Bauer brings Fraunhofer IAO’s experience in the human-centered application of AI to its work.
The High-Level Expert Group on Artificial Intelligence (AI HLEG) was launched by the European Commission in June 2018 to help implement Europe’s strategy for artificial intelligence. Its tasks include drafting recommendations on forward-looking policy development and on ethical, legal and social issues, including socio-economic challenges. Following an open selection process, the Commission appointed 52 members to the independent expert group, which comprises representatives from science, industry and civil society. Among them is Prof. Wilhelm Bauer, director of the Fraunhofer Institute for Industrial Engineering IAO in Stuttgart, who serves as one of the German representatives from science. “Through its years of research in various branches of the field of AI, its initiative for the AI innovation center Learning Systems, and its participation in Cyber Valley, Europe’s largest research consortium in the field of AI, Fraunhofer IAO contributes solid experience and expertise in applied research to the work of the expert group,” says Prof. Bauer.
The AI HLEG presented initial findings in mid-2019 and, in wrapping up its work, has now refined those findings, developed them further and made them usable for companies and public organizations.
Ethics guidelines: seven key requirements for self-assessment
The expert group’s final guidelines for a human-centered approach to AI have been available since July 17, when the “Assessment List for Trustworthy Artificial Intelligence (ALTAI) for self-assessment” was published. The list formulates seven key requirements AI systems should meet to be deemed trustworthy: (1) human agency and oversight, (2) technical robustness and safety, (3) privacy and data governance, (4) transparency, (5) diversity, non-discrimination and fairness, (6) environmental and societal well-being and (7) accountability. The ALTAI is intended to ensure that users benefit from AI without being exposed to unnecessary risks; to that end, the experts set out a number of specific steps providers can follow when assessing their own AI applications.
Dynamic checklist for direct use via a web tool
To demonstrate the potential of an assessment list of this kind, the expert group developed a prototype tool aimed at helping AI developers and users put these principles into practice and guiding them in evaluating their own AI systems. To this end, the group broke down the key requirements of the ALTAI into detailed requirements and made these available as a free checklist and an associated web-based tool that interested companies can access and use directly online.
Policy and investment recommendations for trustworthy AI
In June 2019, the expert group presented 33 recommendations on how trustworthy AI can bring about greater sustainability, growth and competitiveness in Europe while at the same time empowering, supporting and protecting people. Prof. Bauer explains how this relates to the current situation: “The COVID-19 pandemic has underscored once again not only the opportunity to make greater use of data and trustworthy AI, but also the challenges and risks that doing so entails in times of rapid change.” The expert group wants its findings to contribute to a fair and thorough examination of the prospects for AI and to an informed public debate.
Contact for scientific information:
Marketing and Communication
Phone +49 711 970-5196