On 21 August 2020, the High-Level Expert Group on Artificial Intelligence (AI HLEG), set up by the European Commission, released its fourth deliverable on sectoral considerations for the Policy and Investment Recommendations for Trustworthy Artificial Intelligence.
The expert group had already provided its Policy and Investment Recommendations for Trustworthy AI in June 2019. Due to the massive development and deployment of AI in some key sectors, ‘‘the AI HLEG took the decision to further discuss three sectors’’:
(i) The Public Sector
(ii) Healthcare
(iii) Manufacturing and the Industrial Internet of Things (IoT)
These further discussions were found ‘‘even more appropriate with the publication of the White Paper on Artificial Intelligence and the Communication on A European Strategy for Data in February 2020’’ by the European Commission. The need for further recommendations was amplified by the COVID-19 pandemic, as healthcare is considered one of the sectors where AI development and deployment could play a key role and have meaningful implications.
The expert group's sectoral considerations took shape during three sets of workshops organized in April 2020, where experts collected feedback from different stakeholders. The discussions focused on the Policy and Investment Recommendations initially published in June 2019.
The synthesis of these sectoral workshops is summarized in this fourth deliverable, which is organized around four axes:
- The Manufacturing and Industrial IoT Sector
- Public Sector: the e-Government domain
- Public Sector: Justice and law-enforcement
- The Healthcare Sector
In this document, ‘‘the AI HLEG presents a number of themes that have emerged throughout the organised workshops”. They are summarized as follows:
- The AI HLEG Policy and Investment Recommendations for Trustworthy AI are perceived as important and relevant in all three sectors.
- There is merit in refining the AI HLEG Policy and Investment Recommendations for Trustworthy AI to account for sectoral specificities.
- Trustworthiness is seen as a crucial feature of European AI:
  - There is a need for safeguards in terms of transparency, explainability and safety
  - There is a need to respect diversity and inclusion
  - There is a need for AI systems to reduce, rather than exacerbate, existing biases and discrimination
  - Transparency and accountability are of central importance
- There is widespread concern about the need to close the skills gap
- Europe should be a leader in responsible research and innovation in the field of AI
- Good governance and the widespread sharing of best practices can promote regulatory certainty
- Data quality, availability and interoperability must be at the core of EU policy.
This document builds on the previous work by offering a concrete and insightful perspective on key sectors, and ‘‘encourages the European Commission to pursue further investigations into the specific sector contexts, to develop appropriate policies to support Trustworthy AI.’’