On 22 September 2020, the United Nations Institute for Disarmament Research (UNIDIR) published a new report on predictability and understandability in military AI: The Black Box, Unlocked.
This report is at “the heart of the ongoing discussion about lethal autonomous weapon systems and other forms of military AI”, as its publication coincided with the first 2020 session of the Group of Governmental Experts on Lethal Autonomous Weapons Systems (GGE on LAWS) under the Convention on Certain Conventional Weapons (CCW), held from 21 to 25 September in Geneva.
In the first two chapters, the report presents key takeaways on the notions of predictability and understandability, the factors that shape them, and how they relate to each other, drawing in particular on the civilian AI field.
In the third chapter, the report examines each notion in greater depth, explaining its respective role at the different stages of use of a military system: before, during, and after the employment of AI.
Furthermore, UNIDIR gives additional guidance on what is considered an appropriate level of predictability or understandability, through the testing, training, and standards that should be developed to implement these concepts.
Finally, a technical approach to explainability is provided, as well as “five avenues for action” advising to:
1. Adopt a common taxonomy and framing of predictability and understandability.
2. Explore non-military initiatives on AI understandability and predictability.
3. Study the factors that determine appropriate levels of understandability and predictability.
4. Develop standardized metrics to grade predictability and understandability.
5. Assess the viability of training and testing regimes that can engender robust AI understanding and account for AI unpredictability.
MEB.
Source: https://unidir.org/publication/black-box-unlocked