On September 9th, 2020, the Consort-AI extension was published to provide guidelines for reporting clinical trials of interventions involving artificial intelligence. These guidelines complement the CONSORT 2010 statement, which already provides "minimum guidelines for reporting randomized trials".
The Consort-AI extension follows recent warnings from researchers that "the field (of AI) is strewn with poor-quality research". These guidelines invite researchers to provide critical information about their clinical trials involving AI:
"Consort-AI recommends that investigators provide clear descriptions of the AI intervention, including instructions and skills required for use, the setting in which the AI intervention is integrated, the handling of inputs and outputs of the AI intervention, the human-AI interaction and provision of an analysis of error cases". Source: Nature Medicine
According to Prof Alastair Denniston (University of Birmingham), these guidelines are "crucial to making sure AI systems were safe and effective for use in healthcare settings". Consort-AI therefore aims to promote transparency and quality in clinical trials involving AI, and to "assist editors and peer reviewers, as well as the general readership, to understand, interpret and critically appraise the quality of clinical trial design and risk of bias in the reported outcomes".
Similarly, according to Prof Mihaela van der Schaar (Director of the Cambridge Centre for AI in Medicine), Consort-AI is a good step towards effective, transparent, robust and trustworthy AI and machine learning methods:
"Too often, a promising model is undermined when its creators provide it as a 'black box' with minimal consideration for end users such as doctors […]. These new reporting guidelines, which prioritise such concerns by factoring them into a standardised evaluation framework, are a partial but valuable solution that could help catalyse a top-to-bottom transformation of healthcare".