study showed that the predictions of artificial intelligence-based models cannot yet be reliably generalised across different study centres and that these models are "highly context-dependent". The results of the study have been published in the journal Science.
Pooling data from across trials did not help matters either, the team found.
The team of researchers, including scientists from the universities of Cologne (Germany) and Yale (US), tested the accuracy of AI-driven models in predicting the response of patients with schizophrenia to antipsychotic medication across several independent clinical trials.
The current study pertained to the field of precision psychiatry, which uses data-driven models to select targeted therapies and suitable medications for individuals.
"Our goal is to use novel models from the field of AI to treat patients with mental health problems in a more targeted manner," said Joseph Kambeitz, Professor of Biological Psychiatry at the Faculty of Medicine of the University of Cologne and the University Hospital Cologne.
"Although numerous initial studies demonstrate the success of such AI models, their robustness has not yet been established," said Kambeitz, adding that safety was of great importance for everyday clinical use.
"We have strict quality requirements for clinical models, and we also have to ensure that models provide good predictions in different contexts."
The