Programme led by

Royal College of Surgeons in Ireland, 123 St Stephen's Green, Dublin 2, Ireland

Neural-cognitive Explainable Artificial Intelligence to support diagnostic imaging

We are inviting applications for a 24-month Post-doctoral Research Fellow position as part of the NeuroInsight Marie Skłodowska-Curie COFUND Action for Postdoctoral Fellowships. NeuroInsight is a collaboration between the FutureNeuro SFI Research Centre for Chronic and Rare Neurological Diseases and the Insight SFI Research Centre for Data Analytics.

Our team is interested in validating the potential of neuro-symbolic artificial intelligence for developing more robust, transparent and explainable approaches to support clinical diagnostics from medical imaging.

We believe a holistic and human-centred combination of deep learning with knowledge representation and reasoning would be a game changer for the widespread clinical adoption of deep learning in diagnostic imaging. Specifically, we seek to address two key challenges. The first is to improve the transparency and explainability of deep learning for medical image analysis, enabling debugging and debiasing, including human-centric approaches to effective explanation generation and validation. The second relates to the scarcity of image data available to effectively train deep learning models to support diagnostics for rare and less studied conditions; here, knowledge-driven approaches can compensate for the limited amount of training image data by combining different (possibly multimodal) types of information and knowledge from other sources such as, but not limited to, medical records, published research papers, knowledge bases and clinical studies.

Our team has been working on mapping low-level features from trained deep neural models into concepts and relations. This has made it possible to develop new methods to analyse and interpret deep learning models through graph analysis techniques, to generate factual and counterfactual explanations, and to gain a deeper understanding of the origins of some types of classification errors. Several challenges remain in generalising this approach to architectures beyond CNNs and to tasks beyond classification. The application of our approach to diagnostic imaging is also an open challenge, as expert validation of concepts and relationships and the availability of high-quality domain-specific knowledge bases are paramount.
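For illustration only, and not a description of the team's actual method, the idea of mapping low-level features into concepts and relations amenable to graph analysis can be sketched as follows: binarise per-unit activations into "concept presence" and treat co-activation frequencies as relation strengths. All names, the synthetic activations, and the threshold are assumptions.

```python
import numpy as np

def concept_graph(activations, threshold=0.5):
    """Toy feature-to-concept mapping: binarise unit activations into
    concept presence and build a co-activation matrix whose entries can
    be read as relation strengths between concepts."""
    present = (activations > threshold).astype(float)        # (samples, units)
    cooccur = (present.T @ present) / activations.shape[0]   # co-activation rate
    np.fill_diagonal(cooccur, 0.0)                           # drop self-loops
    return present, cooccur

# hypothetical activations of 3 units over 4 images
acts = np.array([
    [0.9, 0.1, 0.8],
    [0.7, 0.9, 0.2],
    [0.6, 0.8, 0.9],
    [0.2, 0.1, 0.7],
])
presence, graph = concept_graph(acts)
```

In a real setting the binary "concepts" would have to be validated by domain experts and aligned with a medical knowledge base, which is precisely the open challenge noted above.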

More investigation is required in areas such as validation of the extracted structured knowledge against a ground truth, characterisation and mitigation of data bias vs. model bias vs. human bias, and systematic combination of deductive/inductive reasoning with neural learning for a truly explainable and human-centred approach to AI for diagnostic imaging.

We are seeking a postdoctoral researcher interested in working in this area to validate the applicability of knowledge-driven neural-cognitive approaches to support clinicians in diagnostic imaging. The specific topic can vary based on the interests of the candidate, including the mapping of cognitive processes and symbolic knowledge into neural models; multimodal (image, text) explanation generation; semantic concept detection and probabilistic rule extraction from deep representations in the clinical domain; and the human-centred design of assessment and validation metrics.

We have clinical collaborators and research partners who can help with understanding and contextualising the clinical and cognitive knowledge in the domain of interest, provide relevant medical image datasets, and co-design human-centred validation approaches.

If you are interested in applying, for further information, or for help in preparing your fellowship application, please contact Alessandra Mileo – or NeuroInsight –