Farida ZEHRAOUI will defend her Habilitation à Diriger les Recherches on Friday, February 23, 2024: “Interpretability of Deep Learning Models in Personalized Medicine”.

Composition of the HDR jury

  • Younès BENNANI, Full Professor, Université Sorbonne-Paris-Nord: Examiner
  • Cécile CAPONI, Full Professor, Université Aix-Marseille: Rapporteur
  • Antoine CORNUEJOLS, Full Professor, AgroParisTech: Examiner
  • Laurent HEUTTE, Full Professor, Université de Rouen: Examiner
  • Pascale KUNTZ-COSPEREC, Full Professor, École Polytechnique de l’Université de Nantes: Rapporteur
  • Cédric WEMMERT, Full Professor, Université de Strasbourg: Rapporteur

Farida ZEHRAOUI will defend her Habilitation à Diriger les Recherches on Friday, February 23, 2024 at 2:00 p.m. in the petit amphithéâtre of the IBGBI building.

Title: Interpretability of Deep Learning Models in Personalized Medicine

Abstract

Within the realm of healthcare, machine learning (ML) techniques have demonstrated remarkable potential in enhancing diagnostics, treatment strategies, and overall patient care.
Deep learning in particular has shown impressive results in diverse healthcare applications, notably in medical image analysis and recognition.

However, deep learning models are considered “black-box” systems. In healthcare, where interpretability is crucial, understanding how a model arrives at a diagnosis or treatment recommendation is imperative for ensuring trust and widespread adoption.

To address this challenge, we proposed several deep learning and hybrid architectures that are interpretable by design. Our approaches include the following strategies:

– Integrating knowledge graphs into the deep learning architectures, providing concept-based explanations.

– Combining supervised deep learning with an unsupervised explainable model to provide prototype-based explanations (the general idea is sketched after this list).

– Combining symbolic AI approaches with deep learning to ensure rule-based explanations.
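
To make the second strategy concrete, here is a minimal, hypothetical sketch of the generic prototype-based pattern (not the architecture presented in the defense): a network classifies inputs by their similarity to learned prototype vectors, so each prediction can be explained by pointing to its most similar prototypes, i.e. representative cases. All names, layer sizes, and data in the sketch are illustrative assumptions.

```python
# Generic sketch of a prototype-based classifier; names, dimensions, and
# data are illustrative assumptions, not the defended model.
import torch
import torch.nn as nn

class PrototypeClassifier(nn.Module):
    def __init__(self, in_dim: int, n_prototypes: int, n_classes: int,
                 latent_dim: int = 16):
        super().__init__()
        # Encoder maps raw features into a latent space.
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim)
        )
        # Learnable prototypes live in the same latent space as encodings.
        self.prototypes = nn.Parameter(torch.randn(n_prototypes, latent_dim))
        # Class logits are a linear readout of prototype similarities.
        self.readout = nn.Linear(n_prototypes, n_classes)

    def forward(self, x):
        z = self.encoder(x)                     # (batch, latent_dim)
        dist = torch.cdist(z, self.prototypes)  # distance to each prototype
        sim = torch.exp(-dist)                  # similarity scores in (0, 1]
        return self.readout(sim), sim           # logits + per-prototype evidence

model = PrototypeClassifier(in_dim=30, n_prototypes=8, n_classes=2)
x = torch.randn(4, 30)                          # e.g. four patient feature vectors
logits, sim = model(x)
# Each prediction is explained by its most activated prototype (a learned
# "typical case"), which can be inspected or matched to real training samples.
print(sim.argmax(dim=1))
```

Because the logits depend on the input only through prototype similarities, the explanation is part of the decision path itself; this is what “interpretable by design” refers to, in contrast to post-hoc attribution methods.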

These proposed advancements in explainable deep learning models aim to instill trust in model predictions, ensuring their beneficial use in the landscape of personalized medicine.

  • Date: Friday, February 23, 2024, 2:00 p.m.
  • Place: IBISC site IBGBI, petit amphithéâtre, 23 boulevard de France, 91000 ÉVRY-COURCOURONNES
  • IBISC member involved: Farida ZEHRAOUI (MCF Univ. Évry, IBISC AROB@S team)