
Explainable AI based on Concepts and Prototypes: Potential in Natural Disaster Management

Jawher Said, Fraunhofer Heinrich Hertz Institute

Maximilian Dreyer, Fraunhofer Heinrich Hertz Institute

Ekkehard Schnoor, Fraunhofer Heinrich Hertz Institute

Introduction

In the past decade, deep learning has made remarkable advancements in various fields, but neural networks are often considered black boxes due to their complex inner workings. The emerging field of explainable artificial intelligence (XAI) seeks to make these models more transparent [2], helping them comply with the new legal requirements under the EU AI Act. The TEMA project aims to apply XAI methods to natural disaster management, potentially enhancing prediction accuracy and assisting emergency responders in making informed decisions.

Layer-wise Relevance Propagation

Traditional post-hoc attribution methods, such as Layer-wise Relevance Propagation (LRP), explain the predictions a pre-trained model makes on individual inputs. LRP [1] assigns relevance scores to input features based on their contribution to the final prediction by propagating the prediction back through the network. LRP has proven effective at identifying important input features and increasing the transparency of deep learning models, as well as at revealing undesired classification strategies caused by spurious correlations in the data.
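To make the propagation idea concrete, the following is a minimal sketch of the LRP-epsilon rule for a small fully-connected ReLU network; the layer shapes, function names, and the toy usage at the end are illustrative and do not reflect the implementations used in the cited works.

```python
import numpy as np

def lrp_epsilon(weights, biases, activations, relevance_out, eps=1e-6):
    """Propagate output relevance back to the input features of a
    fully-connected ReLU network with the LRP-epsilon rule.

    weights[l], biases[l] : parameters of linear layer l
    activations[l]        : input to layer l (activations[0] is the network input)
    relevance_out         : relevance at the output, e.g. the logit of the
                            predicted class in an otherwise zero vector
    """
    R = relevance_out
    for W, b, a in zip(reversed(weights), reversed(biases), reversed(activations)):
        z = a @ W + b                                    # pre-activations of this layer
        s = R / (z + eps * np.where(z >= 0, 1.0, -1.0))  # stabilised relevance/activation ratio
        R = a * (W @ s)                                  # redistribute onto the layer input
    return R  # one relevance score per input feature (e.g. per pixel)

# Toy usage with random weights (shapes are illustrative).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)
x = rng.normal(size=4)
a1 = np.maximum(0.0, x @ W1 + b1)        # hidden ReLU activations
logits = a1 @ W2 + b2
R_out = np.zeros_like(logits)
R_out[logits.argmax()] = logits.max()    # start the backward pass from the predicted class only
input_relevance = lrp_epsilon([W1, W2], [b1, b2], [x, a1], R_out)
```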

Concept-Based Approaches and Prototypes: Applications in Natural Disaster Management

Recent advances in XAI go far beyond traditional post-hoc attribution methods. The Concept Relevance Propagation (CRP) approach [3,4] aims to find human-understandable concepts encoded in latent layers, based on the principle of relevance maximisation. It performs a backward pass similar to LRP, but conditioned on concepts encoded within the network. In this way, it addresses the limitations of pixel-wise explanations by combining local and global perspectives, allowing the user to understand not only "where" but also "what" the model is focusing on. In segmentation models, it provides concept-specific explanations that reveal hidden context bias [3]. A further development is Prototypical Concept-based eXplanations (PCX) [5], which builds upon CRP by clustering the latent relevances of each class with a Gaussian Mixture Model (GMM). Defining prototypes as the samples closest to the means of the individual Gaussians, PCX can illustrate (un)typical model behaviour.
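As an illustration of this prototype step, the sketch below assumes that per-sample concept-relevance vectors for one class (e.g. obtained via a CRP backward pass) are already collected in a matrix R of shape (n_samples, n_concepts); the function and parameter names are ours and do not correspond to the PCX reference implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def extract_prototypes(R, n_prototypes=2, random_state=0):
    """Fit a class-wise GMM to concept-relevance vectors and return, for each
    mixture component, the index of the sample closest to the component mean,
    which serves as that component's prototype."""
    gmm = GaussianMixture(n_components=n_prototypes,
                          covariance_type="full",
                          random_state=random_state).fit(R)
    prototype_indices = [int(np.argmin(np.linalg.norm(R - mean, axis=1)))
                         for mean in gmm.means_]
    return gmm, prototype_indices
```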

This approach helps to identify and explain the underlying concepts that contribute most significantly to a model's decision-making process. It enables a quicker understanding of model strategies by identifying similar explanations. PCX helps to understand model behaviour, spot spurious behaviour or data-quality issues, catch outlier predictions, and identify missing or overused features in predictions. It requires less human effort than studying individual predictions and can help ensure trust and safety in AI systems. This high potential for automation makes the method interesting for applications such as natural disaster management (NDM). For example, PCX-based outlier detection may automatically raise warning messages, focusing the attention of emergency staff in critical situations only on the relevant information. As a practical example, consider a fire segmentation task in which houses with orange roofs may be misclassified as fire.
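One way such a warning could be realised on top of the class-wise GMM from the previous sketch is to flag predictions whose concept-relevance pattern is unlikely under the fitted mixture; the percentile-based threshold below is an illustrative choice, not a value taken from the cited work.

```python
import numpy as np

def flag_outliers(gmm, R_train, R_new, percentile=5.0):
    """Return a boolean mask marking samples in R_new whose log-likelihood
    under the class-wise GMM falls below a threshold derived from the
    training relevances R_train; flagged predictions would trigger a
    warning to the emergency staff for manual review."""
    threshold = np.percentile(gmm.score_samples(R_train), percentile)
    return gmm.score_samples(R_new) < threshold
```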

In our latest study, we applied PCX to an EfficientNet-B0 model trained on the fire dataset to better understand how it discerns "fire" from "non-fire" images, identifying and quantifying the contribution of distinct visual prototypes within each class. In the "fire" class, the model relies on distinct visual indicators of fire presence, captured by two primary prototypes (see also the sketch after the list):

  1. Prototype 1 (Focus on Fire; 64% coverage): This prototype concentrates on the direct visual indicators of active fire within images, with critical concepts outlined below:
  • Concept 366: relevance score of 0.8
  • Concept 203: relevance score of 0.6
  • Concept 280: relevance score of 0.6

  2. Prototype 0 (Focus on Smoke; 36% coverage): This prototype captures the presence of smoke, a crucial secondary indicator of fire, with key concepts that identify its distinct visual characteristics:
  • Concept 250: relevance score of 0.9
  • Concept 243: relevance score of 0.6
  • Concept 504: relevance score of 0.6
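Coverage percentages and per-concept relevance scores of this kind can be read off from the fitted class-wise mixture, as in the sketch below; the snippet is illustrative and does not reproduce the exact numbers reported above.

```python
import numpy as np

def summarise_prototypes(gmm, R, top_k=3):
    """For each prototype (mixture component), report the fraction of class
    samples assigned to it ('coverage') and its top-k concepts ranked by
    mean relevance within the component."""
    assignments = gmm.predict(R)                 # hard component assignment per sample
    summary = []
    for k, mean in enumerate(gmm.means_):
        top = np.argsort(mean)[::-1][:top_k]     # most relevant concept indices
        summary.append({"prototype": k,
                        "coverage": float(np.mean(assignments == k)),  # e.g. 0.64 -> 64 %
                        "top_concepts": top.tolist(),
                        "mean_relevance": mean[top].round(2).tolist()})
    return summary
```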

References

  1. Bach, Sebastian, et al. "On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation." PLoS ONE 10.7 (2015): e0130140.
  2. Samek, Wojciech, et al. "Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications." Proceedings of the IEEE 109.3 (2021): 247-278.
  3. Dreyer, Maximilian, et al. "Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
  4. Achtibat, Reduan, et al. "From attribution maps to human-understandable explanations through concept relevance propagation." Nature Machine Intelligence 5.9 (2023): 1006-1019.
  5. Dreyer, Maximilian, et al. "Understanding the (extra-) ordinary: Validating deep model decisions with prototypical concept-based explanations." arXiv preprint arXiv:2311.16681 (2023).
