Explainable Artificial Intelligence for Improving Natural Disaster Management

Ekkehard Schnoor, Researcher at Fraunhofer Heinrich Hertz Institute

Johanna Vielhaben, PhD candidate at Fraunhofer Heinrich Hertz Institute

In the past decade, we have witnessed spectacular breakthroughs achieved by deep learning in various applied and scientific domains. Despite their empirical success, neural networks are notorious for their black-box nature: due to their inherent complexity, their inner workings are not readily comprehensible to human users. The emerging field of explainable artificial intelligence (XAI) aims to make the decision-making of such opaque models more transparent, improving their trustworthiness and usefulness. Under the EU AI Act, meeting explainability standards has recently become a legal requirement, in particular for high-stakes applications.

In the TEMA project, we aim to employ such methods in the context of natural disaster management. Natural disasters, including floods, wildfires, and heatwaves, increasingly affect many parts of the world, including Europe, causing widespread damage and loss of life. Integrating methods from explainable AI into prediction systems may help to forecast natural disasters more precisely and support emergency responders in making more informed decisions when managing them.

Traditional post-hoc Attribution Methods

Applications of LRP explanations in the sciences (Samek et al., 2021).

Among XAI methods, traditional post-hoc attribution methods aim to explain the prediction of a pre-trained model on a single input by assigning scores that quantify how much each input feature contributed to that prediction. Such techniques are also called "local" methods, as they provide explanations for a single example. A popular method to obtain such explanations is Layer-wise Relevance Propagation (LRP), developed at the Fraunhofer Heinrich Hertz Institute (Bach et al., 2015), one of the many European universities, research institutes, and companies that have joined the TEMA project. The basic idea behind LRP is to assign a relevance score to each input feature based on its contribution to the final prediction. This is done by propagating the prediction backwards through the network: at each layer, the prediction score is redistributed onto the neurons of the lower layer according to their contributions. LRP has been shown to be effective in identifying which input features were most relevant to a given prediction and can be used to increase the transparency and interpretability of deep learning models in various domains. Furthermore, it has been able to reveal undesired classification strategies arising from spurious correlations in the data, such as copyright tags in images (Lapuschkin et al., 2019).
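
To make this redistribution rule concrete, the following minimal NumPy sketch implements the LRP-ε variant for a dense layer and applies it to a toy two-layer network. It is an illustration only: the layer shapes, random weights, and the stabilizer value `eps` are assumptions made for the example, not part of any TEMA or Fraunhofer implementation.

```python
import numpy as np

def lrp_epsilon_dense(a, W, b, R_out, eps=1e-6):
    """Redistribute relevance from a dense layer's outputs back to its
    inputs using the LRP-epsilon rule.

    a     : (d_in,)        activations entering the layer
    W     : (d_in, d_out)  weight matrix
    b     : (d_out,)       bias vector
    R_out : (d_out,)       relevance assigned to the layer's outputs
    """
    z = a @ W + b                                  # pre-activations of the layer
    z = z + eps * np.where(z >= 0, 1.0, -1.0)      # stabilizer avoids division by zero
    s = R_out / z                                  # relevance "messages" per output neuron
    c = W @ s                                      # backward step: contributions per input
    return a * c                                   # relevance of each input feature

# Toy usage: propagate the predicted class score of a small two-layer
# network back to the input features, layer by layer.
rng = np.random.default_rng(0)
x = rng.normal(size=8)                             # input features
W1, b1 = rng.normal(size=(8, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 3)), np.zeros(3)

h = np.maximum(x @ W1 + b1, 0)                     # hidden ReLU layer
y = h @ W2 + b2                                    # output scores

R2 = np.zeros(3)
R2[np.argmax(y)] = y[np.argmax(y)]                 # start from the predicted class score
R1 = lrp_epsilon_dense(h, W2, b2, R2)              # relevance of hidden neurons
R0 = lrp_epsilon_dense(x, W1, b1, R1)              # relevance of the input features
print(R0)
```

The relevance scores in R0 sum approximately to the original prediction score, which is the conservation property that makes the resulting heatmaps interpretable as a decomposition of the model output.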

From "Where“ to "What“: Concept-based Approaches

Concept-specific explanations via CRP reveal hidden context bias in segmentation models, here the jumping bars for the segmentation of the horse object (Dreyer et al., 2023). Segmentation models play an important role in natural disaster management.

Pixel-wise explanations only provide local information about which input features are important for the model's decision, but not what these features represent. Thus, they leave the burden of interpretation to the user. Moreover, for tasks like segmentation and object detection, which are of great importance in the context of natural disaster management in the TEMA project, classical heatmaps will typically resemble the output of the model itself (Dreyer et al., 2023). To address these limitations, the Concept Relevance Propagation (CRP) approach (Achtibat et al., 2022; Dreyer et al., 2023) extends the original LRP method and combines the local and global perspectives of XAI. With CRP, one can answer not only "where" but also "what" questions for individual predictions, without imposing additional constraints on the model. CRP also introduces the principle of "relevance maximization", which finds representative examples of encoded concepts based on their usefulness to the model. Through a structured analysis of the latent features learned by the model, one can gain deep insights into its representations and reasoning. Here, concept atlases, composition analyses, and quantitative investigations of concept subspaces play an important role. A minimal sketch of this conditioning idea follows below.
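
As a rough sketch of the conditioning idea behind CRP, the snippet below reuses the lrp_epsilon_dense helper and the toy network from the LRP example above. It treats a single hidden neuron as a "concept" (a simplification; in deep networks concepts typically correspond to channels or groups of neurons), masks the backward pass so that only relevance flowing through that neuron reaches the input, and ranks reference samples by the relevance the neuron receives. This is a simplified stand-in for relevance maximization under these assumptions, not the reference CRP implementation.

```python
import numpy as np
# Assumes lrp_epsilon_dense and the toy network (x, W1, b1, W2, b2)
# from the LRP sketch above are available.

def concept_conditional_relevance(x, W1, b1, W2, b2, concept_id):
    """Backward pass conditioned on one hidden neuron ("concept"):
    only relevance flowing through that neuron reaches the input."""
    h = np.maximum(x @ W1 + b1, 0)
    y = h @ W2 + b2
    R2 = np.zeros_like(y)
    R2[np.argmax(y)] = y[np.argmax(y)]              # start from predicted class score
    R1 = lrp_epsilon_dense(h, W2, b2, R2)           # relevance of hidden "concepts"
    mask = np.zeros_like(R1)
    mask[concept_id] = 1.0                          # condition on a single concept
    return lrp_epsilon_dense(x, W1, b1, R1 * mask)  # "where" this concept is used

def relevance_maximization(samples, W1, b1, W2, b2, concept_id, k=5):
    """Rank reference samples by how much relevance the chosen concept
    receives and keep the top-k as its representative examples."""
    scores = []
    for x in samples:
        h = np.maximum(x @ W1 + b1, 0)
        y = h @ W2 + b2
        R2 = np.zeros_like(y)
        R2[np.argmax(y)] = y[np.argmax(y)]
        scores.append(lrp_epsilon_dense(h, W2, b2, R2)[concept_id])
    order = np.argsort(scores)[::-1]
    return [samples[i] for i in order[:k]]
```

The first function answers the "where" question for a single concept, while the second collects the examples that best illustrate "what" the concept encodes; together they mirror, in miniature, the local and global perspectives that CRP combines.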

Conclusion


Explainable Artificial Intelligence (XAI) has the potential to improve natural disaster management by providing decision-makers with more transparent and interpretable AI systems, provided it is effectively integrated into disaster management practices. Moreover, XAI can help build trust in AI systems, which is critical in emergency situations. It can also help identify where AI systems make errors or exhibit biases, and how to address them. By improving the transparency and interpretability of AI systems, XAI can also facilitate collaboration and communication between different stakeholders, including emergency responders, government agencies, and affected communities. However, despite significant progress in recent years, many challenges remain, such as explainability for the transformer models that have recently gained popularity.

References

  • Bach, Sebastian, et al. "On pixel-wise explanations for non-linear classifier decisions by layer-wise relevance propagation." PLoS ONE 10.7 (2015): e0130140.
  • Samek, Wojciech, et al. "Explaining Deep Neural Networks and Beyond: A Review of Methods and Applications." Proceedings of the IEEE 109.3 (2021): 247-278.
  • Lapuschkin, Sebastian, et al. "Unmasking Clever Hans predictors and assessing what machines really learn." Nature Communications 10 (2019).
  • Dreyer, Maximilian, et al. "Revealing Hidden Context Bias in Segmentation and Object Detection through Concept-specific Explanations." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2023.
  • Achtibat, Reduan, et al. "From 'where' to 'what': Towards human-understandable explanations through concept relevance propagation." arXiv preprint arXiv:2206.03208 (2022).