Abstract
We introduce a novel metric for measuring semantic continuity in Explainable AI (XAI) methods and machine learning models. We posit that for models to be truly interpretable and trustworthy, similar inputs should yield similar explanations, reflecting a consistent semantic understanding. Leveraging XAI techniques, we assess semantic continuity in the task of image recognition. We conduct experiments to observe how incremental changes in input affect the explanations provided by different XAI methods. Through this approach, we aim to evaluate the models' capability to generalize and abstract semantic concepts accurately, and to assess how faithfully different XAI methods capture the model's behaviour. This paper contributes to the broader discourse on AI interpretability by proposing a quantitative measure of semantic continuity for XAI methods, offering insights into the models' and explainers' internal reasoning processes, and promoting more reliable and transparent AI systems.
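To make the idea concrete, the following is a minimal sketch, not the paper's implementation: it treats semantic continuity as the average similarity between attributions computed for incrementally perturbed inputs, using a toy linear model, an input-times-gradient explainer, and cosine similarity. All names (`toy_model`, `input_x_gradient`, `semantic_continuity`), the Gaussian perturbation scheme, and the choice of similarity measure are assumptions for illustration only.

```python
# Minimal sketch (assumptions, not the paper's implementation): semantic
# continuity as the average similarity between explanations of
# incrementally perturbed inputs.
import numpy as np

rng = np.random.default_rng(0)

def toy_model(x, w):
    """Stand-in for a classifier's logit: a linear function of the input."""
    return float(x.ravel() @ w)

def input_x_gradient(x, w):
    """Input-times-gradient attribution; for f(x) = w.x the gradient is w."""
    return x * w.reshape(x.shape)

def cosine_similarity(a, b):
    a, b = a.ravel(), b.ravel()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def semantic_continuity(x0, w, n_steps=10, step_size=0.01):
    """Average explanation similarity along a sequence of small,
    semantics-preserving input changes (hypothetical formulation)."""
    sims = []
    x, prev_expl = x0.copy(), input_x_gradient(x0, w)
    for _ in range(n_steps):
        x = x + step_size * rng.standard_normal(x.shape)  # incremental change
        expl = input_x_gradient(x, w)
        sims.append(cosine_similarity(prev_expl, expl))
        prev_expl = expl
    return float(np.mean(sims))

x0 = rng.random((8, 8))        # toy "image"
w = rng.standard_normal(64)    # toy model weights
print("semantic continuity:", semantic_continuity(x0, w))
```

In an actual experiment, the toy model and explainer would be replaced by a trained image classifier and a real XAI method, and the random perturbations by semantically controlled changes to the input images.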
Original language | English |
---|---|
Title of host publication | Explainable Artificial Intelligence |
Subtitle of host publication | 2nd World Conference, xAI 2024, Proceedings |
Editors | Luca Longo, Sebastian Lapuschkin, Christin Seifert |
Publisher | Springer |
Pages | 308-331 |
Number of pages | 24 |
ISBN (Print) | 9783031637865 |
Publication status | Published - 2024 |
Event | 2nd World Conference on Explainable Artificial Intelligence, xAI 2024, Valletta, Malta, 17 Jul 2024 → 19 Jul 2024 |
Publication series
Name | Communications in Computer and Information Science |
---|---|
Volume | 2153 |
ISSN (Print) | 1865-0929 |
ISSN (Electronic) | 1865-0937 |
Conference/symposium
Conference/symposium | 2nd World Conference on Explainable Artificial Intelligence, xAI 2024 |
---|---|
Country/Territory | Malta |
City | Valletta |
Period | 17/07/24 → 19/07/24 |
Keywords
- Explainable AI
- Machine Learning Interpretability
- Semantic Analysis
- Semantic Continuity
Projects
2 finished projects:

- EU-23033 EFRA (KB-50-005-007)
  Hürriyetoğlu, A. (Project Leader)
  1/01/24 → 31/12/24
  Project: LVVN project
- EU23033 - EFRA (BO-64-101-014)
  van der Velden, B. (Project Leader)
  1/01/23 → 31/12/23
  Project: LVVN project