Hinterleitner, Alexander; Bartz-Beielstein, Thomas:
Tuning for Trustworthiness: Balancing Performance and Explanation Consistency in Neural Network Optimization
In: arXiv.org (2025-05-23)
2025-05-23 · Journal article · OA Green
Faculty of Computer Science and Engineering Science » Institute for Data Science, Engineering, and Analytics
Title:
Tuning for Trustworthiness: Balancing Performance and Explanation Consistency in Neural Network Optimization
Author:
Hinterleitner, Alexander (TH Köln)
DHSB-ID
THK0027332
SCOPUS
57224626470
Other
Person affiliated with TH Köln
;
Bartz-Beielstein, Thomas (TH Köln)
DHSB-ID
THK0001582
GND
124999476
ORCID
0000-0002-5938-5158
SCOPUS
57190702501
Other
Person affiliated with TH Köln
Published on:
2025-05-23
OA publication route:
OA Green
arXiv.org ID
Language of the text:
English
Keywords, topic:
XAI ; hyperparameter tuning ; multi-objective optimization ; desirability function ; surrogate modeling
Resource type:
Text
Access Rights:
Open Access
Practice partner:
No
Category:
Research
Included in statistics:
Yes

Abstract (English):

Despite the growing interest in Explainable Artificial Intelligence (XAI), explainability is rarely considered during hyperparameter tuning or neural architecture optimization, where the focus remains primarily on minimizing predictive loss. In this work, we introduce the novel concept of XAI consistency, defined as the agreement among different feature attribution methods, and propose new metrics to quantify it. For the first time, we integrate XAI consistency directly into the hyperparameter tuning objective, creating a multi-objective optimization framework that balances predictive performance with explanation robustness. Implemented within the Sequential Parameter Optimization Toolbox (SPOT), our approach uses both weighted aggregation and desirability-based strategies to guide model selection. Through our proposed framework and supporting tools, we explore the impact of incorporating XAI consistency into the optimization process. This enables us to characterize distinct regions of the architecture configuration space: one region with poor performance and comparatively low interpretability, another with strong predictive performance but weak interpretability due to low XAI consistency, and a trade-off region that balances both objectives by offering high interpretability alongside competitive performance. Beyond introducing this novel approach, our research provides a foundation for future investigations into whether models from the trade-off zone, which balance performance loss and XAI consistency, exhibit greater robustness by avoiding overfitting to training performance, thereby leading to more reliable predictions on out-of-distribution data.
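The two aggregation strategies named in the abstract can be illustrated with a minimal sketch. Everything below is an assumption for illustration, not the paper's actual implementation: XAI consistency is taken here as the mean pairwise Spearman rank correlation between the per-feature attribution vectors produced by different XAI methods, and it is combined with the predictive loss either by a weighted sum or by a Derringer–Suich-style desirability product. The function names (`xai_consistency`, `weighted_objective`, `desirability`) are hypothetical and do not correspond to the SPOT API.

```python
import numpy as np

def _rank(v):
    # Map values to ranks 0..n-1 (ties ignored for this illustration).
    order = np.argsort(v)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(len(v))
    return ranks

def spearman(a, b):
    # Spearman correlation = Pearson correlation of the ranks.
    ra, rb = _rank(a), _rank(b)
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / (np.linalg.norm(ra) * np.linalg.norm(rb)))

def xai_consistency(attributions):
    # Mean pairwise Spearman correlation across attribution methods.
    # attributions: one 1-D per-feature importance vector per XAI method
    # (e.g. saliency, integrated gradients, SHAP). Higher = more agreement.
    n = len(attributions)
    pairs = [spearman(attributions[i], attributions[j])
             for i in range(n) for j in range(i + 1, n)]
    return sum(pairs) / len(pairs)

def weighted_objective(loss, consistency, w=0.5):
    # Strategy 1: weighted aggregation into a single minimization target
    # (loss is minimized, consistency is maximized, so use 1 - consistency).
    return w * loss + (1 - w) * (1 - consistency)

def desirability(value, low, high, maximize=True):
    # Strategy 2 building block: linear one-sided desirability in [0, 1].
    d = min(max((value - low) / (high - low), 0.0), 1.0)
    return d if maximize else 1.0 - d

def overall_desirability(ds):
    # Geometric mean: any single undesirable objective drags the score to 0.
    return float(np.prod(ds) ** (1.0 / len(ds)))

# Example: three hypothetical attribution vectors over five features.
a1 = np.array([0.9, 0.1, 0.5, 0.3, 0.2])
a2 = np.array([0.8, 0.2, 0.6, 0.3, 0.1])
a3 = np.array([0.7, 0.1, 0.4, 0.2, 0.3])

c = xai_consistency([a1, a2, a3])
obj = weighted_objective(loss=0.25, consistency=c, w=0.7)
D = overall_desirability([
    desirability(0.25, 0.0, 1.0, maximize=False),  # low loss is desirable
    desirability(c, 0.0, 1.0, maximize=True),      # high consistency is desirable
])
```

A tuner would minimize `obj` or maximize `D` over the architecture configuration space; the geometric mean in the desirability variant penalizes configurations that score well on one objective but poorly on the other, which matches the trade-off region the abstract describes.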