Hinterleitner, Alexander; Bartz-Beielstein, Thomas:
Tuning for Trustworthiness: Balancing Performance and Explanation Consistency in Neural Network Optimization
In: arXiv.org (2025-05-23)
2025-05-23 · Essay / Article in Journal · OA Green
Faculty of Computer Science and Engineering Science » Institut für Data Science, Engineering, and Analytics
Title:
Tuning for Trustworthiness: Balancing Performance and Explanation Consistency in Neural Network Optimization
Author:
Hinterleitner, Alexander (TH Köln)
DHSB-ID
THK0027332
SCOPUS
57224626470
Other
person connected with TH Köln
;
Bartz-Beielstein, Thomas (TH Köln)
DHSB-ID
THK0001582
GND
124999476
ORCID
0000-0002-5938-5158
SCOPUS
57190702501
Other
person connected with TH Köln
Date published:
2025-05-23
Publication Channel:
OA Green
arXiv.org ID
Language of text:
English
Keyword, Topic:
XAI ; hyperparameter tuning ; multi-objective optimization ; desirability function ; surrogate modeling
Type of resource:
Text
Access Rights:
open access
Practice Partner:
No
Category:
Research
Part of statistic:

Abstract in English:

Despite the growing interest in Explainable Artificial Intelligence (XAI), explainability is rarely considered during hyperparameter tuning or neural architecture optimization, where the focus remains primarily on minimizing predictive loss. In this work, we introduce the novel concept of XAI consistency, defined as the agreement among different feature attribution methods, and propose new metrics to quantify it. For the first time, we integrate XAI consistency directly into the hyperparameter tuning objective, creating a multi-objective optimization framework that balances predictive performance with explanation robustness. Implemented within the Sequential Parameter Optimization Toolbox (SPOT), our approach uses both weighted aggregation and desirability-based strategies to guide model selection. Through our proposed framework and supporting tools, we explore the impact of incorporating XAI consistency into the optimization process. This enables us to characterize distinct regions in the architecture configuration space: one region with poor performance and comparatively low interpretability, another with strong predictive performance but weak interpretability due to low XAI consistency, and a trade-off region that balances both objectives by offering high interpretability alongside competitive performance. Beyond introducing this novel approach, our research provides a foundation for future investigations into whether models from the trade-off zone, which balance performance loss and XAI consistency, exhibit greater robustness by avoiding overfitting to training performance, thereby leading to more reliable predictions on out-of-distribution data.
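The abstract defines XAI consistency as the agreement among different feature attribution methods and combines it with predictive loss via weighted aggregation. The sketch below illustrates one plausible reading of that idea, not the paper's exact metrics: consistency is computed as the mean pairwise Spearman rank correlation between attribution vectors, and `tuning_objective` is a hypothetical weighted-aggregation objective (the weight `w` and all attribution values are made up for illustration).

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation between two 1-D attribution vectors."""
    ra = np.argsort(np.argsort(a)) - (len(a) - 1) / 2.0
    rb = np.argsort(np.argsort(b)) - (len(b) - 1) / 2.0
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

def xai_consistency(attributions):
    """Mean pairwise rank correlation among attribution vectors from
    different feature-attribution methods (one sketch of 'agreement')."""
    n = len(attributions)
    pairs = [spearman(attributions[i], attributions[j])
             for i in range(n) for j in range(i + 1, n)]
    return float(np.mean(pairs))

def tuning_objective(loss, consistency, w=0.5):
    """Hypothetical weighted aggregation: minimize loss while
    maximizing consistency (both terms pushed toward 0)."""
    return w * loss + (1 - w) * (1 - consistency)

# Toy example: three hypothetical attribution methods on 5 features.
atts = [
    np.array([0.90, 0.50, 0.10, 0.05, 0.02]),  # e.g. gradient-based
    np.array([0.80, 0.60, 0.15, 0.10, 0.01]),  # e.g. SHAP-like
    np.array([0.85, 0.40, 0.20, 0.03, 0.05]),  # e.g. perturbation-based
]
consistency = xai_consistency(atts)   # close to 1: methods largely agree
score = tuning_objective(loss=0.2, consistency=consistency)
```

In a tuner such as SPOT, a scalarized score like `tuning_objective` (or a desirability transform of loss and consistency) would be the quantity minimized over hyperparameter configurations.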