With the growing use of machine learning models and the accelerated development of increasingly complex ones, there is a need for efficient solutions in the field of explainable artificial intelligence. In this work, we propose a new model-explanation method that couples state-of-the-art feature-importance extraction methods with clustering and a novel approach to feature generalization based on machine-readable domain knowledge. We demonstrate the methodology on one artificially created domain and two real domains. We evaluate the quality of the explanations with the help of domain experts, who judged the generalizations sensible but too general for researchers' needs. We propose potential use cases for other groups of users. The method is implemented as a publicly available Python library.