Explaining Artificial Neural Networks With Decision Tree Ensembles
Date: 2023
Author: Kılıç, Sayit
Access: Open access
Abstract
With the development of efficient algorithms, artificial intelligence (AI) applications have become ubiquitous in almost every aspect of our lives, including critical areas such as the defense industry, economics, and healthcare. However, deploying AI models in these high-stakes domains raises concerns about their reliability, so explaining how these black box models work has become an important goal. In this thesis, we propose a simple and fast model to explain the decisions of any black box model: we approximate the model's basic behavior with a set of semi-random decision trees. Our approach requires only the data used to train the black box model and the model itself. Current state-of-the-art explainable AI (XAI) methods typically produce local explanations of a black box model's decision for a single observation, while methods that produce global explanations rely on complex computations to estimate the effect of each feature on the model's decisions. In contrast, our approach partitions the model's overall decision space into separate regions to explain its decision-making process, and it requires significantly less computational power than other advanced XAI techniques while producing both local and global explanations.
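
To make the idea concrete, the following is a minimal sketch of a generic model-agnostic surrogate approach in this spirit: an ensemble of shallow, randomized decision trees is fitted to the black box model's predictions rather than the true labels, and its agreement with the black box (fidelity) plus its aggregated feature importances serve as global explanations. The dataset, model choices, and all parameters below are illustrative assumptions, not the exact algorithm developed in the thesis.

```python
# Illustrative sketch of a model-agnostic surrogate explanation, NOT the
# thesis's exact algorithm. All names and parameters here are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score

# The two inputs the approach needs: the training data and the black box
# model itself (here, a small neural network on synthetic data).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
black_box = MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=500,
                          random_state=0).fit(X, y)

# Surrogate: an ensemble of shallow, randomized decision trees trained to
# mimic the black box's predictions instead of the ground-truth labels.
surrogate = RandomForestClassifier(n_estimators=50, max_depth=3,
                                   max_features="sqrt", random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how often the surrogate agrees with the black box's decisions.
fidelity = accuracy_score(black_box.predict(X), surrogate.predict(X))
print(f"Surrogate fidelity: {fidelity:.3f}")

# Global explanation: feature importances aggregated over the ensemble.
print("Feature importances:", np.round(surrogate.feature_importances_, 3))
```

Because each tree is shallow, its root-to-leaf paths also define interpretable regions of the decision space, which is what allows a surrogate of this kind to yield local explanations for individual observations as well as global ones.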