Please use this identifier to cite or link to this item:
https://hdl.handle.net/11499/58245
Title: Enhancing Software Defect Prediction through Explainable AI: Integrating SHAP and LIME in a Voting Classifier Framework
Authors: Asal, B.; Demir, M.O.
Keywords: Explainable Artificial Intelligence (XAI); SHAP (Shapley Additive Explanations); LIME (Local Interpretable Model-agnostic Explanations); Software Defect Prediction; Software Engineering; Transparency in AI; Voting Classifiers; Interpretability; Prediction Models; Cost Estimation; Decision-Making Process; Application Programs
Publisher: Institute of Electrical and Electronics Engineers Inc.
Abstract: Explainable Artificial Intelligence (XAI) has become increasingly vital in the field of artificial intelligence, as it addresses the critical need for transparency and interpretability in AI models. As AI systems are increasingly deployed in high-stakes environments, understanding the decision-making process of these models is essential for building trust and ensuring responsible AI usage. XAI provides the methods to uncover the underlying mechanisms of AI models, making them more accessible and understandable to users and stakeholders. In the domain of software engineering, AI has emerged as a powerful tool for automating various tasks, including software defect prediction and cost estimation. However, the opaque nature of traditional AI models has raised concerns about their reliability and the ability to validate their outputs. This study focuses on enhancing the transparency of AI applications in software engineering by integrating XAI techniques. We employ a voting classifier trained on the KC2 dataset, coupled with SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) for local and global explainability. Our research demonstrates how the application of SHAP and LIME can provide clear and interpretable insights into the factors driving the predictions of the voting classifier. By making the model's decision-making process more transparent, we enable developers and stakeholders to better understand and trust the AI-driven predictions. This study not only advances the field of software defect prediction but also contributes to the broader adoption of XAI in software engineering, highlighting its importance in creating more reliable, understandable, and trustworthy AI systems. © 2024 IEEE.
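The approach the abstract describes (a voting ensemble for defect prediction, explained with model-agnostic attribution methods) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the data is synthetic (the real KC2 dataset contains NASA static code metrics such as lines of code and cyclomatic complexity), the chosen base estimators are assumptions, and scikit-learn's `permutation_importance` stands in for the SHAP/LIME global attributions used in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for KC2: rows are modules, columns are code
# metrics, the label marks defect-prone modules.
X, y = make_classification(n_samples=500, n_features=8,
                           n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Soft voting averages the base models' predicted probabilities.
voter = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("dt", DecisionTreeClassifier(max_depth=5, random_state=0)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    voting="soft",
)
voter.fit(X_tr, y_tr)

# Model-agnostic global attribution: score drop when each feature
# is shuffled, a rough proxy for SHAP-style importance rankings.
imp = permutation_importance(voter, X_te, y_te,
                             n_repeats=10, random_state=0)
ranking = np.argsort(imp.importances_mean)[::-1]
```

For instance-level explanations of the kind LIME provides, one would instead perturb a single module's feature vector and fit a local surrogate model around `voter.predict_proba`.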
Description: 8th International Artificial Intelligence and Data Processing Symposium (IDAP 2024), 21-22 September 2024, Malatya
URI: https://doi.org/10.1109/IDAP64064.2024.10710700
https://hdl.handle.net/11499/58245
ISBN: 979-833153149-2
Appears in Collections: Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection