Please use this identifier to cite or link to this item: https://hdl.handle.net/11499/58245
Full metadata record
DC Field: Value [Language]
dc.contributor.author: Asal, B.
dc.contributor.author: Demir, M.O.
dc.date.accessioned: 2024-11-20T18:04:21Z
dc.date.available: 2024-11-20T18:04:21Z
dc.date.issued: 2024
dc.identifier.isbn: 979-833153149-2
dc.identifier.uri: https://doi.org/10.1109/IDAP64064.2024.10710700
dc.identifier.uri: https://hdl.handle.net/11499/58245
dc.description: 8th International Artificial Intelligence and Data Processing Symposium, IDAP 2024 -- 21 September 2024 through 22 September 2024 -- Malatya -- 203423 [en_US]
dc.description.abstract: Explainable Artificial Intelligence (XAI) has become increasingly vital in the field of artificial intelligence, as it addresses the critical need for transparency and interpretability in AI models. As AI systems are increasingly deployed in high-stakes environments, understanding the decision-making process of these models is essential for building trust and ensuring responsible AI usage. XAI provides the methods to uncover the underlying mechanisms of AI models, making them more accessible and understandable to users and stakeholders. In the domain of software engineering, AI has emerged as a powerful tool for automating various tasks, including software defect prediction and cost estimation. However, the opaque nature of traditional AI models has raised concerns about their reliability and the ability to validate their outputs. This study focuses on enhancing the transparency of AI applications in software engineering by integrating XAI techniques. We employ a voting classifier trained on the KC2 dataset, coupled with SHAP (SHapley Additive exPlanations) and LIME (Local Interpretable Model-agnostic Explanations) for local and global explainability. Our research demonstrates how the application of SHAP and LIME can provide clear and interpretable insights into the factors driving the predictions of the voting classifier. By making the model's decision-making process more transparent, we enable developers and stakeholders to better understand and trust the AI-driven predictions. This study not only advances the field of software defect prediction but also contributes to the broader adoption of XAI in software engineering, highlighting its importance in creating more reliable, understandable, and trustworthy AI systems. © 2024 IEEE. [en_US]
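The abstract's pipeline (a voting classifier on the KC2 defect dataset, explained with model-agnostic methods) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the base learners are assumptions (the record does not list them), the data is a synthetic stand-in for KC2 (a NASA/PROMISE dataset not bundled with scikit-learn), and permutation importance stands in for the SHAP/LIME explanations the paper actually uses.

```python
# Hypothetical sketch of a soft-voting defect-prediction pipeline.
# Base learners and hyperparameters are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for KC2: 21 numeric features, mirroring its
# McCabe/Halstead code metrics; the label marks defective modules.
X, y = make_classification(n_samples=500, n_features=21,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0)

# Soft voting averages the base learners' predicted class probabilities.
clf = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("dt", DecisionTreeClassifier(random_state=0)),
                ("rf", RandomForestClassifier(random_state=0))],
    voting="soft",
)
clf.fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)

# A simple model-agnostic global explanation: permutation importance,
# standing in here for the SHAP summary / LIME reports used in the paper.
imp = permutation_importance(clf, X_te, y_te, n_repeats=5, random_state=0)
top5 = imp.importances_mean.argsort()[::-1][:5]  # most influential metrics
```

In practice the explanation step would call the `shap` and `lime` packages directly (e.g. a SHAP explainer over `clf.predict_proba`), which provide the per-prediction (local) attributions the abstract describes; permutation importance only gives the global view.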
dc.language.iso: en [en_US]
dc.publisher: Institute of Electrical and Electronics Engineers Inc. [en_US]
dc.relation.ispartof: 8th International Artificial Intelligence and Data Processing Symposium, IDAP 2024 [en_US]
dc.rights: info:eu-repo/semantics/closedAccess [en_US]
dc.subject: Explainable Artificial Intelligence (XAI) [en_US]
dc.subject: LIME [en_US]
dc.subject: SHAP [en_US]
dc.subject: Software Defect Prediction [en_US]
dc.subject: Software Engineering [en_US]
dc.subject: Transparency in AI [en_US]
dc.subject: Voting Classifier [en_US]
dc.subject: Application programs [en_US]
dc.subject: Cost engineering [en_US]
dc.subject: Cost estimating [en_US]
dc.subject: AI systems [en_US]
dc.subject: Decision-making process [en_US]
dc.subject: Explainable artificial intelligence (XAI) [en_US]
dc.subject: Interpretability [en_US]
dc.subject: Local interpretable model-agnostic explanation [en_US]
dc.subject: Shapley [en_US]
dc.subject: Shapley additive explanation [en_US]
dc.subject: Software defect prediction [en_US]
dc.subject: Voting classifiers [en_US]
dc.subject: Prediction models [en_US]
dc.title: Enhancing Software Defect Prediction through Explainable AI: Integrating SHAP and LIME in a Voting Classifier Framework [en_US]
dc.type: Conference Object [en_US]
dc.department: Pamukkale University [en_US]
dc.identifier.doi: 10.1109/IDAP64064.2024.10710700
dc.relation.publicationcategory: Conference Item - International - Institutional Faculty Member [en_US]
dc.authorscopusid: 57190739406
dc.authorscopusid: 57209734889
dc.identifier.scopus: 2-s2.0-85207868445 [en_US]
dc.institutionauthor:
item.fulltext: No Fulltext
item.languageiso639-1: en
item.openairecristype: http://purl.org/coar/resource_type/c_18cf
item.openairetype: Conference Object
item.grantfulltext: none
item.cerifentitytype: Publications
Appears in Collections:Scopus İndeksli Yayınlar Koleksiyonu / Scopus Indexed Publications Collection
Items in GCRIS Repository are protected by copyright, with all rights reserved, unless otherwise indicated.