Explainable artificial intelligence for traffic signal detection using LIME algorithm
Abstract
As technology advances, the devices around us, such as televisions, mobile phones, and robots, grow more capable. Among these technologies, artificial intelligence (AI) enables computers to make decisions comparable to those of humans, with that intelligence supplied to the machine as a trained model. Because many AI models operate as black boxes, their decisions are poorly understood by end users. Explainable AI (XAI) refers to approaches that allow humans to understand the judgments and decisions made by an AI system. Previously, the predictions made by AI were not easy to interpret, which led to confusion about how they were reached. The goal of XAI is to improve the usability of products and services by helping users trust the decisions made by AI. White-box machine learning (ML) models produce results that specialists in the domain can interpret, but end users still cannot understand the decisions. To further enhance traffic signal detection using XAI, this paper applies the local interpretable model-agnostic explanations (LIME) algorithm and improves its performance.
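The core idea behind LIME, as used in the abstract, is to explain a single prediction of a black-box model by fitting a simple, interpretable surrogate model on perturbed samples weighted by their proximity to the instance being explained. The sketch below is a minimal, self-contained illustration of that idea using only NumPy; the toy `black_box` classifier, the two features, the kernel width, and all numeric values are hypothetical assumptions for demonstration, not the paper's actual traffic-signal model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box classifier: probability that an image region is a
# "stop sign", driven mostly by feature 0 (e.g. red-channel intensity).
def black_box(x):
    return 1.0 / (1.0 + np.exp(-(3.0 * x[:, 0] - 1.0 * x[:, 1])))

# The single instance whose prediction we want to explain.
x0 = np.array([0.8, 0.3])

# 1. Perturb the instance: draw samples in a neighborhood around x0.
samples = x0 + rng.normal(scale=0.3, size=(500, 2))

# 2. Query the black box on the perturbed samples.
preds = black_box(samples)

# 3. Weight each sample by its proximity to x0 (exponential kernel).
dists = np.linalg.norm(samples - x0, axis=1)
weights = np.exp(-(dists ** 2) / (2 * 0.5 ** 2))

# 4. Fit a weighted linear surrogate (standard weighted least squares:
#    scale rows by sqrt(weight) before an ordinary least-squares fit).
X = np.column_stack([np.ones(len(samples)), samples])
sw = np.sqrt(weights)
coef, *_ = np.linalg.lstsq(X * sw[:, None], preds * sw, rcond=None)

# coef[1] and coef[2] are the local feature importances; feature 0
# should dominate with a positive weight, mirroring the black box.
print(coef)
```

The surrogate's coefficients serve as the explanation: they describe how the black box behaves locally around this one input, which is exactly the "local" and "model-agnostic" character that gives LIME its name.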
Keywords
Artificial intelligence; Explainable AI; Local interpretable model-agnostic explanations; Machine learning; Self-driving cars; Shapley additive explanations; Traffic signal detection
DOI: http://doi.org/10.11591/ijict.v13i3.pp527-536
Copyright (c) 2024 Stewart Kirubakaran S
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.
The International Journal of Informatics and Communication Technology (IJ-ICT)
p-ISSN 2252-8776, e-ISSN 2722-2616
This journal is published by the Institute of Advanced Engineering and Science (IAES) in collaboration with Intelektual Pustaka Media Utama (IPMU).