Interpretable Artificial Intelligence Using Nonlinear Decision Trees
Recent years have seen a massive application of artificial intelligence (AI) to automate tasks across various domains. The back-end mechanism through which this automation occurs is generally a black box. Popular AI methods used to solve automation tasks include decision trees (DT), support vector machines (SVM), and artificial neural networks (ANN). Over the past several years, these methods have shown promising performance and have been widely applied and researched across industry and academia. While black-box AI models achieve high performance, the mechanism by which they arrive at a decision is hard to comprehend. This lack of interpretability and transparency makes black-box AI methods less trustworthy. In addition, black-box AI models are limited in their ability to provide valuable insights about the task at hand. These limitations have motivated a natural research direction of developing interpretable and explainable AI models, which has gained active attention in the machine learning and AI community over the past three years. This dissertation focuses on interpretable AI solutions currently being developed at the Computational Optimization and Innovation Laboratory (COIN Lab) at Michigan State University. We propose a nonlinear decision tree (NLDT) based framework to produce transparent AI solutions for automation tasks related to classification and control. Recent advances in nonlinear optimization enable us to efficiently derive interpretable AI solutions for various automation tasks. The interpretable and transparent AI models induced using customized optimization techniques show similar or better performance than complex black-box AI models across most benchmarks. The results are promising and suggest directions for future studies in developing efficient transparent AI models.
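To make the NLDT idea concrete, below is a minimal Python sketch of the underlying data structure: each internal node carries a nonlinear split rule of the form w·φ(x) + b ≤ 0, where φ is a small set of polynomial terms, so the rule stays human-readable. Note that the dissertation derives such rules via customized nonlinear optimization; the class names, the polynomial representation, and the hand-set coefficients below are illustrative assumptions only, not the author's actual implementation.

```python
import numpy as np

class NLDTNode:
    """Node of a nonlinear decision tree (NLDT) sketch.

    An internal node holds a nonlinear split rule f(x) <= 0; a leaf
    holds a class label. The rule here is a sparse polynomial
    w . phi(x) + b, which remains interpretable as long as phi uses
    only a few terms.
    """
    def __init__(self, weights=None, bias=0.0, terms=None,
                 left=None, right=None, label=None):
        self.weights = weights  # coefficients of the polynomial terms
        self.bias = bias
        self.terms = terms      # exponent tuples, one entry per feature
        self.left = left        # branch taken when f(x) <= 0
        self.right = right      # branch taken when f(x) > 0
        self.label = label      # class label if this node is a leaf

    def split_value(self, x):
        # Evaluate f(x) = sum_i w_i * prod_j x_j ** e_ij + b
        phi = np.array([np.prod(x ** np.array(e)) for e in self.terms])
        return float(np.dot(self.weights, phi) + self.bias)

    def predict(self, x):
        if self.label is not None:  # leaf: return its class
            return self.label
        branch = self.left if self.split_value(x) <= 0 else self.right
        return branch.predict(x)

# Hypothetical two-feature example: the root rule x0^2 + x1^2 - 1 <= 0
# separates points inside the unit circle (class 0) from the rest (class 1).
root = NLDTNode(
    weights=np.array([1.0, 1.0]),
    bias=-1.0,
    terms=[(2, 0), (0, 2)],  # the terms x0^2 and x1^2
    left=NLDTNode(label=0),
    right=NLDTNode(label=1),
)

print(root.predict(np.array([0.3, 0.4])))  # inside the circle -> 0
print(root.predict(np.array([1.2, 0.9])))  # outside the circle -> 1
```

The point of the sketch is that a single nonlinear rule can replace a deep cascade of axis-parallel splits (a conventional DT would need many splits to approximate a circle), which is what keeps the resulting tree both accurate and transparent.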
Explainable and Interpretable Reinforcement Learning for Robotics
This book surveys the state of the art in explainable and interpretable reinforcement learning (RL) as relevant for robotics. While RL in general has grown in popularity and been applied to increasingly complex problems, several challenges have impeded the real-world adoption of RL algorithms in robotics and related areas. These include the difficulty of preventing safety-constraint violations and the needs of system operators who require explainable policies and actions. Robotics applications present a unique set of considerations and open up a number of opportunities stemming from physical, real-world sensory input and interaction. The authors consider classification schemes used in past surveys and papers and attempt to unify terminology across the field. The book provides an in-depth exploration of 12 attributes that can be used to classify explainable/interpretable techniques: whether the RL method is model-agnostic or model-specific, whether it is self-explainable or post-hoc, and the further attributes of scope, when-produced, format, knowledge limits, explanation accuracy, audience, predictability, legibility, readability, and reactivity. The book is organized around a discussion of these methods broken down into 42 categories and subcategories, where each category can be classified according to some of the attributes. The authors close by identifying gaps in the current research and highlighting areas for future investigation.
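One way to picture the 12-attribute classification is as a record type, one instance per technique. The following Python sketch is only an assumed rendering of that scheme: the class and field names, the enum values, and the example entry are paraphrases for illustration, not the book's own notation.

```python
from dataclasses import dataclass
from enum import Enum

class Dependence(Enum):
    MODEL_AGNOSTIC = "model-agnostic"
    MODEL_SPECIFIC = "model-specific"

class Mechanism(Enum):
    SELF_EXPLAINABLE = "self-explainable"  # intrinsic to the policy
    POST_HOC = "post-hoc"                  # produced after training

@dataclass
class XRLAttributes:
    """One record per explainability technique, mirroring the
    12 attributes described above (field names are paraphrased)."""
    dependence: Dependence
    mechanism: Mechanism
    scope: str               # e.g. "local" or "global"
    when_produced: str       # e.g. "during training", "at decision time"
    format: str              # e.g. "saliency map", "natural language"
    knowledge_limits: bool   # does the method flag what it cannot explain?
    explanation_accuracy: str
    audience: str            # e.g. "end user", "system operator"
    predictability: bool
    legibility: bool
    readability: bool
    reactivity: bool

# Hypothetical entry: a post-hoc saliency method for a robot vision policy.
example = XRLAttributes(
    dependence=Dependence.MODEL_AGNOSTIC,
    mechanism=Mechanism.POST_HOC,
    scope="local",
    when_produced="at decision time",
    format="saliency map",
    knowledge_limits=False,
    explanation_accuracy="approximate",
    audience="system operator",
    predictability=False,
    legibility=True,
    readability=True,
    reactivity=False,
)
print(example.mechanism.value)  # -> "post-hoc"
```

Encoding the taxonomy this way makes the comparison across the book's 42 categories mechanical: two techniques differ exactly where their attribute records differ.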
Explainable Artificial Intelligence
This four-volume set constitutes the refereed proceedings of the Second World Conference on Explainable Artificial Intelligence, xAI 2024, held in Valletta, Malta, during July 17-19, 2024. The 95 full papers presented were carefully reviewed and selected from 204 submissions. The conference papers are organized in topical sections on: Part I - intrinsically interpretable XAI and concept-based global explainability; generative explainable AI and verifiability; notion, metrics, evaluation and benchmarking for XAI. Part II - XAI for graphs and computer vision; logic, reasoning, and rule-based explainable AI; model-agnostic and statistical methods for eXplainable AI. Part III - counterfactual explanations and causality for eXplainable AI; fairness, trust, privacy, security, accountability and actionability in eXplainable AI. Part IV - explainable AI in healthcare and computational neuroscience; explainable AI for improved human-computer interaction and software engineering for explainability; applications of explainable artificial intelligence.