Explaining Explainability in AI

Posted by SambaNova Systems on January 21, 2022

Explainability in AI seems like a simple and reasonable concept. In the simplest terms, it is the ability to explain why an AI system made a particular prediction. As AI becomes ubiquitous, this matters because people need to be able to trust that an AI model will make good predictions, and to do that, they need to know how those predictions are reached.

It is easy to see why this type of understanding is important. There is a need to understand why one person was approved for a loan while another was not, why a particular medical treatment was prescribed over another, and so on for any consequential decision. For purposes ranging from simply answering customer questions to meeting strict regulatory requirements, this is clearly a critical capability. Without the ability to explain how an AI model reached a decision, organizations may find it difficult to put that model to use. The problem is that in actual practice, this simple concept can border on the impossible.

The Challenge of Scale

Explainability in AI can be defined as the ability for a human to look at the decisions an AI model made and understand how it came to the conclusions it did. While explainability can be achieved with legacy techniques such as rules-based systems, linear regressions, or decision trees, models of that kind do not deliver the accuracy needed for mission- and business-critical use cases.
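
To make the contrast concrete, the sketch below (a hypothetical illustration using scikit-learn, not a SambaNova implementation) shows why a small decision tree is explainable by construction: its entire decision process can be printed as a short set of human-readable rules.

```python
# A minimal sketch of an "explainable by construction" model, assuming
# scikit-learn is available. Every prediction from this depth-limited
# decision tree can be traced as a short list of human-readable if/else rules.
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Low capacity, but every decision path is readable.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# Printing the rule set is, in effect, the explanation of the model.
print(export_text(tree, feature_names=list(X.columns)))
```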

To be effective at scale, AI solutions need to leverage the largest models, which require deep neural networks (DNNs) and massive data sets. When those models are powered by infrastructure that can deliver the needed performance, organizations can achieve transformative results with AI that is both accurate and reliable.

The problem is that a DNN can contain billions or even trillions of parameters, with complex interactions among inputs within each layer and across many layers, making it impossible for any human to follow how a decision was reached. That is why this is sometimes referred to as black-box AI.
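
As a rough back-of-the-envelope illustration (the architectures below are hypothetical, not any specific production model), even a modest fully connected network has far more weights than a person could trace by hand, and the count grows rapidly with width and depth:

```python
# A back-of-the-envelope sketch (hypothetical architectures, illustrative only)
# of why deep networks resist step-by-step explanation.
def mlp_param_count(layer_widths):
    """Count weights and biases in a plain feed-forward network."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_widths, layer_widths[1:]))

# A small image classifier already has roughly 670 thousand parameters.
print(mlp_param_count([784, 512, 512, 10]))

# Widening and deepening the network pushes the count toward a billion;
# production-scale language models go orders of magnitude further still.
print(mlp_param_count([4096] * 48 + [1000]))
```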

The Need for Interpretability

To successfully deploy large-scale AI, organizations need to differentiate between the technical definition of explainability and the practical application of interpretability. Technical explainability provides a clear, comprehensible series of steps that were followed to reach a conclusion; it is not possible with deep neural networks at scale, which can include billions or even trillions of parameters. Interpretability combines the accuracy of the model with the ability of a human to understand why a particular decision was made, without detailing each individual step in the process.
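
One way to see the difference is with a post-hoc technique such as permutation importance. The sketch below (an assumed scikit-learn example, not SambaNova's method) trains a small neural network and then reports which inputs most affected its accuracy; it answers "what mattered" without tracing every internal computation:

```python
# A minimal sketch of one interpretability technique -- permutation
# importance -- assuming scikit-learn is available.
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small neural network: accurate, but not explainable step by step.
model = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much test accuracy drops.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# The features with the largest drops form a human-readable rationale.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```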

For example, when recommending a particular medication to a patient, a doctor may demand an explainable AI model, so that every step in choosing that treatment is understood. While this may be desired, what is truly needed is an interpretable AI model, where the doctor understands that the treatment has a high rate of effectiveness with limited side effects. If the doctor understands that the AI model took into account the condition of the patient (height, weight, age, etc.), their specific genetic predispositions, drug efficacy relative to other treatments, severity and frequency of side effects, and thousands of other parameters, then the doctor would not need to see each step. The doctor would simply need to know that the right factors were considered and that the AI model has demonstrated a high level of accuracy over time.

An interpretable AI model provides a comprehensive, auditable rationale for why a choice was made. It may not provide a detailed roadmap of every factor that went into the decision, but it is comprehensible to a human and lays out the logic for why the AI made the choice it did, even when trillions of parameters were used to reach the ultimate conclusion.

Topics: technology

Editor

AI is here. With SambaNova, customers are deploying the power of AI and deep learning in weeks rather than years to meet the demands of the AI-enabled world. SambaNova's flagship offering, Dataflow-as-a-Service™, is a complete solution purpose-built for AI and deep learning that overcomes the limitations of legacy technology to power the large and complex models that enable customers to discover new opportunities, unlock new revenue, and boost operational efficiency. For more information, please visit us at sambanova.ai or contact us at info@sambanova.ai. Follow SambaNova Systems on LinkedIn.