AI Machine Learning & Data Science Research

Logic Explained Deep Neural Networks: A General Approach to Explainable AI

A research team from Università di Firenze, Università di Siena, University of Cambridge and Université Côte d’Azur proposes a general approach to explainable artificial intelligence (XAI) in neural architectures, designing interpretable deep learning models called Logic Explained Networks (LENs). The novel approach yields better performance than established white-box models while providing more compact and meaningful explanations.

Although deep learning models are playing increasingly important roles across a wide range of decision-making scenarios, a critical drawback is their inability to provide human-understandable justifications for their opaque or complex decision-making processes. This so-called “black box” issue has hindered the deployment of deep neural networks in safety-critical settings and in domains such as industry, medicine and the courts, where human experts and concerned parties naturally want more insight into how the machine reaches its decisions.

In the paper Logic Explained Networks, a research team from Università di Firenze, Università di Siena, University of Cambridge and Université Côte d’Azur proposes a general approach to explainable artificial intelligence (XAI) in neural architectures via interpretable deep learning models called Logic Explained Networks (LENs). The novel approach yields better performance than established white-box models while providing more compact and meaningful explanations.

The team summarizes their study’s contributions to XAI research as:

  1. Generalize existing neural methods for solving and explaining categorical learning problems [Ciravegna et al., 2020a, Ciravegna et al., 2020b] into a broad family of neural networks, i.e., the Logic Explained Networks (LENs).
  2. Describe how users may integrate LENs into the classification task under investigation, and how to express a set of preferences to get one or more customized explanations.
  3. Show how to get a wide range of logic-based explanations, and how logic formulas can be restricted in their scope, working at different levels of granularity (explaining a single sample, a subset of the available data, etc.).
  4. Report experimental results using three out-of-the-box preset LENs, showing how they may generalize better in terms of model accuracy than established white-box models such as decision trees on complex Boolean tasks (in line with Tavares’ work [Tavares et al., 2020]).
  5. Advertise our public implementation of LENs through a Python package with extensive documentation about LENs models, implementing different trade-offs between interpretability/explainability and accuracy.

Previous research has shown that one possible way to provide human-understandable explanations is through the use of an expressive formal language such as first-order logic (FOL). Compared to other concept-based techniques, logic-based explanations are presented in a rigorous and unambiguous way. Furthermore, they can be quantitatively measured to check their correctness and completeness, and the generality of the extracted logic formulas can be assessed with quantitative metrics such as accuracy, fidelity and consistency.
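To make this concrete, here is a minimal sketch (not the authors’ code) of how a candidate FOL-style rule over Boolean concepts might be scored: accuracy measures how often the rule matches the ground-truth labels, while fidelity measures how often it agrees with the predictions of the model being explained. The concept names, data and rule below are hypothetical.

```python
# Minimal sketch, not the authors' implementation: scoring a candidate logic
# explanation. All data, concept names and the rule itself are made up.
import numpy as np

def rule_accuracy(rule, concepts, y_true):
    """Fraction of samples where the logic rule matches the ground-truth labels."""
    y_rule = np.array([rule(c) for c in concepts])
    return (y_rule == y_true).mean()

def rule_fidelity(rule, concepts, y_model):
    """Fraction of samples where the rule agrees with the model's predictions."""
    y_rule = np.array([rule(c) for c in concepts])
    return (y_rule == y_model).mean()

# Hypothetical Boolean concept activations (columns: has_wings, lays_eggs, flies)
concepts = np.array([[1, 1, 1],
                     [1, 1, 0],
                     [0, 0, 0],
                     [0, 1, 0]], dtype=bool)
y_true  = np.array([1, 1, 0, 0], dtype=bool)   # ground-truth class "bird"
y_model = np.array([1, 0, 0, 0], dtype=bool)   # predictions of the model being explained

# Candidate explanation: bird <-> has_wings AND lays_eggs
rule = lambda c: bool(c[0] and c[1])

print("accuracy:", rule_accuracy(rule, concepts, y_true))   # correctness on the data
print("fidelity:", rule_fidelity(rule, concepts, y_model))  # agreement with the model
```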

Inspired by the benefits of logic-based explanations, the proposed LENs are trained to solve and explain a categorical learning problem by integrating elements from deep learning and logic. LENs can be directly interpreted by means of a set of FOL formulas, and can make predictions in a manner that is well suited for providing FOL-based explanations that involve the input concepts. LENs can thus serve as a generic framework encompassing a large variety of use cases.

LENs provide FOL explanations for a set of output concepts as a function of their inputs, and can be implemented according to the user’s final goal or the properties of the considered problem. Notably, LENs can either generate FOL descriptions of each (high-level) output concept with respect to the (low-level) inputs, or directly classify and explain input data. Moreover, a LEN can be paired with a black-box classifier operating on the same input data and, after learning to mimic the black box’s behaviour, serve as an additional explanation-oriented module.
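As a rough, purely illustrative sketch (not the authors’ implementation and not their released Python package), the code below trains a tiny sigmoid network, loosely in the spirit of the ψ network, on Boolean concept inputs and then reads off the learned Boolean function by enumerating the truth table. The names PsiNet and extract_rule are hypothetical, the toy target is made up, and real LENs rely on more scalable extraction strategies than brute-force enumeration.

```python
# Illustrative sketch only: a tiny sigmoid network over Boolean concepts with a
# brute-force truth-table readout of the learned rule. Class/function names are
# hypothetical and do not come from the authors' package.
import itertools
import torch
import torch.nn as nn

class PsiNet(nn.Module):
    """Small sigmoid MLP mapping Boolean concept activations to a class score."""
    def __init__(self, n_concepts, hidden=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_concepts, hidden), nn.Sigmoid(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, c):
        return self.net(c).squeeze(-1)

def extract_rule(model, n_concepts, threshold=0.5):
    """Read the learned Boolean function by checking every concept combination."""
    minterms = []
    for bits in itertools.product([0.0, 1.0], repeat=n_concepts):
        c = torch.tensor(bits).unsqueeze(0)
        if model(c).item() > threshold:
            minterms.append(" AND ".join(
                f"c{i}" if b else f"NOT c{i}" for i, b in enumerate(bits)))
    return " OR ".join(f"({m})" for m in minterms) or "False"

# Toy target: y = c0 AND (NOT c2), defined over three Boolean concepts
X = torch.tensor(list(itertools.product([0.0, 1.0], repeat=3)))
y = X[:, 0] * (1 - X[:, 2])

model = PsiNet(n_concepts=3)
opt = torch.optim.Adam(model.parameters(), lr=0.1)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.binary_cross_entropy(model(X), y)
    loss.backward()
    opt.step()

# Prints the learned rule as a disjunction of minterms, which (if training
# converged) is logically equivalent to c0 AND NOT c2.
print(extract_rule(model, n_concepts=3))
```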

The team conducted experiments on three out-of-the-box preset LENs: the µ network, the ψ network and the ReLU network. The µ network is a neural model that offers high-quality explanations and good learning capacity with modest interpretability; the ψ network is a fully interpretable model with limited learning capacity that provides mediocre explanations; and the ReLU network is a model with state-of-the-art learning capabilities that can provide good explanations at the cost of very low interpretability. The researchers measured model and explanation accuracy, explanation complexity and fidelity, and rule consistency and extraction time to evaluate LEN performance against state-of-the-art approaches.
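The paper’s exact metric definitions are not reproduced here, but two of them can be approximated with simple proxies: explanation complexity as the number of literals in a formula, and rule consistency as how often the same formula is extracted across repeated runs. The sketch below assumes rules are plain DNF-style strings and is purely illustrative.

```python
# Simple, assumed proxies for two explanation-quality metrics; the definitions
# used in the paper may differ. Rules are assumed to be DNF-style strings.
import re

def explanation_complexity(rule: str) -> int:
    """Count literals such as c0 or NOT c2 (a proxy for explanation length)."""
    return len(re.findall(r"(?:NOT\s+)?c\d+", rule))

def rule_consistency(rules_across_runs) -> float:
    """Fraction of runs that produced the most frequent formula."""
    most_common = max(set(rules_across_runs), key=rules_across_runs.count)
    return rules_across_runs.count(most_common) / len(rules_across_runs)

print(explanation_complexity("(c0 AND NOT c2) OR (c1 AND c3)"))    # -> 4
print(rule_consistency(["c0 AND NOT c2", "c0 AND NOT c2", "c0"]))  # -> 0.666...
```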

In the experiments, LENs generalized better in terms of model accuracy than established white-box models such as decision trees on complex Boolean functions (e.g. CUB), and in most cases outperformed Bayesian Rule Lists. Overall, the results demonstrate the proposed approach’s balanced trade-off between interpretability/explainability and accuracy.

The paper Logic Explained Networks is on arXiv.


Author: Hecate He | Editor: Michael Sarazen, Chain Zhang


