
Problems of Universal Quantifiers in Logical Neural Networks

When

23 Feb 26    
00:00
Abstract: Deep learning models achieve high performance on a wide range of downstream tasks, yet their complexity hinders the evaluation of their underlying reasoning. The field of eXplainable Artificial Intelligence (XAI) addresses this challenge through diverse approaches, ranging from explaining opaque models (or black boxes) to developing inherently transparent architectures. Logical Neural Networks (LNNs) are one such approach: neurons emulate logical vocabulary (such as atomic propositions and connectives) governed by real-valued semantics (e.g., Łukasiewicz logic) to form interpretable expressions. In classification or prediction tasks, LNNs can learn logically expressed rules that can later be interpreted directly, chiefly by leveraging the expressive power of first-order logic. This is done, for instance, by taking information from a knowledge base as an assignment of truth values to atomic formulas and then computing the corresponding values of complex formulas. However, the seminal development of these networks, in unpublished work by Riegel et al. (2020), lacks a satisfactory treatment of universal quantifiers, and most published applications of LNNs are based on propositional logic. This limits the potential of LNNs in the XAI field. In this talk, I will present some choices for defining universal quantifiers in the LNN framework and the challenges associated with each.
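
As a rough illustration of the real-valued semantics mentioned in the abstract, the Python sketch below (not part of the original abstract, and not the speaker's implementation) evaluates Łukasiewicz connectives on truth values drawn from a small, hypothetical knowledge base, and shows one candidate finite-domain reading of the universal quantifier as a minimum over the domain. The predicate names, the knowledge-base entries, and the forall helper are illustrative assumptions.

# A minimal sketch (assumed, not the speaker's implementation) of
# real-valued Łukasiewicz semantics: atomic formulas receive values in
# [0, 1] from a knowledge base, and complex formulas are evaluated
# compositionally from those values.

def luk_and(a: float, b: float) -> float:
    # Łukasiewicz conjunction: max(0, a + b - 1)
    return max(0.0, a + b - 1.0)

def luk_or(a: float, b: float) -> float:
    # Łukasiewicz disjunction: min(1, a + b)
    return min(1.0, a + b)

def luk_not(a: float) -> float:
    # Łukasiewicz negation: 1 - a
    return 1.0 - a

def luk_implies(a: float, b: float) -> float:
    # Łukasiewicz implication: min(1, 1 - a + b)
    return min(1.0, 1.0 - a + b)

def forall(values):
    # One *candidate* finite-domain reading of the universal quantifier:
    # the minimum of the instantiated formula's values over the domain.
    # (An illustrative assumption, not the talk's proposal.)
    return min(values, default=1.0)

# Hypothetical knowledge base: truth values of atomic formulas.
kb = {
    ("bird", "tweety"): 0.9,
    ("flies", "tweety"): 0.8,
    ("bird", "opus"): 1.0,
    ("flies", "opus"): 0.3,
}

domain = ["tweety", "opus"]

# Value of: forall x. bird(x) -> flies(x), over the finite domain above.
value = forall(luk_implies(kb[("bird", x)], kb[("flies", x)]) for x in domain)
print(value)  # approximately 0.3, i.e. min(0.9, 0.3), up to float rounding

This min-over-the-domain reading presupposes a finite, enumerable domain; whether and how it generalises is precisely the kind of question about universal quantifiers that the abstract points to.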