Wray Buntine

VinUniversity

The Knowledge Revolution

Abstract: The growth of LLMs has completely revolutionised the knowledge landscape, and as the models are still improving rapidly, more and more challenging tasks can be handled. However, confabulation and misinformation remain problems, as does the ability to reason in more abstract and complex ways. This has led to a race by the big tech companies with the computational resources to develop extensions of LLMs, for instance Google’s AlphaProof. This talk discusses some of the issues facing us, the learning and reasoning community, in these times. I will talk about the interplay between explicitly represented knowledge, uncertainty, the sources of information, and explanation. These are crucial issues in dealing with confabulation, misinformation and deeper reasoning methods.


Bio: Wray Buntine is Professor and Director of Computer Science Programs in the College of Engineering and Computer Science at VinUniversity …

William Cohen

Carnegie Mellon University

Why Does Chain of Thought Work? Explaining Why versus Explaining How

Abstract: While Chain-of-Thought (CoT) prompting is powerful, it is not well understood. A number of explanations for CoT’s success have been offered, e.g., that it “unlocks” reasoning abilities inherent in language models, or that longer outputs provide additional computational power needed to solve computationally complex tasks. Here we propose another intuitive explanation of CoT prompting’s success: that CoT demonstrations provide guidance on how to solve task instances. In particular, we suggest that CoT is a way of teaching the model an algorithm by walking through the steps of the algorithm on sample problems.
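The "teaching an algorithm by walking through its steps" idea can be made concrete with a small sketch (illustrative only, not the speaker's code; the prompt format is a hypothetical example and no model is called): a CoT demonstration for column-wise addition is generated programmatically from the algorithm itself, so the demonstration literally narrates the procedure the model is meant to imitate.

```python
# Sketch: a CoT demonstration as an algorithm walk-through (hypothetical
# prompt format; no model is called). The demonstration spells out the
# steps of column-wise addition on a sample problem.

def cot_addition_demo(a: int, b: int) -> str:
    """Render a worked column-wise addition as a chain of thought."""
    steps, carry = [], 0
    da, db = str(a)[::-1], str(b)[::-1]  # digits, least significant first
    for i in range(max(len(da), len(db))):
        x = int(da[i]) if i < len(da) else 0
        y = int(db[i]) if i < len(db) else 0
        s = x + y + carry
        steps.append(f"Column {i + 1}: {x} + {y} + carry {carry} = {s}, "
                     f"write {s % 10}, carry {s // 10}.")
        carry = s // 10
    if carry:
        steps.append(f"Final carry {carry} becomes the leading digit.")
    steps.append(f"Answer: {a + b}")
    return "\n".join(steps)

prompt = (f"Q: What is 57 + 68?\n{cot_addition_demo(57, 68)}\n\n"
          f"Q: What is 234 + 189?\n")  # the model would continue from here
print(prompt)
```

Under this view, the demonstration's value lies in exposing the intermediate state (digit, carry) at each step, not merely in lengthening the output.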


Bio: William Cohen is a Visiting Professor at Carnegie Mellon University in the Machine Learning Department …

Anthony Cohn

University of Leeds

Evaluating the spatial reasoning capabilities of Large Language Models (LLMs)

Abstract: Whilst LLMs have shown remarkable apparent abilities in many areas, the extent of their ability to perform reasoning is less clear. In particular, for the case of spatial reasoning, which is situated in the physical world, it is unclear whether disembodied LLMs can perform well. I will present some results, focussing particularly on qualitative spatial representations and reasoning, showing the degree of their capabilities in areas such as mereotopology, directions and orientations. The approaches include (1) the use of fixed benchmarks; (2) the use of synthetic worlds in which arbitrary configurations can be set up and the correct answer easily determined; (3) conducting an extended conversation (“dialectical evaluation”) to probe the limits of LLM capabilities.
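The synthetic-worlds approach (2) can be sketched as follows (an assumed setup for illustration, not the speaker's benchmark): sample an arbitrary 2-D configuration, compute the ground-truth cardinal direction between two objects, and emit a question/answer pair against which an LLM's reply could be scored.

```python
import random

# Sketch of a synthetic world for qualitative direction reasoning:
# arbitrary configurations are generated, and the correct answer is
# determined from the coordinates (hypothetical question wording).

def cardinal(ax, ay, bx, by):
    """Ground-truth direction of A relative to B (8-way compass)."""
    ns = "north" if ay > by else "south" if ay < by else ""
    ew = "east" if ax > bx else "west" if ax < bx else ""
    return (ns + ew) or "same location"

def make_item(rng):
    """One benchmark item: a natural-language question and its answer."""
    ax, ay, bx, by = (rng.randint(0, 10) for _ in range(4))
    q = (f"A is at ({ax}, {ay}) and B is at ({bx}, {by}) on a map "
         f"(north is +y, east is +x). In which direction is A from B?")
    return q, cardinal(ax, ay, bx, by)

rng = random.Random(0)
question, answer = make_item(rng)
print(question, "->", answer)
```

Because configurations are sampled rather than fixed, such items cannot have been memorised from a static benchmark, which is precisely the advantage the abstract highlights.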


Bio: Anthony (Tony) Cohn is Professor of Automated Reasoning in the School of Computing, University of Leeds and seconded part-time to the Alan Turing Institute where he is Foundational Models Theme lead...

Luc De Raedt

KU Leuven

How to Make Logics Neurosymbolic

Abstract: Neurosymbolic AI (NeSy) is regarded as the third wave in AI. It aims at combining knowledge representation and reasoning with neural networks. Numerous approaches to NeSy are being developed, and there exists an ‘alphabet soup’ of different systems whose relationships are often unclear. I will discuss the state of the art in NeSy and argue that there are many similarities with statistical relational AI (StarAI). Taking inspiration from StarAI, and exploiting these similarities, I will argue that Neurosymbolic AI = Logic + Probability + Neural Networks. I will also provide a recipe for developing NeSy approaches: start from a logic, add a probabilistic interpretation, and then turn neural networks into ‘neural predicates’. Probability is interpreted broadly here and is necessary to provide a quantitative and differentiable component to the logic. At the semantic and computational levels, one can then combine logical circuits (a kind of proof structure) labelled with probability, and neural networks, in computation graphs. I will illustrate the recipe with NeSy systems such as DeepProbLog, a deep probabilistic extension of Prolog, and DeepStochLog, a neural network extension of stochastic definite clause grammars (or stochastic logic programs).
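The "logic + probability" half of the recipe can be illustrated with a toy weighted model count (this is a hand-rolled sketch, not DeepProbLog itself): probabilistic facts carry weights, a rule defines a derived atom, and a query is answered by summing the weights of the possible worlds in which it holds. In DeepProbLog, the fact weights would be supplied by neural predicates.

```python
from itertools import product

# Toy probabilistic layer of the recipe: probabilistic facts, a logical
# rule, and query answering by weighted model counting (enumeration).

facts = {"burglary": 0.1, "earthquake": 0.2}   # P(fact is true)

def alarm(world):
    # Logic: alarm :- burglary.  alarm :- earthquake.
    return world["burglary"] or world["earthquake"]

def query(prob_facts, rule):
    """P(rule) = sum of weights of worlds where the rule holds."""
    names = list(prob_facts)
    total = 0.0
    for values in product([True, False], repeat=len(names)):
        world = dict(zip(names, values))
        weight = 1.0
        for n in names:
            weight *= prob_facts[n] if world[n] else 1 - prob_facts[n]
        if rule(world):
            total += weight
    return total

print(query(facts, alarm))  # 1 - 0.9 * 0.8 = 0.28
```

Since the result is a sum of products of the fact weights, it is differentiable in those weights, which is what lets gradients flow back into the neural predicates.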


Bio: Luc De Raedt is currently Director of Leuven.AI, the KU Leuven Institute for AI, full professor of Computer Science at KU Leuven...

Sašo Džeroski

Jozef Stefan Institute

Towards semi-supervised relational learning

Abstract: In machine learning of predictive models (that map inputs to outputs), complex and partially labeled data are encountered increasingly often. The complexity can appear on the side of the inputs (as in, e.g., relational learning) or on the side of the outputs (as in, e.g., multi-target regression or hierarchical multi-label classification). On one hand, we have developed a variety of learning approaches for multi-target prediction, based on predictive clustering trees and ensembles thereof, which allow for accurate and explainable predictions. They can address different predictive modelling tasks, such as multi-target regression and multi-label classification, and different degrees of supervision, ranging from fully supervised, through semi-supervised learning, to unsupervised learning. The tree ensembles allow us to estimate the importance of variables (and provide explanations). On the other hand, we have also recently developed approaches for learning ensembles of relational trees for classification and estimating feature importance in this context. I will review our research in each of these two directions and outline very recent research towards semi-supervised relational learning for multi-target prediction.
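The multi-target idea behind predictive clustering trees can be sketched with a toy split heuristic (an assumed, simplified formulation for illustration, not the speaker's implementation): a candidate split is scored by the variance reduction it achieves averaged over all target variables at once, so one tree serves every target simultaneously.

```python
# Sketch: multi-target variance reduction, the kind of heuristic a
# predictive clustering tree could use to pick a split (toy version).

def variance(ys):
    m = sum(ys) / len(ys)
    return sum((y - m) ** 2 for y in ys) / len(ys)

def multi_target_gain(rows, targets, split):
    """Variance reduction, averaged over targets, for a boolean split."""
    left = [r for r in rows if split(r)]
    right = [r for r in rows if not split(r)]
    if not left or not right:           # degenerate split: no gain
        return 0.0
    gain = 0.0
    for t in targets:
        before = variance([r[t] for r in rows])
        after = (len(left) * variance([r[t] for r in left]) +
                 len(right) * variance([r[t] for r in right])) / len(rows)
        gain += before - after
    return gain / len(targets)

rows = [{"x": 0, "y1": 1.0, "y2": 2.0},
        {"x": 0, "y1": 1.2, "y2": 2.1},
        {"x": 1, "y1": 5.0, "y2": 9.0},
        {"x": 1, "y1": 5.3, "y2": 9.2}]
print(multi_target_gain(rows, ["y1", "y2"], lambda r: r["x"] == 0))
```

A split that separates both targets well scores highly; a split that helps one target but hurts another is penalised, which is how the tree finds structure shared across outputs.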


Bio: Sašo Džeroski is Head of the Department of Knowledge Technologies at the Jozef Stefan Institute (Ljubljana, Slovenia) and a full professor at the Jozef Stefan International Postgraduate School...

Katsumi Inoue

National Institute of Informatics

Algebraic Logic Programming and Learning

Abstract: Reasoning and learning are interconnected and can enhance each other in a robust AI system that tackles complex tasks. The interplay between reasoning and learning becomes more and more important in generative AI with the growing use of LLMs. In addition, neurosymbolic AI has attracted much attention by connecting neural perception and symbolic reasoning.

Here, we present our original approach to integrate machine learning and symbolic reasoning, which provides a unified foundation for realizing a series of intelligent behaviors, i.e., recognition, learning, and inference, on a common mathematical ground.

To this end, we have focused on realizing symbolic reasoning with algebraic methods: since algebraic data structures are already used in machine learning, symbolic reasoning and machine learning should be easier to connect on such common numeric ground.

We will show both linear algebraic approaches for logic programming semantics and differentiable approaches for finding solutions and learning programs. Various symbolic reasoning and learning methods have been realized in such algebraic manners, including Datalog evaluation, fixpoint computation of logic programs, computation of satisfying assignments in SAT, abduction, answer set programming, and inductive reasoning for propositional and first-order logic programs.
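The linear-algebraic fixpoint computation can be sketched for a tiny definite program (a simplified illustration in the spirit of matrix encodings of the T_P operator, restricted to one rule per head atom; it is not the speaker's formulation): a rule "h :- b1, ..., bk" becomes a matrix row with entries 1/k, and the least model is reached by iterating a matrix-vector product followed by thresholding at 1.

```python
# Sketch: fixpoint computation of a definite logic program via a
# matrix-vector iteration (pure Python; one rule per head atom).

atoms = ["p", "q", "r", "s"]          # program: p.  q :- p.  r :- p, q.  s :- r.
idx = {a: i for i, a in enumerate(atoms)}
rules = {"q": ["p"], "r": ["p", "q"], "s": ["r"]}

# Program matrix: M[head][body_atom] = 1/len(body), so (M v)[head] >= 1
# exactly when every body atom is true in v.
n = len(atoms)
M = [[0.0] * n for _ in range(n)]
for head, body in rules.items():
    for b in body:
        M[idx[head]][idx[b]] = 1.0 / len(body)

v = [1.0 if a == "p" else 0.0 for a in atoms]   # the facts
while True:
    mv = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    # An atom is derived if its row sums to >= 1; facts stay true.
    nxt = [1.0 if (mv[i] >= 1.0 or v[i] == 1.0) else 0.0 for i in range(n)]
    if nxt == v:                                # fixpoint reached
        break
    v = nxt

model = {a for a in atoms if v[idx[a]] == 1.0}
print(model)  # least model: {p, q, r, s}
```

The same iteration expressed over dense matrices is what opens the door to GPU evaluation and to differentiable relaxations of the thresholding step.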


Bio: Katsumi Inoue received the Doctor of Engineering degree from Kyoto University in 1993 for studies on abductive and nonmonotonic reasoning. He is currently a Professor in the Principles of Informatics Research Division, National Institute of Informatics, and a Professor in the Informatics Program, Graduate Institute for Advanced Studies, SOKENDAI...

Xue Li

University of Edinburgh

Textual entailment with LLMs and symbolic AI approaches

Abstract: Large language models (LLMs) have been applied to textual entailment tasks, which are widely used in NLP applications such as fact-checking and argumentation. However, LLMs are black boxes: it is unclear what knowledge they have and how, if at all, they reason. In contrast, symbolic AI is explicit, with rigorous reasoning procedures and well-formed formulas. For example, knowledge graphs (KGs) store structured knowledge explicitly, allowing easy and reliable information retrieval. In this talk, I will first introduce textual entailment tasks with LLMs, followed by a discussion of whether LLMs and symbolic AI can be combined for better performance.


Bio: Xue Li is a postdoctoral researcher in the School of Informatics at the University of Edinburgh...

Denis Mareschal

Birkbeck University of London

AI as a model for human-like learning and reasoning

Abstract: Human-like computing approaches look to humans for inspiration and guidance on how to construct artificially intelligent systems. But how useful is the converse relation? What can we learn about human learning and reasoning from AI approaches? In this talk, I will focus on my work on learning and development in infants and children and reflect on the utility of AI methods, past and present, as explanatory models of cognitive development. I will also explore how the presence of AI-enhanced technologies in the child's environment may affect developmental outcomes. I will conclude that the relationship remains largely valuable in the human-to-computing direction but much less so in the other direction. One missing element is that human cognition comprises multiple parallel, competing approaches to learning and reasoning, and the key to understanding human-like computing is understanding the control exercised between these systems.


Bio: Denis Mareschal is Professor of Psychology and Director of the Centre for Brain and Cognitive Development at Birkbeck University of London...

Claude Sammut

University of New South Wales

Measurement and Evaluation of Intelligent Robots

Abstract: As in many other fields, Machine Learning has had a significant impact on Robotics. We discuss how Machine Learning has influenced the design and development of software for robots, including areas such as robot vision, locomotion, manipulation and decision making. We also discuss how the performance of these systems can be evaluated objectively. Typically, machine learning algorithms are assessed on standard datasets, but in robotics it is necessary to have the integrated robot system perform in a replicable environment with clear measurement criteria. This is one of the primary motivations for robotics competitions, such as RoboCup, where robots are required to perform all the tasks previously mentioned in arenas defined by a technical committee for the purpose of obtaining measurable performance results.


Bio: Claude Sammut is a Professor in the School of Computer Science and Engineering, University of New South Wales …

Ute Schmid

University of Bamberg

Near-miss Explanations to Teach Humans and Machines

Abstract: In explainable artificial intelligence (XAI), different types of explanations have been proposed -- feature highlighting, concept-based explanations, as well as explanations by prototypes and by contrastive (near-miss) examples. In my talk, I will focus on near-miss explanations, which are especially helpful for understanding the decision boundaries of neighbouring classes. I will relate near-miss explanations to cognitive science research, where it has been shown that structural similarity between a given concept and the concept to be explained has a strong impact on understanding and knowledge acquisition. Likewise, in machine learning, negative examples that are near misses have been shown to be more efficient than random samples in supporting convergence of a model to the intended concept. I will present an XAI approach to constructing contrastive explanations based on near-miss examples in an ILP setting and illustrate it in abstract as well as perceptual relational domains.
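The near-miss idea can be sketched in a few lines (an illustrative toy, not the speaker's system): a near-miss negative differs from a positive example in exactly one attribute that matters for the target concept, so it localises the decision boundary far more precisely than a random negative would.

```python
# Sketch: generating near-miss negatives for a toy relational concept
# by minimally perturbing a positive example (one attribute at a time).

def concept(ex):
    """Target concept: a 'tower' -- a small block on top of a large one."""
    return ex["on_top"] == "small" and ex["below"] == "large"

def near_misses(positive, domain):
    """All one-attribute perturbations of a positive that break the concept."""
    out = []
    for attr, values in domain.items():
        for v in values:
            if v != positive[attr]:
                candidate = dict(positive, **{attr: v})
                if not concept(candidate):
                    out.append(candidate)
    return out

pos = {"on_top": "small", "below": "large"}
domain = {"on_top": ["small", "large"], "below": ["small", "large"]}
for nm in near_misses(pos, domain):
    print(nm)
```

Each generated example is a contrastive explanation in miniature: "it would not be a tower if just this one attribute changed", which is exactly the structural-similarity property the cognitive science results point to.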


Bio: Ute Schmid is full professor of Cognitive Systems at University of Bamberg (Germany), director of the Bamberg Center of AI (BaCAI) and on the board of directors of the Bavarian Research Institute for Digital Transformation (bidt)...

Ehud Shapiro

Weizmann Institute of Science

Grassroots: A Radical Architecture for an Equitable Digital Society

Abstract: The Internet is dominated by global platforms that are centralized and autocratic. Emerging blockchains and cryptocurrencies offer global platforms that are decentralized but with plutocratic control. Our decade-long research on digital democracy provided a critical insight: that global platforms—centralized and decentralized alike—are anathema to an egalitarian and democratic foundation for the digital realm. In this talk we describe a radical alternative: a grassroots architecture for democratic digital communities to emerge locally and federate globally. A grassroots platform—unlike global platforms—can have multiple autonomous concurrent instances that interoperate if and when interconnected. I will review the grassroots mathematical foundations, a grassroots protocol stack, and grassroots platforms, including a grassroots social network, grassroots cryptocurrencies, and grassroots constitutional democratic communities and federations. To prototype grassroots platforms, we are developing a grassroots concurrent logic programming language; I will mention it and give some examples.


Bio: Ehud Shapiro is an Israeli scientist, entrepreneur, artist, and political activist who is Professor of Computer Science and Biology at the Weizmann Institute of Science...

Jiajun Wu

Stanford University

Neuro-Symbolic Concept Learning

Abstract: I will discuss a concept-centric paradigm for building agents that can learn continually and reason flexibly across multiple domains and input modalities. The concept-centric agent utilizes a vocabulary of neuro-symbolic concepts. These concepts, including object, relation, and action concepts, are grounded on sensory inputs and actuation outputs. They are also compositional, allowing for the creation of novel concepts through their structural combination. To facilitate learning and reasoning, the concepts are typed and represented using a combination of symbolic programs and neural network representations. Leveraging such neuro-symbolic concepts, the agent can efficiently learn and recombine them to solve various tasks across different domains and data modalities, ranging from 2D images and videos to 3D scenes, temporal data, and robotic manipulation data.
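One way to picture the combination of symbolic programs and neural representations (a toy sketch with assumed names and hand-set scores, standing in for what a trained network would produce): object concepts are grounded as per-object probabilities, and a symbolic program composes them with soft operations, so a query like "count the red cubes" remains differentiable end to end.

```python
# Sketch: a soft symbolic program over neural concept groundings.
# The per-object scores below are stand-ins for a network's outputs.

scores = {                      # P(concept | object) for 3 detected objects
    "red":  [0.9, 0.1, 0.8],
    "cube": [0.95, 0.9, 0.2],
}

def filter_(mask, concept):
    """Soft intersection of an object mask with a concept's scores."""
    return [m * s for m, s in zip(mask, scores[concept])]

def count(mask):
    """Soft count: expected number of objects in the mask."""
    return sum(mask)

mask = [1.0, 1.0, 1.0]          # start from all detected objects
mask = filter_(mask, "red")
mask = filter_(mask, "cube")
print(round(count(mask), 3))    # expected number of red cubes
```

Because every operation is arithmetic on scores, gradients from a downstream loss can flow back into the concept groundings, which is what lets the vocabulary of concepts be learned rather than hand-specified.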


Bio: Jiajun Wu is an Assistant Professor of Computer Science and, by courtesy, of Psychology at Stanford University...

Jun Zhu

Tsinghua University

Physics-Informed Machine Learning

Abstract: Recent advances in data-driven machine learning have revolutionized fields like computer vision, reinforcement learning, and many scientific and engineering domains. In many real-world and scientific problems, the systems that generate data are governed by physical laws. Recent work shows that machine learning models can benefit from incorporating physical priors alongside collected data, making the intersection of machine learning and physics a prevailing paradigm. By seamlessly integrating data and mathematical physics models, this approach guides the machine learning model towards solutions that are physically plausible, improving accuracy and efficiency even in uncertain and high-dimensional settings. In this talk, I will present this learning paradigm, called Physics-Informed Machine Learning (PIML), which builds models that leverage empirical data and available physical prior knowledge to improve performance on tasks that involve a physical mechanism. I will also present some recent progress from three perspectives: machine learning tasks, representations of physical priors, and methods for incorporating them.
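The core PIML idea of combining a data term with a physics term can be shown on a deliberately tiny example (an illustrative sketch with assumed numbers, not the speaker's method): fit u(t) = a·t to sparse noisy observations while also penalising violation of the governing law du/dt = 2 at unlabeled collocation points.

```python
# Sketch: a physics-informed loss = data MSE + lambda * physics residual.
# The model u(t) = a*t has one parameter; the law du/dt = 2 pulls a -> 2.

def piml_loss(a, data, collocation, lam=1.0):
    # Data term: mean squared error on observed (t, u) pairs.
    data_loss = sum((a * t - u) ** 2 for t, u in data) / len(data)
    # Physics term: residual of du/dt - 2 = 0 at collocation points.
    # For u = a*t the derivative is a, so the residual is (a - 2).
    phys_loss = sum((a - 2.0) ** 2 for _ in collocation) / len(collocation)
    return data_loss + lam * phys_loss

data = [(1.0, 1.8), (2.0, 4.1)]     # sparse, noisy observations
colloc = [0.5, 1.0, 1.5]            # unlabeled collocation points

# Crude 1-D grid search over the single parameter a (a real PINN would
# run gradient descent on a neural network instead).
best = min((piml_loss(a / 100, data, colloc), a / 100)
           for a in range(100, 300))
print(best)
```

The physics term acts as a regulariser grounded in the governing equation: it needs no labels, only collocation points, which is why PIML helps precisely when observations are scarce or noisy.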


Bio: Jun Zhu is a Bosch AI professor in the Department of Computer Science and Technology at Tsinghua University...

Cunjing Ge

Nanjing University

Reasoning with Counting Models of Formulas

Abstract: The satisfiability (SAT) problem is fundamental in computer science. Reasoning tasks are usually represented as satisfiability problems over various logic formulas. Their counting versions, which compute or approximate the number of models of a given formula, have found interesting applications in fields such as neural network verification, probabilistic inference, program compilation optimization, reliability analysis, and information flow analysis. Many tools for program analysis, testing, and verification use mathematical logic as the calculus of computation. SMT (Satisfiability Modulo Theories) is a commonly used first-order language of this kind: propositional logic combined with theories such as linear arithmetic. In this talk, I will present a state-of-the-art SMT solving framework consisting of a SAT engine and a theory solver. I will then introduce methods for counting integer solutions and for computing the volume of the solution space of a set of linear constraints.
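What "counting the models of a formula" means can be shown with a naive propositional model counter (exhaustive enumeration for illustration; practical #SAT solvers use techniques such as component caching and knowledge compilation, but the quantity counted is the same).

```python
from itertools import product

# Sketch: naive #SAT by enumeration. A CNF clause is a list of integer
# literals: k > 0 means variable k, k < 0 means its negation.

def count_models(n_vars, clauses):
    """Count the truth assignments (over variables 1..n_vars) that
    satisfy every clause of the CNF."""
    count = 0
    for bits in product([False, True], repeat=n_vars):
        def sat(lit):
            value = bits[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(any(sat(lit) for lit in clause) for clause in clauses):
            count += 1
    return count

# (x1 or x2) and (not x1 or x3) over x1, x2, x3:
print(count_models(3, [[1, 2], [-1, 3]]))  # -> 4 of the 8 assignments
```

The SMT counting problems mentioned in the talk generalise this: instead of 0/1 assignments one counts integer points, or measures the volume of the region, satisfying a set of linear constraints.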


Bio: Cunjing Ge is a postdoctoral researcher in the School of Artificial Intelligence at Nanjing University. In 2019, he received his Ph.D. in Computer Software and Theory from the Institute of Software, Chinese Academy of Sciences. From 2019 to 2021, he was a postdoctoral researcher in the FMV group at Johannes Kepler Universität Linz, Austria, working with Prof. Armin Biere. His research interests are automated reasoning and abductive learning, specifically solving solution counting problems on logic formulas.