The First International Workshop on Combining Learning and Reasoning: Programming Languages, Formalisms, and Representations
In conjunction with the 36th AAAI Conference on Artificial Intelligence
(AAAI-2022), February 22 - March 1, 2022, Vancouver, BC, Canada
Workshop Day: February 28th, 2022
Location: Virtual Room Blue 5
All times are in EST.
Title: Human Allied Learning of Symbolic Deep Models
Abstract: Historically, Artificial Intelligence has taken either a symbolic route, for representing and reasoning about objects at a higher level, or a statistical route, for learning complex models from large data. To achieve true AI, it is necessary to make these different paths meet and enable seamless human interaction. First, I will introduce methods for learning from rich, structured, complex, and noisy data. One of the key attractive properties of the learned models is that they use a rich representation for modeling the domain, which potentially allows for seamless human interaction. Next, I will present recent progress that allows for more reasonable human interaction, where the human input is taken as "advice" and the learning algorithm combines this advice with data. Finally, I will discuss the potential of "closing the loop," where an agent figures out what it knows and solicits information about what it does not know. This is an important direction for realizing the true goal of human-allied AI.
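As a loose illustration of the advice-taking idea described in this abstract (not the speaker's actual system; the data, the advice rule, and the weights below are invented for the example), human advice can be folded into training as a soft penalty alongside the ordinary data loss:

# A minimal sketch: combine labeled data with human "advice" expressed as a
# soft constraint on the model's predictions. Everything here is illustrative.
import torch

torch.manual_seed(0)
X = torch.randn(100, 5)                      # toy features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).float()   # toy labels

model = torch.nn.Linear(5, 1)
opt = torch.optim.Adam(model.parameters(), lr=0.05)
bce = torch.nn.BCEWithLogitsLoss()

# Advice: "when feature 0 is large, the label should tend to be positive."
advice_mask = X[:, 0] > 1.0

for step in range(200):
    logits = model(X).squeeze(-1)
    data_loss = bce(logits, y)
    probs = torch.sigmoid(logits)
    # Penalize predictions that contradict the advice on the advised examples.
    advice_loss = torch.relu(0.9 - probs[advice_mask]).mean() if advice_mask.any() else 0.0
    loss = data_loss + 0.5 * advice_loss     # the weight trades off data vs. advice
    opt.zero_grad(); loss.backward(); opt.step()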
Title: What Program Synthesis Can Learn From How People Write Code
Abstract: How can we best make systems which learn to write computer programs? Here I explore the idea that we should take insight from the techniques and tools that human coders use when building software, but that we should combine those insights with machine learning methods. I focus on two basic coding techniques: writing libraries, and using interpreters ("REPLs"). For libraries, I present a system called DreamCoder, which grows a library of reusable subroutines as it solves a range of programming problems. DreamCoder's architecture builds on the structure of wake-sleep neural network training algorithms, and combines both symbolic and neural learning. For interpreters, I present a system which learns to interact with a REPL while it writes code, showing that this can help mitigate the combinatorial search difficulties of program synthesis. At the end of the talk, I will present preliminary results on modeling another aspect of coding: creating challenging and interesting programming problems.
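As a toy illustration of the REPL idea (not DreamCoder or the systems from this talk; the operator set and search loop are invented for the example), execution feedback on partial programs can guide a best-first search:

# A minimal sketch: execution-guided search for a straight-line arithmetic
# program, where evaluating each partial program (a toy stand-in for a REPL)
# ranks and prunes candidates.
import heapq

OPS = {"+1": lambda v: v + 1, "*2": lambda v: v * 2, "-3": lambda v: v - 3}

def synthesize(start, target, max_len=6):
    # Best-first search over partial programs; the "REPL state" is the value
    # obtained by executing the program written so far.
    frontier = [(abs(start - target), start, [])]
    seen = {start}
    while frontier:
        _, value, prog = heapq.heappop(frontier)
        if value == target:
            return prog
        if len(prog) >= max_len:
            continue
        for name, fn in OPS.items():
            nxt = fn(value)
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (abs(nxt - target), nxt, prog + [name]))
    return None

print(synthesize(3, 11))   # returns a valid operator sequence; which one depends on search order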
See here for the list of accepted papers.
Title: Towards a Proper Foundation for Artificial General Intelligence
Abstract: Large pretrained language models like BERT and GPT-3 have generated enormous enthusiasm, and are capable of producing remarkably fluent language. But they have also been criticized on many grounds, and described as "stochastic parrots." Are they adequate as a basis for general intelligence, and if not, what would a better foundation for general intelligence look like?
Title: Towards Generalizable Neuro-Symbolic Reasoners
Abstract: Symbolic knowledge representation and reasoning and deep learning are fundamentally different approaches to artificial intelligence with complementary capabilities. The former are transparent and data-efficient, but they are sensitive to noise and cannot be applied to non-symbolic domains where the data is ambiguous. The latter can learn complex tasks from examples and are robust to noise, but they are black boxes, require large amounts of data that is not necessarily easy to obtain, are slow to learn, and are prone to adversarial examples. Each paradigm excels at certain types of problems where the other performs poorly. In order to develop stronger AI systems, integrated neuro-symbolic systems that combine artificial neural networks and symbolic reasoning are being sought. In this context, one of the fundamental open problems is how to perform logic-based deductive reasoning over knowledge bases by means of trainable artificial neural networks. Over the course of this talk, we provide a brief summary of our recent efforts to bridge the neural and symbolic divide in the context of deep deductive reasoners. More specifically, we designed a novel way of conducting neuro-symbolic reasoning by pointing to the input elements. More importantly, we showed that the proposed approach generalizes across new domains and vocabularies, demonstrating symbol-invariant zero-shot reasoning capability. Furthermore, we have demonstrated that a deep learning architecture based on memory networks and pre-embedding normalization is capable of learning how to perform deductive reasoning over previously unseen RDF knowledge graphs with high accuracy. We are applying these models to the Resource Description Framework (RDF) and the description logic EL+, respectively. Throughout this talk we will discuss the strengths and limitations of these models, particularly in terms of accuracy, scalability, transferability, and generalizability.
Title: Combining Reasoning and Learning for Discovery
Abstract: Artificial Intelligence (AI) is a rapidly advancing field inspired by human intelligence. AI systems are now performing at human and even superhuman levels on various tasks, such as image identification and face and speech recognition. The tremendous AI progress that we have witnessed in the last decade has been largely driven by deep learning advances and heavily hinges on the availability of large, annotated datasets to supervise model training. However, often we only have access to small datasets and incomplete data. We amplify a few data examples with human intuitions, detailed reasoning from first principles, and prior knowledge to enable discovery. I will describe Deep Reasoning Networks (DRNets), a general framework that seamlessly integrates deep learning and reasoning via an interpretable latent space for incorporating prior knowledge and tackling challenging problems. DRNets require only modest amounts of (unlabeled) data, in sharp contrast to standard deep learning approaches. DRNets reach super-human performance for crystal-structure phase mapping, a core, long-standing challenge in materials science, enabling the discovery of solar-fuels materials. We further demonstrate DRNets on single-player visual combinatorial games, variants of the Sudoku game. Finally, I will also talk about the effectiveness of a novel curriculum learning with restarts strategy to boost an A*-style best-first search reinforcement learning framework. We show how such a strategy can outperform specialized solvers for Sokoban, a prototypical AI planning problem.
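As a loose illustration of encoding prior knowledge as constraints over an interpretable latent space (not the DRNets implementation; the toy 4x4 grid, loss weights, and optimizer below are assumptions), the row and column constraints of a Sudoku-like puzzle can be written as a differentiable penalty:

# A minimal sketch: express "each digit appears once per row and column" as a
# differentiable loss over a probabilistic assignment, and minimize it by gradient descent.
import torch

n = 4  # toy 4x4 Sudoku-like grid
logits = torch.randn(n, n, n, requires_grad=True)   # cell (i, j) -> digit scores
opt = torch.optim.Adam([logits], lr=0.1)

for step in range(500):
    p = torch.softmax(logits, dim=-1)         # p[i, j, d] = P(cell ij holds digit d)
    row_sums = p.sum(dim=1)                   # each digit should appear once per row
    col_sums = p.sum(dim=0)                   # ... and once per column
    constraint_loss = ((row_sums - 1) ** 2).sum() + ((col_sums - 1) ** 2).sum()
    entropy = -(p * (p + 1e-9).log()).sum()   # minimizing entropy pushes toward discrete assignments
    loss = constraint_loss + 0.1 * entropy
    opt.zero_grad(); loss.backward(); opt.step()

print(p.argmax(dim=-1))  # ideally, each row and column now uses every digit once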
Title: Bridging discrete optimization and machine learning
Abstract: Many uses of machine learning require bridging the continuous world of ML models with the discrete world of combinatorial optimization. For example, in many socially consequential applications, the predictions of a machine learning model are used as the input to a combinatorial optimization problem which models the allocation of scarce resources according to predicted need. Or, in reasoning tasks we may wish to search for outputs that satisfy particular logical constraints. Connecting machine learning and discrete optimization offers the potential to train ML models end-to-end for such use cases. However, modern ML requires a differentiable training pipeline, while the solutions to discrete optimization problems inherently lack informative gradients. In this talk, I will present two strategies to integrate discrete optimization problems into the training loop of an ML model. The first is to construct a differentiable relaxation of the discrete optimization problem. I will show that this strategy can be instantiated across diverse problem classes such as linear programs, submodular maximization, mixed-integer linear programs, and MAXSAT problems. The second is to learn a good relaxation in an automated fashion for a given distribution of problem instances. Together, these methods provide a flexible toolbox to bridge the continuous and discrete worlds and lead to significant improvements on a range of decision making and reasoning tasks.
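As a minimal illustration of the first strategy (not the methods from this talk; the selection problem, temperature, and model below are invented for the example), a discrete top-k selection can be relaxed with a softmax so that the decision objective backpropagates into the predictor:

# A minimal sketch: replace a hard "pick the k best items" decision with a
# temperature-controlled softmax, and train the predictor on the realized decision value.
import torch

torch.manual_seed(0)
n_items, n_feats = 20, 8
X = torch.randn(n_items, n_feats)
true_value = X @ torch.randn(n_feats)          # hidden "true" item values, known for training instances

model = torch.nn.Linear(n_feats, 1)
opt = torch.optim.Adam(model.parameters(), lr=0.05)
k, temperature = 5, 0.5

for step in range(300):
    pred = model(X).squeeze(-1)
    # Soft relaxation of "select the k items with highest predicted value":
    # softmax weights scaled to select roughly k items in expectation.
    weights = k * torch.softmax(pred / temperature, dim=0)
    decision_value = (weights.clamp(max=1.0) * true_value).sum()
    loss = -decision_value                      # train to maximize the realized decision value
    opt.zero_grad(); loss.backward(); opt.step()

chosen = pred.topk(k).indices                   # hard decision at test time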
Title: Incorporating Symbolic Knowledge into Neural NLP Models
Abstract: Language is often underspecified and ambiguous, but we can understand it based on our commonsense knowledge and shared experiences. While neural NLP models may learn such knowledge from their training data, it is much more efficient to provide them with access to knowledge bases and knowledge models. In this talk I will present a line of work in which neural NLP models are enhanced with symbolic knowledge. Such models combine the best of both worlds: the generalizability of neural representations, with the structure and precision of symbolic knowledge. I will discuss work on solving difficult coreference problems, interpreting figurative language, and improving explainability. I will conclude with open problems and future directions in building neuro-symbolic NLP models.
Featuring: Bryan Wilder, Carla Gomes, Gary Marcus, Guy Van den Broeck, Kevin Ellis, Sriraam Natarajan, and Vered Shwartz.