The First International Workshop on Combining Learning and Reasoning: Programming Languages, Formalisms, and Representations
In conjunction with the 36th AAAI conference on artificial intelligence (AAAI-2022), February 22-March 1, 2022, Vancouver, BC, Canada
Workshop Day: February 28, 2022
Location: Virtual Room Blue 5
The increased availability of data and novel machine learning methods has sparked interest in learning-based and data-driven approaches across many disciplines, such as biology, the social sciences, cognitive science, finance, and physics. Solving real-world problems requires integrating AI learning paradigms so that expert knowledge can be incorporated and uncertainty or complex structures can be handled during learning and inference. This, in turn, leads to a long path between formulating the problem and materializing the learning algorithm. Current practice often falls short of addressing this in a principled way and instead leads to experimentation with a variety of models and algorithms. This gap not only slows progress in the AI field, it also makes the field less accessible to domain experts outside it. The problem calls for formalisms and languages that can integrate the existing disciplines, in particular symbolic and sub-symbolic approaches, for combining learning and reasoning. In this workshop we look at the importance of integrative paradigms through the lens of real-world applications, with the aim of making AI accessible to domain experts and providing the means for:
1. High-level and declarative expression of the problem (i.e., the user specifies what she wants to achieve rather than how to achieve it);
2. Incorporating prior knowledge (e.g., laws of physics, or certain biological properties);
3. Reasoning over uncertain data/predictions;
4. Dealing with complex structures such as graphs and relations (e.g., to represent a social network or a molecule);
5. Modularity, which allows the components of a program to be easily switched or reused.
|Submission Deadline|~~November 12, 2021~~ November 19, 2021|
|Notification|~~December 4, 2021~~ December 15, 2021|
|Camera Ready|~~December 15, 2021~~ December 20, 2021|
|Workshop Day|February 28, 2022|
We are open to papers that have been submitted to the main conference. However, the reviews should be submitted to us, and they will be quickly meta-reviewed (please attach your reviews as a PDF in the supplementary material).
Thanks to our sponsor, the CLeaR workshop provides financial aid to student authors/presenters who need it, in particular to support underrepresented minorities and attendees from developing countries. If you plan to submit your work to the workshop and require financial support for attending, please send us an email with your information and your submission details.
Bio: She is an Assistant Professor of Computer Science at the University of British Columbia. Her research interests focus on natural language processing, with the fundamental goal of building models capable of human-level understanding of natural language. She is interested in computational semantics and pragmatics, and commonsense reasoning. She is currently working on learning to uncover implicit meaning, which is abundant in human speech, and on developing machines with advanced reasoning skills.
Bio: He is a scientist, best-selling author, and entrepreneur. He is Founder and CEO of Robust.AI, and was Founder and CEO of Geometric Intelligence, a machine learning company acquired by Uber in 2016. He is the author of five books, including The Algebraic Mind, Kluge, The Birth of the Mind, and The New York Times best seller Guitar Zero, as well as editor of The Future of the Brain and The Norton Psychology Reader. He has published extensively in fields ranging from human and animal behavior to neuroscience, genetics, linguistics, evolutionary psychology, and artificial intelligence, often in leading journals such as Science and Nature, and is perhaps the youngest Professor Emeritus at NYU. His newest book, co-authored with Ernest Davis, Rebooting AI: Building Machines We Can Trust, aims to shake up the field of artificial intelligence.
Bio: He is a Schmidt Science Fellow at the Harvard School of Public Health and Carnegie Mellon University. Starting in Fall 2022, he will be an Assistant Professor in the Machine Learning Department at CMU. He recently completed his PhD at Harvard University, where he was advised by Milind Tambe. His research falls at the intersection of optimization, social networks, and machine learning. He designs algorithmic and data-driven methods to improve decision making under uncertainty and deploys these techniques for social impact, with a focus on applications in public health. He is particularly interested in interventions for marginalized populations. For example, to improve social network interventions for HIV prevention among homeless youth, he developed a combination of robust optimization techniques and algorithms for sampling social networks. His team's intervention resulted in a field trial showing significantly improved adoption of protective behaviors. Other application areas include COVID-19 and tuberculosis treatment in India.
Bio: She is the Ronald C. and Antonia V. Nielsen Professor of Computing and Information Science and the director of the Institute for Computational Sustainability at Cornell University. Gomes received a Ph.D. in computer science in the area of artificial intelligence from the University of Edinburgh. Her research area is Artificial Intelligence with a focus on large-scale constraint reasoning, optimization, and machine learning. Recently, Gomes has become deeply immersed in research on scientific discovery for a sustainable future and more generally in research in the new field of Computational Sustainability. Gomes is the lead PI of an NSF Expeditions in Computing award. She has (co-)authored over 150 publications, which have appeared in venues spanning Nature, Science, and a variety of conferences and journals in AI and Computer Science, including five best paper awards. She was named the “most influential Cornell professor” by a Merrill Presidential Scholar (2020) and she was also the recipient of the Association for the Advancement of Artificial Intelligence (AAAI) Feigenbaum Prize (2021) for “high-impact contributions to the field of artificial intelligence, through innovations in constraint reasoning, optimization, the integration of reasoning and learning, and through founding the field of Computational Sustainability, with impactful applications in ecology, species conservation, environmental sustainability, and materials discovery for energy.”
Bio: He is a Professor and the Director of the Center for ML in the Department of Computer Science at the University of Texas at Dallas and a RBDSCAII Distinguished Faculty Fellow at IIT Madras. He was previously an Associate Professor, and earlier an Assistant Professor, at Indiana University and the Wake Forest School of Medicine, and a post-doctoral research associate at the University of Wisconsin-Madison, having received his PhD from Oregon State University. His research interests lie in the field of Artificial Intelligence, with emphasis on Machine Learning, Statistical Relational Learning and AI, Reinforcement Learning, Graphical Models, and Biomedical Applications. He is an AAAI senior member and has received the Young Investigator award from the US Army Research Office, the Amazon Faculty Research Award, the Intel Faculty Award, the XEROX Faculty Award, the Verisk Faculty Award, and the IU Trustees Teaching Award from Indiana University. He was the program co-chair of the SDM 2020 and ACM CoDS-COMAD 2020 conferences. He is the chief editor of the Frontiers in ML and AI journal, an associate editor of the MLJ, JAIR, and DAMI journals, and the electronics publishing editor of JAIR.
Bio: He is a computer scientist working in artificial intelligence and program synthesis, with the goal of better combining reasoning and learning. He is an assistant professor of computer science at Cornell. Before starting at Cornell, he was a research scientist at Common Sense Machines, following his PhD at MIT's Department of Brain and Cognitive Sciences, where he was co-advised by Josh Tenenbaum and Armando Solar-Lezama. Broadly, he is motivated by the goals of building machine learning systems that generalize strongly (extrapolating rather than interpolating), require less data (greater sample efficiency), and acquire interpretable knowledge that humans can understand and build on. He draws on ideas and techniques from machine learning, artificial intelligence, programming languages, and cognitive science. More specifically, he has investigated the hypothesis that some progress on these fronts can come from program induction.
Bio: Monireh Ebrahimi is a Senior Cognitive Software Engineer at IBM's Center for Open-Source Data and AI Technologies (CODAIT) in San Francisco, where she works on open-source data and AI technologies. She obtained her Ph.D. from the Data Semantics (DaSe) lab at Kansas State University with a major focus on Neuro-Symbolic Integration. Her dissertation, titled "Generalizable Neuro-Symbolic Reasoners", covers her recent efforts to bridge the neural and symbolic divide in the context of deep deductive reasoners. Her primary research interests include Deep Learning, Knowledge Graphs, Reasoning, the Semantic Web, and Natural Language Processing. She has organized several tutorials on "Current and Future Trends of Neural Knowledge Graph Representation and Reasoning" at IJCAI 2020, US2TS 2019, and US2TS 2020, has served as a PC member and reviewer for artificial intelligence venues (NeurIPS, AAAI, IJCAI, ICML, ICLR, JAIR) and Semantic Web conferences (ISWC, ESWC, TheWebCon), and received the Most Outstanding Reviewer Award from WWW 2017.
|Michigan State University, IHMC|email@example.com|
|Delft University of Technology|firstname.lastname@example.org|
|University of Washington|email@example.com|
|Michigan State University|firstname.lastname@example.org|
|Michigan State University|email@example.com|
|University of Pennsylvania|firstname.lastname@example.org|
|University of California Los Angeles|email@example.com|