29 Apr 2021 | Research | 4 minute read

Mimicking the brain: Deep learning meets vector-symbolic AI

To better simulate how the human brain makes decisions, we've combined the strengths of symbolic AI and neural networks.

Machines have been trying to mimic the human brain for decades. But neither the original, symbolic AI that dominated machine learning research until the late 1980s nor its younger cousin, deep learning, has been able to fully simulate the intelligence the brain is capable of.

One promising approach toward this more general AI is to combine neural networks with symbolic AI. In our paper "Robust High-dimensional Memory-augmented Neural Networks," published in Nature Communications,^1 we present a new idea linked to neuro-symbolic AI, based on vector-symbolic architectures. We've drawn on the brain's high-dimensional circuits and the unique mathematical properties of high-dimensional spaces. Specifically, we wanted to combine the learned representations that neural networks create with the compositionality of symbol-like entities, represented by high-dimensional, distributed vectors. The idea is to guide a neural network to represent unrelated objects with dissimilar high-dimensional vectors.

In the paper, we show that a deep convolutional neural network used for image classification can learn from its own mistakes to operate with the high-dimensional computing paradigm, using vector-symbolic architectures.
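Why high-dimensional vectors? A minimal sketch (illustrative only, not the paper's code) shows the key geometric fact: in a space of, say, 10,000 dimensions, two independently drawn bipolar (+1/-1) vectors are almost always quasi-orthogonal, so new classes can keep receiving mutually dissimilar codes.

```python
import numpy as np

rng = np.random.default_rng(seed=0)
d = 10_000  # dimensionality; an assumed value for illustration

# Two independently drawn bipolar (+1/-1) vectors.
a = rng.choice([-1, 1], size=d)
b = rng.choice([-1, 1], size=d)

# For bipolar vectors, cosine similarity is just a scaled dot product.
cos_ab = (a @ b) / d
print(f"cosine(a, b) = {cos_ab:+.4f}")  # near 0: quasi-orthogonal
print(f"cosine(a, a) = {(a @ a) / d:+.4f}")  # exactly +1
```

The deviation from exact orthogonality shrinks on the order of 1/sqrt(d), which is why the article can say the network "never runs out" of dissimilar vectors.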
It does so by gradually learning to assign dissimilar, quasi-orthogonal vectors to different image classes, mapping them far away from each other in the high-dimensional space. More importantly, it never runs out of such dissimilar vectors.

Figure 1: Diagram of a proposed methodology to merge deep network representations with vector-symbolic representations in high-dimensional computing.

This directed mapping helps the system use high-dimensional algebraic operations for richer object manipulations, such as variable binding, an open problem in neural networks. When these "structured" mappings are stored in the AI's memory (referred to as explicit memory), they help the system learn, and learn not only fast but all the time. The ability to rapidly learn new objects from a few training examples of never-before-seen data is known as few-shot learning.

High-dimensional explicit memory as computational memory

During training and inference, such an AI system accesses the explicit memory through expensive soft read and write operations, which involve every individual memory entry rather than a single discrete one. These soft reads and writes become a bottleneck when implemented on conventional von Neumann architectures (e.g., CPUs and GPUs), especially for AI models demanding millions of memory entries. Thanks to the high-dimensional geometry of the resulting vectors, their real-valued components can be approximated by binary, or bipolar, components that take up less storage. More importantly, this opens the door to efficient realization using analog in-memory computing.
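Variable binding, mentioned above as an open problem for neural networks, has a simple algebraic form in bipolar vector-symbolic architectures. The following hypothetical sketch (our own illustration, not the paper's method) binds role vectors to filler vectors by elementwise multiplication, which is its own inverse for +1/-1 components:

```python
import numpy as np

rng = np.random.default_rng(seed=1)
d = 10_000  # assumed dimensionality for illustration

def rand_vec():
    # Random bipolar vector; high dimension makes draws quasi-orthogonal.
    return rng.choice([-1, 1], size=d)

color, shape = rand_vec(), rand_vec()  # role vectors ("variables")
red, circle = rand_vec(), rand_vec()   # filler vectors ("values")

# Bind each role to its filler, then superpose the pairs into one
# record; np.sign re-bipolarizes the sum (ties stay at 0).
record = np.sign(color * red + shape * circle)

# Unbinding by the "color" role yields a noisy copy of its filler.
noisy = record * color
sims = {name: (noisy @ v) / d for name, v in
        [("red", red), ("circle", circle)]}
best = max(sims, key=sims.get)
print(best)  # the filler bound to "color" wins the similarity search
```

The unbound result is recovered by a similarity search against known fillers, which is exactly the operation the explicit memory must support at scale.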
Such transformed binary high-dimensional vectors are stored in a computational memory unit comprising a crossbar array of memristive devices. A single nanoscale memristive device represents each component of a high-dimensional vector, which leads to a very high-density memory. The similarity search over these wide vectors can be computed efficiently by exploiting physical laws, such as Ohm's law and Kirchhoff's current summation law.

This approach was verified experimentally on a few-shot image classification task involving a dataset of 100 classes of images with just five training examples per class. Although it operated with 256,000 noisy nanoscale phase-change memristive devices, accuracy dropped by just 2.7 percent compared with conventional high-precision software realizations.

We believe our results are a first step toward directing the learned representations in neural networks to symbol-like entities that can be manipulated by high-dimensional computing. Such an approach facilitates fast and lifelong learning and paves the way for high-level reasoning and manipulation of objects. The ultimate goal, though, is to create intelligent machines able to solve a wide range of problems by reusing knowledge and generalizing in predictable and systematic ways. Such machine intelligence would be far superior to current machine learning algorithms, which are typically aimed at specific narrow domains.

Authors: Abu Sebastian and Abbas Rahimi
Tags: AI, Neuro-symbolic AI

References
1. Karunaratne, G. et al. Robust high-dimensional memory-augmented neural networks. Nat Commun 12, (2021).
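As a closing illustration, the few-shot similarity search described in this post can be sketched in software. This is a toy model under stated assumptions (the dimensionality, noise level, and corruption rate are invented for illustration, not taken from the paper): bipolar class prototypes are stored as rows of a matrix standing in for the crossbar, additive Gaussian noise mimics device non-idealities, and classification is a dot-product search that a crossbar would compute physically via Ohm's and Kirchhoff's laws.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
d, n_classes = 10_000, 100  # 100 classes, as in the experiment

# One bipolar prototype per class, as rows of a "crossbar" matrix.
prototypes = rng.choice([-1, 1], size=(n_classes, d))

# Additive Gaussian noise stands in for nanoscale device variability.
noisy_memory = prototypes + rng.normal(0.0, 0.5, size=(n_classes, d))

def classify(query):
    # The crossbar computes this matrix-vector product in one step:
    # Ohm's law multiplies, Kirchhoff's law sums the currents.
    scores = noisy_memory @ query
    return int(np.argmax(scores))

# A corrupted copy of class 42's prototype: 20% of components flipped,
# mimicking an imperfect query vector from the network.
query = prototypes[42].copy()
flip = rng.choice(d, size=d // 5, replace=False)
query[flip] *= -1

print(classify(query))  # retrieval survives noise on both sides
```

Because the clean similarity margin grows with d while the noise grows only with sqrt(d), retrieval stays reliable despite noisy devices and a corrupted query, which is the intuition behind the small 2.7 percent accuracy drop reported above.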