[HN Gopher] Deep learning meets vector-symbolic AI
       ___________________________________________________________________
        
       Deep learning meets vector-symbolic AI
        
       Author : sayonaraman
       Score  : 59 points
       Date   : 2021-09-23 16:24 UTC (6 hours ago)
        
 (HTM) web link (research.ibm.com)
 (TXT) w3m dump (research.ibm.com)
        
       | uoaei wrote:
       | Seems like a variant of a Siamese network which uses binarized
       | embedding vectors for predictions instead of the raw embedding
       | vectors. What exactly is the novelty presented here?
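        | 
        | Roughly what I mean, as an illustrative numpy sketch (my
        | reading of the setup, not the paper's actual scheme): binarize
        | the learned embeddings and compare the codes directly.
        | 
        |   import numpy as np
        |   
        |   def binarize(v):
        |       # Map a real-valued embedding to a {-1, +1} code.
        |       return np.where(v >= 0, 1, -1)
        |   
        |   def hamming_similarity(a, b):
        |       # Fraction of agreeing components of two binary codes.
        |       return np.mean(binarize(a) == binarize(b))
        |   
        |   rng = np.random.default_rng(0)
        |   x, y = rng.normal(size=512), rng.normal(size=512)
        |   print(hamming_similarity(x, y))  # ~0.5 for unrelated inputs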
        
       | axiosgunnar wrote:
       | Can any experts chime in? Is this any good?
        
         | armcat wrote:
          | The fundamental idea is "operating in high dimensions", and
          | this does have some solid footing, e.g. see Cover's Theorem
          | (https://en.wikipedia.org/wiki/Cover's_theorem). In fact I
          | recently gave a presentation on another paper (from FB AI
          | Research and Carnegie Mellon) exploring the concept of
          | projecting to higher dimensions for sentence encoding tasks;
          | see here: https://github.com/acatovic/paper-lunches/blob/main/fb-rands....
         | 
          | There is a fair amount of research in the area of high-
          | dimensional computing, as well as in sparse representations,
          | both of which seem to be grounded in neuroscience. As others
          | have pointed out, a number of commercial research labs exist:
          | Numenta with their Hierarchical Temporal Memory; Vicarious
          | (whose founder was one of Numenta's co-founders); and
          | Cortical.io (who are borrowing sparse binary coding concepts
          | from Numenta, combining them with self-organizing maps, and
          | applying them to document understanding tasks).
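          | 
          | As a toy illustration of the projection idea (a sketch of
          | the bag-of-random-embeddings variant in numpy; sizes and
          | names are mine, not the paper's code): embed tokens with a
          | frozen random matrix, mean-pool, and train only a cheap
          | classifier on top.
          | 
          |   import numpy as np
          |   
          |   rng = np.random.default_rng(42)
          |   VOCAB, DIM = 10_000, 4096  # high-dimensional target space
          |   # Frozen random embedding table -- never trained.
          |   E = rng.normal(0, 1 / np.sqrt(DIM), size=(VOCAB, DIM))
          |   
          |   def encode(token_ids):
          |       # Bag of random embeddings: mean-pool fixed vectors.
          |       return E[token_ids].mean(axis=0)
          |   
          |   s1 = encode([12, 904, 7])
          |   s2 = encode([12, 904, 33])
          |   cos = s1 @ s2 / (np.linalg.norm(s1) * np.linalg.norm(s2))
          |   print(cos)  # overlapping sentences remain similar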
        
           | sayonaraman wrote:
            | This is awesome, thanks for the link! There seems to be
            | mutual propagation of ideas between NLP and CV, so I'm
            | wondering if there is a visual counterpart to these "Random
            | Encoders". The SOTA for image/text embeddings currently seems
           | to be CLIP [1].
           | 
           | [1] https://openai.com/blog/clip/
        
         | robbedpeter wrote:
          | Different words, same concept as sparse distributed
          | representations in a hierarchy of neural networks.
         | 
          | Automating the process of learning is non-trivial, and making
          | it efficient is an open question.
         | 
          | Vicarious.ai, Hawkins' Numenta, Cortical.io, and various
          | other projects have been chasing this in various guises for
          | many years.
         | 
          | The lottery ticket effect on very large networks can make
          | contrasting different architectures difficult, and this result
          | looks suspect. IBM isn't necessarily a powerhouse in this
          | arena, so it'd make sense not to get excited until they verify
          | and expand their theory. It could be that their initial
          | success is entirely down to lottery-ticket effects, and that
          | the particulars of the design are a dead end.
        
       | sayonaraman wrote:
        | You might also be interested in the recent work on the
        | "resonator networks" VSA architecture [1-4] by the Olshausen
        | lab at Berkeley (P. Kanerva, who created the influential SDM
        | model [5], is one of the lab members).
       | 
        | It's a continuation of Plate's [6] and Kanerva's work in the
        | 90s, and of Olshausen's groundbreaking work on sparse coding
        | [7], which inspired the popular sparse autoencoders [8].
       | 
        | I find it especially promising that they found this
        | superposition-based approach to be competitive with the
        | optimization so prevalent in modern neural nets. Maybe backprop
        | will die one day and be replaced with something more energy-
        | efficient along these lines.
       | 
        | [1] https://redwood.berkeley.edu/wp-content/uploads/2020/11/frad...
       | 
        | [2] https://redwood.berkeley.edu/wp-content/uploads/2020/11/kent...
       | 
       | [3] https://arxiv.org/abs/2009.06734
       | 
       | [4] https://github.com/spencerkent/resonator-networks
       | 
       | [5] https://en.wikipedia.org/wiki/Sparse_distributed_memory
       | 
        | [6] https://www.amazon.com/Holographic-Reduced-Representation-Di...
       | 
       | [7] http://www.scholarpedia.org/article/Sparse_coding
       | 
       | [8] https://web.stanford.edu/class/cs294a/sparseAutoencoder.pdf
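        | 
        | As a toy sketch of the binding/superposition machinery behind
        | resonator networks (a paraphrase of the idea in [1-3] using
        | bipolar vectors and Hadamard binding; not the authors' code):
        | 
        |   import numpy as np
        |   
        |   rng = np.random.default_rng(1)
        |   D, K = 2048, 50  # vector dimension, codebook size
        |   A = rng.choice([-1, 1], size=(K, D))  # codebook, factor a
        |   B = rng.choice([-1, 1], size=(K, D))  # codebook, factor b
        |   
        |   s = A[3] * B[17]  # bind two factors (Hadamard product)
        |   
        |   def sgn(x):
        |       # Bipolarize, mapping zeros to +1 to avoid dead units.
        |       return np.where(x >= 0, 1, -1)
        |   
        |   def cleanup(v, C):
        |       # Snap v toward the similarity-weighted superposition
        |       # of codebook entries: sign(C^T C v).
        |       return sgn(C.T @ (C @ v))
        |   
        |   # Resonator-style alternating estimation of both factors;
        |   # for +/-1 codes, unbinding is just rebinding.
        |   a_hat = sgn(A.sum(axis=0))
        |   b_hat = sgn(B.sum(axis=0))
        |   for _ in range(10):
        |       a_hat = cleanup(s * b_hat, A)
        |       b_hat = cleanup(s * a_hat, B)
        |   
        |   print(np.argmax(A @ a_hat), np.argmax(B @ b_hat))  # 3 17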
        
       ___________________________________________________________________
       (page generated 2021-09-23 23:01 UTC)