[HN Gopher] Now that machines can learn, can they unlearn?
       ___________________________________________________________________
        
       Now that machines can learn, can they unlearn?
        
       Author : Engineering-MD
       Score  : 26 points
       Date   : 2021-08-27 21:11 UTC (1 day ago)
        
 (HTM) web link (www.wired.com)
 (TXT) w3m dump (www.wired.com)
        
       | aaccount wrote:
       | LOL, I think a better question is: can people unlearn?
        
         | hcta wrote:
         | https://en.wikipedia.org/wiki/Planck%27s_principle ("Science
         | progresses one funeral at a time") basically says that people,
         | even very smart people, can't unlearn, so the only way humanity
         | as a whole can unlearn is via people dying off. Maybe it's a
         | problem we should give more attention to before tackling aging.
        
           | avmich wrote:
           | I think this is a rather lopsided statement of the idea. It's
           | not the only way, but it is one of the ways.
        
         | actually_a_dog wrote:
         | You seem to be assuming that people (as a whole) actually learn
         | from the past and their mistakes. You know the saying, though:
         | "history never repeats itself, but it does often rhyme." We
         | just keep making a lot of the same mistakes as a species, over
         | and over again. :/
        
           | avmich wrote:
           | We're also avoiding some of our past mistakes, so that should
           | qualify. E.g. by historical standards, Europe has been a
           | rather peaceful continent for the last few decades.
        
             | Ekaros wrote:
             | The situation has changed... Still, somehow we did not
             | learn from WW1 in time to prevent WW2...
        
         | rovr138 wrote:
         | I've forgotten a lot of things I've learned.
        
           | Ekaros wrote:
           | I used to be pretty decent in high school math, chemistry,
           | and physics... Now I can only vaguely recall that stuff. Show
           | me some problem and I might not be sure what to apply or
           | where to start anymore.
        
       | NotEvil wrote:
       | https://outline.com/Hh2dMv
       | 
       | De-paywalled link
        
       | 0xdeadb00f wrote:
       | Yeah, just clear/reset their weights and/or embeddings.
        
         | Salgat wrote:
         | State-of-the-art models can take over a month to train. Ideally
         | you'd have a way to cull information tied to a given input
         | without dramatically altering the model (for example, whenever
         | one of your million users asks to be removed). It'd be very
         | interesting to see how this could work. Maybe analyze which
         | nodes contribute the most to the output for that sensitive
         | data? But even then, how do you change them without
         | compromising the model?
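
          A minimal sketch of that attribution idea, assuming a PyTorch
          classifier; `model`, `x`, `y`, and `loss_fn` stand in for the
          trained network and one user's sensitive example (all
          hypothetical names):

            import torch

            def parameter_saliency(model, x, y, loss_fn):
                # Score each parameter by how strongly it drives the
                # model's loss on this one sensitive example: |grad * w|.
                model.zero_grad()
                loss = loss_fn(model(x), y)
                loss.backward()
                return {name: (p.grad * p).abs()
                        for name, p in model.named_parameters()
                        if p.grad is not None}

          High-scoring weights would be candidates for targeted editing,
          though, as the comment notes, perturbing them blindly risks
          degrading everything else the model has learned.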
        
         | axelroze wrote:
         | It's not that simple, because that way you lose too much
         | information. More likely, the whole system would fail if the
         | weights at any layer were reset.
         | 
         | There is a way to selectively unlearn something via Memory
         | Aware Synapses (MAS): https://arxiv.org/abs/1711.09601
         | 
         | The idea was developed mostly for transfer learning: learn new
         | stuff on a new domain without forgetting the old stuff. For
         | forgetting, the model could be trained on some old images plus
         | an all-zeros target mask, with MAS preserving everything else.
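
          A minimal sketch of the MAS importance estimate from the linked
          paper, assuming a PyTorch model and an unlabeled loader of input
          batches (both hypothetical):

            import torch

            def mas_importance(model, data_loader):
                # MAS (Aljundi et al.): a parameter's importance is the
                # average gradient magnitude of the squared L2 norm of
                # the model's output; no labels are required.
                omega = {n: torch.zeros_like(p)
                         for n, p in model.named_parameters()}
                batches = 0
                for x in data_loader:
                    model.zero_grad()
                    model(x).pow(2).sum().backward()  # d||f(x)||^2/dtheta
                    for n, p in model.named_parameters():
                        if p.grad is not None:
                            omega[n] += p.grad.abs()
                    batches += 1
                return {n: w / max(batches, 1) for n, w in omega.items()}

            def mas_penalty(model, omega, old_params, lam=1.0):
                # Added to the "forget" objective: anchors important
                # weights near their old values so only the targeted data
                # is unlearned.
                loss = 0.0
                for n, p in model.named_parameters():
                    loss = loss + (omega[n] * (p - old_params[n]) ** 2).sum()
                return lam * loss

          Training on the to-be-forgotten images with a zeroed target
          while adding mas_penalty is one way to realize the scheme the
          comment describes.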
        
       | Borrible wrote:
       | In a technical sense, concept-drift awareness is unlearning. I
       | think I studied that in time-series work about twenty years ago.
       | 
       | It's an old concept...
       | 
       | And yes, if people would unlearn, their machines would follow.
       | 
       | One way or the other.
       | 
       | The question is whether anyone gives a shit and bothers to build
       | it. Or better, is paid to build it in such a way.
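
        In that spirit, the simplest drift-aware estimator really is an
        unlearner: a sliding window that drops old observations as new
        ones arrive. A toy sketch (plain Python, hypothetical):

          from collections import deque

          class SlidingMean:
              # Drift-aware running mean: learning the newest point and
              # unlearning the oldest are the same operation.
              def __init__(self, window=100):
                  self.buf = deque(maxlen=window)

              def update(self, x):
                  self.buf.append(x)  # deque evicts ("unlearns") silently
                  return sum(self.buf) / len(self.buf)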
        
       | hcta wrote:
       | As a layman, I'd intuitively want to solve this problem with an
       | adversarial approach, i.e. train an adversary network to predict
       | the info we want hidden better than chance, given oracle access
       | to the system we're trying to secure. But as I understand it,
       | GANs are currently regarded as flaky because training them
       | requires the two component networks' learning to stay fairly
       | matched. I wish I had a better understanding of the
       | "competitors"/"successors" to GANs and how they would work on
       | this sort of problem.
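
        One hedged sketch of that adversarial setup, assuming a PyTorch
        encoder, task head, and adversary network (all hypothetical
        names): the adversary learns to read the sensitive attribute out
        of the representation, and the encoder is then penalized whenever
        it succeeds.

          import torch.nn as nn

          def adversarial_unlearning_step(encoder, task_head, adversary,
                                          opt_model, opt_adv,
                                          x, y_task, y_secret, lam=1.0):
              ce = nn.CrossEntropyLoss()

              # 1) Adversary: predict the secret from a frozen
              #    representation.
              z = encoder(x).detach()
              adv_loss = ce(adversary(z), y_secret)
              opt_adv.zero_grad()
              adv_loss.backward()
              opt_adv.step()

              # 2) Model: do the task while making the adversary fail.
              z = encoder(x)
              loss = (ce(task_head(z), y_task)
                      - lam * ce(adversary(z), y_secret))
              opt_model.zero_grad()
              loss.backward()
              opt_model.step()
              return loss.item(), adv_loss.item()

        The flakiness mentioned above shows up here as the usual
        balancing act: too large a lam and the task collapses, too small
        and the adversary still recovers the secret.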
        
       | erhk wrote:
       | Now that artificial intelligence is intelligent, can it be
       | unintelligent? _thonk_
        
       | lomkju wrote:
       | They really need to apply this to Google Maps. In India, Google
       | Maps has gone crazy over the past 2 years, especially in Mumbai.
       | It seems to always give bad routes, and gig workers on Uber,
       | Ola, Swiggy, and Zomato are all affected by this too.
        
       | huachimingo wrote:
       | Tangentially related: sometimes making a bot make mistakes is
       | harder than making a correct bot. See Command & Conquer's
       | pathfinding anecdote:
       | 
       | https://youtu.be/S-VAL7Epn3o
        
         | folmar wrote:
         | ... but skip the first 7 minutes to get to the pathfinding
         | part. Also, the lack of lip sync is disturbing.
        
       ___________________________________________________________________
       (page generated 2021-08-28 23:01 UTC)