[HN Gopher] DeepMind's AI helps untangle the mathematics of knots
___________________________________________________________________
DeepMind's AI helps untangle the mathematics of knots
Author : bryan0
Score : 78 points
Date : 2021-12-10 19:48 UTC (3 hours ago)
(HTM) web link (www.nature.com)
(TXT) w3m dump (www.nature.com)
| vanusa wrote:
| _One technique in particular, called saliency maps, turned out to
| be especially helpful. It is often used in computer vision to
| identify which parts of an image carry the most-relevant
| information. Saliency maps pointed to knot properties that were
| likely to be linked to each other, and generated a formula that
| seemed to be correct in all cases that could be tested. Lackenby
| and Juhasz then provided a rigorous proof that the formula
| applied to a very large class of knots2._
|
| Kewl, and not to discount the result but sounds like ... pattern
| matching, right?
|
| So why do the editors of all these journals keep feeding off the
| ML == AI meme?
|
| Surely they know better by now.
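The saliency-map technique described in the quoted passage can be sketched in miniature. This is an illustrative toy, not the paper's actual pipeline: the "model" below is a made-up scoring function, and the saliency is estimated with finite differences rather than backpropagated gradients.

```python
def model(x):
    # Stand-in "trained model": an arbitrary illustrative scoring
    # function. Feature 0 matters a lot, feature 1 a little,
    # feature 2 not at all.
    return 3.0 * x[0] ** 2 + 0.1 * x[1]

def saliency(f, x, eps=1e-5):
    # Sensitivity of the output to each input feature, |df/dx_i|,
    # approximated with finite differences.
    base = f(x)
    scores = []
    for i in range(len(x)):
        bumped = list(x)
        bumped[i] += eps
        scores.append(abs(f(bumped) - base) / eps)
    return scores

s = saliency(model, [1.0, 2.0, 3.0])
# s ranks feature 0 far above the others -- the analogue of the knot
# properties the saliency maps flagged as worth investigating.
```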
| Jensson wrote:
| Because pattern matching is the only significant new thing in
| AI research the past few decades. Every time they apply pattern
| matching to something new we will get articles like this and AI
| optimists will come and say that soon AI will take our jobs.
| Pattern matched images, pattern matched texts, reverse pattern
| matching to generate uncanny images and articles, pattern
| matched gameplay states etc.
| version_five wrote:
| Deep learning is pattern finding, not pattern matching. That
| is the difference. There is a business narrative that's
| hyping up ML/AI, there's lots of retrospective tracing of
| modern ML's roots back to something older, but that doesn't
| change the fact that there has been a big explosion of
| demonstrated advances opened up by modern approaches over the
| past 10 years.
|
| Edit: just to expand a bit, the "intelligence" in modern AI
| is in the training, not inference. I think people see
| inference in a neural network as pattern matching or some
| kind of filter, and that's basically true, and "dumb" if you
| like. But learning the weights is a whole different thing.
 | Stochastic Gradient Descent et al. often use only a
 | comparatively small number of examples to learn a pattern
 | and embed it in a set of weights in a way that generalizes
 | despite being highly underdetermined. It's not general
 | intelligence, but it's a very different thing from the
 | casual dismissals people like to post, usually directed at
 | the inference part as if the weights had just magically
 | appeared.
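The training-vs-inference distinction above can be shown with a minimal plain-Python sketch, assuming toy data: SGD fits a linear model from a small noise-free sample (the target weights and learning rate are illustrative).

```python
import random

random.seed(0)
true_w = [2.0, -3.0, 1.0]           # slope, slope, bias (illustrative)

def predict(w, x):
    # Inference is just a dot product plus bias -- the "dumb" part.
    return w[0] * x[0] + w[1] * x[1] + w[2]

# A comparatively small, noise-free training set.
data = []
for _ in range(20):
    x = [random.uniform(-1, 1), random.uniform(-1, 1)]
    data.append((x, predict(true_w, x)))

# Training is where the work happens: stochastic gradient descent
# on squared error, one random example at a time.
w, lr = [0.0, 0.0, 0.0], 0.1
for _ in range(2000):
    x, y = random.choice(data)
    err = predict(w, x) - y
    w[0] -= lr * err * x[0]
    w[1] -= lr * err * x[1]
    w[2] -= lr * err

# w now approximates true_w and generalizes to unseen inputs.
```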
| vanusa wrote:
| _Deep learning is pattern finding, not pattern matching._
|
| "Pattern matching" obviously includes "pattern finding"
| already, in common parlance.
|
| By "deep learning" one means, in effect, "hierarchical
| pattern finding" (which is basically what "feature" or
| "representation" learning means to a lay person).
|
| But still, at the end of the day ... pattern finding.
|
| Or that is to say: "just ML", not AI.
| jonas21 wrote:
| I think most people would take "pattern matching" to mean
| matching against fixed patterns or engineered features,
| as opposed to learned features which you might call
| "pattern finding".
| vanusa wrote:
| Although the distinctions you're drawing are valid --
| from the bigger picture point of view, this is basically
| hair splitting.
|
| The more basic and important point is: technology that
| works on the basis of "pattern finding" (however you wish
| to call it) -- even if it performs exponentially better
| and faster than humans, in certain applications -- is
| still far different from, and falls far short of
| technology that actually mimics full-scale sentient
| (never mind if it needs to be human) cognition.
|
 | Or that is to say: of any technology that can
 | (meaningfully) be called AI.
| fault1 wrote:
| I think what you are describing is the AI effect:
| https://en.wikipedia.org/wiki/AI_effect
|
 | 'AI' has almost always been a marketing term, anyway.
|
| I would call all of this pattern detection btw.
|
| But so would i call the kernel perceptron (1964):
| https://en.wikipedia.org/wiki/Kernel_perceptron
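For reference, the kernel perceptron linked above fits in a few lines. This is a toy sketch on made-up data: the labels (+1 inside the unit circle, -1 outside) and the degree-2 polynomial kernel are illustrative choices, picked because a plain linear perceptron cannot separate them.

```python
def kernel(a, b):
    # (1 + <a, b>)^2 -- degree-2 polynomial kernel
    return (1.0 + a[0] * b[0] + a[1] * b[1]) ** 2

# Toy data: +1 near the origin, -1 far from it (not linearly separable
# by a homogeneous linear rule, but separable under this kernel).
train = [((0.1, 0.2), 1), ((0.3, -0.2), 1), ((-0.2, 0.1), 1),
         ((1.5, 1.5), -1), ((-1.6, 0.2), -1), ((0.1, -1.7), -1)]

alpha = [0] * len(train)            # per-example mistake counts

def predict(x):
    s = sum(a * y * kernel(xi, x) for a, (xi, y) in zip(alpha, train))
    return 1 if s > 0 else -1

for _ in range(10):                 # perceptron epochs
    for i, (xi, yi) in enumerate(train):
        if predict(xi) != yi:       # on a mistake, remember the example
            alpha[i] += 1

# After a few epochs the classifier is consistent with all six points.
```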
| Jensson wrote:
| I think the AI effect is reversed. It should be "any time
| a program can compete with humans at a new task it will
| get marketed as AI".
|
 | I don't really get the original argument. I never thought
 | "today's AI isn't smart, but if it could play Go, that
 | would be an intelligent agent!". So "the AI effect" is
| just a strawman, I have seen no evidence that anyone
| actually made such a change of heart. AI research is
| important, but nothing so far has been anywhere remotely
| intelligent and I never thought "if it could do X it
| would be intelligent" for any of the X AI can do today.
| When an AI can hold a remote job and get paid without
| getting fired for like a year, that is roughly the point
| I'd agree we have an intelligent agent.
| YeGoblynQueenne wrote:
| >> "Pattern matching" obviously includes "pattern
| finding" already, in common parlance.
|
| So, the terminology I recognise is "pattern matching"
| when referring to matching an object against a pattern,
| as in matching strings to regular expressions or in
| unification (where arbitrary programs can be matched);
| and "pattern _recognition_ " when referring specifically
| to machine vision tasks, as an older term that as far as
| I can tell has fallen out of fashion. The latter term has
| a long history that I don't know very well and that goes
 | back to the 1950s. You can also find it as "statistical
| pattern recognition".
|
| To be honest, I haven't heard of "pattern finding"
| before. Can you say what "common parlance" is that? What
| I'm asking is, whom is it common to, in what circles,
| etc? Note that a quick google for "pattern finding" gives
| me only "pattern recognition" results.
|
| To clarify, deep learning is not "pattern matching", or
| in any case you will not find anyone saying that it is.
| rackjack wrote:
| I'm pretty sure they meant "pattern finding" as
| "discovering new patterns that can be used for matching",
| not "finding a pattern that matches an existing known
| one".
| Jensson wrote:
 | But currently humans do that "pattern finding". If you
 | want a model to learn to recognize animals, you give it
 | thousands of images of different animals in all sorts of
 | poses, angles and environments and tell it what those
 | animals are. Basically, the "pattern" is created by the
 | humans who compile the dataset with enough examples to
 | cover every situation, and then the program uses that
 | pattern to find matches in other images. However, if you
 | want a human to recognize animals, it is enough to show
 | them this picture and they will then be able to recognize
 | most of these animals in other photos and poses; humans
 | create the pattern from the image and don't just rely on
 | having tons of data:
|
| https://i.pinimg.com/originals/35/78/47/35784708f8cc9ef23
| 45c...
|
| Edit: In some cases you can write a program to explore a
| space. For example, you can write a program that plays
| board games randomly and notes the outcome, that is a
| data generator. Then you hook that to a pattern
| recognizer powered by a data centre, and you now have a
 | state of the art gameplay AI. It isn't that simple, since
 | writing the driver and feedback loop is extremely hard
 | work that an AI can't do; humans have to do it.
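The "data generator + pattern recognizer" loop described above can be sketched on a toy game. Everything here is illustrative: a tiny take-1-or-2 Nim variant stands in for the board game, and a win-rate table stands in for the learned pattern recognizer.

```python
import random

random.seed(1)
wins = {}   # (counters_left, take) -> [wins_for_mover, visits]

def rollout(n):
    # Play one uniformly random game from n counters; whoever takes
    # the last counter wins.
    history, player = [], 0
    while n > 0:
        take = random.choice([1, 2]) if n >= 2 else 1
        history.append((n, take, player))
        n -= take
        player = 1 - player
    return history, 1 - player      # 1 - player made the winning move

# Data generator: thousands of random self-play games.
for _ in range(5000):
    history, winner = rollout(5)
    for state, take, mover in history:
        rec = wins.setdefault((state, take), [0, 0])
        rec[0] += (mover == winner)
        rec[1] += 1

# "Pattern recognizer": pick the move with the best observed win rate.
def best_move(n):
    moves = (1, 2) if n >= 2 else (1,)
    return max(moves, key=lambda t: wins[(n, t)][0] / wins[(n, t)][1])
```

From 5 counters the table recovers the game-theoretic move (take 2, leaving the opponent on a losing 3). As the comment says, the hard part is writing the rollout driver and feedback loop, not the statistics.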
| visarga wrote:
| > But still, at the end of the day ... pattern finding.
|
| And programming at the end of the day ... if's and for's.
| See how ridiculous it is? A fuzzy high level concept
| explaining away the complexity of getting there.
| space_fountain wrote:
| In addition to what others are saying I want to highlight
| that there's decent evidence that "all" we're doing is
| pattern finding and pattern matching. Definitely ML isn't
| at a human level yet, but I see no reason to think that
 | we're missing some secret sauce that's needed to make a
| true AI. When someday we do achieve human level
| intelligences in computers, it will be at least in large
| part with ML.
| Jensson wrote:
| > In addition to what others are saying I want to
| highlight that there's decent evidence that "all" we're
| doing is pattern finding and pattern matching.
|
| Is there? Link? It is possible, I just don't think there
| is enough evidence to say anything about it. What
| evidence would show that humans are just pattern finding
| and matching machines?
| vanusa wrote:
 | We're getting into the territory of some external debates
 | -- that is to say, into areas that have already been
 | debated by others, so there isn't much new I could tell
 | you here.
|
| But basically I'm in the camp of Gary Marcus and others,
| who would probably respond by saying something along the
| lines of the following:
|
| "No, there's not particularly good evidence that all
| we're doing is pattern matching, and a lot of evidence to
| the contrary. For one thing, a lot of these ML algorithms
| (touted as near- or superhuman) are easily fooled,
| especially when you jiggle the environmental context by
| even just a little bit."
|
| "For another, and on a higher level, what algorithms lack
| -- but which sentient mammals have in spades -- is an
| ability to 'catch' themselves, an ability to look at the
| whole situation and say 'Woah, something just isn't right
| here'. (This is referred to as the 'elephant in the room'
| problem with modern AI)."
|
| "And then there's the lack of a genuine survival drive --
| not to mention the fact that we don't see any inkling of
| evidence of some capacity for self-awareness in any
| currently existing system."
|
| Just as a starting point. But these are some huge
| differences between currently existing AI systems
 | (however you want to define "AI" here) and actual sentient
| cognition.
|
 | Huge, huge differences... such that I don't see how one
 | can get the idea that "all" we're doing is pattern
 | recognition. Even if it may cover roughly 90 percent of
| what our neural tissues do - that other 10 percent is
| absolutely crucial, and completely elusive to any
| current, working technology as such.
|
| _When someday we do achieve human level intelligences in
| computers, it will be at least in large part with ML._
|
| No doubt it will, but still there's that ... remaining 10
| percent. Which you won't get to with accelerated pattern
| finding any more than a faster airplane will get you to
| the moon.
| vlovich123 wrote:
 | We're quite far from human-level intelligence and require
 | drastically more power and data storage. It wouldn't be
 | surprising if the ML powering general AI looked to today's
 | AI the way today's AI looks to 90s ML: still useful, but
 | qualitatively a meaningfully different approach.
| DrNuke wrote:
| Some combination (sequential, parallel, meta, whatever
| convenient shape) of single tasks executed at super-human
| ability (computer vision + path learning are there
| already) would inevitably lead to some sort of
| singularity when coupled with just SoA motion, handling
| and decision making / optimization. Generalizing
| occasional singularities, though, still seems a long way
| ahead.
| gmadsen wrote:
 | Sounds more like pattern finding. And if you break down human
 | cognition, it isn't much different...
| vanusa wrote:
| Yep - that's the whole point, the only point.
|
| And again, I thought it was pretty simple and generally
| known.
| atty wrote:
| I get that when you boil it down to pattern matching it sounds
| less impressive, but we are starting to see superhuman pattern
| matching algorithms, and algorithms that display logic through
| advanced "pattern matching". And it's quite obvious now that
| pattern matching is far more useful than pure rule based
| systems for any problem of sufficient complexity.
|
| And AI is a very broad field, including pattern matching
| (supervised ML or otherwise), logic/rule based systems, etc. Is
| your complaint that they use AI/ML interchangeably? Because in
 | this case, while AI is a more general term for the technology
 | than ML, it is not a strictly incorrect title.
| vanusa wrote:
| _Is your complaint that they use AI /ML interchangeably?_
|
 | Yes, that's all I'm saying. And I thought it was a well-known
 | point that they're not the same and not interchangeable --
 | which is why I'm kind of astonished at the downvotes on my
 | original comment.
|
 | _Because in this case, while AI is a more general term for
 | the technology than ML, it is not a strictly incorrect
 | title._
|
| I get what you're saying at the product level -- and the fact
| that the vast bulk of the public subjected to these
| technologies couldn't tell you the difference, nor could they
| begin to care.
|
 | But to practitioners, the basic facts remain: ML ⊂ AI, it's
| a proper subset, and we're doing the public a genuine
| disservice (and arguably causing substantial harm) by
| pretending to tell them that we're making good progress
| developing AI _as distinct from hypercharged ML_ , that any
 | day now they'll have self-driving cars ... and all that crap
| the industry has basically been telling people.
| aerodog wrote:
| surprised Erik Demaine wasn't part of this
| [deleted]
___________________________________________________________________
(page generated 2021-12-10 23:00 UTC)