[HN Gopher] Interview with Yann LeCun on AI
___________________________________________________________________
Interview with Yann LeCun on AI
Author : kgwgk
Score : 39 points
Date : 2024-10-13 18:37 UTC (3 hours ago)
(HTM) web link (www.wsj.com)
(TXT) w3m dump (www.wsj.com)
| singingwolfboy wrote:
| https://archive.ph/ooEEY
| tikkun wrote:
| It's frustrating how many disagreements come down to framings
| rather than actual substance.
|
| His framing of intelligence is one thing. The people who disagree
| with him are framing intelligence a different way.
|
| End of story.
|
| I wish that all the energy went towards substantive disagreements
| rather than disagreements that are _mostly_ (not entirely) rooted
| in semantics and definitions.
| jcranmer wrote:
| That's not what he's saying at all, though.
|
| What he's saying is that he thinks the current techniques for
| AI (e.g., LLMs) are near the limits of what you can achieve
| with such techniques and are thus a dead end for future
| research; consequently, hyperventilating about AI
| superintelligence and the like is extremely irresponsible. It's
| a substantive critique of AI today in its details, albeit one
| filtered through popular-press reporting that dumbs it down for
| popular consumption.
| aithrowawaycomm wrote:
| His point is that lots of AI folks are framing intelligence
| _incorrectly_ by overvaluing surface knowledge or ability to be
| trained to solve constrained problems, when cats have deeper
| cognitive abilities like planning and rich spatial reasoning
| which are far beyond the reach of any AI in 2024.
|
| ANNs are extremely useful tools because they can process all
| sorts of information humans find useful: unlike animals or
| humans, ANNs don't have their own will, don't get bored or
| frustrated, and can focus on whatever you point them at. But in
| terms of core cognitive abilities - _not_ surface knowledge,
| _not_ impressive tricks, and _certainly not_ LLM benchmarks -
| it is hard to say ANNs are smarter than a spider. (In fact they
| seem _dumber_ than jumping spiders, which are able to form
| novel navigational plans in completely unfamiliar manmade
| environments. Even web-spinning spiders have no trouble
| spinning their webs in cluttered garages or pantries; would a
| transformer ANN be able to do that if it had been trained only
| on bushes and trees?)
| qarl wrote:
| Well... except... cats can't talk.
| Scrapemist wrote:
| Maybe AI can translate
| qarl wrote:
| HEH... reminds me of an argument Searle once made: with the
| right "translation" you can make a wall intelligent.
| Barrin92 wrote:
| And as Marvin Minsky taught us, in probably one of the most
| profound insights in the entire field, talking seems like an
| accomplishment to us because it's the _least_ developed part of
| our capacities. It's such a conscious task not because it's a
| sign of intellect but because it's the newest, least developed
| thing our brains do, which is why it's also likely the fastest
| thing for a machine to learn.
|
| Moving as smoothly as a cat and navigating the world is the
| part that actually took our brains millions of years to learn,
| and movement is effortless not because it's easy but because it
| took so long to master, so it's also going to be the most
| difficult thing to teach a machine.
|
| The cognitive stuff is the dumb part, and that's why we have
| chess engines, pocket calculators and chatbots before we have
| emotional machines, artificial plumbers and robots that move
| like spiders.
| aithrowawaycomm wrote:
| I believe my cats sometimes get frustrated with the limitations
| of their own vocalizations and try to work around them when
| communicating with me. If, say, they want a treat, they are
| only able to meow and perform "whiny" body motions, and maybe
| I'll give them pets or throw a ball instead. So they have
| adapted a bit:
|
| - both of them will spit regular kibble out in front of me when
| they want a fancier treat (cats are _hilarious_)
|
| - the boy cat has developed very specific "sweet meows" (soft,
| high-pitched) for affection and "needy meows" (loud, full-
| chested) for toys or food; for the first few years he would
| simply amp up the volume and then give a frustrated growl when
| I did the wrong thing
|
| - the lady cat (who only has two volumes, "yell" and "scream"),
| instead stands near what she wants before meowing; bedroom for
| cuddles, water bowl for treats, hallway or office for toys
|
| - the lady cat was sick a while back and had painful poops; for
| weeks afterwards if she wanted attention and I was busy she
| would pretend to poop and pretend to be in pain, manipulating
| me into dropping my work and checking on her
|
| It goes both ways; I've developed ways of communicating with
| them over the years:
|
| - the lady is skittish but loves laying in bed with me, so I
| sing "gotta get up, little pup" in a particular way; she will
| then get up and give me space to leave the bed, without me
| scaring her with a sudden movement
|
| - I don't lose my patience with them often, but they understand
| my anxious/exasperated tone of voice and don't push their luck
| too much (note that some of this is probably shared mammalian
| instinct)
|
| - the boy sometimes bullies the lady, and I'll raise my voice
| at him; despite being otherwise skittish and scared of loud
| noises, the lady seems to understand that I am mad at the boy
| because of his actions and there's nothing to be alarmed by
|
| Sometimes I think the focus on "context-free" (or at least
| context-lite) symbolic language, essentially unique to humans,
| makes us lose sight of the fact that _communication_ is far
| older than the dinosaurs, and that maybe further progress on
| language AI should focus on communication itself, rather than
| on symbol processing with communication as a side effect.
| mmoustafa wrote:
| It's really hard for me to believe Yann is engaging sincerely;
| he is downplaying LLM abilities on purpose.
|
| He leads AI at Meta, a company with the competitive strategy to
| commoditize AI via Open Source models. Their biggest hindrance
| would be regulation putting a stop to the proliferation of
| capabilities. So they have to understate the power of the models.
| This is the only way Meta can continue sucking steam out of the
| leading labs.
| muglug wrote:
| You're free to concoct a conspiracy that he's just a puppet for
| Meta's supposed business interests*, but that doesn't change
| the validity of his claims.
|
| * pretty sure any revenue from commercial Llama licenses is a
| rounding error at best
| threeseed wrote:
| > commoditize AI via Open Source models
|
| Sounds like we should be fully supporting them then.
| lyu07282 wrote:
| You don't have to assume malice; he is a strong believer in
| liberalism, so naturally he would argue whatever leads to less
| regulation. Even if he thought AI was dangerous he would still
| believe that corporations are better suited to combat that
| threat than any government.
|
| It's similar to how the WSJ journalist would never ask him what
| he thinks about the larger effects of the "deindustrialization"
| of knowledge-based jobs caused by AI. Not because the
| journalist is malicious; it's just the shared, subconscious
| ideology.
|
| People don't need a reason to protect capital interests, even
| poor people on the very bottom will protect it.
| krig wrote:
| (reacting to the title alone since the article is paywalled)
|
| AI can't push a houseplant off a shelf, so there's that.
|
| Talking about intelligence as a completely disembodied concept
| seems meaningless. What does "cat" even mean when compared to
| something that has no physical, corporeal presence in time and
| space? Comparing them like this seems to me like making a
| fundamental category error.
|
| edit: Quoting, "You're going to have to pardon my French, but
| that's complete B.S."
|
| I guess I'm just agreeing with LeCun here.
| jcranmer wrote:
| It's referring to the fact that cats are able to do tasks like
| multistage planning, which he asserts current AIs are unable to
| do.
| krig wrote:
| Thanks, that makes more sense than the title. :)
| dang wrote:
| We replaced the baity title with something suitably bland.
|
| If there's a representative phrase from the article itself
| that's neutral enough, we could use that instead.
| qarl wrote:
| > It's referring to the fact that cats are able to do tasks
| like multistage planning, which he asserts current AIs are
| unable to do.
|
| I don't understand this criticism at all. If I go over to
| ChatGPT and say "From the perspective of a cat, create a
| multistage plan to push a houseplant off a shelf" it will
| satisfy my request perfectly.
| sebastiennight wrote:
| Out of curiosity, would you say a person with locked-in
| syndrome[0] is no longer intelligent?
|
| [0]: https://en.wikipedia.org/wiki/Locked-in_syndrome
| krig wrote:
| I don't think "intelligent" is a particularly meaningful
| concept; it just leads to the kind of confusion your comment
| hints at. Do I think a person with locked-in syndrome is
| still a human being with thoughts, desires and needs? Yes. Do
| I think we can rank intelligences along an axis where a
| locked-in person somehow rates lower than a healthy person
| but higher than a cat? I don't think so. A cat is very good
| at being a cat, much better than any human is.
| krig wrote:
| I would also point out that a person with locked-in
| syndrome still has "a physical corporeal presence in time
| and space", they have carers, relationships, families,
| histories and lives beyond themselves that are inextricably
| tied to them as an intelligent being.
| mrandish wrote:
| While I'm no expert on AI, everything I've read from LeCun on
| AI risk so far strikes me as directionally correct. I keep
| revisiting the best examples I can find of the 'Foom'
| hypothesis and it just
| doesn't seem likely. Not to say that AI won't be both very useful
| and disruptive, just that existential fears like Skynet scenarios
| don't strike me as plausible.
| Elucalidavah wrote:
| > it just doesn't seem likely
|
| It is likely, conditional on the price of compute dropping the
| way it has been.
|
| If you can basically simulate a human brain on a $1000 machine,
| you don't really need to employ any AI researchers.
|
| Of course, there has been some fear that the current models are
| a year away from FOOMing, but that does seem to be just the
| hype talking.
| threeseed wrote:
| If you could simulate a human brain and it required a $100b
| machine, you would still get funding in a weekend.
|
| Because you could easily find ways to print money, e.g. curing
| certain types of cancer or inventing a better Ozempic.
|
| But the fact is that there is no path to simulating a human
| brain.
| llamaimperative wrote:
| There is no _path_ to it? That's a bold claim. Are brains
| imbued with special brain-magic that makes them more than,
| at rock bottom, a bunch of bog-standard chemical and
| electrical and thermal reactions?
|
| It seems very obviously fundamentally solvable, though I
| agree it is _nowhere_ in the near future.
| bob1029 wrote:
| > Are brains imbued with special brain-magic that makes
| them more than, at rock bottom, a bunch of bog-standard
| chemical and electrical and thermal reactions?
|
| Some have made this argument (quantum effects, external
| fields, etc.).
|
| If any of these are proven to be true then we are looking
| at a completely different roadmap.
| llamaimperative wrote:
| Uh yeah, but we have no evidence for any of them (aside
| from quantum effects, which are "engineerable" to the
| extent they exist in brains anyway).
| threeseed wrote:
| > "engineerable" to the extent they exist in brains
| anyway
|
| Can you please enlighten us then, since you clearly _know_ to
| what extent quantum effects exist in the brain?
| aithrowawaycomm wrote:
| This seems like a misreading - there's also no real path to
| resolving P vs. NP or to disentangling the true chemical
| origins of life. OP didn't say it was impossible. The problem
| is we don't know very much about intelligence in animals
| generally, and even less about intelligence in humans. In
| particular, we know far less about intelligence than we do
| about computational complexity or early forms of life.
| CooCooCaCha wrote:
| Those seem like silly analogies. There are billions of
| brains on the planet, and humans can grow them inside
| themselves (pregnancy). Don't get me wrong, it's a hard
| problem, they just seem like different classes of
| problems.
|
| I could see P=NP being impossible to prove but I find it
| hard to believe intelligence is impossible to figure out.
| Heck if you said it'd take us 100 years I would still
| think that's a bit much.
| RandomLensman wrote:
| We have not even figured out single-cell organisms, let
| alone slightly more complex organisms - why would
| intelligence be such an easy target?
| threeseed wrote:
| > Don't get me wrong, it's a hard problem, they just seem
| like different classes of problems
|
| Time travel. Teleportation through quantum entanglement.
| Intergalactic travel through wormholes.
|
| And don't get me wrong, they are hard. But just another
| class of problems. Right?
| aithrowawaycomm wrote:
| I think it'll take much longer than 100 years. The
| "limiting factor" here is cognitive science experiments
| on smart animals like rats and pigeons, and less smart
| animals like spiders and lampreys, all of which will help
| us understand what intelligence truly is. These
| experiments take time and resources.
| mrandish wrote:
| > If you can basically simulate a human brain
|
| Based on the evidence I've seen to date, doing this part at
| the scale of human intelligence (regardless of cost) is
| highly unlikely to be possible for at least decades.
|
| (a note to clarify: the goal "simulate a human brain" is
| _substantially_ harder than other goals usually discussed
| around AI, like "exceed domain expert human ability on tests
| measuring problem solving in certain domain(s)".)
| llamaimperative wrote:
| > just that existential fears like Skynet scenarios don't
| strike me as plausible.
|
| What's the most plausible (even if you find it implausible)
| disaster scenario you came across in your research? It's a
| little surprising to see someone who has seriously looked into
| these ideas describe the bundle of them as "like Skynet."
| trescenzi wrote:
| I think the risk is much higher with regard to how people use
| it and much lower that it becomes some sudden superintelligent
| monster. AI doesn't have to be rational or intelligent to cause
| massive amounts of damage; it just has to be put in charge of
| dangerous enough systems. Or, more perniciously, you give it
| the power to make health care or employment decisions.
|
| It seems silly to me that the idea of risk is all concentrated
| around the runaway intelligence scenario. While that might be
| possible, there is real risk today in how we use these
| systems.
| mrandish wrote:
| I agree with what you've said. Personally, I have no doubt
| that, like any powerful new technology, AI will be used for
| all kinds of negative and annoying things as well as
| beneficial things. This is what I meant by "disruptive" in
| my GP. However, I also think that society will adapt to
| address these disruptions just like we have in the past.
| habitue wrote:
| Statements like "it doesn't seem plausible" or "it doesn't seem
| likely" aren't the strongest arguments. How things seem to us
| is based on what we've seen happen before. None of us has
| witnessed humanity replace itself with something that we don't
| understand before.
|
| Our intuition isn't a good guide here. Intuitions are honed
| through repeated exposure and feedback, and we clearly don't
| have that in this domain.
|
| Even though it doesn't _feel_ dangerous, we can navigate this
| by reasoning through it. We understand that intelligence trumps
| brawn (e.g. humans don't out-claw a tiger; we master it with
| intelligence). We understand that advances in AI have been very
| rapid, and that even though current AI doesn't feel dangerous,
| current AI turns into much more advanced future AI very
| quickly. And we understand that we don't really understand how
| these things work. We "control them safely" through mechanisms
| similar to how evolution controls us: through the objective
| function. That shouldn't fill us with confidence, because we
| find loopholes in evolution's objective function left and
| right: contraception, hyper-palatable foods, TikTok, etc.
|
| All these lines of evidence converge on the conclusion that
| what we're building is dangerous to us.
| mrandish wrote:
| > ... "it doesn't seem likely" aren't the strongest
| arguments.
|
| Since we're talking about the future, it would be incorrect
| to talk in absolutes, so speaking in probabilities and priors
| is appropriate.
|
| > Our intuition isn't a good guide here.
|
| I'm not just using intuition. I've done as extensive an
| evaluation of the technology, trends, predictions and, most
| importantly, history as I'm personally willing to do on this
| topic. Your post is an excellent summary of basically the
| precautionary principle approach but, as I'm sure you know,
| the precautionary principle can be over-applied to justify
| almost any level of response to almost any conceivable risk.
| If the argument construes the risk as probably existential,
| then almost any degree of draconian response could be
| justified. Hence my caution when the precautionary principle
| is invoked to argue for disruptive levels of response (and to
| be clear, you didn't).
|
| So the question really comes down to which scenarios at which
| level of probability and then what levels of response those
| bell-curve probabilities justify. Since I put 'foom-like'
| scenarios at low probability (sub-5%) and truly existential
| risk at sub-1%, I don't find extreme prevention measures
| justified due to their significant costs, burdens and
| disruptions.
|
| At the same time, I'm not arguing we shouldn't pay close
| attention as the technology develops while expending some
| reasonable level of resources on researching ways to detect,
| manage and mitigate possible serious AI risks, if and when
| they materialize. In particular, I find the current proposed
| legislative responses to regulate a still-nascent emerging
| technology to be ill-advised. It's still far too early and at
| this point I find such proposals by (mostly) grandstanding
| politicians and bureaucrats more akin to crafting potions to
| ward off an unseen bogeyman. They're as likely to hurt as to
| help while imposing substantial costs and burdens either way.
| I see the current AI giants embracing such proposals simply
| because these laws are an opportunity to raise the drawbridge
| behind themselves: they have the size and funds to comply while
| new startups don't - and those startups may be the most likely
| source of whatever 'solutions' we actually need for the
| problems which have yet to make themselves evident.
| Yoric wrote:
| On the other hand, we can wipe out our civilization (with or
| without AI) without needing anything as sophisticated as
| Skynet.
| jimjimjim wrote:
| LLMs are great! They are just not what I would call
| intelligence.
| miohtama wrote:
| If you don't call it intelligence you miss the enormous
| political and social opportunity to go down in history as
| the pioneer of AI regulation (:
| razodactyl wrote:
| He is highly knowledgeable in his field.
|
| He's very abrasive in his conduct but don't mistake it for
| incompetence.
|
| Even the "AI can't do video" thing was blown out and misquoted
| because discrediting people and causing controversy fuels more
| engagement.
|
| He actually said something along the lines of it "not being able
| to do it properly" / everything he argues is valid from a
| scientific perspective.
|
| The joint-embedding work he keeps championing has merit.
|
| ---
|
| I think the real problem is that, from a consumer's
| perspective, if the model can answer all their questions it
| must be intelligent / from a scientist's perspective it can't
| do that for the set of all consumers, so it's not intelligent.
|
| So we end up with a dual perspective where both are correct due
| to technical miscommunication and misunderstanding.
| Yacovlewis wrote:
| From my own experience trying to build an intelligent digital
| twin startup based on the breakthrough in LLMs, I agree with
| LeCun that LLMs are actually quite far from demonstrating the
| intelligence of house cats, and I likely jumped the gun myself
| by trying to emulate intelligent humans with the current stage
| of AI.
|
| His AI predictions remind me of Prof. Rodney Brooks (MIT, Roomba)
| and his similarly cautious timelines for AI development. Brooks
| has a very strong track record over decades of being pretty
| accurate with his timelines.
| steveBK123 wrote:
| I would suspect any possible future AGI-like progress would
| come from some sort of ensemble. LLMs may be a piece of the
| puzzle, but they aren't a single-model solution to AGI.
___________________________________________________________________
(page generated 2024-10-13 22:00 UTC)