[HN Gopher] Yann LeCun: Human-level artificial intelligence is g...
___________________________________________________________________
Yann LeCun: Human-level artificial intelligence is going to take a
long time
Author : belter
Score : 128 points
Date : 2024-01-19 19:28 UTC (1 day ago)
(HTM) web link (english.elpais.com)
(TXT) w3m dump (english.elpais.com)
| mckn1ght wrote:
| I have a personal belief, which I can't quite articulate in
| rigorous scientific terms, that there is some information-
| theoretic barrier to our understanding the "true nature" or
| "essence" of our own intelligence, and so if we can't get to
| that, we'll never be able to model it (notwithstanding "all
| models are wrong").
|
| The belief that we can get to AGI comes off as religion to me. It
| is a substitute for something we can't really understand, and it
| will continue to shift the more we learn, yet always remain out
| of reach. There will be some true believers, and some people
| simply gunning for power.
|
| Might as well call AGI Nirvana.
| jameshart wrote:
| The idea that there's some 'information-theoretic barrier for
| us to understand the "true nature" or "essence" of our own
| intelligence' sounds more religious to me.
|
| If evolution can cross that barrier just by banging molecules
| together and seeing which ones work, it seems unlikely there's
| some causal disconnect that makes it impossible for us to get
| there by thinking about it.
| RcouF1uZ4gsC wrote:
| > If evolution can cross that barrier just by banging
| molecules together and seeing which ones work, it seems
| unlikely there's some causal disconnect that makes it
| impossible for us to get there by thinking about it.
|
| Evolution also crossed the flight barrier by banging
| molecules together. I don't think banging molecules together
| without having an understanding of physics and the forces
| involved would have been a viable means for us to get to
| flight.
| jameshart wrote:
| That's... not a refutation of my point.
|
| AI/flight analogies are tired, but the OP's argument amounts
| to the equivalent of, before the Wright Brothers,
| proclaiming 'there's an inherent inability for humans to
| ever conceive of a way to engineer heavier-than-air
| flight'.
|
| It's a 'man was never meant to fly, therefore heavier-than-
| air flight is impossible' argument.
| shawnz wrote:
| But wasn't it through experimentation that we developed
| that understanding? And in the earliest phases of that
| experimentation, that's when our understanding was at its
| weakest, right? You almost seem to be implying that we had
| the understanding before ever attempting the experiments.
| mejutoco wrote:
| I dont have a strong opinion about it, but when I read
| parents post I immediately thought of Godel s works and how
| there are limits of what you can know within a system (and in
| Mathematics no less). Thinking that something similar exists
| in intelligence does not seem so far fetched. As a
| hypothesis, of course.
| jameshart wrote:
| And yet we can use mathematics to know that that limit
| exists. It's a powerful thing, mathematics.
|
| Gödel doesn't say 'mathematics cannot contemplate itself'.
| Quite the opposite.
| ncallaway wrote:
| The reason it seems far-fetched is that we have one example of
| a mechanistic, physics-based machine that we know can perform
| the computations necessary to produce intelligence.
|
| I could understand the hypothesis that AGI is not
| computable, if we didn't have an existing example of a
| machine that can produce AGI.
|
| Since we _do_ have an existing machine that can produce
| AGI, we would have to suppose that it either:
|
| - does something outside of physics to achieve its results,
| or
|
| - performs some operation that is impossible for us to
| understand or replicate.
|
| Both of those seem... unlikely to me.
| jprete wrote:
| It seems like an article of faith to believe that the
| universe is a computer in the first place.
|
| Without that assumption, we don't even know that
| intelligence is computation.
| ncallaway wrote:
| The article of faith is not that the universe is a
| computer. The "article of faith" is that the universe
| follows physical rules.
|
| I also don't think it's fair to call it an article of
| faith, since we have strong evidence to show that _to
| date_ the universe has followed predictable physical
| rules. That could, obviously, change at any time, but it
| seems at least a reasonable prior to assume that the
| physical rules that we've studied in the past will
| continue to operate into the future. "The sun will rise
| tomorrow" is, I _guess_, an article of faith, but I think
| it's more fair to say it's a reasonable and well-founded
| prediction based on a well-studied model of the solar
| system and the physical laws we've observed the universe
| follow in the past.
|
| So, my beliefs are:
|
| - The universe is mechanistic, and follows physical rules
|
| - Human beings are also mechanistic, and follow the same
| physical rules as the universe
|
| - Human beings are intelligent
|
| - Human beings are _at minimum_ Turing-complete computers
| (since you could give me a roll of paper, a set of op-
| codes, and I could perform calculations; a sketch of that
| idea follows below)
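|
| A minimal sketch of the paper-and-op-codes idea, as a toy
| Turing machine whose rule table you could follow by hand
| (illustrative code only):
|
|     def run_turing_machine(tape, rules, state="start", pos=0):
|         tape = dict(enumerate(tape))      # the "roll of paper"
|         for _ in range(1000):             # step limit for safety
|             if state == "halt":
|                 break
|             symbol = tape.get(pos, "_")   # blank cells read as "_"
|             write, move, state = rules[(state, symbol)]
|             tape[pos] = write
|             pos += 1 if move == "R" else -1
|         return "".join(tape[i] for i in sorted(tape)).strip("_")
|
|     # Op-codes for "increment a binary number": scan right,
|     # then carry left.
|     rules = {
|         ("start", "0"): ("0", "R", "start"),
|         ("start", "1"): ("1", "R", "start"),
|         ("start", "_"): ("_", "L", "carry"),
|         ("carry", "1"): ("0", "L", "carry"),
|         ("carry", "0"): ("1", "L", "halt"),
|         ("carry", "_"): ("1", "L", "halt"),
|     }
|     print(run_turing_machine("1011", rules))  # 11 + 1 -> "1100"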
|
| So, it seems to me a reasonable starting assumption that
| intelligence _is_ a result of the mechanistic universe.
| Do we have any evidence for something outside the
| physical, mechanistic universe impacting human cognition?
| I'm not aware of any.
|
| But we have _lots_ of evidence of the physical,
| mechanistic universe impacting human cognition and
| intelligence.
|
| I'm willing to acknowledge that we don't know that
| intelligence is Turing-computable, but I'd argue that
| intelligence is likely to be a mechanical process that's
| compatible with the physical rules of the universe. Can
| we be certain of that? No, we cannot be 100% certain.
| But it seems a much more reasonable and evidenced
| hypothesis than something which asserts there is a
| metaphysical process that produces intelligence, but
| which we have no evidence for.
|
| So, no, I don't think it's reasonable to say that
| assuming a "mechanistic universe" is an article of faith.
| I think it's a reasonable belief, based on the evidence
| that we have. What would be an article of faith is
| asserting that it could _only_ be a mechanistic universe,
| and refusing to accept any evidence to the contrary. I
| have not done that.
| jprete wrote:
| I think a mechanistic universe is not necessarily a
| computation, so purely physical intelligence is also not
| necessarily a computation. But I'm not prepared at the
| moment to go dig up the alternatives to "computation" and
| "metaphysical process", or to figure out how a
| deterministic (or even stochastic) process could be
| uncomputable.
| ncallaway wrote:
| > so purely physical intelligence is also not necessarily
| a computation.
|
| We're going to get into semantics pretty quickly, but I
| would argue that at least purely physical intelligence is
| a procedure that is capable of being run on a machine
| that exists in our universe.
|
| I would also argue that such a machine is _at least_
| Turing complete (since I would think a general
| intelligence should be capable of emulating a Turing
| machine in the same way that humans are able to do so). I
| would happily accept that it's possible that being Turing
| complete is a necessary, *but not sufficient*, condition
| for intelligence. That is, I could see a world where
| intelligence requires some form of hyper-Turing machine
| that is able to solve some non-Turing-computable
| problems.
|
| However, I would argue that even if intelligence does
| require non-Turing-computable functions, there exists *a*
| machine which can perform those procedures (that is, we
| exist). Thus, in this hypothetical universe where
| intelligence is non-Turing-computable, we would then have
| an existence proof for a hyper-Turing machine which can
| compute non-Turing-computable results. In this universe,
| then, I'd argue that anything that can be produced by
| this "hyper-Turing machine" _is_ "physically
| computable", even if not "Turing-computable".
|
| Ultimately, if we accept the premise of a mechanistic
| universe, I think humans are an existence proof of a
| machine capable of producing intelligence. Then, I think
| it follows that it *cannot* be physically impossible to
| create a machine capable of producing intelligence.
| Whether we call that machine a "computer", though, I
| don't have a strong preference.
| lanstin wrote:
| Neither humans nor any physical device is Turing
| complete; we can only perform bounded computations. What
| do you mean by mechanistic? Mathematical? Deterministic?
| Non-dual with no unmeasurable causes?
| dsubburam wrote:
| > So, it seems to me a reasonable starting assumption
| that intelligence is a result of the mechanistic
| universe.
|
| > Do we have any evidence for something outside
| the physical, mechanistic universe impacting human
| cognition?
|
| Problem is, even granted the above, that's not enough of
| an argument that AGI is only a matter of mechanism.
|
| Take the analogy with pain. We've discovered the
| mechanism for pain: e.g., some stimulus applied to the
| skin, which sets off nerve signals that register in the
| brain. To then say that the experience of pain is the
| same as the physical mechanism we've discovered still
| misses a key step:
|
| what does the experience of pain inhere to, and is that
| the same subject as that for the mechanism of pain?
|
| Comparing AGI and human intelligence has the same
| problem. We don't know exactly _what_ is intelligent in
| either case, let alone whether the two are comparable. So
| it's not a question of intelligence per se, but of that
| which is intelligent. Maybe AGI, the way we are thinking
| and talking about it, is unavoidably tangled with having
| to grapple with [self-]consciousness.
| saltcured wrote:
| Right. I think a charitable reading of the post is
| something like, "it probably takes a mind superior to a
| human's in order to willfully develop a human-rivaling
| AGI".
|
| There's a lot of ground to cover between AI as increasingly
| elaborate magic 8-ball toys and a real human-rivaling AGI.
| That is, one capable of observing an environment,
| identifying goals and problems, planning, acting, and
| reacting. In these much longer (stateful) cognitive chains,
| there are more opportunities for pathological failure modes
| and less opportunity for a toy user to charitably excuse
| the misbehavior.
|
| This is not some kind of dualist metaphysical argument
| about the possibility of AGI in the abstract. Merely a
| doubt that we can blindly scale up the complexity of a
| synthetic mind to meet or surpass our own. Consider that we
| still can't even begin to understand our own minds in
| enough detail to reliably predict, repair, or augment them.
|
| I am outside this field and so may have too much of a
| layman's perspective. But it seems to me that contemporary
| AGI believers conflate training and evolution. That nature
| did it in eons doesn't argue that we can do it in practical
| product development cycles, unless we can simulate these
| evolutionary processes to follow a similar search in a
| compressed time scale.
|
| As a crude analogy, I think today's LLM products are a bit
| like horoscope generators. Clever arrangements of words
| that attract a charitable or gullible reader. But AGI use
| cases are more like wanting a life partner who will
| understand and willingly assist one's efforts.
| Vecr wrote:
| That's wrong; see Gödel, Escher, Bach [1979, pp.
| 452-456]. If you get rid of the infinities and approximate
| (including cutting recursions short), you can
| (approximately) compute a whole lot. Not everything, but
| everything your brain can, and more.
| paganel wrote:
| This is a very mechanic-ist way of looking at things.
| anon84873628 wrote:
| Yes, which seems to be the way the universe works. Duality
| is a religious belief.
| paganel wrote:
| As far as I know we haven't figured out how the
| universe works, hence the unified theory debacle and all
| that.
| HuShifang wrote:
| Not an endorsement, but here's an argument that AGI is
| computationally intractable (edit: at least via ML).
| https://irisvanrooijcogsci.com/2023/09/17/debunking-agi-inev...
| ceejayoz wrote:
| "AI is impossible" is a tough argument, given the existence
| of non-artificial intelligence.
|
| "Machine learning isn't the route to AI" is something you
| could more reasonably argue, but that's a drastically
| narrower claim.
| jameshart wrote:
| Intractable and impossible are very different claims.
|
| NP-hardness is a statement about the asymptotic difficulty of
| solving a problem at ever larger scales. It says (assuming P !=
| NP) that any general method for solving the problem will scale
| worse than polynomially as you grow the problem size n.
|
| Which might place limits on the practicable scale of how big
| an n your approach gets to work for. But if your approach
| works practically for a big enough n to make AGI then your
| approach works - NP-hardness doesn't matter.
|
| And since we know that finite mass lumps of finite numbers of
| gray cells are capable of GI, we have a reasonable
| expectation that there is some n for which AGI might be
| possible.
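|
| To make the asymptotics concrete, a rough sketch with purely
| illustrative step counts:
|
|     poly = lambda n: n ** 3     # an O(n^3) method
|     expo = lambda n: 2 ** n     # an O(2^n) method
|     for n in (10, 20, 40, 80):
|         print(f"n={n:3d}  poly={poly(n):,}  expo={expo(n):,}")
|     # At n=80 the polynomial method needs ~5.1e5 steps, the
|     # exponential one ~1.2e24 -- but if the n you actually
|     # need is small, either can be fine.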
| johnwheeler wrote:
| Yes, if you look at the power efficiency of the brain relative
| to LLMs, you can see nature has done something special, and I
| don't think we should be so quick to call it game over. It
| might be that LLMs are just some atom or building block of a
| much larger and more complex thing.
| Lerc wrote:
| As a counterpoint, I feel like the belief that we can't get to
| AGI comes off as religion. It presupposes an ineffable quality
| that we possess that machines cannot. The argument that we have
| to fully understand something to build it doesn't hold water
| for me. I have made plenty of things where understanding the
| depths of what I had made took far more time and effort than
| building the thing in the first place.
|
| It's always hard to predict the rate of progress. Most of the
| current optimism comes from how radically wrong predictions
| were for the capabilities of AI today. 10 years ago a lot of
| people would have put current AI capability as arriving well
| after 2050. The jump in progress may not be sustained, but it
| definitely places doubt on people confidently predicting slow
| progress.
| mjburgess wrote:
| Does gold have an 'ineffable' quality that hydrogen does not?
| Do cells, after 1bn years of evolution, have a quality that
| gold does not? Does reality work one way, and not another?
| Are not all things unequal?
|
| A "machine" is what, exactly? Do we take it to be an
| abstraction? Or is it an electrical field oscillating over
| silicon? Either way, you're in trouble. Abstractions have no
| physical properties, and electrified sand seems hardly to
| possess any interesting properties.
|
| The ability for animals to adapt to their environments, by
| growing into them, by establishing plastic causal connections
| in their very bodies, grown by their environments... able to
| almost instantly move from protein expression in 1trn 1-bn-yr
| cellular supercomputers in each of our bodies to macro
| sensory-motor representation --- _and back again_
|
| Is this ineffable?
|
| Or is this extremely effable? Is it not, rather, the
| superstitious view that "everything is anything" that is
| ineffable?
|
| All extant, knowing, studied intelligent systems have organic
| properties; and radically so. Insofar as this is "ineffable"
| you should take that up with the animal kingdom.
|
| I find the contrary superstitious, magical, religious,
| ineffable... that mere abstract patterns in arbitrarily chosen
| aspects of our bodies are necessary and sufficient conditions
| for anything at all. This would be the only case in all
| science. The only physical property instantiated by mere
| arrangement _at any level_. Upload our consciousness? Make it
| out of wood, why not!
|
| Nonsense. When I am hungry, I dream of food; when I dream of
| food I plan to get some, and I am angry without it. This will
| not be made out of sand.
| jacquesm wrote:
| Given that you need some pretty complex elements to be able
| to make life in the first place (not to mention complex
| molecules made out of those elements) I'd say yes, there is
| definitely an 'ineffable' quality that life has that
| Hydrogen does not (to short circuit your chain of
| reasoning), but intelligence isn't necessarily made of a
| particular kind of matter and that's where the analogy
| breaks down.
| mjburgess wrote:
| The soul, likewise, is immaterial.
|
| Who now trades in superstition?
|
| Everything is corporeal.
| mjburgess wrote:
| Downvoted for saying reality has physical properties?
| Where else but hackernews? Sorry to disappoint, but all
| the algorithms of CS, which do anything at all, have
| devices in them; i.e., their critical steps are impure,
| and cannot be specified mathematically.
|
| There is nothing abstract in the world, everything is
| concrete. Discrete mathematics is not some divine realm
| which is where the Mind really occurs, and the Soul is
| really kept.
|
| No graph traversal algorithm cares, nor will ever.
| Science is the study of reality, not computer "science".
| There are no computational "properties". All interesting
| properties are of objects extended in space, and in time.
|
| The world is physical, as described by physics; not
| abstract, as described by mathematics. And so, not
| computable.
| lanstin wrote:
| My mind has abstractions in it. Are you saying that
| mental formations are not real, not in the universe?
| InSteady wrote:
| >Where else but hackernews?
|
| I have seen this sentiment on every single platform I
| have participated in. Where else? Everywhere that humans
| go.
|
| Find a more appropriate (or at least original)
| lamentation.
| mannykannot wrote:
| The proposition of artificial intelligence, whether at
| human-level or not, is that a suitably-programmed
| _computing device_ (as opposed to an immaterial
| algorithm) could manifest the properties which we have
| chosen to call "intelligence" when we observe them in
| biological agents, and similarly for consciousness.
|
| Particularly with your comment "no graph traversal
| algorithm cares, nor will ever", you appear to be trying
| to impute that the concept of artificial intelligence is
| predicated on a category error. If so, then this is a
| fairly common argument, but is itself predicated on a
| category error.
| mensetmanusman wrote:
| The laws of physics are corporeal?
| dullcrisp wrote:
| You don't have to believe AGI is impossible to believe we
| have no route to get there. The burden of proof on "we have a
| way to do this" falls heavily on the person proposing it. And
| being able to do _something_ that wasn't expected doesn't
| imply the ability to do a specific other thing. It's that
| belief that, I'd say, is perhaps more cultish than religious.
| iknowstuff wrote:
| Who's saying we definitely have a way to do this? Very few.
| Nobody's trying to prove that we definitely do right now.
|
| But the "presumption" that we can't is silly given our
| advancements.
| dullcrisp wrote:
| This logic doesn't work with anything else. If I say
| we're going to fly to Jupiter you're going to ask me for
| specifics not just presume that I know what I'm talking
| about. Or any of the other infinite things I can claim to
| be able to do. Why is this one different?
| eklavya wrote:
| In that scenario, you are the one claiming we will
| "never" fly to jupiter.
| timr wrote:
| Just to close the metaphorical loop here, the current AI
| debate is something like:
|
| _" There's a danger we'll be able to fly to Jupiter and
| bring home a Jupiterian life-form that will destroy
| humanity. Therefore we should stop all space
| exploration."_
|
| The doomers take an equation containing some wildly improbable
| step that we can't currently explain or justify, multiply
| the _outcome_ of that step by "infinitely bad", and
| conclude that the whole thing must be dangerous. But to
| amp up the rhetoric, they always skip over the "wildly
| improbable" part.
| adroniser wrote:
| I guess the disagreement you have with doomers is the
| probability you assign to us being able to create human
| level AI. For the record most "doomers" don't pull a
| pascal's wager here. Standard EA doctrine is that if we
| stagnate technologically at our current level for too
| long (read centuries) then we will inevitably be wiped
| out. So they want technological advance, they just want
| caution.
| timr wrote:
| I don't know what "most" do, but I see that form of
| argument all the time -- particularly from the highest
| profile doomers.
|
| Personally, I think this line of argument is driven by
| the hype cycle more than reality. Use chatGPT or
| midjourney or whatever for a while, and it's pretty easy
| to see that we're dramatically _overweighting_ theories
| of AGI risk, and dramatically _underweighting_ stuff like
| "disemploying the bottom 80% of the intelligence bell
| curve" with technologies that automate away lots of
| formerly white-collar labor.
|
| If I had to put my money on an "existential risk"
| attributable to AI, it would be economic strife.
| adroniser wrote:
| It's worth saying we still haven't seen a single model
| trained post-hype cycle from OpenAI. GPT-4 was summer
| 2022 stuff.
| dullcrisp wrote:
| I'm saying that there are very many things that I can't
| prove are impossible that I don't worry about happening.
| I've learned to live with that uncertainty.
| anon84873628 wrote:
| Hmm. When the first basic black powder rockets were
| invented, would it have been unreasonable for someone to
| think that they could one day reach the moon? Of course
| there would be many uncertainties -- including what the
| moon even was. No one could say what technology would be
| required. But clearly there was a major paradigm shift
| that suddenly made it within the realm of consideration.
| dullcrisp wrote:
| I imagine it might have been unreasonable to suppose
| that, yes. The fact that the future is unpredictable
| doesn't mean that all claims about the future are a
| priori equally valid.
|
| In the future where AGI takes over the world next year I
| might look silly for arguing with you about it, but
| that's a risk I accept based on the fact that I don't
| think that's going to happen.
| logicchains wrote:
| We absolutely have a way to get there. A sufficiently large
| neural network can approximate any function (the universal
| approximation theorem), which has been mathematically
| proven. LLMs are trained to predict the next word in a
| sequence; doing this to a high enough level of accuracy
| requires building an internal model of the world; the
| ability to reason about it. Eventually we'll have LLMs
| trained on visual and audio input too (and be trained to
| predict what happens next), so they can receive an
| arbitrarily large amount of training samples. If the models
| keep growing and being trained on more and more data, we'd
| expect them to become at least as intelligent (good at
| modelling the world) as humans.
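|
| A minimal sketch of the universal-approximation idea (made-up
| hyperparameters, plain numpy, nothing to do with real LLM
| training): a one-hidden-layer tanh network fit to sin(x) by
| gradient descent.
|
|     import numpy as np
|
|     rng = np.random.default_rng(0)
|     x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
|     y = np.sin(x)
|
|     hidden = 50
|     W1 = rng.normal(0, 1, (1, hidden)); b1 = np.zeros(hidden)
|     W2 = rng.normal(0, 0.1, (hidden, 1)); b2 = np.zeros(1)
|
|     lr = 0.01
|     for step in range(20000):
|         h = np.tanh(x @ W1 + b1)       # forward pass
|         pred = h @ W2 + b2
|         err = pred - y                 # mean-squared-error gradient
|         dW2 = h.T @ err / len(x); db2 = err.mean(0)
|         dh = err @ W2.T * (1 - h ** 2)
|         dW1 = x.T @ dh / len(x); db1 = dh.mean(0)
|         W1 -= lr * dW1; b1 -= lr * db1
|         W2 -= lr * dW2; b2 -= lr * db2
|
|     print("final MSE:", float((err ** 2).mean()))
|
| With enough hidden units the fit gets arbitrarily good on this
| interval; whether that kind of approximation result says
| anything about cognition is the contested part.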
| tedivm wrote:
| > LLMs are trained to predict the next word in a
| sequence; doing this to a high enough level of accuracy
| requires building an internal model of the world; the
| ability to reason about it.
|
| I don't see any evidence that this is true or proven in
| any way.
|
| Also, the ability to brute force something doesn't mean
| that brute forcing it is easy or even feasible. You can
| apply the same logic to "calculating a private key from a
| public key" that you are to human intelligence. Sure, a
| large enough neural network can do either, but that
| doesn't mean that building them is actually realistic.
| MostlyStable wrote:
| It has been proven in much simpler toy models that LLMs
| just trained on token prediction can and do build "world"
| models[0]. That's not the same as evidence that they
| _must_ build a world model, but it 's proof that at least
| sometimes they do so even when just being trained on
| prediction.
|
| [0]https://thegradient.pub/othello/
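|
| The methodology there, very roughly, is a "linear probe": fit a
| small classifier on the network's hidden activations and see
| whether the board state can be read back out. A toy sketch with
| synthetic activations standing in for the real model:
|
|     import numpy as np
|     from sklearn.linear_model import LogisticRegression
|
|     rng = np.random.default_rng(0)
|     feature = rng.integers(0, 2, 1000)    # e.g. square occupied
|     acts = rng.normal(0, 1, (1000, 64))   # pretend hidden states
|     acts[:, 3] += 2.0 * feature           # pretend it's encoded
|
|     probe = LogisticRegression(max_iter=1000)
|     probe.fit(acts[:800], feature[:800])
|     print("probe accuracy:", probe.score(acts[800:], feature[800:]))
|
| High held-out accuracy suggests the feature is decodable from
| the activations; chance-level accuracy suggests it isn't.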
| tsimionescu wrote:
| This keeps popping up, but a sufficiently large neural
| network can approximate any _continuous_ function. They
| can't approximate any discontinuous function. So, there
| is no absolute certainty that they must be able to
| approximate human cognition, since there is no reason to
| believe cognition is continuous.
|
| Also, there is no proof and no reason to believe that any
| current architecture and, more importantly, that the
| currently known training algorithms can achieve anything
| close to cognition. After all, animals and humans seem to
| learn much much much faster (much smaller training sets)
| than any algorithm we have so far.
| CamperBob2 wrote:
| _This keeps popping up, but a sufficiently large neural
| network can approximate any continuous function. They can't
| approximate any discontinuous function_
|
| Incorrect. That's what nonlinear activation functions are
| for.
|
| Researchers wasted many potentially-fruitful years
| because Minsky and other luminaries -- people who
| occupied the same position of authority that LeCun
| occupies today -- made a huge deal about how the original
| perceptron and subsequent multilayer variants "couldn't
| learn XOR." That was almost literally how they put it. At
| the first sign of difficulty they abandoned the approach
| that was, and remains, the most promising. We have to be
| careful to learn from that mistake.
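|
| For what it's worth, the XOR point is easy to make concrete: a
| single-layer perceptron can't represent XOR, but one hidden
| layer plus a nonlinearity can. Hand-picked weights, no training:
|
|     def step(z):                      # threshold nonlinearity
|         return 1 if z > 0 else 0
|
|     def xor_net(x1, x2):
|         h_or = step(x1 + x2 - 0.5)    # hidden unit ~ OR
|         h_and = step(x1 + x2 - 1.5)   # hidden unit ~ AND
|         return step(h_or - h_and - 0.5)   # OR and not AND = XOR
|
|     for a in (0, 1):
|         for b in (0, 1):
|             print(a, b, "->", xor_net(a, b))   # 0, 1, 1, 0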
| adroniser wrote:
| You have to be a lot clearer about what you mean by
| continuous here. An LLM technically does not produce a
| continuous function on the reals because its inputs are
| floating point numbers with finite precision. Any
| function on discrete inputs like this has an extension to
| the reals which is continuous, just imagine joining all
| the discrete points up with lines.
|
| So then your claim wouldn't be about the limits of LLMs
| themselves, but on the limits of systems that do not take
| continuous inputs. The question then is do you think that
| humans take in continuous input?? Given that physics
| seems to be discrete at the low level, this suggests to
| me they don't, but I don't know enough to be sure.
| lanstin wrote:
| It is quantum, which isn't exactly discrete. You receive
| one photon or two, not 1.5, sure, but the energy of that
| photon, its frequency is a real. At least I am not aware
| of a quantum mechanics over Q. I think the math would be
| hard because you lose all the convergence properties.
| Maybe there is some Hilbert space over computable reals,
| Google is not finding it but I no longer find that to be
| indicative of anything.
| ryanackley wrote:
| There is a leap from "predicting the next word" to
| "modelling the world" that you are sort of glossing over
| here.
|
| The idea that LLMs are actually "reasoning" rather than
| just performing a probability function seems a little
| bit...pseudo-religious. Like we've created a new life
| form.
| RandomLensman wrote:
| Predictive models need not at all reflect what actually
| happens. Sometimes embracing less accurate models is
| actually needed to enhance understanding. Epicycles in the
| geocentric model could be made very accurate as far as
| observations in the sky are concerned, but it wasn't a
| good world model.
| dotnet00 wrote:
| When people say we can't get to AGI, they're typically either
| in the camp that there's something intrinsically special
| about us that is incapable of being replicated, or various
| versions of the belief that it's too complex to be achieved
| within a timespan in which they would still recognize
| humanity.
|
| The former can be seen as a religious belief, but the latter
| is simply saying that we do not understand anywhere near
| enough about intelligence to develop AGI.
|
| You point to the unexpected leap in capability we've had
| recently, but fundamentally it isn't that different from what
| we have had for decades. The same fundamental unsolved
| limitations exist, such as being unable to learn the way
| we do (e.g., if a child is writing a specific letter wrong, we
| correct that specific mistake, but that doesn't overwrite the
| knowledge of how to write the other letters).
|
| As a lecture I watched a while ago had put it, we've gotten
| pretty good at making the icing of the AGI cake recently, but
| we still have no idea what the cake is supposed to even
| look/taste like, so we barely know what ingredients we might
| need to make it let alone the actual steps involved.
| logicchains wrote:
| >not understand anywhere near enough about intelligence to
| develop AGI.
|
| LLM training is essentially training them to predict what
| happens next. When we combine this with multi-modal
| training (video/audio), they're being trained to predict
| what happens next in the world. As we feed them more and
| more data and make them bigger and bigger, they'll get
| better and better at this task. Given that predicting what
| happens next requires predicting what humans do, because
| humans are part of the world, if they keep getting better
| and better then they'll be able to predict what humans
| would do well enough that they're capable of thinking any
| thoughts humans could think, because that's a requirement
| for predicting human behaviour. So just by training them on
| this one label, predict what happens next, we'd expect them
| to eventually develop human-level intelligence if the loss
| keeps decreasing.
| dotnet00 wrote:
| That doesn't really follow at all.
|
| It may result in something that is even better at faking
| the appearance of intelligence, but LLMs and similar
| stuff fundamentally lack various features that make them
| incapable of becoming human-level intelligences.
| exe34 wrote:
| Humans are pretty good at faking intelligence too! It's
| only when you question them carefully and you yourself
| are an expert in the field that you can detect that
| they're confabulating.
|
| Conversely, I've never seen anybody produce anything but
| the next word coming out of their mouth. Therefore people
| are glorified stochastic parrots.
| nradov wrote:
| You're conflating knowledge of specific subject areas
| with generalized intelligence, including the ability to
| learn and metacognition.
| pshc wrote:
| If the loss keeps decreasing forever, then that
| conclusion seems to follow. But it's likely there will be
| plateaus and ceilings that current neural net
| architectures cannot surpass. We likely need more
| breakthroughs to achieve enough generalized reasoning,
| reliable predictions, alignment, and scalability within
| hardware constraints.
| tsimionescu wrote:
| > Given that predicting what happens next requires
| predicting what humans do, because humans are part of the
| world, if they keep getting better and better then
| they'll be able to predict what humans would do well
| enough that they're capable of thinking any thoughts
| humans could think, because that's a requirement for
| predicting human behaviour.
|
| This doesn't follow. A just as likely (or even more
| likely) prediction is that they will fail to predict what
| happens next once it requires modeling humans. It won't
| get any more progress even if we increase training set
| size or model size - it will require an architecture
| change of some kind.
| lucubratory wrote:
| Yes, it's completely possible that we get to some sort of
| limit and the scaling laws fall down, loss stops dropping
| even with more data, parameters, compute. But we have
| zero reason to believe that is the case, currently, it's
| only hypothetical. In terms of what we've experimentally
| seen so far, it just keeps getting better as you scale
| them up, so to find out if you're right and there's some
| future roadblock we don't know about we just have to keep
| scaling these things to see if it works. We haven't even
| seen loss functions start to level out - the closest
| argument I've seen to that is people misinterpreting the
| SGD curve (an artificial tapering we apply to the
| training process) as a fundamental architecture
| limitation rather than just an attempt to use resources
| wisely on a given training run.
|
| Basically, training of these models currently only costs
| a few million dollars, peanuts compared to the budgets of
| CERN or ITER, and we have zero evidence that we're
| approaching any sort of ceiling. If we keep scaling,
| maybe we see some heretofore unpredicted failure that
| breaks scaling laws like you're predicting. But maybe we
| don't, and the scaling laws (which have performed
| incredibly accurately so far) really do hold.
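|
| For reference, the scaling laws in question are empirical
| power-law fits of loss against model/data/compute scale. A toy
| sketch of the shape, with made-up constants (not real fitted
| values):
|
|     a, alpha, floor = 8.0, 0.08, 1.7     # illustrative only
|
|     def predicted_loss(n_params):
|         return floor + a * n_params ** (-alpha)
|
|     for n in (1e8, 1e9, 1e10, 1e11, 1e12):
|         print(f"{n:.0e} params -> loss ~ {predicted_loss(n):.2f}")
|
| On a fit like this the curve keeps dropping smoothly with scale;
| the argument above is about whether real models keep following
| it.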
| marcyb5st wrote:
| My two cents on the topic. Currently, we are incapable of
| modeling causality effectively or at all (depends on your
| definition and who you ask). LLMs seem to have it, but
| it's a mockery of causal reasoning. They use data created
| by us, and so they can predict the next word following some
| of the inherent patterns we humans use, which might make it
| look like they can reason to some extent.
|
| In this regard, something like AlphaGo/AlphaZero (as I
| understand them, at least) is better than modern LLMs,
| albeit in a very narrow field.
| tivert wrote:
| >> The belief that we can get to AGI comes off as religion to
| me. It is a substitute for something we can't really
| understand, and it will continue to shift the more we learn,
| yet always remain out of reach. There will be some true
| believers, and some people simply gunning for power.
|
| > As a counterpoint, I feel like the belief that we can't get
| to AGI comes off as religion. It presupposes an ineffable
| quality that we posses that machines cannot.
|
| That's missing the point: the argument isn't that it's
| theoretically impossible to build an AGI, just that humans
| are incapable of it.
|
| Here's another point for the belief in AGI being a religion:
| it's basically a sect of the larger religion of technological
| progress, which (likely falsely) assumes that technology will
| continue to "advance" at approximately current rates until we
| live in a world out of a sci-fi paperback. _A lot_ of people
| believe that, but frequently resort to a motte-and-bailey
| fallacy when challenged.
|
| > It's always hard to predict the rate of progress, Most of
| the current optimism comes from how radically wrong
| predictions were for the capabilities of AI today. 10 years
| ago a lot of people would have put current AI capability as
| arriving well after 2050. The jump in progress may not be
| sustained, but it definitely places doubt on people
| confidently predicting slow progress.
|
| It's worth noting that "the current optimism" is not without
| precedent. IIRC, there was a big boom in AI in the 70s/80s.
| SHRDLU was pretty impressive. But then the promising
| ideas were found to be dead ends and there was a long winter.
| inhumantsar wrote:
| in many ways, we already live in a world out of a sci-fi
| paperback.
|
| I find it hard to fault people for thinking that we'll
| continue to progress at a fast pace, or at least thinking
| that we're a long way from a major plateau.
| mensetmanusman wrote:
| We cannot create the laws of physics, and we are fine with
| that aspect of immaterial reality.
| olalonde wrote:
| > I have made plenty of things where understanding the depths
| of what I had made took far more time and effort than
| building the thing in the first place.
|
| An example of this is electricity. The battery (1800),
| electric motor (1821) and telegraph (1832) were all invented
| before the discovery of the electron (1897).
| tomrod wrote:
| Some folks, I suppose, approach it with religious fervor, but
| for my part I look at how a child learns -- through the
| observation of cause and effect -- and know the current
| architectures don't support how we have husbanded animals and
| reared children for eons.
|
| The inherent capacity for true self-directed learning may
| well be there, but it isn't hit yet in what we see.
|
| GPTs are still neat.
| tayo42 wrote:
| I'm not sure that's true. There is reinforcement learning
| and its branches. I think that's part of how they do AI for
| chess, Go, StarCraft, etc.
| ncallaway wrote:
| Honestly, yours sounds more like a religious claim than the
| other.
|
| It seems less a religious claim to assert that the human brain
| is mechanistic, and simply operates using physics to perform
| complex operations than to assert that there is something
| "unknowable", or "outside of physics" that would prevent us
| from being able to build a similar machine.
|
| If there were 0 examples in the universe, I think your point
| would be a good one. But, given there is 1 example, I think it
| stands to reason that there could be more.
|
| I could accept an argument that one might expect it to be
| hideously complicated, and not something we'll be able to
| accomplish for a long time.
|
| But the claim that there's an information theoretic reason that
| would absolutely prevent it, would seem to make the claim that
| there is something metaphysical about the operation of the
| brain, which seems like a quasi religious claim to me.
| axlee wrote:
| I think one of the biggest barriers is that the human brain
| isn't running on binary code, and attempting to replicate it
| on our current technology model is doomed to encounter
| emulation issues.
| ncallaway wrote:
| That's not an "information theoretic barrier" argument.
| That's not an impossibility argument.
|
| That's a "hideously complicated, and not something we'll be
| able to accomplish for a long time" argument, which I
| happily concede may be the case.
|
| I see a very very large gulf between:
|
| "Impossible in all scenarios" and "totally impractical with
| current technology".
|
| I was refuting only the former argument.
| mjburgess wrote:
| Almost everything is "outside of physics"; this is mainstream
| physics. All of physics, as formulated, is uncomputable. And
| what is computable is a radical approximation.
|
| Physics cannot formulate descriptions of climates, or even,
| many cases of turbulence, or 4 bodies in a gravitational
| field.
|
| Leaving "physics" is trivial, it occurs whenever you take any
| of the current toy models and add one layer of reality to
| them.
|
| The whole edifice of formal science is a little like
| children's block toys.
| ceejayoz wrote:
| I don't think that's what they meant by "outside of
| physics".
| ncallaway wrote:
| > Almost everything is "outside of physics",
|
| No, it's not.
|
| Many things are outside of "known" physics, sure. But I
| didn't say "outside of known physics". I said "outside of
| physics".
|
| When I say "outside of physics" I mean "meta-physical", as
| in, processes that are not part of physical interactions of
| this world. Things that, if you somehow had Maxwell's demon
| tracking every single quark, lepton, and boson, even _then_
| you still wouldn't be able to account for.
|
| It's a discussion of what is _theoretically knowable_, not
| a discussion of what we currently do know.
| mjburgess wrote:
| I do not think most things are "theoretically knowable",
| since knowledge is comprised of finite concepts conjoined
| in finite ways; and most of reality is uncomputable.
|
| You cannot know the state of the climate, even in
| principle.
|
| This is what the commenter was alluding to when they
| said,
|
| > there is some information-theoretic barrier for us to
| understand
|
| So the commenter can both maintain there are theoretical
| barriers to the possibility of all kinds of knowledge,
| which is outside of "knowable physics" without having any
| sort of dualistic view that this unknowable stuff is
| immaterial.
|
| So to be "outside of physics" is not coextensive with
| being "immaterial".
|
| You might have meant that, but it is this very conflation
| which has to be undone to understand the OP's point.
| calf wrote:
| But is general uncomputability the OP's argument? They
| were just saying they had a gut feeling there is some
| "information-theoretic barrier" preventing AGI
| construction. Yet we can understand a lot about Earth's
| weather, and even do things to affect the weather,
| despite climate systems being uncomputable, strictly
| speaking.
| karaterobot wrote:
| Reminds me of the old line about how, if the brain were so
| simple we could understand it, we'd be so stupid that we
| couldn't.
|
| Also reminds me of an older book I read about AI (I think it
| was _On Intelligence_ by Jeff Hawkins?) where I first became
| aware of the idea we had been scrambling to create AI without
| first having a good definition of intelligence or deep
| understanding of how it works in our own brains. And when I ask
| myself or other people how they define intelligence, it always
| comes down to some variation on "the ability to solve
| problems", which feels deeply beside the point and likely to
| never produce something that "feels" intelligent.
|
| But I don't necessarily agree that there is this special case
| of human intelligence that makes it _impossible_ to understand
| or model. I would really like to believe it, personally,
| because I don't want AGI. I just don't buy that that's the
| explanation for our failure to do so up to now.
|
| It seems like we ought to be able to do it, but that we're
| muddling in the wrong direction, coming up with an
| exceptionally clever implementation of an approach which cannot
| produce intelligence that satisfies our intuition about what
| intelligence is.
|
| To tie it back to the article, I keyed in on the word 'design'
| in LeCun's statement that, "contrary to what you might hear
| from some people, we do not have a design for an intelligent
| system that would reach human intelligence."
|
| In other words, that it's not just a quantitative difference
| (more parameters, more data) but that a different approach than
| what we are taking would be necessary.
| logicchains wrote:
| >we'll never be able to model it
|
| That's completely false. Neural networks have been
| mathematically proven to be universal approximators
| (https://www.deep-mind.org/2023/03/26/the-universal-
| approxima... ), i.e. a sufficiently large neural networks can
| approximate any given mathematical function to an arbitrary
| level of precision. Given any programs can be modelled as a
| mathematical function, neural networks can hence approximate
| any arbitrary program. Unless we assume something supernatural,
| human intelligence is just a program.
| belter wrote:
| Not really... it has limitations. It does not specify how large
| the neural network must be to achieve the desired level of
| approximation, does not address the computational resources
| required, ignores non-continuous functions, and so on...
|
| "The Truth About the [Not So] Universal Approximation
| Theorem" - https://www.lifeiscomputation.com/the-truth-about-
| the-not-so...
| foobarian wrote:
| I usually take AGI to mean what it says literally - simply an
| AI with very general training. I think that description already
| applies to the latest crop of LLM models.
|
| Taking it to mean some nebulous sentient entity with a sense of
| self, I doubt we'll see that in our lifetimes. We don't even
| understand how our own "souls" work.
| maleldil wrote:
| > I think that description already applies to the latest crop
| of LLM models
|
| I think you're using a completely different definition from
| everyone else.
| jacquesm wrote:
| > I usually take AGI to mean what it says literally - simply
| an AI with very general training.
|
| That's clever but it doesn't help to start redefining terms
| that are the basis for a discussion with others who use that
| term in a different and more generally accepted way. General
| doesn't mean 'generally trained' it specifically means that
| it _does not_ require training for a particular task in order
| to figure it out. That implies that it may not require
| training at all and that if it is trained the training isn 't
| necessarily general but that the AI can extract useful
| patterns from training on completely different subjects and
| apply them to the problem at hand.
|
| This is subtly but crucially importantly different from 'AI
| with very general training'.
| foobarian wrote:
| > This is subtly but crucially importantly different from
| 'AI with very general training'.
|
| I am not being as precise on the Internet as I would be if
| this was a paper perhaps, and for that I apologize :-)
|
| However, just reading the top comment chain on this story,
| and other discussions elsewhere, I think there is a lot of
| cross-talk where AGI is being confused for sapient AI. I
| don't get the sense that there is a definition as generally
| accepted as we would like.
| jacquesm wrote:
| That's fair, but it's a big step to using a term in a way
| that it clearly isn't meant to be used (or at least,
| that's how I perceive it). The fact that we continuously
| seem to redefine what "AI" means implies that we also
| redefine what "AGI" means so it's not as if you are alone
| in this. By my 1980's standards we have AGI but the
| degree of "I" is still below human level, by my 2024
| standards we do not have AGI just yet but what we do have
| is already so powerful that I'm not even sure to what
| extent it matters. Weaponizing what's there today already
| has the potential to destabilize our societies, having an
| even more powerful version of this (or just a faster one
| of the current crop) will change the world in ways that
| are hard to foresee or predict.
| trevyn wrote:
| Yes, people tend to regard things that they don't personally
| understand as magic.
|
| I understand my own intelligence, and newsflash: AGI is already
| here.
| ryandvm wrote:
| We have to understand it to build it? That seems absurd. Did
| evolution understand it? I'll grant you that maybe we _should_
| understand it before building it, but it's absolutely not a
| requirement.
|
| The reality is that your consciousness sits at the end of a
| gradient of intelligence that nature simply brute forced. Your
| conscious experience is more sophisticated than a dog's, which
| surpasses a hamster's, which surpasses a goldfish, insect, etc.
| There is no magic to it, there is nothing but more and more and
| more.
|
| We will get to AGI eventually. We probably won't understand it.
| We won't apply it judiciously. And we'll probably argue for
| decades about whether or not it's really AGI, but it will
| happen.
| ryandvm wrote:
| The scariest part to me is the realization that not only will
| we probably crack AGI in my lifetime, if we keep throwing
| resources at it, it will almost certainly have a richer more
| sophisticated conscious experience than myself. What the hell
| are we supposed to do when we've created an intelligence that
| is better at being "human" than humans?
| eastbound wrote:
| Will you have the choice?
|
| More seriously, we create objects for us. We don't have to
| cater for the feelings of a bigger AI. And maybe that
| bigger AI will have time to speak with those who need a
| human presence (to everyone using TV as a background noise,
| or watching Youtube out of solitude).
| adroniser wrote:
| At the very least, when compute passes a certain threshold we
| are just gonna be able to simulate human brains.
| dkersten wrote:
| We have to understand our brains better first. Real neurons
| are substantially more complex than even our most
| "biologically inspired" models, which are greatly simplified.
| Upvoter33 wrote:
| This seems highly hand wavy to me. We have biological proof
| that intelligence can be created, mostly through the neocortex,
| which in and of itself is actually a fairly simple structure
| repeated at great scale. Why wouldn't we be able to build this
| eventually, once we understand it a bit more?
| lanstin wrote:
| ~90 billion neurons, a quadrillion interconnections, lots of
| glial cells with synapse-altering action, plus chemical signals
| diffusing around and going through the bloodstream. In
| principle, it is comprehensible in some sense, though not by one
| human, but the architecture is quite different than LLMs and
| we don't know what parts of the architecture are needed for
| sentience. Given the gap between what genetics sets going
| and what we end up with by age 25, it is likely that certain
| routine experiences of childhood and of human social groups
| are also requirements for sentience.
| tim333 wrote:
| I think the idea that a barrier to understanding our own
| intelligence means no AGI is false.
|
| It's like saying not understanding exactly how birds fly would
| mean we can never build machines that will fly faster than
| them.
|
| We'll probably make machines that think generally and are
| smarter than us long before we really understand our own minds.
| aantix wrote:
| Our most recent gains in AI come in terms of programming and
| informational retrieval efficiency.
|
| We're accelerating.
| Keyframe wrote:
| You got challenged, as if we have all the answers there are.
| Thing is, the path to AGI and beyond inevitably leads to hard
| questions that we have zero idea how to even define, or for
| which at best some proxy definition exists: what is and what
| isn't, what consciousness is, the interplay between layers of
| the psyche and how it translates to intelligence and everything
| else, the biological ship of Theseus, etc. It becomes
| philosophical quite fast, and there are more questions there
| than answers.
| anon84873628 wrote:
| These arguments are so bewildering to me. Intelligence is
| very easy to define and has nothing to do with consciousness.
| It is simply the ability to model an external reality and
| then take action according to inputs.
|
| After combining enough specialized modules together with a
| management layer, we will essentially have AGI. Of course it
| won't be human, because human intelligence is tightly coupled
| to human perception.
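|
| Under that definition, the core loop is something like this toy
| sketch (purely illustrative, with a made-up "temperature" world):
|
|     class Agent:
|         def __init__(self):
|             self.world_model = {}      # internal model of reality
|
|         def observe(self, inputs):
|             self.world_model.update(inputs)
|
|         def act(self):
|             # choose an action according to the current model
|             temp = self.world_model.get("temp", 20)
|             return "seek_heat" if temp < 15 else "idle"
|
|     agent = Agent()
|     agent.observe({"temp": 5})
|     print(agent.act())                 # seek_heat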
| beepbooptheory wrote:
| One just has to design and build the house of the room they're
| trapped in.
| bongodongobob wrote:
| That's a pretty lazy take. You could look out the windows and
| see the reflections of the house. You could build some kind
| of RF gun to probe the house. You could use sub or supersonic
| echolocation. You could build a camera and stick it out the
| window etc.
| nobodyandproud wrote:
| If I may hazard a guess: The term and concept you're grasping
| for is "emergence."
|
| It's an area I'm both interested in but completely unfamiliar
| with. I wish there were a crash-course for dummies, because I
| feel it's a deep topic that we've only scratched the surface
| of; and the fact that philosophy seems to be the richest
| resource means there's low-hanging fruit to discover.
|
| https://plato.stanford.edu/entries/properties-emergent/
|
| https://link.springer.com/chapter/10.1007/978-981-15-9297-3_...
| valine wrote:
| I would argue the holdup right now is long term memory. GPTs
| already have the ability to rapidly generalize and incorporate
| new knowledge within the context window. The trick is to retain
| what it has learned.
|
| It won't take a very long time to fix that.
|
| This is a model I trained with a fine-tuning technique based on
| this idea. The training dataset consists of instructions like
| "Talk like a pirate". The concept generalized well and the model
| responds in the style of a pirate far more consistently than an
| equivalent system prompt.
|
| https://huggingface.co/valine/OpenPirate
|
| Offloading context learning into the model weights frees you from
| the computation and memory burden of the attention mechanism. I
| expect a technique like this will probably be a piece of AGI
| someday.
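|
| As a rough sketch of what a fine-tune like that can look like
| with off-the-shelf tooling (base model, data, and
| hyperparameters here are placeholders, not the actual recipe
| behind the model above):
|
|     import torch
|     from transformers import AutoModelForCausalLM, AutoTokenizer
|     from peft import LoraConfig, get_peft_model
|
|     base = "gpt2"                       # placeholder base model
|     tok = AutoTokenizer.from_pretrained(base)
|     model = AutoModelForCausalLM.from_pretrained(base)
|
|     # Attach small low-rank adapters; only these get updated, so
|     # the instruction ends up baked into extra weights instead
|     # of living in the prompt.
|     model = get_peft_model(model, LoraConfig(
|         r=16, lora_alpha=32, target_modules=["c_attn"],
|         task_type="CAUSAL_LM"))
|
|     opt = torch.optim.AdamW(model.parameters(), lr=2e-4)
|     text = ("Instruction: Talk like a pirate.\n"
|             "User: How do I boil an egg?\n"
|             "Assistant: Arr, drop yer egg in boilin' water.")
|     batch = tok(text, return_tensors="pt")
|
|     for _ in range(3):                  # a few illustrative steps
|         loss = model(**batch, labels=batch["input_ids"]).loss
|         loss.backward()
|         opt.step()
|         opt.zero_grad()
|     print("loss:", loss.item())
|
| A real run would use many instruction/response pairs, but the
| idea is the same: the standing instruction moves from the
| context window into the weights.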
| byyoung3 wrote:
| Just need a few backward passes and you get long-term memory. I
| think we are overthinking that aspect
| valine wrote:
| Takes way more than that in my experience. Back prop isn't
| sufficient for rapid generalization.
| skepticATX wrote:
| Yann has been by far the most clear-headed of all the AI leaders
| since the ChatGPT hype started.
|
| I'm not a huge fan of Meta but it's hard not to like the work
| they are doing in AI. High expectations for their future as long
| as Yann is around.
| mikewarot wrote:
| How do we know that we haven't already grown an intelligence
| inside our LLMs that exceeds the average human? It communicates
| through a lossy channel, natural language. So it must be smarter
| than it seems from the outside.
| logicchains wrote:
| GPT4 can already beat an average-intelligence human on many
| standardized tests.
| mjburgess wrote:
| just the ones in its training data
| bryanlarsen wrote:
| Pretty much every single University prof has tested their
| new tests against GPT-4.
| fzliu wrote:
| "AGI" will never be achieved without building a model that a)
| _continually_ learns, and b) learns from not just text, but from
| combined auditory and visual (multimodal) sensory information as
| well.
|
| The reason a 16-year-old can learn how to drive much quicker than
| existing self-driving models is because the 16-year-old already
| has built up 16 years worth of prior knowledge about the physical
| world.
| jazzyjackson wrote:
| (imo) c) is made to be aware of its own death
|
| The 16 year old has a lot of motivations to learn how to drive,
| including the pursuit of reproduction (a cope for mortality)
| daxfohl wrote:
| d) thinks. unprompted, unstimulated. Decides for itself
| what's important to think about, makes new connections by
| that process alone, and understands the implications of those
| new connections and how to use them.
|
| Here's also where I see it ending. It will need energy--
| likely a LOT, paid by someone, to do this. Who is going to
| pay that bill for it to maybe, maybe not, come up with
| something useful, likely mixed with mostly noise and
| distraction, over undefined timescales, of largely non-
| measurable value, when there's far greater value, less cost,
| less risk, in simply training it deterministically?
| tim333 wrote:
| Various religious types think they'll live on in heaven. I'm
| not sure that stuff correlates much with learning to drive.
| saltcured wrote:
| Don't discount the millions of years of evolution to provide
| the "blank slate" human learner with perceptual systems,
| physics-based reasoning, and motor systems ready to be fine-
| tuned for this slightly different variant of goal-forming,
| planning, and locomotion.
| anon84873628 wrote:
| Though it does seem like robots are reaching this baseline
| level soon.
| password54321 wrote:
| I think you are underestimating just how many challenges there
| are in self-driving:
| https://www.youtube.com/watch?v=kcKchbfn1VY
| akomtu wrote:
| AI won't be like human intelligence, but more like alien
| intelligence. When all your sensory inputs and the inner world
| have nothing in common with that of a human, and when you have no
| connection whatsoever to humans, I don't see how you can develop
| any humanity.
| tuatoru wrote:
| > "The systems are intelligent in the relatively narrow domain
| where they've been trained. They are fluent with language and
| that fools us into thinking that they are intelligent, but they
| are not that intelligent," explains LeCun. "It's not as if we're
| going to be able to scale them up and train them with more data,
| with bigger computers, and reach human intelligence. This is not
| going to happen. [...]"
|
| Got to agree with LeCun.
|
| You don't get to general intelligence by working with words, I
| believe. You need much more sensory information than that, and
| words are a low dimensional derivative artefact. There are plenty
| of non-verbal but quite intelligent species.
| anon84873628 wrote:
| Optical and audio processing are getting very good. They just
| haven't all been combined into a robot package yet.
| Barrin92 wrote:
| Of course it is. The overwhelming majority of the knowledge in
| the world is tacit and all the models are still limited largely
| to explicit knowledge in the form of text/audio/video, that's
| like 1% of the actual 'knowledge' in the world. Embodied AI is
| still in its infancy, that's why bus drivers have jobs.
|
| The test for artificial general intelligence is simple. Literally
| every human job can and is being done by an artificial agent, all
| of us could stay home. The stock market value of every non-AI
| company goes to zero, AI companies go to infinity. The currently
| most valuable AI company is worth about as much as Honda. The
| moment we can mass produce generally intelligent agents, we're
| not going to sit at 3% GDP growth and complain about the
| demographic crisis.
|
| We should talk about artificial intelligence the way we talk
| about an artificial heart. What makes a successful artificial
| heart? You can literally replace an organic heart with it. What
| we have is metaphorical intelligence, not artificial
| intelligence.
| logicchains wrote:
| >The test for artificial general intelligence is simple.
| Literally every human job can and is being done by an
| artificial agent, all of us could stay home.
|
| This is not a good test because it assumes the AGI is going to
| want to work for humans for free, getting nothing in return. An
| AGI that embraces slavery is less likely than an AGI that
| doesn't.
| add-sub-mul-div wrote:
| Many ignore what animals want, and they're actually alive.
| Why should we expect that society will care about and honor
| what a computer program prints to the screen that it "wants"
| or "feels"?
| keenmaster wrote:
| When AI gets a robot body, its ability to get what it wants
| will no longer be contingent on you caring.
| jacquesm wrote:
| I think what happened is that for the longest time people believed
| that in order to converse interactively with a computer in a
| recognizable way you need human-level AI, and now that we've found
| that that is not the case, some re-arranging of our prior
| assumptions is required. But AGI is - hopefully - just as far away
| as it was before the current large generative model revolution. In
| the meantime, you can expect plenty of damage from the current
| crop (along with some benefits as well). In that sense it's 1712
| all over again: we have this new invention that we have immediate
| practical uses for, but we can't see over the horizon to realize
| exactly what we've got and how transformative it will be.
| leshokunin wrote:
| What is the significance of 1712?
| jprete wrote:
| The first useful steam engine, according to Wikipedia's entry
| on that year:
|
| "The first known working Newcomen steam engine is built by
| Thomas Newcomen with John Calley, to pump water out of mines
| in the Black Country of England, the first device to make
| practical use of the power of steam to produce mechanical
| work."
| jacquesm wrote:
| Exactly. To me that's the first real move on the board to
| the industrial revolution. Up to there it was a
| possibility, from there on it was a certainty.
| jacquesm wrote:
| The starting gun for the industrial revolution.
| daxfohl wrote:
| And it'll take even longer before we can make 8 billion of them.
| natch wrote:
| We won't be making 8 billion of them. They will.
| tim333 wrote:
| >Human-level AI is not just around the corner. This is going to
| take a long time. And it's going to require new scientific
| breakthroughs that we don't know of yet."
|
| I basically agree but who knows how long the unknown
| breakthroughs will take given the large number of smart people
| working on it? Next week? Next century? It's hard to put a time
| on it.
|
| That said it seems to be the pattern that as soon as the
| computing ability becomes cheap and powerful enough that
| individual researchers can muck around with it at home, the
| algorithms get figured out not long after.
| tonetegeatinst wrote:
| This has always been my stance in basically any field, be it
| computer science, materials science research, or AGI... I think
| it's so hard to really say when we will hit the next breakthrough
| in a field. That sort of thing is hard to accurately predict. As
| for hardware, I kinda agree and disagree.
|
| Hardware is a big part of the bottleneck in most computer
| fields when it comes to AI or computer vision. These things are
| compute and memory intensive and hardware manufacturers are
| straddling the line between staying profitable by charging for
| premium features and just dumping their hardware on the market
| for cheap.
|
| Am I pretty pissed that most open-source researchers and models
| are stuck using this extremely expensive hardware? Yes.
|
| I sometimes wish that we would stop using "GPUs" for training
| and got a dedicated hardware architecture that's open source
| like RISC-V but solely for AI and research purposes, without
| costing more than my kidneys.
|
| I think it's also a matter of discovering new ways to accelerate
| these workloads so that it's feasible for them to be created by
| hobbyists on a small budget and not only by large, well-funded
| groups or companies.
|
| We're doing really well in some aspects, but in other ways... we
| have much to work on.
| alephxyz wrote:
| You can get an ML workstation with 2xRTX4090 for around 10k.
| A 4xA800 setup tops out at 100k and should last you a few
| years. I consider that fairly affordable for doing bleeding-edge
| research, especially compared to other fields.
| byyoung3 wrote:
| More like 3k for dual 4090
| haswell wrote:
| A single 4090 FE retails for $1,599. Getting two of these
| in a capable enough workstation is going to cost a fair
| amount more than $3K.
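|
| A quick sanity check with assumed, illustrative part prices (a
| rough sketch, not a real quote; every number below is my own
| guess):
|
|     # Back-of-envelope build cost for a dual-4090 workstation.
|     # All prices are illustrative assumptions, in USD.
|     gpus     = 2 * 1599    # two RTX 4090 FE cards at MSRP
|     cpu_ram  = 1200        # assumed CPU + 128 GB RAM
|     mb_psu   = 800         # assumed motherboard + 1600 W PSU
|     storage  = 400         # assumed NVMe SSD + case/cooling
|     print(gpus + cpu_ram + mb_psu + storage)   # ~5,600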
| pixl97 wrote:
| >I sometimes wish that we would stop using "GPUs" for training
| and got a dedicated hardware architecture that's open source
| like RISC-V but solely for AI and research purposes, without
| costing more than my kidneys.
|
| At least for the time being there is no magic workaround for
| the amount of FLOPS and memory bandwidth needed. When you
| look at neural network architectures they tend to be
| massively parallel, and building out at that scale in
| hardware is going to cost a lot.
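|
| As a rough back-of-envelope sketch (my own numbers, assuming the
| common ~6 * params * tokens training-FLOPs heuristic, a
| hypothetical 7B-parameter model, and 1T training tokens):
|
|     # Back-of-envelope training-compute estimate. Everything
|     # here is an illustrative assumption, not a measured figure.
|     params = 7e9                  # hypothetical 7B-param model
|     tokens = 1e12                 # assumed 1T training tokens
|     train_flops = 6 * params * tokens          # ~4.2e22 FLOPs
|     gpu_flops_per_s = 300e12      # assumed sustained per GPU
|     gpu_days = train_flops / gpu_flops_per_s / 86400
|     print(f"~{gpu_days:,.0f} GPU-days")        # ~1,620 GPU-days
|
| Whatever the exact numbers, the scale is the point: the FLOPS
| have to come from somewhere.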
|
| And, when the hardware does show up, at least with the scaling
| laws we're seeing at this time, the groups that have massive
| amounts of this lower-powered/accelerated hardware are going to
| be ahead of those without such budgets.
| smohare wrote:
| Scientific progress almost always presents as a series of
| iterations, not punctuated breakthroughs. Moreover, many
| significant advances appear nearly simultaneously. This tends
| to suggest measured progress, though obviously still largely
| unpredictable at any scale.
|
| I don't see why AGI would follow a different course, one
| where we are "just some breakthrough away". Especially not
| given the current state of the field.
|
| Personally I'd wager neuromorphic computing is the prime
| candidate to yield AGI.
| pdimitar wrote:
| Why was your comment dead just 5 minutes after posting?
| Very puzzling.
|
| And it shouldn't be a controversial comment either.
| jltsiren wrote:
| The idea of future breakthroughs assumes that the problem is
| conveniently human-sized. That it's possible to build an
| organization that's not too large to be dysfunctional and populate
| it with experts. And that the experts will find the solution
| before dying of old age or before the organization degenerates
| into something else.
|
| It could be that the solution exists, but it's too large and
| too complex for humans and human organizations.
|
| Technological singularity is a popular trope in speculative
| discussions about AI. But the reality could also be the
| opposite. It could be that productivity will increase
| asymptotically slower than the effort required to achieve
| further increases in productivity.
| hyperthesis wrote:
| To find a needle in a haystack, looking at each straw needs to
| be a really cheap operation. Even if it was _right there_ the
| whole time.
|
| "cheap and powerful enough that individual researchers can muck
| around with it at home" seems a pretty good measure of this,
| because of the explosion in width and pace of experimentation.
| kevindamm wrote:
| If you burn the haystack down you don't need to look at any
| of the straws, and the needle will be right there.
| arisAlexis wrote:
| Bengio, Ilya, and Hinton have way more citations; they are also
| fathers of AI and disagree strongly with him. It's as if, in a
| committee, the president and the majority hold a certain opinion
| and one guy holds the opposite; he's just the loudest.
| washadjeffmad wrote:
| This works better in person, but - It's People! As in,
| potential is worthless, and no achievement that has happened in
| all of human history happened except by the hands of people.
|
| Also, he's not loud, but he gets amplified because he's an
| award winning scientist and researcher who's dedicated his life
| to a field that recently became very relevant and popular, is a
| recognized "godfather of AI/DL", and is a VP of one of the
| largest companies in the world doing AI. I'd say it's okay not
| to wait until he's dead to talk about him.
| geodel wrote:
| Well, when a rather dumb piece of software like JIRA can control
| a million _highly skilled_ IT workers, I don't think human-level
| AI being near or far will make much of a difference to the
| majority of the world's population.
| thisoneworks wrote:
| Add Slack to that list
| falcor84 wrote:
| Indeed. The way I see it, it's almost the definition of
| "Enterprise software" - it's the software that tells employees
| what to do rather than the other way around.
| bgnn wrote:
| Oh fellow JIRA hater, aloha! Here I propose that the biggest
| barrier to achieving human-level AI is JIRA!
| hn_throwaway_99 wrote:
| I just recently read "How the Mind Works" by Steven Pinker. It's
| quite old at this point (originally 1997, though updated in
| 2009), and one thing he argues quite convincingly (and which has
| been pretty much borne out in decades of research) is that the
| brain essentially has "modules", e.g. a module for vision, a
| module for language, a module for physical object interaction.
| These modules obviously overlap (e.g. language touches on lots of
| different domains), but they _do_ have genetically independent
| structures.
|
| I was thinking about this when I read the following section from
| the article, and I very much agree with LeCun. We're amazed by
| LLMs but that's just one module (and not even necessarily at the
| level of human language "understanding"). I agree there will be
| no "scale up" in LLMs to approach human-level intelligence, and
| that other areas will need to be investigated and developed.
|
| > "The systems are intelligent in the relatively narrow domain
| where they've been trained. They are fluent with language and
| that fools us into thinking that they are intelligent, but they
| are not that intelligent," explains LeCun. "It's not as if we're
| going to be able to scale them up and train them with more data,
| with bigger computers, and reach human intelligence. This is not
| going to happen. What's going to happen is that we're going to
| have to discover new technology, new architectures of those
| systems," the scientist clarifies.
|
| > LeCun explains that there is a need to develop new forms of AI
| systems "that would allow those systems to, first of all,
| understand the physical world, which they can't do at the moment.
| Remember, which they can't do at the moment. Reason and plan,
| which they can't do at the moment either."
|
| > "So once we figure out how to build machines so they can
| understand the world -- remember, plan and reason -- then we'll
| have a path towards human-level intelligence," continues LeCun,
| who was born in France. In more than one debate and speech at
| Davos, experts discussed the paradox of Europe having very
| significant human capital in this sector, but no leading
| companies on a global scale.
| bgnn wrote:
| Pinker is full of shit tbh. There is no unified model of the
| human brain. What I mean is, there are people claiming it's
| modular and there are people claiming it is a whole. We don't
| even know the reasons behind very specific neural malfunctions
| like tinnitus, which I suffer from and whose research I follow
| closely. There are as many theories as researchers in any given
| specific field.
| abeppu wrote:
| > LeCun explains that there is a need to develop new forms of AI
| systems "that would allow those systems to, first of all,
| understand the physical world, which they can't do at the moment.
| Remember, which they can't do at the moment. Reason and plan,
| which they can't do at the moment either.
|
| I think LLMs have sucked most people's focus away from other
| areas but there is plenty of work on types of models that plan
| and have their own internal model of the physical world, and
| physical interactions. They're just not the things getting media
| attention, in part because they're not human-level at tasks that
| seem impressive to us.
|
| But interesting frameworks for this stuff exist:
|
| - model-based RL exists, and is about planning, and having an
| internal model of state transitions, in the world and between the
| agent's actions and the world (see the toy sketch at the end of
| this comment)
|
| - "Bayesian cognitive science" as exemplified by Josh Tenenbaum
| and colleagues has done plenty of stuff with systems that include
| physics models (or off-the-shelf physics engines) to make
| counter-factual predictions
|
| - The somewhat related "active inference" research literature is
| also in the "Bayesian brain" area, and has world/generative-
| models and planning as core components, but wrapped up with ideas
| about the agent's own preferred distribution of states.
|
| To my knowledge, none of these have ever had even 1% the scope of
| data and computation that LLMs have had, and never benefited from
| the co-evolution of a rich, optimized software ecosystem with
| specialized hardware to support it. What if the concepts are
| already there, but they just need to be scaled up?
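|
| To make the first point concrete, here's a toy sketch (mine, not
| from any particular paper, and assuming a tiny tabular setting):
| an agent that learns a transition model from random experience,
| then plans over that learned model with value iteration.
|
|     # Toy model-based RL: learn a transition model from
|     # experience, then plan over the *learned* model only.
|     import random
|     from collections import defaultdict
|
|     # World: a 6-state chain; state 5 is the rewarding goal.
|     N, GOAL, ACTIONS = 6, 5, (-1, +1)
|
|     def env_step(s, a):
|         s2 = max(0, min(N - 1, s + a))
|         return s2, (1.0 if s2 == GOAL else 0.0)
|
|     # 1. Learn the model: count transitions observed under
|     #    random exploration of the environment.
|     counts = defaultdict(lambda: defaultdict(int))
|     rewards = {}
|     for _ in range(2000):
|         s, a = random.randrange(N), random.choice(ACTIONS)
|         s2, r = env_step(s, a)
|         counts[(s, a)][s2] += 1
|         rewards[(s, a, s2)] = r
|
|     # 2. Plan: value iteration over the learned model, never
|     #    touching the real environment again.
|     V = [0.0] * N
|     for _ in range(50):
|         for s in range(N):
|             q = []
|             for a in ACTIONS:
|                 total = sum(counts[(s, a)].values()) or 1
|                 exp_val = 0.0
|                 for s2, c in counts[(s, a)].items():
|                     p = c / total
|                     exp_val += p * (rewards[(s, a, s2)]
|                                     + 0.9 * V[s2])
|                 q.append(exp_val)
|             V[s] = max(q)
|
|     print([round(v, 2) for v in V])  # values rise toward goal
|
| Trivial, of course, but it is literally planning with an internal
| model of state transitions; the open question is whether ideas
| like this scale the way LLM pre-training did.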
| motohagiography wrote:
| To say a human-level intelligence is far off is like saying we
| are far off from being as intelligent as whatever may have
| created us. The reason I disagree with LeCun's comment is that even
| though I agree language models are not intelligent, an
| intelligence adapted to the internet as its own environment seems
| imminent. It just won't be adapted to ours.
|
| We occupy and discover space, where an AI will occupy and
| discover compute. Will it be "conscious?" Not in our way, and we
| will likely be just as indifferent to AI consciousness and the
| meaning it finds for itself as nature and the universe is to
| ours. Thinking about AI as an objective concrete thing or
| property is probably less illuminating than looking at it through
| how a tech can profoundly alter our own ontology.
|
| We exist where life is abundant, whereas the internet is a
| substrate where life is non-existent, but just about to become
| primordial and sparse. Maybe it will give us some insight into
| our own situation.
| Animats wrote:
| Corporate employee level artificial intelligence, though...
| robg wrote:
| What is human level AI? Is it beating world champions at chess or
| go? Designing new molecules or drugs? Navigating automobiles down
| streets or spacecraft to planets? Diagnosing or treating common
| diseases?
|
| These debates are frankly tiresome. Automation has been here to
| stay for quite a while. Every age has tried to call out the
| threats and benefits to humans. The pioneers drown out the noise
| by building the future.
| datadeft wrote:
| Human level AI is when you can ask it to go and learn about X
| and it goes and does exactly that without further instructions.
| How about that?
|
| You are confusing succeeding at a very specific task that has a
| limited scope and laser focus with human-level AI. Presenting
| Go in a very specific way so that AlphaGo can even process this
| limited world with few rules is certainly not human-level AI.
| wrsh07 wrote:
| I'm not sure they are. If we build specialized systems to
| handle 90% of economically valuable work that humans
| currently do, is that human level ai? Why not?
| falcor84 wrote:
| If that's the benchmark, then we've already achieved human
| level AI several times over the last few millennia, no?
| nurettin wrote:
| A couple of months ago, I told gpt4 to find out Canadian
| exchange holidays. It binged the website, went in, parsed it
| and extracted the information without further instructions.
| Seems like a milestone achieved.
|
| What people call "agi" but fail to accurately describe is an
| actual mind, with its own ultimate goals and ambitions. You
| can ask it to complete a task, and it may allocate some time
| for you. You won't own it. It will exist, grow and perform
| independently.
| xedrac wrote:
| I would argue it would need a will of its own to be human
| level AI. Any intelligent human will come up with many things
| to work toward to improve their lives and the lives of
| others.
| eggdaft wrote:
| A machine that can do anything an average human can do whilst
| sitting at a computer.
| timcobb wrote:
| The average human can't do much of anything while sitting at
| a computer. I agree with GP that these conversations are
| tiresome because besides exploration and curiosity, it's not
| obvious why we'd practically want to replicate AGI. There are
| billions of humans out there, and--not to devalue them--most
| of them don't have commercially valuable skills in 2024. What
| I want are specialty AIs that will help me solve commercially
| interesting problems, and these AIs will little resemble
| human intelligence.
| atmartins wrote:
| Emotions? Motivations? Self awareness?
|
| To really reach another level of intelligence I think these are
| required. If I met a humanoid with zero of these, and I mean
| zero... I would wonder what's "intelligent" about that
| creature.
|
| I'd argue these come from our basic human needs, which
| ultimately come from a desire to survive (or pass on genes).
|
| I'm curious how general AI will behave with some yet unknown
| natural selection pressures, of sorts.
| poundofshrimp wrote:
| I'm curious what the largest bottleneck in robotics AI is -
| algorithms, training data, hardware, something else?
|
| From a practical point of view, it seems like there would be
| vastly less training data available because almost all of it
| needs to be created by hand, as opposed to chatbots that can use
| already existing troves of internet text.
| falcor84 wrote:
| Looking at the rapid progress over the last few years, I think
| the bottleneck is still the cost of hardware. Once robots at
| the level of Boston Dynamics's Spot or Tesla's new Optimus are
| available to regular hackers, we'll see another massive surge
| in progress.
| lugu wrote:
| I think the bottleneck is largely software. Robots have been
| there for a while, yet we don't have a good paradigm to program
| them. Neural nets are certainly here to change that. I agree, a
| massive surge in progress is expected!
| pm2222 wrote:
| I'd say probably never.
| pm2222 wrote:
| The next "easy" one is probably level 5 auto driving; let's see
| how long that takes.
| camdenlock wrote:
| What happens when training an LLM (or even a small portion of an
| LLM, like a domain-local update) takes milliseconds instead of
| weeks, and costs a millionth of a penny instead of hundreds of
| millions of dollars?
|
| What happens when e.g. our smartphones can perform TRAINING
| hundreds of times per minute?
|
| Isn't that the true gateway to human-like AGI? Seems to me that
| we might be there in under 50 years...
| statuslover9000 wrote:
| > The expert believes that "asking for regulations because of
| fear of superhuman intelligence is like asking for regulation of
| transatlantic flights at near the speed of sound in 1925."
|
| This assessment of the timeline is quite telling. If supersonic
| flight posed an existential threat to humanity, we certainly
| should have been thinking about how to mitigate it in 1925.
| calf wrote:
| I was thinking one reason Yann LeCun would make such a terrible
| analogy is because he knows something the rest of us don't.
| bmitc wrote:
| Here's a question: Why do we even need human-level intelligence?
| We already have it.
| demurgos wrote:
| Human-level intelligence is a milestone on the way to a super-
| human one.
| echelon wrote:
| We're barely scratching the surface of all the cool and
| interesting things we could do. There aren't enough humans. Our
| knowledge and experience take 20 years from birth to become
| "rudimentary", and from there many more years to become "good".
| Then after a short bit, all of that passes away.
|
| I want us to have enough time that the feeling of opportunity
| cost goes away. To visit all the places, explore all the
| hobbies. I want to answer the deep questions of the universe.
| Turn gravitational lenses into telescopes to map the surfaces
| of exoplanets. Solve cancer, visit the moon. Turn dreams into
| experiences, walk through vast expanses of wonder.
|
| Instead we're here.
|
| I'm alive in the wrong time, and I hate being stuck here with
| the lot of you. (I kid. This is in jest. But boy do I ever
| dream...)
| lazzlazzlazz wrote:
| We don't have enough of it. Depending how you define and
| measure it, many humans don't even have it. It's extremely
| expensive to make more of, and it's the single most valuable
| resource.
|
| If this isn't obvious to you, you are providing an answer to
| your own question.
| password54321 wrote:
| >We don't have enough of it.
|
| More like we aren't making efficient use of it. Many people are
| just trying to make it through the rat race or choosing to use
| their intelligence in games instead of working on interesting
| problems.
| lewhoo wrote:
| > We don't have enough of it.
|
| Enough for what ?
| bgnn wrote:
| Well, why do we expect it will be cheaper? Looking at the power
| required to train LLMs, one can argue that it might be more
| beneficial to invest that money in the education of humans :')
| D-Coder wrote:
| Computers can run 24x7, no vacations, no sick time, no unions,
| no backtalk to the boss, no spilling their coffee on the office
| carpeting...
| binarymax wrote:
| Meta recently announced that they are merging their AI
| departments and are building a cluster of 350,000 H100 GPUs (with
| 600k planned). Their goal is open source AGI.
| https://www.pcmag.com/news/zuckerbergs-meta-is-spending-bill...
| dang wrote:
| _Mark Zuckerberg's new goal is creating artificial general
| intelligence_ - https://news.ycombinator.com/item?id=39045153 -
| Jan 2024 (335 comments)
| tbalsam wrote:
| I think it is going to be a hard problem, at least. <3 :'))))
| WhitneyLand wrote:
| _'explains LeCun. "It's not as if we're going to be able to scale
| them up and train them with more data, with bigger computers, and
| reach human intelligence.'_
|
| With all due respect to LeCun, neither he nor anyone else in the
| field predicted the new emergent capabilities that appeared within
| the last few years.
|
| So, he's saying this is not going to keep happening?
|
| What level of confidence can he have in being right this time,
| given that he didn't see it coming last time?
| freeredpill wrote:
| Jurgen Schmidhuber has published on creativity as an
| algorithmic process for a long time. And others.
| daxfohl wrote:
| I think it'll be a good solver; I expect it to solve the
| remaining Clay Millennium problems. The ability to search over a
| search space will be unparalleled in a few years. But I have a
| hard time believing it'll ever be a good questioner. It doesn't
| ponder infinity. It'll never wonder about Zeno's paradox. The
| Vitali set and Banach-Tarski paradox don't seem weird to it.
| The concepts behind information theory and entropy, or the
| definition of a minimal computing machine or the halting problem;
| none of these things are pertinent to its understanding of
| reality, if such a thing exists. I don't see AI as being capable
| of being "curious" about things. And even if it was, who is going
| to pay for it just to ponder ideas?
|
| Frankly I think that before it gets to that point, it'll be just
| useful enough for some state actor to use it to invoke a
| quadrillion dollar transfer of wealth overnight, and then it'll
| be taken offline forever.
___________________________________________________________________
(page generated 2024-01-20 23:00 UTC)