[HN Gopher] Language is not essential for the cognitive processes that underlie thought
___________________________________________________________________
Language is not essential for the cognitive processes that underlie
thought
Author : orcul
Score : 519 points
Date : 2024-10-17 12:10 UTC (3 days ago)
(HTM) web link (www.scientificamerican.com)
(TXT) w3m dump (www.scientificamerican.com)
| fnordpiglet wrote:
| For those who can't and don't think in words this is
| unsurprising.
| fsndz wrote:
| absolutely!
| neom wrote:
| How would someone think in words? You mean the words in the
| pictures or...?
| mjochim wrote:
| By "hearing" words, sentences, dialogues in their mind. Just
| like imagining a picture, but audio instead.
| Teever wrote:
| but words, sentences, and dialogues are all features of
| language.
| vivekd wrote:
| I think in words. For me, during thought there is a literal
| voice in my head putting my thoughts into words.
| BarryMilo wrote:
| Are there really people who don't know about inner
| monologues?
| IAmGraydon wrote:
| I think it's more likely that they lack the awareness to
| recognize it.
| jerf wrote:
| I have the standard internal monologue many people report,
| but I've never put much stock in the "words are _necessary_
| for thought" because while I think a lot in words, I also
| do a lot of thinking in not-words.
|
| We recently put the project I've been working on for the
| last year out into the field for the first time. As was
| fully expected, some bugs emerged. I needed to solve one of
| them. I designed a system in my head for spawning off child
| processes based on the parent process to do certain
| distinct types of work in a way that gives us access to OS
| process-level controls over the work, and then got about
| halfway through implementing it. Little to none of this
| design involved "words". I can't even say it involved much
| "visualization" either, except maybe in a very loose sense.
| It's hard to describe in words how I didn't use words but
| I've been programming for long enough that I pretty much
| just directly work in system-architecture space for such
| designs, especially relatively small ones like that that
| are just a couple days' work.
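|
| (A minimal sketch of that kind of design, assuming Python's
| standard multiprocessing module; the work-type names here are
| hypothetical:)
|
|     import multiprocessing as mp
|     import os
|
|     def worker(kind, jobs):
|         # One child process per distinct kind of work, so the OS
|         # can schedule, limit, or kill each kind independently.
|         for job in iter(jobs.get, None):  # None = shutdown sentinel
|             print(f"[{kind} pid={os.getpid()}] handling {job}")
|
|     if __name__ == "__main__":
|         queues = {kind: mp.Queue() for kind in ("ingest", "index")}
|         procs = [mp.Process(target=worker, args=(k, q), name=k)
|                  for k, q in queues.items()]
|         for p in procs:
|             p.start()                  # spawn the children
|         queues["ingest"].put("job-1")  # hand work to one child
|         for q in queues.values():
|             q.put(None)                # ask every child to exit
|         for p in procs:
|             p.join()                   # p.terminate() also works,
|                                        # giving process-level control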
|
| Pattern-language advocates and the like aren't wrong that it
| can still be useful to put such things into words,
| especially for communication purposes, but I know through
| direct personal experience that words are not a _necessary_
| component of even quite complicated thought.
|
| "Subjective experience reports are always tricky, jerf. How
| do you know that you aren't fooling yourself about not
| using words?" A good and reasonable question, to which my
| answer is, I don't even _have_ words for the sort of design
| I was doing. Some, from the aforementioned pattern
| languages, yes, but not in general. So I don't think I was
| just fooling myself on the grounds that even if I tried to
| serialize what I did directly into English, a
| transliteration rather than a translation, I don't think I
| could. I don't have one.
|
| I'm also not claiming to be special. I don't know the
| percentages but I'm sure many people do this too.
| binary132 wrote:
| Like, at the speed of speech?
| neom wrote:
| I'm an idiot. I thought this meant, for some reason unknown
| to me... written words, something I couldn't imagine being
| able to think in. Spoken words, sure.
| perryizgr8 wrote:
| So if you want to look at your phone there's a voice going
| "I shall pick up my phone and swipe the lock away now."?
| Trying to understand if ALL thinking is in words or some
| subset.
| kjkjadksj wrote:
| Could you imagine the impossibility of riding a bike if you had
| to consciously put words to every action before you did it?
| Razengan wrote:
| Can you _count_ without using a "language"?
|
| Try it now: Tap your hand on the desk randomly. Can you recall
| how many times you did it without "saying" a sequence in your
| head like "1, 2, 3" or "A, B, C" etc?
|
| If yes, how far can you count? With a language it's effectively
| infinite. You could theoretically go up to "1 million 5 hundred
| 43 thousand, 2 hundred and 10" and effortlessly know what comes
| next.
| kachnuv_ocasek wrote:
| Interestingly, I feel like I can "feel" small numbers (up to
| 4 or 5) more easily than thinking about them as objects in a
| language.
| 082349872349872 wrote:
| By feel, I can without language or counting, play mostly
| X . . X . . X . . . X . X . . .
|
| and every so often switch out for variations, eg:
| X . . X . . X . X . . . X . . .
|
| or X . . . X . . . . . X . X . . .
|
| but I'm no good at playing polyrhythms, which many other
| people can do, and I believe they must also do so more by
| feel than by counting.
| wizzwizz4 wrote:
| Practice a few polyrhythms, get used to things like:
| X . X X X . X . X X X .
| A . . A . . A . . A . .
| B . B . B . B . B . B .
|
| and this three-voice pattern (A every 5, B every 7, C every 3;
| 105 steps = their lcm; shown in five blocks of 21 steps,
| composite X line first):
|
| X . . X . X X X . X X . X . X X . . X . X
| A . . . . A . . . . A . . . . A . . . . A
| B . . . . . . B . . . . . . B . . . . . .
| C . . C . . C . . C . . C . . C . . C . .
|
| X . . X X . X X . X . . X . X X . . X X .
| . . . . A . . . . A . . . . A . . . . A .
| B . . . . . . B . . . . . . B . . . . . .
| C . . C . . C . . C . . C . . C . . C . .
|
| X . . X . . X X X X . . X X X X . . X . .
| . . . A . . . . A . . . . A . . . . A . .
| B . . . . . . B . . . . . . B . . . . . .
| C . . C . . C . . C . . C . . C . . C . .
|
| X . X X . . X X . X . . X . X X . X X . .
| . . A . . . . A . . . . A . . . . A . . .
| B . . . . . . B . . . . . . B . . . . . .
| C . . C . . C . . C . . C . . C . . C . .
|
| X X . X . . X X . X . X X . X X X . X . .
| . A . . . . A . . . . A . . . . A . . . .
| B . . . . . . B . . . . . . B . . . . . .
| C . . C . . C . . C . . C . . C . . C . .
|
| Learn to do them with one limb (or finger) per line, and
| also with all the lines on the same limb (or finger). And
| then suddenly, they'll start to feel intuitive, and
| you'll be able to do them by feel. (It's a bit like
| scales.)
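|
| (Those grids are just unions of periodic pulses. A minimal
| Python sketch that regenerates them; the helper name is made
| up:)
|
|     def polyrhythm(periods, steps):
|         # composite line: a hit wherever any voice hits
|         print(" ".join("X" if any(i % p == 0 for p in periods.values())
|                        else "." for i in range(steps)))
|         # one line per voice: a hit at every multiple of its period
|         for label, p in periods.items():
|             print(" ".join(label if i % p == 0 else "."
|                            for i in range(steps)))
|
|     polyrhythm({"A": 3, "B": 2}, 12)           # the 12-step grid above
|     polyrhythm({"A": 5, "B": 7, "C": 3}, 105)  # 105 = lcm(3, 5, 7)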
| youoy wrote:
| It's a well known phenomenon! I will drop this link here in
| case you are not familiar with it:
|
| https://www.sciencealert.com/theres-a-big-difference-in-how-...
| j_bum wrote:
| This is highly anecdotal, but when I lift weights, I have an
| "intuition" about the number of reps I've performed without
| consciously counting them.
|
| An example of this would be when I'm lifting weights with a
| friend and am lost in the set/focusing on mind-muscle
| connection, and as a result I forget to count my reps. I am
| usually quite accurate when I verify with my lifting partner
| the number of reps done/remaining.
|
| As OP mentioned, many people have _no_ internal speech (a
| condition known as anendophasia), yet can still do everything
| anyone with an internal dialogue can do.
|
| Similarly for me, I can do "mental object rotation" tasks
| even though I have aphantasia.
| wizzwizz4 wrote:
| > _I have an "intuition" about the number of reps I've
| performed without consciously counting them._
|
| This is known as subitising.
| datameta wrote:
| Can you expand on your last sentence? The notion is
| fascinating to me.
| datameta wrote:
| I can remember the sequence of sounds and like a delay line
| repeat that sequence in my head. This becomes easier the more
| distinguishable the taps are or the more cadence
| variability there is. But if it is a longer sequence I
| compress it by remembering an analogue like so: doo doo da
| doo da doo da da doo (reminiscent of morse code, or a kind of
| auditory binary). Would we consider this language? I think in
| the colloquial sense no, but it is essentially a machine
| language equivalent.
|
| For context I have both abstract "multimedia" thought
| processes and hypervisor-like internal narrative depending on
| the nature of the experience or task.
| card_zero wrote:
| Do you also have some noise for mathematical operations,
| such as raising a number to a power, and for equals? So doo
| doo da _ugh_ doo doo _feh_ doo doo da doo da doo da da doo?
|
| ...maybe I do this sometimes myself. Remembering the proper
| names of things is effort.
| pineaux wrote:
| I think this is what language is. It's a sequence
| remembrance system.
| Razengan wrote:
| Oh no... That would vindicate the chatbots..
| jwarden wrote:
| I can. But I do this by visualizing the taps as a group. I
| don't have to label them with a number. I can see them in my
| mind, thus recalling the taps. If I tap with any sort of
| rhythm I can see the rhythm in the way they are laid out in
| my mind and this helps with recollection.
|
| If I want to translate this knowledge into a number, I need
| to count the taps I am seeing in my head. At that point I do
| need to think of the word for the number.
|
| I could even do computations on these items in my mind,
| imagine dividing them into two groups for instance, without
| ever having to link them to words until I am ready to do
| something with the result, such as write down the number of
| items in each group.
| calf wrote:
| But that's like how I memorize sheet music, visual groups
| and subgroups of notes, and yet sheet music is formally
| linguistic nevertheless. So in such debates I think a
| tricky pitfall to avoid is that all data structures are
| essentially linguistic as well.
| nemo wrote:
| Many animals can do some form of counting of small numbers
| where there's no connection to language possible.
| mcswell wrote:
| One, two, ...many.
| KoolKat23 wrote:
| An important note: if you're hearing your voice in your head
| doing this, that's subvocalisation, and it's basically just
| saying it out loud; the instruction is still sent to your
| vocal cords.
|
| It's the equivalent of <thinking> tags for LLM output.
| fnordpiglet wrote:
| I don't make a sound or word in my mind but I definitely keep
| track of the number. My thinking is definitely structured and
| there are things in my thoughts but there are no words or
| voice. I also can't see images in my mind either. I've no
| idea what an inner monologue or the minds eye is like. I have
| however over the years found ways to produce these
| experiences in a way of my own. I found for instance some
| rough visualization was helpful in doing multivariate
| calculus but it's very difficult and took a lot of practice.
| I've also been able to simulate language in my mind to help
| me practice difficult conversations but it's really difficult
| and not distinct.
|
| I would note though I have a really difficult time with
| arithmetic and mechanical tasks like counting. Mostly I just
| lose attention. Perhaps an inner voice would help if it
| became something that kept a continuity of thought.
| bonoboTP wrote:
| Can you draft a sentence (with all the words precisely
| determined) in your mind before you say it or you write it
| down? Can you "rehearse" saying it without moving your
| tongue or mouth? If yes, that's pretty much an "inner
| voice".
| fnordpiglet wrote:
| Not really, I can speak it out loud though which is often
| what I do. I have over the years been able to do it in my
| mind but it's not really a voice or words but some
| conceptual framing of the words. It's difficult to
| explain.
| Razengan wrote:
| This is So unrelatable lol. Imagine how different alien
| minds would be!!
| bonoboTP wrote:
| I can imagine the numbers as figures (I mean the shapes of
| the characters 1, 2, etc.), or the patterns on a die, in
| sequence.
|
| This is a parallel stream, because if I count with imagined
| pictures, then I can speak and listen to someone talking
| without it disturbing the process. If I do it with
| subvocalization, then doing other speech/language related
| things would disturb the counting.
| aeonik wrote:
| Wow I've never tried this before, and I feel like this is
| way easier than using words.
| slashdave wrote:
| > Can you count without using a "language"?
|
| Yes. Seriously, these kinds of questions are so surprising. It
| tells you that everyone's experience is just a little
| different.
| GoblinSlayer wrote:
| I can count to 10 with fingers.
| cassianoleal wrote:
| I remember back in school, a language teacher once was trying
| to convey the importance of language. One of his main arguments
| was that we needed words and languages in order to think. I
| still recall my disbelief.
|
| I spent the next few days trying to understand how that process
| worked. I would force myself to think in words and sentences.
| It was incredibly limiting! So slow and lacking in images, in
| abstract relationships between ideas and sensations.
|
| It took me another few years to realise that many people
| actually depend on those structures in order to produce any
| thought or idea.
| truculent wrote:
| I once realised that, for me, subvocalising thoughts was a
| way to keep something "in RAM", while some other thoughts
| went elsewhere, or developed something else. Perhaps slower
| speed helps in that respect?
| bonoboTP wrote:
| I think people are just using the word "think" differently.
| They may have picked up a different meaning for that verb
| than you. For them, thinking == inner vocalization. It's just
| a different definition. They would not call imagining things
| or daydreaming or musing or planning action steps
| "thinking".
|
| Also, many people simply repeat facts they were told. "We
| need words to think" is simply a phrase this person learned,
| a fact to recite in school settings. It doesn't mean they
| deeply reflected on this statement or compared it with their
| experience.
| HarHarVeryFunny wrote:
| Right, I think it's less than 50% of people that have an "inner
| voice" - using language to think.
|
| Other animals, with at best very limited language, are still
| highly intelligent and capable of reasoning - apes, dogs, rats,
| crows, ...
| mcswell wrote:
| "Highly intelligent" is not a phrase to be used for apes,
| dogs, rats or crows.
| HarHarVeryFunny wrote:
| If that's your opinion, then define intelligence in a
| meaningful/reductive fashion (not just "i know it when i
| see it"), then defend this opinion based on that!
| fsndz wrote:
| more proof that we need more than LLMs to build LRMs:
| https://www.lycee.ai/blog/drop-o1-preview-try-this-alternati...
| hackboyfly wrote:
| Well it's important to note that this does not mean that our
| language does not play a role in shaping our thoughts.
|
| "You cannot ask a question that you have no words for"
|
| - Judea Pearl
| m463 wrote:
| <raises eyebrows>
| nurettin wrote:
| Next they will argue that your eyebrows are words.
| m463 wrote:
| dogs have language!
| kjkjadksj wrote:
| My cat asks me to go outside. No english words involved of
| course. She sits and faces the door, meows at it, and paws at
| the knob. Maybe you can argue they are speaking cat when they
| ask.
| sshine wrote:
| I swear my cat says "hao wan er?" when he lacks
| stimulation, which means "Fun?"
| lazyasciiart wrote:
| Now I need to learn about how they convey these questions without
| language.
| m463 wrote:
| I like Temple Grandin's "Thinking the Way Animals Do":
|
| https://www.grandin.com/references/thinking.animals.html
| eth0up wrote:
| Consider that, in 2024, if not a majority then still a vast
| portion of our consciousness is words. Perhaps not for the
| illiterate, but for many, much of our knowledge comes through
| the written or spoken word. [Edit: Even a hypothetical person,
| alone and isolated, never having spoken, would still devise
| internal language structures, at least for the external realm.]
|
| Base consciousness is surely not dependent on language, but I
| suspect base consciousness may be extremely different from what
| one might expect, so much so that, compared to what we perceive
| as consciousness, it might seem something close to death.
| eth0up wrote:
| Well, I'm not sure cognition entirely without language is even
| possible for non-larval humans. Language is a natural tendency
| and it arises regardless of documentation, scribblings or
| utterings. It exists whether audible or not. Language itself is
| a manifestation of the thinking process that permits it.
|
| And I'll hold to the notion that the complete absence of
| language (and its underlying structure) would resemble death if
| death can be resembled. Perhaps death is only the excoriation
| of thought, cognition and language, with something more
| fundamental persisting.
| bassrattle wrote:
| Is this the death of the Sapir-Whorf theory?
| zorked wrote:
| Sapir-Whorf is not alive.
| xiande04 wrote:
| No. Just because words are not _needed_ for cognitive
| processes does not mean that people can't and don't think in
| language. The properties of that language could then influence
| thought. This is known as the Weak Sapir-Whorf hypothesis (note
| "hypothesis", not "theory").
| saghm wrote:
| Yep, this pretty accurately describes the way I think. I
| have a pretty heavy inner monologue, but it's not the only
| way I think. I've found that words are the way I "organize"
| my thoughts from muddled general ideas mixed with feelings
| into concise ideas that I can understand and gain insights
| from. I often won't fully grasp the significance of an idea I
| have until I talk it out with someone and find a way to put
| it into words that distill whatever I'm thinking into a more
| minimal form.
|
| Somewhat relatedly, I've started suspecting over the past few
| years that this is why I struggle to multitask or split my
| attention; while I can ruminate on several things at once,
| the "output" of my thinking is bottlenecked by a single
| stream that I have to focus on exclusively to get
| anything useful from it. Realizing this has actually helped
| me quite a bit in terms of being more productive because I
| can avoid setting myself up for failure by trying to get too
| much done at once and failing rather than tackling things one
| at a time.
| numpad0 wrote:
| This also doesn't say that the non-literal cognitive process
| is DNA-wired logic. Could very well be culturally constructed
| as well.
|
| IMO this rather reinforces Sapir-Whorf positions than refutes
| them; it means more than just the literal language/grammar
| influences thought. That's directly against the UG theory that
| a predetermined rigid grammar is all you need.
| gsich wrote:
| It was dead before.
| airstrike wrote:
| https://archive.is/PsUeX
| acosmism wrote:
| now I really want to understand the deep thoughts my cat is
| having
| psychoslave wrote:
| But maybe they exceed human cognition abilities?
| codersfocus wrote:
| While not essential for thought, language is a very important
| tool in shaping and sharing thoughts.
|
| Another related tool is religion (for emotions instead of
| thoughts), which funnily enough faces the same divergence
| language does.
|
| Right now society that calls itself "secular" simply does not
| understand the role of religion, and its importance in society.
|
| To be clear, I don't belong to any religion, I am saying one
| needs to be invented for people who are currently "secular."
|
| In fact, you have the disorganized aspects of religion already.
| All one needs to do to spot these is look at the aspects that
| attempt to systematize or control our feelings. Mass media,
| celebrities for example.
|
| Instead of letting capitalistic forces create a pseudoreligion
| for society, it's better if people come together and organize
| something healthier, intentionally.
| akomtu wrote:
| Materialism is such a religion. It's sciency and emotion-free,
| so it appeals to the secular minds.
| Vecr wrote:
| Holding materialism as an axiom, either directly or in-
| directly (through other axioms) could be called a "religion"
| (though at that point I'm not sure what couldn't be), and
| either way that could be considered bad.
|
| Thinking some type of materialism is even mostly correct,
| with the sum over all mostly materialist theories being close
| to 1, isn't a religion at all.
| GoblinSlayer wrote:
| In secular society art is the language for emotions.
| nickelpro wrote:
| As always, barely anyone reads the actual claims in the article
| and we're left with people opining on the title.
|
| The claims here are exceptionally limited. You don't need spoken
| language to do well on cognitive tests, but that has never been a
| subject of debate. Obviously the deaf get on fine without spoken
| language. People suffering from aphasia, but still capable of
| communication via other mechanisms, still do well on cognitive
| tests. Brain scans show you can do sudoku without increasing
| bloodflow to language regions.
|
| This kind of stuff has never really been in debate. You can teach
| plenty of animals to do fine on all sorts of cognitive tasks.
| There's never been a claim that language holds dominion over all
| forms of cognition in totality.
|
| But if you want to discuss the themes present in Proust, you're
| going to be hard pressed to do so without something resembling
| language. This is self-evident. You cannot ask questions or give
| answers for subjects you lack the facilities to describe.
|
| tl;dr: Language's purpose is thought, not all thoughts require
| language
| dse1982 wrote:
| This. Also, the question is how complex the question you
| want to convey can be. As long as it is rather
| simple it might seem realistic to argue that there is no
| language involved (i would argue this is wrong). But as soon as
| the problems get more complex, the system you need to use to
| communicate this question becomes more and more undeniably a
| form of language (i think about complexity here as things like
| self-referentiality which need sufficiently complex formal
| systems to be expressed - think what Gödel is about). So this
| part seems more complicated than is generally understood. The same
| goes for the brain-imaging argument. As a philosopher I have
| unfortunately seen even accomplished scientists in this field
| follow a surprisingly naive empiricist approach a lot of times
| - which seems to me to be the case here also.
| GoblinSlayer wrote:
| You mean communication should happen through language?
| K0balt wrote:
| A much more interesting hypothesis is that abstract thought
| (thought about things not within the present sensorium), or
| perhaps all thought, requires the use of symbols or tokens to
| represent the things that are to be considered.
|
| I think this may have been partially substantiated through
| experiments in decoding thoughts with machine sensors.
|
| If this turns out _not_ to be true it would have huge
| implications for AI research.
| rhelz wrote:
| Great point. They even did a bad job of reading the title. The
| title wasn't "Language is not essential for thought", the title
| was "Language is not essential for the cognitive processes
| _underlying_ thought."
|
| We'd better hope that is true, because if we didn't have non-
| linguistic mastery of the cognitive processes _underlying_
| thought it's hard to see how we could even acquire language in
| the first place.
| ryandv wrote:
| > As always, barely anyone reads the actual claims in the
| article and we're left with people opining on the title
|
| One must ask why this is such a common occurrence on this (and
| almost all other) social media, and conclude that it is because
| the structure of social media itself is rotten and imposes
| selective pressures that only allow certain kinds of content to
| thrive.
|
| The actual paper itself is not readily accessible, and properly
| understanding its claims and conclusions would take substantial
| time and effort - by which point the article has already slid
| off the front page, and all the low-effort single-sentence
| karma grabbers who profit off of simplistic takes that appeal
| to majority groupthink have already occupied all the comment
| space "above the fold."
| HarHarVeryFunny wrote:
| > Language's purpose is thought
|
| Language's purpose - why it arose - is more likely
| communication, primarily external communication. The benefit of
| using language to communicate with yourself via "inner voice" -
| think in terms of words - seems a secondary benefit, especially
| considering that less than 50% of people report doing this.
|
| But certainly language, especially when using a large
| vocabulary of abstract and specialist concepts, does boost
| cognitive abilities - maybe essentially through "chunking",
| using words as "thought macros", and boosting what we're able
| to do with our limited 7 +/- 2 item working memory.
| mcswell wrote:
| Whether language's _purpose_ was communication or thought is
| not easily answered.
|
| For one, how would you know? It left no fossils, nor do we
| have any other kind of record from that time.
|
| For another, the very question implies a teleological view of
| evolution, which is arguably wrong.
|
| As for what 50% of people report (where did that number come
| from?), we have virtually zero intuitive insight into the
| inner workings of our minds in general, or of the way we
| process language. All the knowledge that has been obtained
| about how language works--linguistics--has been obtained by
| external observation of a black box. (FMRIs and the like
| provide a _little_ insight inside that black box, but only at
| the most general level--and again, that's not intuition.)
| numpad0 wrote:
| hot take: language's original purpose must have been to
| _lie_.
|
| It doesn't take words to understand the implication of a club
| in your hand and the body of a dead ape. From there it takes
| either violence or words to defend yourself (rightfully or
| not). Here, using language to explain the situation is more
| efficient.
| GoblinSlayer wrote:
| You can look at modern animals: they use language for
| communication.
|
| If people had no idea if they think with words or not,
| presumably they would say so.
| HarHarVeryFunny wrote:
| Surely it's obvious that language production and perception
| evolved out of more primitive animal vocalizations, used
| for communicative purposes. How could it not have?!
|
| Note that human speech ability required more than brain
| support - it also required changes to the vocal apparatus
| for pronunciation (which other apes don't have), indicating
| that communication (vocalization) was either driving the
| development of language, or remained a very important part
| of it.
| pessimizer wrote:
| > Obviously the deaf get on fine without spoken language.
|
| Why the introduction of "spoken?" Sign languages are just as
| expressive as spoken language, and could easily be written.
| _Writing is a sign._
|
| > But if you want to discuss the themes present in Proust,
| you're going to be hard pressed to do so without something
| resembling language. This is self-evident.
|
| And it's also a bad example. Of course you can't discuss the
| use of language without the use of language. You can't discuss
| the backstroke without any awareness of water or swimming,
| either. You can certainly do it without language though, just
| by waving your arms and jumping around.
|
| > Language's purpose is thought
|
| Is it, though? Did you make that case in the preceding
| paragraphs? I'm not going to go out on a limb here and
| alternatively suggest that language's purpose is
| _communication,_ just like the purpose of laughing, crying,
| hugging, or smiling. This is why we normally do it loudly, or
| write it down where other people can see it.
| slashdave wrote:
| No, language's purpose is to communicate. Isn't this obvious?
| habitue wrote:
| Language may not be essential for thought (most of us have the
| experience of an idea occurring to us that we struggle to put
| into words), but language acts as a regularization mechanism on
| thoughts.
|
| Serializing much higher dimensional freeform thoughts into
| language is a very lossy process, and this kinda ensures that
| mostly only the core bits get translated. Think of times when
| someone gets an idea you're trying to convey, but you realize
| they're missing some critical context you forgot to share. It
| takes some activation energy to add that bit of context, so if it
| seems like they mostly get what you're saying, you skip it. Over
| time, transferring ideas from one person to the next, they tend
| towards a very compressed form because language is expensive.
|
| This process also works on your own thoughts. Thinking out loud
| performs a similar role, it compresses the hell out of the
| thought or else it remains inexpressible. Now imagine repeated
| stages of compressing through language, allowing ideas to form
| from that compressed form, and then compressing those ideas in
| turn. It's a bit of a recursive process and language is in the
| middle of it.
| ujikoluk wrote:
| Yes, dimension reduction.
| pazimzadeh wrote:
| Communication of thought is a whole different question. Either
| way you're making a lot of strong claims without support?
|
| > this kinda ensures that mostly only the core bits get
| translated
|
| The kinda is doing a lot here. Many times the very act of
| trying to communicate a thought colors/corrupts the main point
| and gives only one perspective or a snapshot of the overall
| thought. There's a reason why they say a picture is worth a
| thousand words. Except the mind can conjure much more than a
| static picture. The mind can also hold the idea and the
| exceptions to the idea in one coherent model. For me this can
| be especially apparent when taking psychedelics and finding
| that trying to communicate some thoughts with words requires
| constant babbling to keep refining the last few sentences, ad
| libitum. There are exceptions of course, like for simple ideas.
| habitue wrote:
| > Many times the very act of trying to communicate a thought
| colors/corrupts the main point and gives only one perspective
| or a snapshot of the overall thought. There's a reason why
| they say a picture is worth a thousand words.
|
| Yeah! Sometimes the thought isn't compressible and language
| doesn't help. But a lot of times it is, and it does.
| pazimzadeh wrote:
| Does language actually 'help', or is it just the best we
| have? e.g. would running a thought through language have
| any benefit in a world where telepathy existed?
| akomtu wrote:
| Imo, that's the essence of reasoning. Limited memory and slow
| communication channels force us to create compact, but
| expressive models of reality. LLMs, on the other hand, have all
| the memory in the world and their model of reality is a piece-
| wise interpolation of the huge training dataset. Why invent
| grammar rules if you can keep the entire dictionary in mind?
| mcswell wrote:
| Why do LLMs (or rather similar models that draw pictures)
| keep getting the number of fingers on the human hand wrong,
| or show two people's arms or legs merging? Or in computer-
| created videos, fail at object preservation? It seems to me
| they do _not_ have a model of the world, only an imperfect
| model of pictures they've seen.
| psychoslave wrote:
| >You can ask whether people who have these severe language
| impairments can perform tasks that require thinking. You can ask
| them to solve some math problems or to perform a social reasoning
| test, and all of the instructions, of course, have to be
| nonverbal because they can't understand linguistic information
| anymore. Scientists have a lot of experience working with
| populations that don't have language--studying preverbal infants
| or studying nonhuman animal species. So it's definitely possible
| to convey instructions in a way that's nonverbal. And the key
| finding from this line of work is that there are people with
| severe language impairments who nonetheless seem totally fine on
| all cognitive tasks that we've tested them on so far.
|
| They should start with their definition of language. To me,
| a language is any means you can use to communicate some
| information to someone else such that they generally get a
| correct inference of what kind of representations and
| responses are expected. Whether it's uttered words, a series
| of gestures, subtle pheromones or a slap in your face, those
| are all languages.
|
| For the same reason I find it extremely odd that the
| hypothesis that animals don't have any form of language is
| even considered a serious claim in the introduction.
|
| Anyone can prove anything and its contrary about language if the
| term is given whatever meaning is needed for premises to match
| with the conclusion.
| GavinMcG wrote:
| Just as a data point, my guess is that a very small minority of
| English-language speakers would define the term as broadly as
| you do, at least in a context relating the concept to
| analytical thought processes. At the very least, I think most
| people expect that language is used actively, such that
| pheromones wouldn't fall within the definition. (And actually,
| that's reflected when you say language is a means "you can
| _use_".) Likewise, a slap in the face certainly can be
| interpreted, but slapping doesn't seem like a _means_ of
| communicating in general--because a slap only communicates one
| thing.
| psychoslave wrote:
| It's also doubtful that thinking about the concept of
| analytical thought processes is something most humans do
| either, at least not in these terms and from this perspective.
|
| Should we expect experts in cognitive science presenting their
| view in a scientific publication to stick to the narrowest
| median view of language though? All the more when in the same
| article you quote people like Russell who certainly didn't
| have a naive definition of language when expressing a point
| of view on the matter.
|
| And slapping in general can definitely communicate far more
| than a single thing depending on many parameters. See
| https://www.33rdsquare.com/is-a-slap-disrespectful-a-nuanced...
| for a text exploring some of the nuances of the meaning it
| can encompass. But even a kid can get that slapping could
| perfectly have all the potential to create a fully doubly
| articulated language, as The Croods 2 creators have funnily
| put on screen. :D
| dleeftink wrote:
| I'm not sure it's that fringe. Popular adages such as
| 'language is a vehicle for thought' and 'the pen is mightier
| than the sword' reveal that language is sometimes implied to
| be tool-like, with many of our unspoken acts carrying
| linguistic meaning (e.g. ghosting, not answering a call, sign
| language, gesturing, nodding, etc.).
|
| Even tools present us with a certain 'language', talking to us
| via beeps, blinks and buzzes, and are having increasingly
| interesting discussions amongst themselves (e.g. subreddit
| simulator, agent-based modeling). Recent philosophers of
| technology such as Mark Coeckelbergh present a comprehensive
| argument for why we need to move away from the tool/language
| barrier [0], and have played a part in informing the EC Expert
| Group on AI [1].
|
| [0]: https://www.taylorfrancis.com/books/mono/10.4324/97813155285...
|
| [1]: https://philtech.univie.ac.at/news/news-about-publicatons-et...
| throwaway19972 wrote:
| > For the same reason I find extremely odd that the hypothesis
| that animals don't have any form of language is even considered
| as a serious claim in introduction.
|
| I guess I've always just assumed it refers to some feature
| that's uniquely human--notably, recursive grammars.
| psychoslave wrote:
| Not all human languages exhibit recursion though:
| https://en.wikipedia.org/wiki/Pirah%C3%A3_language
|
| And recursion as the unique differentiating trait of human
| language does not necessarily enjoy complete consensus:
| https://omseeth.github.io/blog/2024/recursive_language/
|
| Also, let's recall that the broader scientific consensus is
| that humans are animals and that they evolved through the same
| basic mechanism as all other life forms, namely evolution. So
| even assuming that evolution made some unique language ability
| emerge in humans, it's most likely that they share most
| language traits with other species, and that there are more
| things to learn from them than would be possible if we assumed
| they can't have language and thoughts.
| throwaway19972 wrote:
| Does any other living entity have recursive grammars? It
| seems uniquely human.
|
| It seems that the second link may indicate otherwise but
| I'm still pretty skeptical. This requires extraordinary
| evidence. Furthermore there may be a more practical limit
| of "stack size" or "context size" that effectively
| exceptionalizes humans (especially considering the size and
| proportional energy consumption of our brains).
| psychoslave wrote:
| Does it matter in the frame of investigating relations
| between cognitive processes and languages?
|
| Other animals have cognitive processes, and languages, or
| at least that seems to be the scientific consensus. Hence
| the surprise on reading the kind of statement given in the
| introduction.
|
| Whether humans have exceptional language abilities or
| even "just" a bigger board to play on with the same
| basic facilities seems to be a completely different
| matter.
| earleybird wrote:
| I'm inclined to believe that animals that exhibit
| varying degrees of self-awareness have mental
| structures isomorphic to a recursive grammar. As such,
| perhaps using a recursive grammar is not distinctly a human
| trait.
| throwaway19972 wrote:
| I don't think that recursive grammar is linked to self
| awareness. Certainly not strongly. Many animals that don't
| appear to have the ability to interpret recursive grammar seem
| to have self awareness.
| ryandv wrote:
| They do, in the first section of the journal article itself:
|
| > Do any forms of thought--our knowledge of the world and
| ability to reason over these knowledge representations--require
| language (that is, representations and computations that sup-
| port our ability to generate and interpret meaningfully
| structured word sequences)?
|
| Emphasis on "word sequences," to the exclusion of, e.g. body
| language or sign language. They go on to discuss some of the
| brain structures involved in the production and interpretation
| of these word sequences:
|
| > Language production and language understanding are supported
| by an interconnected set of brain areas in the left hemisphere,
| often referred to as the 'language network'.
|
| It is these brain areas that form the basis of their testable
| claims regarding language.
|
| > Anyone can prove anything and its contrary about language if
| the term is given whatever meaning is needed for premises to
| match with the conclusion.
|
| This is why "coming to terms" on the definitions of words and
| what you mean by them should be the first step in any serious
| discussion if you aim to have any hope in hell of communicating
| precisely; it is also why you should be skeptical of political
| actors that insist on redefining the meanings of (especially
| well-known) terms in order to push an agenda. Confusing a term
| with its actual referent is exceedingly commonplace in the
| modern day.
| psychoslave wrote:
| I don't find these excerpts in the linked article. Are you
| consulting a document other than the one linked here?
| ryandv wrote:
| The paper itself: https://www.nature.com/articles/s41586-024-07522-w.epdf?shar...
| chongli wrote:
| Language is infinitely productive. Using a finite number of
| sounds or symbols, humans can produce unlimited utterance
| chains to communicate novel and complex ideas.
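|
| (A toy illustration in Python of that productivity: a finite,
| recursive rule set yielding unboundedly many distinct
| utterances. The two-rule grammar is invented for the example:)
|
|     import random
|
|     # S -> "the cat slept" | "Bobby said that " S
|     # One recursive rule already generates infinitely many
|     # distinct sentences from a finite vocabulary.
|     def sentence():
|         if random.random() < 0.5:
|             return "the cat slept"
|         return "Bobby said that " + sentence()
|
|     print(sentence())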
|
| Think about it: almost every nontrivial conversation you've had
| or comment/blog/article/book you've read constituted an
| entirely new (to you) utterance which you understood and which
| enabled you to acquire new ideas and information you had
| previously lacked. No non-human animals have demonstrated this
| ability. At best they are able to perform single-symbol
| utterances to communicate previously-understood concepts
| (hungry, sad, scared, tired) but are unable to combine them to
| produce a novel utterance, the way a child could tell you about
| her day:
|
| _"Today the teacher asked me to multiply 3 times 7 and I got
| the answer right away! Then Bobby farted and the whole class
| was laughing. At lunch I bit my apple and my tooth felt funny.
| I think it's starting to wiggle! Sally asked me if I could go
| to her house for a sleepover but I said I had to ask mom and
| dad first."_
| psychoslave wrote:
| > Language is infinitely productive. Using a finite number of
| sounds or symbols, humans can produce unlimited utterance
| chains to communicate novel and complex ideas.
|
| We maybe disagree, in the sense that it seems to be mixing
| indefinitely bounded expressiveness with actual unlimited
| expression production that could potentially be in a
| bijective relationship with an infinite set of
| expressions.
|
| We humans are mortal, and even at the whole humankind scale,
| we will produce a finite set of utterances.
|
| The main thing bringing so much flexibility to languages is
| our ability to reuse, fit and evolve them as we go through
| indefinitely many novel experiences of the world. So,
| something like tolerance of context change. But if we want to
| be fair in crediting admirably, unknowingly extensive
| creativeness, we should first consider the universe as a
| whole, with its permanent flow of novel context, which also
| includes all interpretations of itself through mere mortals
| such as ourselves.
| jjtheblunt wrote:
| The conclusion implied by the title seems self-evident to anyone
| who has seen any (at least) nonhuman mammalian predator.
| danielmarkbruce wrote:
| Or anyone who has done any thinking in their own brain.
| mcswell wrote:
| Nonhuman predators don't do math, or most of the other
| cognitive things which (I presume) the author of this article
| investigated in aphasics.
| heresie-dabord wrote:
| Whether in the predator or in the prey, the reward system of
| getting food and surviving through evolution in geological time
| would strengthen effective thinking.
|
| Then comes the need to transmit/transfer understanding.
|
| From the fine article:
|
| > various properties that human languages have--there are about
| 7,000 of them spoken and signed across the world--are optimized
| for efficiently transmitting information, making things easy to
| perceive, easy to understand, easy to produce and easy to learn
| for kids.
| kaiwen1 wrote:
| Here's what Helen Keller had to say about this in _The World I
| Live In_:
|
| "Before my teacher came to me, I did not know that I am. I lived
| in a world that was a no-world. I cannot hope to describe
| adequately that unconscious, yet conscious time of nothingness. I
| did not know that I knew aught, or that I lived or acted or
| desired. I had neither will nor intellect. I was carried along to
| objects and acts by a certain blind natural impetus. I had a mind
| which caused me to feel anger, satisfaction, desire. These two
| facts led those about me to suppose that I willed and thought. I
| can remember all this, not because I knew that it was so, but
| because I have tactual memory. It enables me to remember that I
| never contracted my forehead in the act of thinking. I never
| viewed anything beforehand or chose it. I also recall tactually
| the fact that never in a start of the body or a heart-beat did I
| feel that I loved or cared for anything. My inner life, then, was
| a blank without past, present, or future, without hope or
| anticipation, without wonder or joy or faith.
|
| It was not night--it was not day.
|
| . . . . .
|
| But vacancy absorbing space,
| And fixedness, without a place;
| There were no stars--no earth--no time--
| No check--no change--no good--no crime.
|
| My dormant being had no idea of God or immortality, no fear of
| death.
|
| I remember, also through touch, that I had a power of
| association. I felt tactual jars like the stamp of a foot, the
| opening of a window or its closing, the slam of a door. After
| repeatedly smelling rain and feeling the discomfort of wetness, I
| acted like those about me: I ran to shut the window. But that was
| not thought in any sense. It was the same kind of association
| that makes animals take shelter from the rain. From the same
| instinct of aping others, I folded the clothes that came from the
| laundry, and put mine away, fed the turkeys, sewed bead-eyes on
| my doll's face, and did many other things of which I have the
| tactual remembrance. When I wanted anything I liked,--ice-cream,
| for instance, of which I was very fond,--I had a delicious taste
| on my tongue (which, by the way, I never have now), and in my
| hand I felt the turning of the freezer. I made the sign, and my
| mother knew I wanted ice-cream. I "thought" and desired in my
| fingers. If I had made a man, I should certainly have put the
| brain and soul in his finger-tips. From reminiscences like these
| I conclude that it is the opening of the two faculties, freedom
| of will, or choice, and rationality, or the power of thinking
| from one thing to another, which makes it possible to come into
| being first as a child, afterwards as a man.
|
| Since I had no power of thought, I did not compare one mental
| state with another. So I was not conscious of any change or
| process going on in my brain when my teacher began to instruct
| me. I merely felt keen delight in obtaining more easily what I
| wanted by means of the finger motions she taught me. I thought
| only of objects, and only objects I wanted. It was the turning of
| the freezer on a larger scale. When I learned the meaning of "I"
| and "me" and found that I was something, I began to think. Then
| consciousness first existed for me. Thus it was not the sense of
| touch that brought me knowledge. It was the awakening of my soul
| that first rendered my senses their value, their cognizance of
| objects, names, qualities, and properties. Thought made me
| conscious of love, joy, and all the emotions. I was eager to
| know, then to understand, afterward to reflect on what I knew and
| understood, and the blind impetus, which had before driven me
| hither and thither at the dictates of my sensations, vanished
| forever.
|
| I cannot represent more clearly than any one else the gradual and
| subtle changes from first impressions to abstract ideas. But I
| know that my physical ideas, that is, ideas derived from material
| objects, appear to me first an idea similar to those of touch.
| Instantly they pass into intellectual meanings. Afterward the
| meaning finds expression in what is called "inner speech." When I
| was a child, my inner speech was inner spelling. Although I am
| even now frequently caught spelling to myself on my fingers, yet
| I talk to myself, too, with my lips, and it is true that when I
| first learned to speak, my mind discarded the finger-symbols and
| began to articulate. However, when I try to recall what some one
| has said to me, I am conscious of a hand spelling into mine.
|
| It has often been asked what were my earliest impressions of the
| world in which I found myself. But one who thinks at all of his
| first impressions knows what a riddle this is. Our impressions
| grow and change unnoticed, so that what we suppose we thought as
| children may be quite different from what we actually experienced
| in our childhood. I only know that after my education began the
| world which came within my reach was all alive. I spelled to my
| blocks and my dogs. I sympathized with plants when the flowers
| were picked, because I thought it hurt them, and that they
| grieved for their lost blossoms. It was two years before I could
| be made to believe that my dogs did not understand what I said,
| and I always apologized to them when I ran into or stepped on
| them.
|
| As my experiences broadened and deepened, the indeterminate,
| poetic feelings of childhood began to fix themselves in definite
| thoughts. Nature--the world I could touch--was folded and filled
| with myself. I am inclined to believe those philosophers who
| declare that we know nothing but our own feelings and ideas. With
| a little ingenious reasoning one may see in the material world
| simply a mirror, an image of permanent mental sensations. In
| either sphere self-knowledge is the condition and the limit of
| our consciousness. That is why, perhaps, many people know so
| little about what is beyond their short range of experience. They
| look within themselves--and find nothing! Therefore they conclude
| that there is nothing outside themselves, either.
|
| However that may be, I came later to look for an image of my
| emotions and sensations in others. I had to learn the outward
| signs of inward feelings. The start of fear, the suppressed,
| controlled tensity of pain, the beat of happy muscles in others,
| had to be perceived and compared with my own experiences before I
| could trace them back to the intangible soul of another. Groping,
| uncertain, I at last found my identity, and after seeing my
| thoughts and feelings repeated in others, I gradually constructed
| my world of men and of God. As I read and study, I find that this
| is what the rest of the race has done. Man looks within himself
| and in time finds the measure and the meaning of the universe."
| farts_mckensy wrote:
| Stix's claim appears to be unfalsifiable. In scientific and
| philosophical discourse, a proposition must be falsifiable--there
| must be a conceivable empirical test that could potentially
| refute it. This criterion is fundamental for meaningful inquiry.
|
| Several factors contribute to the unfalsifiability of this claim:
|
| Subjectivity of Thought: Thought processes are inherently
| internal and subjective. There is no direct method to observe or
| measure another being's thoughts without imposing interpretative
| frameworks influenced by social and material contexts.
|
| Defining Language and Thought: Language is not merely a
| collection of spoken or written symbols; it is a system of signs
| embedded within social relations and power structures. If we
| broaden the definition of language to include any form of
| symbolic representation or communication--such as gestures,
| images, or neural patterns--then the notion of thought occurring
| without language becomes conceptually incoherent. Thought is
| mediated through these symbols, which are products of historical
| and material developments.
|
| Animal Cognition and Symbolic Systems: Observations of animals
| like chimpanzees engaging in strategic gameplay or crows crafting
| tools demonstrate complex behaviors. Interpreting these actions
| as evidence of thought devoid of language overlooks the
| possibility that animals utilize their own symbolic systems.
| These behaviors reflect interactions with their environment
| mediated by innate or socially learned symbols--a rudimentary
| form of language shaped by their material conditions.
|
| Limitations of Empirical Testing: To empirically verify that
| thought can occur without any form of language would require
| accessing cognitive processes entirely free from symbolic
| mediation. Given the current state of scientific methodologies--
| and considering that all cognitive processes are influenced by
| material and social factors--this is unattainable.
|
| Because of these factors, Stix's claim cannot be empirically
| tested in a way that could potentially falsify it. It resides
| outside the parameters of verifiable inquiry, highlighting the
| importance of recognizing the interplay between language,
| thought, and material conditions.
|
| Cognitive processes and language are deeply intertwined. Language
| arises from collective practice; it both shapes and is shaped by
| the material conditions of the environment. Thought is mediated
| through language, carrying the cognitive imprints of the material
| base. Even in non-human animals, the cognitive abilities we
| observe may be underpinned by forms of symbolic interaction with
| their environment--a reflection of their material engagement with
| the world.
|
| Asserting that language is not essential for thought overlooks
| the fundamental role that social and material conditions play in
| shaping both language and cognition. It fails to account for how
| symbolic systems--integral to language--are embedded in and arise
| from material realities.
|
| Certain forms of thought might appear to occur without human
| language, but this perspective neglects the intrinsic connection
| between cognition, language, and environmental conditions.
| Reasoning itself can be viewed as a form of internalized language
| --a symbolic system rooted in social and material contexts.
| Recognizing this interdependence is crucial for a comprehensive
| understanding of the nature of thought and the pivotal role
| language plays within it.
| slashdave wrote:
| You are just redefining symbols (language) as thought. This is
| semantic nonsense and purely circular reasoning.
| farts_mckensy wrote:
| You're not getting it. The very proposition of discussing
| cognitive processes as comprehensible without language
| inherently relies on circular reasoning. The claim that
| thought occurs without language cannot be falsified. To
| analyze or describe thought, we must use language, which is
| the very tool that shapes and defines that thought. The
| discussion itself becomes impossible if you remove language
| from the equation, meaning language and thought are co-
| constitutive.
|
| Just as Gödel showed that no formal system can be both
| complete and consistent, language as a system cannot fully
| encapsulate the entirety of cognitive processes without
| relying on foundational assumptions that it cannot internally
| validate. Attempting to describe thought without
| acknowledging this limitation is akin to seeking completeness
| in an inherently incomplete framework. Without language, the
| discussion becomes impossible, rendering the initial claim
| fundamentally flawed.
| slashdave wrote:
| You are under the false assumption that thought can only be
| described by language. Why are you constructing this false
| hierarchy? Furthermore, symbolic constructs are not by
| definition language. The opposite, really. Language cannot
| be formed without symbols. Symbols, however, do not need
| language.
| farts_mckensy wrote:
| How else can thought be described if not through
| language? I don't know what you mean by "symbolic
| constructs." Symbols are the foundation of language--
| they're not the opposite. There is no sense in which
| symbols exist outside of at the very least a
| protolinguistic system. Once you begin to associate
| sensory data with meaning, you are doing the work of
| creating language. To analyze or describe cognition, we
| must use language, which organizes symbols into
| meaningful constructs. That thought occurs without
| language is not even wrong per se. It's unfalsifiable,
| which frankly is worse than being wrong in a scientific
| context. As Wittgenstein puts it, 'The limits of my
| language mean the limits of my world.' Without language,
| discussing thought is impossible, making the claim that
| thought occurs without language scientifically untenable.
| It is an attempt to position thought as the
| transcendental signified.
| slashdave wrote:
| Yes, you need language to describe (discuss) something.
| But not everything that exists must have a description.
| Furthermore, meaningful does not require organization.
|
| If you stand outside under the sun, do you have to be
| able to write the word "sun" in order to feel warm?
| farts_mckensy wrote:
| You're sidestepping the problem. Feeling warmth is a
| sensory issue. Connecting the fact that you're feeling
| warm with the fact that you're in the sun is cognition.
| In order to do that, you are doing the work of creating
| language. Sun equals warm.
| GoblinSlayer wrote:
| Thought is observable
| https://www.biorxiv.org/content/10.1101/2021.02.02.429430v1
| farts_mckensy wrote:
| I didn't dispute the idea that thought is observable.
| Animats wrote:
| This is an important result.
|
| The actual paper [1] says that functional MRI (which is measuring
| which parts of the brain are active by sensing blood flow)
| indicates that different brain hardware is used for non-language
| and language functions. This has been suspected for years, but
| now there's an experimental result.
|
| What this tells us for AI is that we need something else besides
| LLMs. It's not clear what that something else is. But, as the
| paper mentions, the low-end mammals and the corvids lack language
| but have some substantial problem-solving capability. That's seen
| down at squirrel and crow size, where the brains are tiny. So if
| someone figures out how to do this, it will probably take less
| hardware than an LLM.
|
| This is the next big piece we need for AI. No idea how to do
| this, but it's the right question to work on.
|
| [1]
| https://www.nature.com/articles/s41586-024-07522-w.epdf?shar...
| HarHarVeryFunny wrote:
| Brain size isn't necessarily a very good correlate of
| intelligence. For example dolphins and elephants have bigger
| brains than humans, and sperm whales have much bigger brains
| (5x by volume). Neanderthals also had bigger brains than modern
| humans, but are not thought to have been more intelligent.
|
| A crow has a small brain, but also has very small neurons, so
| ends up having 1.5B neurons, similar to a dog or some monkeys.
| card_zero wrote:
| Not sure neuron number correlates to smarts, either.
|
| https://www.scientificamerican.com/article/gut-second-brain/
|
| There are 100 million in my gut, but it doesn't solve any
| problems that aren't about poop, as far as I know.
|
| https://en.wikipedia.org/wiki/List_of_animals_by_number_of_n...
|
| If the suspiciously round number is accurate, this puts the
| human gut somewhere between a golden hamster and Ansell's
| mole-rat, and about level with a short-palated fruit bat.
| HarHarVeryFunny wrote:
| Agreed. It's architecture that matters, although for a
| given brain architecture (e.g. species) there might be
| benefits to scale: mega-brain vs. pea-brain.
|
| I was just pointing out that a crow's brain is built on a
| more advanced process node than our own. Smaller
| transistors.
| Animats wrote:
| That makes sense. Birds are very weight-limited, so
| there's evolutionary pressure to keep the mass of the
| control system down.
| readthenotes1 wrote:
| I suspect there is more going on with your gut neurons than
| you would expect. If nothing else, the vagus nerve is a
| direct communication link.
|
| I like to think that it is my gut brain that is telling me
| that it's okay to have that ice cream...
| kridsdale1 wrote:
| Don't assume whales are less intelligent than humans. They're
| tuned for their environment. They won't assemble machines
| with their flippers but let's toss you naked in the pacific
| and see if you can communicate and collaborate with peers
| 200km away on complex hunting strategies.
| batch12 wrote:
| Let's toss a whale on land and see if it can communicate
| and collaborate with peers 10 ft away on anything. I don't
| think being tuned to communicate underwater makes them more
| intelligent than humans.
| ninetyninenine wrote:
| > I don't think being tuned to communicate underwater
| makes them more intelligent than humans.
|
| You're responding to a claim that was never made. The claim
| was don't assume humans are smarter than whales. Nobody
| said whales are more intelligent than humans. He just
| said don't assume.
| BoingBoomTschak wrote:
| Why would he not "assume" that when humans have shaped
| their world so far beyond what it was, creating intricate
| layers of art, culture and science; even going into space
| or in the air? Man collectively tamed nature and the rest
| of the animal kingdom in a way that no beast ever has.
|
| Anyway, this is just like solipsism, you won't find a
| sincere one outside the asylum. Every Reddit intellectual
| writing such tired drivel as "who's to say humans are
| more intelligent than beasts?" deep down knows the score.
| ninetyninenine wrote:
| > Why would he not "assume" that when humans have shaped
| their world so far beyond what it was, creating intricate
| layers of art, culture and science; even going into space
| or in the air? Man collectively tamed nature and the rest
| of the animal kingdom in a way that no beast ever has.
|
| Because whales or dolphins didn't evolve hands. Hands are
| a foundational prerequisite for building technology. So
| if whales or dolphins had hands we don't know if they
| would develop technology that can rival us.
|
| Because we don't know, that's why he says don't assume.
| This isn't a "deep down we know" thing like your more
| irrational form of reasoning. It is a logical conclusion:
| we don't know. So don't assume.
| BoingBoomTschak wrote:
| It is very naive to think that the availability of such
| tools isn't partly responsible for that intelligence; "We
| shape our tools and thereafter our tools shape us". And
| it seems too man-centric of an excuse: you can see all
| our civilization being built on hands, so you state that
| there can't be a way without them.
|
| The "they MIGHT be as intelligent, just lacking hands"
| theory can't have the same weight as "nah" in an honest
| mind seeing the overwhelming clues (yes, not proof, if
| that's what you want) against it. Again, same way that
| you can't disprove solipsism.
| ninetyninenine wrote:
| The difference is that my conclusion is logical and yours
| is an assumption.
| winwang wrote:
| Conclusion noted: nuke the whales before they nuke us.
|
| (/s)
| FL33TW00D wrote:
| It's probably more relevant to compare intraspecies rather
| than interspecies.
|
| And it turns out that human brain volume and intelligence are
| moderately-highly correlated [1][2]!
|
| [1]: https://pmc.ncbi.nlm.nih.gov/articles/PMC7440690/
| [2]: https://www.sciencedirect.com/science/article/abs/pii/S01602...
| yurimo wrote:
| Right, but what is also important to remember is that while
| size matters, the key here is the complexity of the neural
| circuits. The human brain has a lot more connections and is
| much more complex.
| og_kalu wrote:
| Dolphins, Orcas, whales and other intelligent cetaceans do
| not have Hands and live in an environment without access to a
| technological accelerator like fire.
|
| The absence of both of these things is an incredible crippler
| for technological development. It doesn't matter how
| intelligent you are, you're never going to achieve much
| technologically without these.
|
| I don't think the brain size correlation is as straightforward
| as 'bigger = better' every time, but we simply don't know how
| intelligent most of these species are. Land and Water are
| completely different beasts.
| HarHarVeryFunny wrote:
| Intelligence isn't measured by ability to create technology
| or use tools.
|
| Intelligence is the ability to use experience to predict
| your environment and the outcomes of your own actions. It's
| a tool for survival.
| og_kalu wrote:
| Okay and how have we determined we have more intelligence
| than those species with this measure ?
| HarHarVeryFunny wrote:
| Clearly we haven't, given that there is very little
| agreement as to what intelligence is. This is just my
| definition, although there's a lot behind why I define it
| this way.
|
| However, I do think that a meaningful intelligence
| comparison between humans and dolphins, etc, would
| conclude that we are more intelligent, especially based
| on our reasoning/planning (= multi-step prediction)
| abilities, which allows us not only to predict our
| environment but also to modify it to our desires in very
| complex ways.
| og_kalu wrote:
| >However, I do think that a meaningful intelligence
| comparison between humans and dolphins, etc, would
| conclude that we are more intelligent, especially based
| on our reasoning/planning (= multi-step prediction)
| abilities
|
| I'm not sure how you would make meaningful comparisons
| here. We can't communicate to them as they communicate
| and we live in almost completely different environments.
| Any such comparison would be extremely biased to us.
|
| >which allows us not only to predict our environment but
| also to modify it to our desires in very complex ways.
|
| We modify our environment mostly through technology.
| Intelligence is a big part of technology sure but it's
| not the only part of it and without the other parts
| (hands with opposable thumbs, fire etc), technology as we
| know it wouldn't exist and our ability to modify the
| environment would seem crippled to any outside observer
| regardless of how intelligent we may be.
|
| It's not enough to think that the earth revolves around
| the sun, we need to build the telescopes (with hands and
| materials melted down and forged with fire) to confirm
| it.
|
| It's not enough to dream and devise of flight, we need
| the fire to create the materials that we dug with our
| hands and the hands to build them.
|
| It's not enough to think that oral communication is
| insufficient for transmitting information through
| generations. What else will you do without opposable
| thumbs or an equivalent ?
|
| Fire is so important for so many reasons but one of the
| biggest is that it was an easy source of large amounts of
| energy that allowed us to bootstrap technology. Where's
| that easy source of energy underwater ?
|
| Without all the other aspects necessary for technology,
| we are relegated to hunter/gatherer levels of influencing
| the environment at best. Even then, we still crafted
| tools that creatures without opposable thumbs would never
| be able to craft.
| HarHarVeryFunny wrote:
| Another angle to look at intelligence is that not all
| species need it, or need it to same degree. If you are a
| cow, or a crocodile, then you are a 1-trick grass-
| munching or zebra-munching pony, and have no need for
| intelligence. A generalist species like humans, that
| lives in a hugely diverse set of environments, with a
| hugely diverse set of food sources, has evolved
| intelligence (which in turn supports further
| generalization) to cope with this variety.
|
| At least to our own perception, and degree of
| understanding, it would appear that the ocean habitat(s)
| of dolphins are far less diverse and demanding.
| Evidently complex enough to drive their intelligence
| though, so perhaps we just don't understand the
| complexity of what they've evolved to do.
| og_kalu wrote:
| Evolution is a blind, dumb optimizer. You can have a
| mutation that is over-kill and if it doesn't actively
| impede you in some way, it just stays. It's not like it
| goes, "Ok we need to reduce this to the point where it's
| just beneficial enough etc".
|
| That said, i definitely would not say the Ocean is
| particularly less diverse or demanding.
|
| Even with our limited understanding, there must be
| adaptations for pressure, salinity, light, energy,
| buoyancy, underwater currents, etc. that all vary
| significantly by depth and location.
|
| And the bottlenose dolphin for instance lives in every
| ocean of the world except the Arctic and the Antarctic
| oceans.
| KoolKat23 wrote:
| > What this tells us for AI is that we need something else
| besides LLMs.
|
| Basically we need multimodal LLMs (terrible naming as it's not
| an LLM then but still).
| Animats wrote:
| I don't know what we need. Nor does anybody else, yet. But we
| know what it has to _do_. Basically what a small mammal or a
| corvid does.
|
| There's been progress. Look at this 2020 work on neural net
| controlled drone acrobatics.[1] That's going in the right
| direction.
|
| [1] https://rpg.ifi.uzh.ch/docs/RSS20_Kaufmann.pdf
| fuzzfactor wrote:
| You could say language is just the "communication module",
| but there has got to be another whole underlying interface
| where non-verbal thoughts are modulated/demodulated to
| conform to the expected language, whether or not
| communication is actually on the agenda.
| bbor wrote:
| Well said! This is a great restatement of the core setup
| of the Chomskian "Generative Grammar" school, and I think
| it's an undeniably productive one. I haven't read this
| researcher's full paper, but I would be sad (tho not
| shocked...) if it didn't cite Chomsky up front. Beyond
| your specific point re:interfaces--which I recommend the
| OG _Syntactic Structures_ for more commentary on--he's
| been saying what she's saying here for about half a
| century. He's too humble/empirical to ever say it
| without qualifiers, but IMO the truth is clear when
| viewed holistically: language is a byproduct of
| hierarchical thought, not the progenitor.
|
| This (awesome!) researcher would likely disagree with
| what I've just said based on this early reference:
| In the early 2000s I really was drawn to the hypothesis
| that maybe humans have some special machinery that is
| especially well suited for computing hierarchical
| structures.
|
| ...with the implication that they're not, actually. But I
| think that's an absurd overcorrection for anthropological
| bias -- humans are uniquely capable of a whole host of
| tasks, and the gradation is clearly a qualitative one. No
| ape has ever asked a question, just like no plant has
| ever conceptualized a goal, and no rock has ever computed
| indirect reactions to stimuli.
| slibhb wrote:
| Chomsky is shockingly _un_humble. I admire him but he's
| a jerk who treats people who disagree with him with
| contempt. It's fun to read him doing this but it's
| uncollegial (to say the least).
|
| Also, calling "generative grammar" productive seems wrong
| to me. It's been around for half a century -- what tools
| has it produced? At some point theory needs to come into
| contact with empirical reality. As far as I know,
| generative grammar has just never gotten to this point.
| keybored wrote:
| Who has he mistreated?
| calf wrote:
| Nobody, people are just crying because Chomsky calls them
| out, rationally, on their intellectual and/or political
| bullshit, and this behavior is known as projection.
| bbor wrote:
| Well, it's the basis of programming languages. That seems
| pretty helpful :) Otherwise it's hard to measure what
| exactly "real world utility" looks like. What have the
| other branches of linguistics brought us? What has any
| human science brought us, really? Even the most empirical
| one, behavioral psychology, seems hard to correlate with
| concrete benefits. I guess the best case would be "helps
| us analyze psychiatric drug efficacy"?
|
| Generally, I absolutely agree that he is not humble in
| the sense of expressing doubt about his strongly held
| beliefs. He's been saying pretty much the same things for
| decades, and does not give much room for disagreement
| (and ofc this is all ratcheted up in intensity in his
| political stances). I'm using humble in a slightly
| different way, tho: he insists on qualifying basically
| all of his statements about archaeological anthropology
| with "we don't have proof yet" and "this seems likely",
| because of his fundamental belief that we're in a "pre-
| Galilean" (read: shitty) era of cognitive science.
|
| In other words: he's absolutely arrogant about his core
| structural findings and the utility of his program, but
| he's humble about the final application of those findings
| to humanity.
| soulofmischief wrote:
| I think one big problem is that people understand LLMs as
| text-generation models, when really they're just sequence
| prediction models, which is a highly versatile, but data-
| hungry, architecture for encoding relationships and
| knowledge. LLMs are tuned for text input and output, but
| they just work on numbers and the general transformer
| architecture is highly generalizable.
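|
| To make that concrete, here is a toy sketch (in Python, with
| made-up data) of sequence prediction over bare integer IDs --
| the model has no idea whether they encode text, MIDI notes,
| or sensor readings:
|
|     from collections import Counter, defaultdict
|
|     # Toy next-token predictor over integer IDs.
|     counts = defaultdict(Counter)
|
|     def train(seq):
|         # Count how often each ID follows each other ID.
|         for a, b in zip(seq, seq[1:]):
|             counts[a][b] += 1
|
|     def predict(prev):
|         # Most frequently seen next ID given the previous one.
|         return counts[prev].most_common(1)[0][0]
|
|     train([3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5])
|     print(predict(1))  # an ID that followed 1 in training
|
| An LLM is this idea scaled up enormously, with the counting
| table replaced by a learned function of the whole prefix.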
| NoMoreNicksLeft wrote:
| In these discussions, I always knee-jerk into thinking
| "why don't they just look inward on their own minds". But
| the truth is, most people don't have much to gaze upon
| internally... they're the meat equivalent of an LLM that
| can sort of sound like it makes sense. These are the
| people always bragging about how they have an "internal
| monologue" and that those that don't are aliens or
| psychotics or something.
|
| The only reason humans have that "communication model" is
| because that's how you model other humans you speak to.
| It's a faculty for rehearsing what you're going to say to
| other people, and how they'll respond to it. If you have
| any profound thoughts at all, you find that your spoken
| language is deficient to even transcribe your thoughts,
| some "mental tokens" have no short phrases that even
| describe them.
|
| The only real thoughts you have are non-verbal. You can
| see this sometimes in stupid schoolchildren who have
| learned all the correct words to regurgitate, but those
| never really clicked for them. The mildly clever teachers
| always assume that if they thoroughly practice the
| terminology, it will eventually be linked with the
| concepts themselves and they'll have fully learned it.
| What's really happening is that there's not enough mental
| machinery underneath for those words to ever be anything
| to link up with.
| soulofmischief wrote:
| This view represents one possible subjective experience
| of the world. But there are many different possible ways
| a human brain can learn to experience the world.
|
| I am a sensory thinker; I often think and internally
| express myself in purely images or sounds. There are,
| however, some kinds of thoughts I've learned I can only
| fully engage with if I speak to myself out loud or at
| least inside of my head.
|
| The most appropriate mode of thought depends upon the
| task at hand. People don't typically brag about having
| internal monologues. They're just sharing their own
| subjective internal experience, which is no less valid
| than a chiefly nonverbal one.
| KoolKat23 wrote:
| As far as I understand it, it's all just output: speaking
| is simply output enclosed in tags that the body can act
| on, much like inline code output from an LLM.
|
| E.g. the neural electrochemical output has a specific
| sequence that triggers the production of a certain
| hormone in your pituitary gland, and the hormone travels
| to the relevant body function, activating or stopping it.
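|
| A crude sketch of that picture (hypothetical tag names,
| not any real API): the model just emits a sequence, and a
| thin layer scans it for tagged spans and routes each one
| to the right actuator.
|
|     import re
|
|     def dispatch(output, handlers):
|         # Route each <tag>...</tag> span in the model's
|         # output to the handler registered for that tag.
|         for tag, body in re.findall(r"<(\w+)>(.*?)</\1>", output):
|             handlers[tag](body)
|
|     handlers = {
|         "speak": lambda text: print("say:", text),
|         "hormone": lambda name: print("release:", name),
|     }
|     dispatch("<speak>hi</speak><hormone>adrenaline</hormone>",
|              handlers)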
| KoolKat23 wrote:
| I think you may underestimate what these models do.
|
| Proper multimodal models natively consider whatever input
| you give them, store the useful information in an
| abstracted form (i.e. not just text), building its world
| model, and then output in whatever format you want it to.
| It's no different to a mammal's; just the inputs are perhaps
| different. Instead of relying on senses, they rely on text,
| video, images and sound.
|
| In theory you could connect it to a robot and it could
| gather real world data much like a human, but would
| potentially be limited to the number of sensors/nerves it
| has. (on the plus side it has access to all recorded data
| and much faster read/write than a human).
| danielmarkbruce wrote:
| Is it important? To who? Anyone with half a brain is aware that
| language isn't the only way to think. I can think my way
| through all kinds of things in 3-d space without a single word
| uttered in any internal monologue and I'm not remotely unique -
| this kind of thing is put in all kinds of math and IQ-like
| tests one takes as a child.
| voxl wrote:
| Before you say things this patiently dumb you should probably
| wonder what question the researchers are actually interested
| in and why your average experience isn't sufficient proof.
| gotoeleven wrote:
| I am 3-d rotating this comment in my head right now
| orhmeh09 wrote:
| *patently
| danielmarkbruce wrote:
| It's "patently" and maybe understand the definition of
| "average" before using it.
|
| Once you've figured out how to use language, explain why
| this is important and to who. Then maybe what the upshot
| will be. The fact that someone has proven something to be
| true doesn't make it important.
|
| The comment I replied to made it sound like it's important
| to the field of AI. It is not. Almost zero serious
| researchers think LLMs all by themselves are "enough".
| People are working on all manner of models and systems
| incorporating all kinds of things "not LLM". Practically no
| one who actually works in AI reads this paper and changes
| anything, because it only proves something they already
| believed to be true and act accordingly.
| jebarker wrote:
| > What this tells us for AI is that we need something else
| besides LLMs
|
| Not to over-hype LLMs, but I don't see why this result says
| this. AI doesn't need to do things the same way as evolved
| intelligence has.
| weard_beard wrote:
| To a point. If you drill down this far into the fundamentals
| of cognition you begin to define it. Otherwise you may as
| well call a cantaloupe sentient
| jebarker wrote:
| I don't think anyone defines AI as "doing the thing that
| biological brains do" though, we define it in terms of
| capabilities of the system.
| weard_beard wrote:
| I think if you gave it the same biological inputs as a
| biological brain you would quickly see the lack of
| capabilities in any man-made system.
| Dylan16807 wrote:
| Okay, but does that help us reach any meaningful
| conclusions? For example, okay some AI system doesn't
| have the capabilities of an auditory cortex or
| somatosensory cortex. Is there a reason for me to think
| it needs that?
| weard_beard wrote:
| Name a creature on earth without one.
|
| Imagine trying to limit, control, or explain a being
| without familiar cognitive structures.
|
| Is there a reason to care about such unfamiliar
| modalities of cognition?
| Dylan16807 wrote:
| > Name a creature on earth without one.
|
| Anything that doesn't have a spine, I'm pretty sure.
|
| Also if we look at just auditory, tons of creatures are
| deaf and don't need that.
|
| > Imagine trying to limit, control, or explain a being
| without familiar cognitive structures.
|
| I don't see why any of that affects whether it's
| intelligent.
| heavyset_go wrote:
| It doesn't need to, but evolved intelligence is the only
| intelligence we know of.
|
| Similar reason we look for markers of Earth-based life on
| alien planets: it's the only example we've got of it
| existing.
| zbyforgotp wrote:
| Ok, but at least it suggests that this other thing might be
| more efficient in some ways.
| awongh wrote:
| One reason might be that LLMs are successful because of the
| architecture, but also, just as importantly because they can
| be trained over a volume and diversity of human thought
| that's encapsulated in language (that is on the internet).
| Where are we going to find the equivalent data set that will
| train this other kind of thinking?
|
| OpenAI o1 seems to be trained on mostly synthetic data, but
| it makes intuitive sense that LLMs work so well because we
| had the data lying around already.
| jebarker wrote:
| I think the data is way more important for the success of
| LLMs than the architecture although I do think there's
| something important in the GPT architecture in particular.
| See this talk for why: [1]
|
| Warning, watch out for waving hands: The way I see it is
| that cognition involves forming an abstract representation
| of the world and then reasoning about that representation.
| It seems obvious that non-human animals do this without
| language. So it seems likely that humans do too and then
| language is layered on top as a turbo boost. However, it
| also seems plausible that you could build an abstract
| representation of the world through studying a vast amount
| of human language and that'll be a good approximation of
| the real-world too and furthermore it seems possible that
| reasoning about that abstract representation can take place
| in the depths of the layers of a large transformer. So it's
| not clear to me that we're limited by the data we have or
| necessarily need a different type of data to build a
| general AI although that'll likely help build a better
| world model. It's also not clear that an LLM is incapable
| of the type of reasoning that animals apply to their
| abstract world representations.
|
| [1] https://youtu.be/yBL7J0kgldU?si=38Jjw_dgxCxhiu7R
| tsimionescu wrote:
| > However, it also seems plausible that you could build
| an abstract representation of the world through studying
| a vast amount of human language and that'll be a good
| approximation of the real-world too and furthermore it
| seems possible that reasoning about that abstract
| representation can take place in the depths of the layers
| of a large transformer.
|
| While I agree this is possible, I don't see why you'd
| think it's likely. I would instead say that I think it's
| _unlikely_.
|
| Human communication relies on many assumptions of a
| shared model of the world that are rarely if ever
| discussed explicitly, and without which certain concepts
| or at least phrases become ambiguous or hard to
| understand.
| necovek wrote:
| GP argument seems to be about "thinking" when restricted
| to knowledge through language, and "possible" is not the
| same as "likely" or "unlikely" -- you are not really
| disagreeing, since either means "possible".
| tsimionescu wrote:
| GP said plausible, which does mean likely. It's possible
| that there's a teapot in orbit around Jupiter, but it's
| not plausible. And GP is specifically saying that by
| studying human language output, you could plausibly learn
| about the world that gave birth to the internal models
| that language is used to exteriorize.
| necovek wrote:
| If we are really nitpicking, they said it's _plausible_
| you could build an abstract representation of the world
| by studying language-based data, but that it 's
| _possible_ it could be made to effectively reason too.
|
| Anyway, it seems to me we are generally all in agreement
| (in this thread, at least), but are now being really
| picky about... language :)
| necovek wrote:
| I agree we are not limited by the data set size: all
| humans learn language from a much smaller language
| training set (just look at kids and compare them to
| LLMs).
|
| OTOH, humans (and animals) do get other data feeds
| (visual, context, touch/pain, smell, internal balance
| "sensors"...) that we develop as we grow and tie that to
| learning about language.
|
| Obviously, LLMs won't replicate that since even adults
| struggle to describe these verbally.
| BurningFrog wrote:
| Videos are a rich set of non verbal data that could be used
| to train AIs.
|
| Feed it all the video ever recorded, hook it up to web
| cams, telescopes, etc. This says a lot about how the
| universe works, without using a single word.
| Animats wrote:
| > One reason might that LLMs are successful because of the
| architecture, but also, just as importantly because they
| can be trained over a volume and diversity of human thought
| that's encapsulated in language (that is on the internet).
| Where are we going to find the equivalent data set that
| will train this other kind of thinking?
|
| Probably by putting simulated animals into simulated
| environments where they have to survive and thrive.
|
| Working at animal level is uncool, but necessary for
| progress. I had this argument with Rod Brooks a few decades
| back. He had some good artificial insects, and wanted to
| immediately jump to human level, with a project called
| Cog.[1] I asked him why he didn't go for mouse level AI
| next. He said "Because I don't want to go down in history
| as the inventor of the world's greatest artificial mouse."
|
| Cog was a dud, and Brooks goes down in history as the
| inventor of the world's first good robotic vacuum cleaner.
|
| [1] https://en.wikipedia.org/wiki/Cog_(project)
| at_a_remove wrote:
| "Where are we going to find the equivalent data set that
| will train this other kind of thinking?"
|
| Just a personal opinion, but in my shitty _When
| H.A.R.L.I.E. Was One_ (and others) unpublished fiction
| pastiche (ripoff, really), I had the nascent AI stumble
| upon Cyc as its base for the world and "thinking about
| how to think."
|
| I never thought that Cyc was enough, but I do think that
| something Cyc-like is necessary as a component, a seed
| for growth, until the AI begins to make the transition
| from the formally defined, vastly interrelated frames and
| facts in Cyc to being able to grow further and
| understand the much less formal knowledgebase you might
| find in, say Wikipedia.
|
| Full agreement; your animal model is only sensible.
| If you think about macaques, they have a limited range of
| vocalization once they hit adulthood. Note that the
| mothers almost never make a noise at their babies.
| Lacking language, when a mother wants to train an infant,
| _she hurts it_. (Shades of _Blindsight_ there) She picks
| up the infant, grasps it firmly, and nips at it. The baby
| tries to get away, but the mother holds it and keeps at
| it. Their communication is pain. Many animals do this.
| But they also learn threat displays, the _promise_ of
| pain, which goes beyond mere carrot and stick.
|
| The more sophisticated multicellular animals (let us say
| birds, reptiles, mammals) have to learn to model the
| behavior of other animals in their environment: to prey
| on them, to avoid being prey. A pond is here. Other
| animals will also come to drink. I could attack them and
| eat them. And with the macaques, "I must scare the baby
| and pain it a bit because I no longer want to breastfeed
| it."
|
| Somewhere along the line, modeling other animals (in-
| species or out-species) hits some sort of self-reflection
| and the recursion begins. That, I think, is a crucial
| loop to create the kind of intelligence we seek. Here I
| nod to Egan's _Diaspora_.
|
| Looping back to your original point about the training
| data, I don't think that loop is _sufficient_ for an AGI
| to do anything but think about itself, and that's where
| something like Cyc would serve as a framework for it to
| enter into the knowledge that it isn't merely _cogito
| ergo sum_-ing in a void, but that it is part of a world
| with rules stable enough that it might reason, rather
| than "merely" statistically infer. And as part of the
| world (or your simulated environment), it can engage in
| new loops, feedback between its actions and results.
| jamiek88 wrote:
| I like your premise! And will check out Harlie!
| sokoloff wrote:
| > A pond is here. Other animals will also come to drink.
| I could attack them and eat them.
|
| Is that the dominant chain, or is the simpler "I've seen
| animals here before that I have eaten" or "I've seen
| animals I have eaten in a place that
| smelled/looked/sounded/felt like this" sufficient to
| explain the behavior?
| at_a_remove wrote:
| Could be! But then there are ambushes, driving prey into
| the claws of hidden allies, and so forth. Modeling the
| behavior of other animals will have to occur _without_
| reference to place in many instances.
| nickpsecurity wrote:
| I always start with God's design thinking it is best.
| That's our diverse, mixed-signal, brain architecture
| followed by a good upbringing. That means we need to train
| brain-like architectures in the same way we train children.
| So, we'll need whatever data they needed. Multiple streams
| for different upbringings, too.
|
| The data itself will be most senses collecting raw data
| about the world most of the day for 18 years. It might
| require a camera on the kid's head which I don't like. I
| think people letting a team record their life is more
| likely. Split the project up among many families running in
| parallel, 1-4 per grade/year. It would probably cost a few
| million a year.
|
| (Note: Parent changes might require an integration step
| during AI training or showing different ones in the early
| years.)
|
| The training system would rapidly scan this information in.
| It might not be faster than human brains. If it is, we can
| create them quickly. That's the passive learning part,
| though.
|
| Human training involves asking lots of questions based on
| internal data, random exploration (esp play) with
| reinforcement, introspection/meditation, and so on. Self-
| driven, generative activities whose outputs become inputs
| into the brain system. This training regimen will probably
| need periodic breaks from passive learning to ask questions
| or play which requires human supervision.
|
| Enough of this will probably produce... disobedient,
| unpredictable children. ;) Eventually, we'll learn how to
| do AI parenting where the offspring are well-behaved,
| effective servants. Those will be fine-tuned for practical
| applications. Later, many more will come online which are
| trained by different streams of life experience, schooling
| methods, etc.
|
| That was my theory. I still don't like recording people's
| lives to train AI's. I just thought it was the only way to
| build brain-like AI's and likely to happen (see Twitch).
|
| My LLM concept was to do the same thing with K-12 education
| resources, stories, kids games, etc. Parents already could
| tell us exactly what to use to gradually build them up
| since they did that for their kids year by year. Then,
| several career tracks layering different college books and
| skill areas. I think it would be cheaper than GPT-4 with
| good performance.
| uoaei wrote:
| In the high-entropy world we have, we are forced to assume
| that the first thing that arises as a stable pattern is
| inevitably the most likely, and the most likely to work.
| There is no other pragmatic conclusion to draw.
|
| For more, see "Assembly Theory".
| numpad0 wrote:
| The title doesn't mean bullet trains can't fly, but it does
| imply that what we call flight could be more than moving
| fast, and that the effects of wings might be worth
| discussing.
| lanstin wrote:
| Language models would seem to be exquisitely tied to the way
| that evolved intelligence has formulated its society and
| training.
|
| An Ab Initio AGI would maybe be free of our legacy, but LLMs
| certainly are not.
|
| I would expect a ship-like intelligence a la the Culture
| novels to have non-English based cognition. As far as we can
| tell, our own language generation is post-hoc explanation for
| thought more so than the embodiment of thought.
| theptip wrote:
| LLM as a term is becoming quite broad; a multi-modal
| transformer-based model with function calling / ReAct
| finetuning still gets called an LLM, but this scaffolding might
| be all that's needed.
|
| I'd be extremely surprised if AI recapitulates the same
| developmental path as humans did; evolution vs. next-token
| prediction on an existing corpus are completely different
| objective functions and loss landscapes.
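|
| For concreteness, the pretraining objective is roughly the
| following (a minimal sketch with a stand-in "model"; real
| training batches this and backpropagates through it):
|
|     import math
|
|     def next_token_loss(model, tokens):
|         # Average cross-entropy the model assigns to each
|         # actual next token, given the preceding prefix.
|         losses = []
|         for t in range(len(tokens) - 1):
|             p = model(tokens[:t + 1])  # distribution over vocab
|             losses.append(-math.log(p[tokens[t + 1]]))
|         return sum(losses) / len(losses)
|
|     # Stand-in model: uniform over a 4-token vocabulary.
|     uniform = lambda prefix: {0: .25, 1: .25, 2: .25, 3: .25}
|     print(next_token_loss(uniform, [0, 1, 2, 3]))  # log(4) ~ 1.386
|
| Evolution optimizes nothing like this, which is why the loss
| landscapes differ so much.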
| fhdsgbbcaA wrote:
| I asked both OpenAI and Claude the same difficult programming
| question. Each gave a nearly identical response down to the
| variable names and example values.
|
| I then looked it up and they had each copy/pasted the same
| Stack overflow answer.
|
| Furthermore, the answer was extremely wrong, the language I
| used was superficially similar to the source material, but
| the programming concepts were entirely different.
|
| What this tells me is there is clearly no "reasoning"
| happening whatsoever with either model, despite marketing
| claiming as such.
| alphan0n wrote:
| What was the question?
| fhdsgbbcaA wrote:
| Had to do with connection pooling.
| alphan0n wrote:
| Small wonder you received a sub-optimal response.
| fhdsgbbcaA wrote:
| I'll say the unholy combination of managing the python
| GIL, concurrency, and connection reuse is not my favorite
| topic.
| vundercind wrote:
| They don't _wonder_. They'd happily produce entire novels
| of (garbage) text if trained on gibberish. They wouldn't be
| confused. They wouldn't hope to puzzle out the meaning.
| There is none, and they work just fine anyway. Same for
| real language. There's no meaning, to them (there's not
| really a "to" either).
|
| The most interesting thing about LLMs is probably how much
| relational information turns out to be encoded in large
| bodies of our writing, in ways that fancy statistical
| methods can access. LLMs aren't thinking, or even in the
| same ballpark as thinking.
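|
| You can see a miniature of that with nothing fancier than
| co-occurrence counts (a toy sketch over made-up sentences):
|
|     from collections import Counter
|     from itertools import combinations
|
|     docs = ["the cat chased the mouse",
|             "the dog chased the cat",
|             "stocks fell as markets closed"]
|
|     cooc = Counter()
|     for doc in docs:
|         for pair in combinations(set(doc.split()), 2):
|             cooc[frozenset(pair)] += 1
|
|     # Purely statistical counts already carry relational
|     # structure: "cat" pairs with "chased", never "markets".
|     print(cooc[frozenset(("cat", "chased"))])   # 2
|     print(cooc[frozenset(("cat", "markets"))])  # 0
|
| No meaning anywhere in that loop, yet the structure falls out.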
| theptip wrote:
| Humans copy/paste from SO too. Does that prove humans can't
| reason?
| fuzzfactor wrote:
| >Does that prove humans can't reason?
|
| It could be said they reason, just not as well as the ones
| that don't need SO.
| fhdsgbbcaA wrote:
| If you don't read or understand the code, then no, you
| aren't reasoning.
|
| The condition of "some people are bad at thing" does not
| equal "computer better at thing than people", but I see
| this argument all the time in LLM/AI discourse.
| ninetyninenine wrote:
| >What this tells me is there is clearly no "reasoning"
| happening whatsoever with either model, despite marketing
| claiming as such.
|
| Not true. You yourself have failed at reasoning here.
|
| The problem with your logic is that you failed to identify
| the instances where LLMs have succeeded with reasoning. So
| if LLMs both fail and succeed it just means that LLMs are
| capable of reasoning and capable of being utterly wrong.
|
| It's almost cliche at this point. Tons of people see the
| LLM fail and ignore the successes, then they openly claim
| from a couple anecdotal examples that LLMs can't reason
| period.
|
| Like how is that even logical? You have contradictory
| evidence; therefore the LLM must be capable of BOTH failing
| and succeeding at reasoning. That's the most logical answer.
| haswell wrote:
| Success doesn't imply that "reasoning" was involved, and
| the definition of reasoning is extremely important.
|
| Apple's recent research summarized here [0] is worth a
| read. In short, they argue that what LLMs are doing is
| more akin to advanced pattern recognition than reasoning
| in the way we typically understand reasoning.
|
| By way of analogy, memorizing mathematical facts and then
| correctly recalling these facts does not imply that the
| person actually _understands_ how to arrive at the
| answer. This is why "show your work" is a critical aspect
| of proving competence in an education environment.
|
| An LLM providing useful/correct results only proves that
| it's good at surfacing relevant information based on a
| given prompt. That fact that it's trivial to cause bad
| results by making minor but irrelevant changes to a
| prompt points to something other than a truly reasoned
| response, i.e. a reasoning machine would not get tripped
| up so easily.
|
| - [0]
| https://x.com/MFarajtabar/status/1844456880971858028
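|
| The perturbation idea is easy to replicate in miniature (a
| sketch; the names and numbers are made up): hold the
| underlying problem fixed, vary the irrelevant surface
| details, and check whether answers stay stable.
|
|     import random
|
|     TMPL = ("{name} picks {n} apples on {day}. Then {name} "
|             "picks {m} more. How many apples does {name} have?")
|
|     def variants(k):
|         # Same problem each time; a reasoner should always
|         # answer n + m, whatever the name or day.
|         for _ in range(k):
|             n, m = random.randint(2, 9), random.randint(2, 9)
|             yield (TMPL.format(name=random.choice(["Ana", "Raj"]),
|                                day=random.choice(["Monday", "Friday"]),
|                                n=n, m=m), n + m)
|
|     for prompt, answer in variants(3):
|         print(answer, "<-", prompt)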
| ninetyninenine wrote:
| You're still suffering from the biases of the parent
| poster. You are picking and choosing papers that
| illustrate failure instances when there are also an equal
| number of papers that verify successful instances.
|
| It's bloody obvious that when I classify success I mean
| that the llm is delivering a correct and unique answer
| for a novel prompt that doesn't exist in the original
| training set. No need to go over the same tired analogies
| that have been regurgitated over and over again that you
| believe LLMs are reusing memorized answers. It's a stale
| point of view. The overall argument has progressed
| further than that, and we now need more complicated
| analysis of what's going on with LLMs
|
| Sources: https://typeset.io/papers/llmsense-harnessing-llms-for-high-...
|
| https://typeset.io/papers/call-me-when-necessary-llms-can-ef...
|
| And these two are just from a random google search.
|
| I can find dozens and dozens of papers illustrating
| failures and successes of LLMs which further nails my
| original point. LLMs both succeed and fail at reasoning.
|
| The main problem right now is that we don't really
| understand how LLMs work internally. Everyone who claims
| they know LLMs can't reason is just making huge leaps of
| irrational conclusions because not only does their
| conclusion contradict actual evidence but they don't even
| know how LLMs work because nobody knows.
|
| We only know how LLMs work at a high level and we only
| understand these things via the analogy of a best fit
| curve in a series of data points. Below this abstraction
| we don't understand what's going on.
| fhdsgbbcaA wrote:
| Claim is LLMs exhibit reasoning, particularly in coding
| and logic. Observation is mere parroting of training
| data. Observations trump claims.
| yapyap wrote:
| Lol, it's insane how some people will track everything back to
| AI
| tempodox wrote:
| Can't escape the hype.
| fhdsgbbcaA wrote:
| My first thought as well - "AGI via LLM" implies that our grey
| matter is merely a substrate for executing language tasks: just
| swap out bio-neurons for a few H100s and voila, super
| intelligence.
| mountainriver wrote:
| It's AGI via transformer
| NeuroCoder wrote:
| I'm not convinced the result is as important here as the
| methods. Separating language from complex cognition when
| evaluating individuals is difficult. But many of the people
| I've met in neuroscience that study language and cognitive
| processes do not hold the opinion that one is absolutely
| reliant on the other in all cases. It may have been a strong
| argument a while ago, but every time I've seen a presentation on
| this relationship it's been to emphasize the influence culture
| and language inevitably have on how we think about things. I'm
| sure some people believe that one cannot have complex thoughts
| without language, but most people I've met in speech neuro and
| language processing research find the idea ridiculous enough
| they wouldn't bother spending a few years on that kind of
| project just to disprove a theory.
|
| On the other hand, further understanding how to engage complex
| cognitive processes in nonverbal individuals is extremely
| useful and difficult to accomplish.
| red75prime wrote:
| > What this tells us for AI is that we need something else
| besides LLMs
|
| You mean besides a few layers of LLMs near input and output
| that deal with tokens? We have the rest of the layers.
| alephnerd wrote:
| Those "few layers" sum up all of linguistics.
|
| 1. Syntax
|
| 2. Semantics
|
| 3. Pragmatics
|
| 4. Semiotics
|
| These are the layers you need to solve.
|
| Saussure already pointed out these issues over a century ago,
| and Linguists turned ML Researchers like Stuart Russell and
| Paul Smolensky tried in vain to resolve this.
|
| It basically took 60 years just to crack syntax at scale, and
| the other layers are still fairly far away.
|
| Furthermore, Syntax is not a solved problem yet in most
| languages.
|
| Try communicating with GPT-4o in colloquial Bhojpuri, Koshur,
| or Dogri, let alone much less represented languages and
| dialects.
| sojournerc wrote:
| Linguistics is not living! Language does not capture
| reality! So no matter how much you solve you're no closer
| to AGI
| CSMastermind wrote:
| When you look at how humans play chess they employ several
| different cognitive strategies. Memorization, calculation,
| strategic thinking, heuristics, and learned experience.
|
| When the first chess engines came out they only employed one of
| these: calculation. It wasn't until relatively recently that we
| had computer programs that could perform all of them. But it
| turns out that if you scale that up with enough compute you can
| achieve superhuman results with calculation alone.
|
| It's not clear to me that LLMs sufficiently scaled won't
| achieve superhuman performance on general cognitive tasks even
| if there are things humans do which they can't.
|
| The other thing I'd point out is that all language is
| essentially synthetic training data. Humans invented language
| as a way to transfer their internal thought processes to other
| humans. It makes sense that the process of thinking and the
| process of translating those thoughts into and out of language
| would be distinct.
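|
| For what "calculation alone" means concretely, here is a
| minimal sketch (a toy subtraction game rather than chess):
| exhaustive fixed-depth tree search, the thing engines scale
| up with compute.
|
|     def negamax(pile, depth):
|         # Score the position for the side to move by
|         # searching every line down the game tree.
|         moves = [m for m in (1, 2, 3) if m <= pile]
|         if depth == 0 or not moves:
|             return -1 if pile == 0 else 0  # no stones: you lost
|         return max(-negamax(pile - m, depth - 1) for m in moves)
|
|     # Take 1-3 stones; taking the last stone wins. Multiples
|     # of 4 are losing positions, which pure search discovers.
|     print(negamax(8, 8))    # -1: side to move loses
|     print(negamax(10, 10))  # +1: side to move wins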
| PaulDavisThe1st wrote:
| > It's not clear to me that LLMs sufficiently scaled won't
| achieve superhuman performance on general cognitive tasks
|
| If "general cognitive tasks" means "I give you a prompt in
| some form, and you give me an incredible response of some
| form " (forms may differ or be the same) then it is hard to
| disagree with you.
|
| But if by "general cognitive task" you mean "all the
| cognitive things that human do", then it is really hard to
| see why you would have any confidence that LLMs have any hope
| of achieving superhuman performance at these things.
| jhrmnn wrote:
| Even in cognitive tasks expressed via language, something
| like a memory feels necessary. At which point it's not a
| LLM as in a generic language model. It would become a
| language model conditioned on the memory state.
| ddingus wrote:
| More than a memory.
|
| Needs to be a closed loop, running on its own.
|
| We get its attention, and it responds, or frankly if we
| did manage any sort of sentience, even a simulation of
| it, then the fact is it may not respond.
|
| To me, that is the real test.
| nox101 wrote:
| It sounds like you think this research is wrong? (it claims
| llms can not reason)
|
| https://arstechnica.com/ai/2024/10/llms-cant-perform-genuine...
|
| or do you maybe think no logical reasoning is needed to do
| everything a human can do? Tho humans seem to be able to do
| logical reasoning
| astrange wrote:
| It says "current" LLMs can't "genuinely" reason. Also, one
| of the researchers then posted an internship for someone to
| work on LLM reasoning.
|
| I think the paper should've included controls, because we
| don't know how strong the result is. They certainly may
| have proven that humans can't reason either.
| mannykannot wrote:
| If they had human controls, they might well show that
| some humans can't do any better, but based on how they
| generated test cases, it seems unlikely to me that doing
| so would prove that humans cannot reason (of course, if
| that's actually the case, we cannot trust ourselves to
| devise, execute and interpret these tests in the first
| place!)
|
| Some people will use any limitation of LLMs to deny there
| is anything to see here, while others will call this
| 'moving the goalposts', but the most interesting
| questions, I believe, involve figuring out what the
| differences are, putting aside the question of whether
| LLMs are or are not AGIs.
| CSMastermind wrote:
| The latter.
|
| While I generally _do_ suspect that we need to invent some
| new technique in the realm of AI in order for software to
| do everything a human can do, I use analogies like chess
| engines to caution myself from certainty.
| bbor wrote:
| I'll pop in with a friendly "that research is definitely
| wrong". If they want to prove that LLMs can't reason,
| shouldn't they stringently define that word somewhere in
| their paper? As it stands, they're proving something small
| (some of today's LLMs have XYZ weaknesses) and claiming
| something big (humans have an ineffable calculator-soul).
|
| LLMs absolutely 100% can reason, if we take the dictionary
| definition; it's trivial to show their ability to answer
| non-memorized questions, and the only way to do that is
| _some_ sort of reasoning. I personally don't think they're
| the most efficient tool for deliberative derivation of
| concepts, but I also think any sort of categorical
| prohibition is anti-scientific. What is the brain other
| than a neural network?
|
| Even if we accept the most fringe, anthropocentric theories
| like Penrose & Hammerhoff's quantum tubules, that's just a
| neural network with fancy weights. How could we possibly
| hope to forbid digital recreations of our brains from
| "truly" or "really" mimicking them?
| visarga wrote:
| Chasing our own tail with concepts like "reasoning".
| Let's move the concept a bit - "search". Can LLMs search
| for novel ideas and discoveries? They do under the right
| circumstances. You have to provide idea-testing
| environments, the missing ingredient. Search and learn,
| it's what humans do and AI can do as well.
|
| The whole issue with "reasoning" is that is an
| incompletely defined concept. Over what domain, what
| problem space, and what kind of experimental access do we
| define "reasoning"? Search is better as a concept because
| it comes packed with all these things, and without
| conceptual murkiness. Search is scientifically studied to
| a greater extent.
|
| I don't think we doubt LLMs can learn given training
| data, we already accuse them of being mere interpolators
| or parrots. And we can agree to some extent the LLMs can
| recombine concepts correctly. So they got down the
| learning part.
|
| And for the searching part, we can probably agree it's a
| matter of access to the search space not AI. It's an
| environment problem, and even a social one. Search is
| usually more extended than the lifetime of any agent, so
| it has to be a cultural process, where language plays a
| central role.
|
| When you break reasoning/progress/intelligence into
| "search and learn" it becomes much more tractable and
| useful. We can also make more grounded predictions on AI,
| considering the needs for search that are implied, not
| just the needs for learning.
|
| How much search did AlphaZero need to beat us at go? How
| much search did humans pack in our 200K years history
| over 10,000 generations? What was the cost of that
| journey of search? Those kinds of questions. In my napkin
| estimations we solved 1:10000 of the problem by learning,
| search is 10000x to a million times harder.
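|
| Stripped down, "search" here can be as dumb as generate-
| and-test (a sketch; the testing environment is the
| ingredient that matters):
|
|     import random
|
|     def search(propose, test, budget):
|         # Propose candidates, keep whichever the environment
|         # scores highest. "Learning" would bias propose().
|         best, best_score = None, float("-inf")
|         for _ in range(budget):
|             cand = propose()
|             score = test(cand)
|             if score > best_score:
|                 best, best_score = cand, score
|         return best, best_score
|
|     # Toy environment: find x maximizing a hidden function.
|     test = lambda x: -(x - 3.7) ** 2
|     propose = lambda: random.uniform(-10, 10)
|     print(search(propose, test, 10_000))  # x lands near 3.7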
| shkkmo wrote:
| You can't breakdown cognition into just "search" and
| "learn" without either ridiculously overloading those
| concepts or leaving a ton out.
| shkkmo wrote:
| > LLMs absolutely 100% can reason, if we take the
| dictionary definition; it's trivial to show their ability
| to answer non-memorized questions, and the only way to do
| that is some sort of reasoning.
|
| Um... What? That is a huge leap to make.
|
| 'Reasoning' is a specific type of thought process and
| humans regularly make complicated decisions without doing
| it. We use hunches and intuition and gut feelings. We
| make all kinds of snap assessments that we don't have
| time to reason through. As such, answering novel
| questions doesn't necessarily show a system is capable of
| reasoning.
|
| I see absolutely nothing resembling an argument for
| humans having an "ineffable calculator soul", I think
| that might be you projecting. There is no 'categorical
| prohibition', only an analysis of the current flaws of
| specific models.
|
| Personally, my skepticism about imminent AGI has to do with
| believing we may be underestimating the complexity of the
| software running on our brain. We've reached the point
| where we can create digital "brains", or atleast portions
| of them. We may be missing some other pieces of a digital
| brain, or we may just not have the right software to run
| on it yet. I suspect it is both but that we'll have fully
| functional digital brains well before we figure out the
| software to run on them.
| bbor wrote:
| All well said, and I agree on many of your final points!
| But you beautifully highlighted my issue at the top:
| 'Reasoning' is a specific type of thought process
|
| If so, what exactly is it? I don't need a universally
| justified definition, I'm just looking for an objective,
| scientific one. A definition that would help us say for
| sure that a particular cognition is or isn't a product of
| reason.
|
| I personally have lots of thoughts on the topic and look
| to Kant and Hegel for their definitions of reason as the
| final faculty of human cognition (after sensibility,
| understanding, and judgement), and I even think there's
| good reason (heh) to think that LLMs are not a great tool
| for that on their own. But my point is that none of the
| LLM critics have a definition anywhere close to that
| level of specificity.
|
| Usually, "reason" is used to mean "good cognition", so
| "LLMs can't reason" is just a variety of cope/setting up
| new goalposts. We all know LLMs aren't flawless or
| infinite in their capabilities, but I just don't find
| this kind of critique specific enough to have any sort of
| scientific validity. IMHO
| mannykannot wrote:
| I feel you are putting too much emphasis on the
| importance and primacy of having a definition of words
| like 'reasoning'.
|
| As humanity has struggled to understand the world, it has
| frequently given names to concepts that seem to matter,
| well before it is capable of explaining with any sort of
| precision what these things are, and what makes them
| matter - take the word 'energy', for example.
|
| It seems clear to me that one must have these vague
| concepts _before_ one can begin to to understand them,
| and also that it would be bizarre _not_ to give them a
| name at that point - and so, at that point, we have a
| word without a locked-down definition. To insist that we
| should have the definition locked down before we begin to
| investigate the phenomenon or concept is precisely the
| wrong way to go about understanding it: we refine and
| rewrite the definitions as a consequence of what our
| investigations have discovered. Again, 'energy' provides
| a useful case study for how this happens.
|
| A third point about the word 'energy' is that it has
| become well-defined within physics, and yet retains much
| of its original vagueness in everyday usage, where, in
| addition, it is often used metaphorically. This is not a
| problem, except when someone makes the lexicographical
| fallacy of thinking that one can freely substitute the
| physics definition into everyday speech (or vice-versa)
| without changing the meaning.
|
| With many concepts about the mental, including
| 'reasoning', we are still in the learning-and-writing-
| the-definition stage. For example, let's take the
| definition you bring up: reasoning as good cognition.
| This just moves us on to the questions of what
| 'cognition' means, and what distinguishes good cognition
| from bad cognition (for example, is a valid logical
| argument predicated on what turns out to be a false
| assumption an example of reasoning-as-good-cognition?) We
| are not going to settle the matter by leafing through a
| dictionary, any more than Pedro Carolino could write a
| phrase book just from a Portuguese-English dictionary (and
| you are probably aware that looking up definitions-of-
| definitions recursively in a dictionary often ends up in
| a loop.)
|
| A lot of people want to jump the gun on this, and say
| definitively either that LLMs have achieved reasoning (or
| general intelligence or a theory of mind or even
| consciousness, for that matter) or that they have not (or
| cannot.) What we should be doing, IMHO, is to put aside
| these questions until we have learned enough to say more
| precisely what these terms denote, by studying humans,
| other animals, and what I consider to be the surprising
| effectiveness of LLMs - and that is what the interviewee
| in the article we are nominally discussing here is doing.
|
| You entered this thread by saying (about the paper
| underlying an article in Ars Tech [1]) _I'll pop in with
| a friendly "that research is definitely wrong". If they
| want to prove that LLMs can't reason..._ , but I do not
| think there is anything like that claim in the paper
| itself (one should not simply trust what some person on
| HN says about a paper. That, of course, goes as much for
| what I say about it as what the original poster said.) To
| me, this looks like the sort of careful, specific and
| objective work that will lead to us a better
| understanding of our concepts of the mental.
|
| [1] https://arxiv.org/pdf/2410.05229
| NemoNobody wrote:
| This is one of my favorite comments I've ever read on HN.
|
| The first three paragraphs you wrote very succinctly and
| obviously summarize the fundamental flaw of our modern
| science - that it can't make leaps, at all.
|
| There is no leap of faith in science but there is science
| that requires such leaps.
|
| We are stuck because those most capable of comprehending
| concepts they don't understand and are unexplainable -
| they won't allow themselves to even develop a vague
| understanding of such concepts. The scientific method is
| their trusty hammer and their faith in it renders all
| that isn't a nail unscientific.
|
| Admitting that they don't know enough would be akin to
| societal suicide of their current position - the deciders
| of what is or isn't true, so I don't expect them to
| withhold their conclusions till they are more able to.
|
| They are the "priest class" now ;)
|
| I agree with your humble opinion - there is much more we
| could learn if that was our intent and considering the
| potential of this, I think we absolutely ought to make
| certain that we do everything in our power to attain the
| best possible outcomes of these current and future
| developments.
|
| Transparent and honest collaboration for the betterment
| of humanity is the only right path to an AGI god - to
| oversimplify a lil bit.
|
| Very astute, well formulated position, presented in
| accessible language and with humility even!
|
| Well done.
| shkkmo wrote:
| > don't need a universally justified definition, I'm just
| looking for an objective, scientific one. A definition
| that would help us say for sure that a particular
| cognition is or isn't a product of reason.
|
| Unfortunately, you won't get one. We simply don't know
| enough about cognition to create rigorous definitions of
| the type you are looking for.
|
| Instead, this paper, and the community in general are
| trying to perform practical capability assessments. The
| claim that the GSM8k measures "mathematical reasoning" or
| "logical reasoning" didn't come from the skeptics.
|
| Alan Turing didn't try to define intelligence; he
| created a practical test that he thought would be a good
| benchmark. These days we believe we have better ones.
|
| > I just don't find this kind of critique specific enough
| to have any sort of scientific validity. IMHO
|
| "Good cognition" seems like dismisal of a definition, but
| this is exactly the definition that the people working on
| this care about. They are not philosophers, they are
| engineers who are trying to make a system "better" so
| "good cognition" is exactly what they want.
|
| The paper digs into finding out more about what types of
| changes impact performance on established metrics. The
| "noop" result is pretty interesting since "relevancy
| detection" isn't something we commonly think of as key to
| "good cognition", but a consequence of it.
| tsimionescu wrote:
| > Even if we accept the most fringe, anthropocentric
| theories like Penrose & Hammerhoff's quantum tubules,
| that's just a neural network with fancy weights.
|
| First, while it is a fringe idea with little backing,
| it's far from the most fringe.
|
| Secondly, it is not at all known that animal brains are
| accurately modeled as an ANN, any more so than any other
| Turing-complete system can be modeled as an ANN.
| Biological neurons are themselves small computers, like
| all living cells in general, with not fully understood
| capabilities. The way biological neurons are connected is
| far more complex than a weight in an ANN. And I'm not
| talking about fantasy quantum effects in microtubules,
| I'm talking about well-established biology, with many
| kinds of synapses, some of which are "multicast" in a
| spatially distinct area instead of connected to specific
| neurons. And about the non-neuronal glial cells which are
| known to change neuron behavior, and so on.
|
| How critical any of these differences are to cognition is
| anyone's guess at this time. But dismissing them and
| reducing the brain to a bigger NN is not wise.
| adrianN wrote:
| It is my understanding that Penrose doesn't claim that
| brains are needed for cognition, just that brains are
| needed for a somewhat nebulous "conscious experience",
| which need not have any observable effects. I think that
| it's fairly uncontroversial that a machine can produce
| behavior that is indistinguishable from human
| intelligence over some finite observation time. The
| Chinese room speaks Chinese, even if it lacks
| understanding for some definitions of the term.
| jstanley wrote:
| But conscious experience does produce observable effects.
|
| For that not to be the case, you'd have to take the
| position that humans _experience consciousness_ and they
| _talk about consciousness_ but that there is no causal
| link between the two! It's just a coincidence that the
| things you find yourself saying about consciousness line
| up with your internal experience?
|
| https://www.lesswrong.com/posts/fdEWWr8St59bXLbQr/zombies-zo...
| adrianN wrote:
| That philosophers talk about p-zombies seems like
| evidence to me that at least some of them don't believe
| that consciousness needs to have observable effects that
| can't be explained without consciousness. That is not to
| say that I believe it myself. I don't believe that there
| is anything particularly special about brains.
| GoblinSlayer wrote:
| If the brain isn't more special than the Chinese room,
| then does the brain understand Chinese no better than the
| Chinese room?
| mannykannot wrote:
| The brain is faster than the Chinese room, but other than
| that, yes, that's the so-called systems reply; Searle's
| response to it (have the person in the room memorize the
| instruction book) is beside the point, as you can teach
| people to perform all sorts of algorithms without them
| needing to understand the result.
|
| As many people have pointed out, Searle's argument begs
| the question by tacitly assuming that if anything about
| the room understands Chinese, it can only be the person
| within it.
| mannykannot wrote:
| The p-zombie argument is the best-known of a group of
| conceivability arguments, which ultimately depend on the
| notion that if a proposition is conceivably true, then
| there is a metaphysically possible world in which it is
| true. Skeptics suppose that this is just a complicated
| way of equivocating over what 'conceivable' means, and
| even David Chalmers, the philosopher who has done the
| most to bring the p-zombie argument to wide attention,
| acknowledges that it depends on the assumption of what he
| calls 'perfect conceivability', which is tantamount to
| irrefutable knowledge.
|
| To deal with the awkwardly apparent fact that
| consciousness certainly seems to have physical effects,
| zombiephiles challenge the notion that physics is
| causally closed, so that it is conceivable that something
| non-physical can cause physical effects. Their approach
| is to say that the causal closure of physics is not
| provable, but at this point, the argument has become a
| lexicographical one, about the definition of the words
| 'physics' and 'physical' (if one insists that 'physical'
| does not refer to a causally-closed concept, then we
| still need a word for the causal closure within which the
| physical is embedded - but that's just what a lot of
| people take 'physical' to mean in the first place.) None
| of the anti-physicalists have been able, so far, to shed
| any light on how the mind is causally effective in the
| physical world.
|
| You might be interested in the late Daniel Dennett's "The
| Unimagined Preposterousness of Zombies":
| https://dl.tufts.edu/concern/pdfs/6m312182x
| lanstin wrote:
| Like what is magic - it turns out to be the ability to go
| from interior thoughts to stuff happening in the shared
| world - physics is just the mechanism of the particular
| magical system we have.
| Koala_ice wrote:
| There's a lot of other interesting biology besides
| propagation of electrical signals. Examples include: 1/
| Transport of mRNAs (in specialized vesicle structures!)
| between neurons. 2/ Activation and integration of
| retrotransposons during brain development (which I have
| long hypothesized acts as a sort of randomization
| function for the neural field). 3/ Transport of proteins
| between and within neurons. This isn't just adventitious
| movement, either - neurons have a specialized
| intracellular transport system that allows them to
| deliver proteins to faraway locations (think >1 meter).
| ddingus wrote:
| Can they reason, or is the volume of training data
| sufficient for them to match relationships up to
| appropriate expressions?
|
| Basically, if humans have had meaningful discussions
| about it, the product of their reasoning is there for the
| LLM, right?
|
| Seems to me, the "how many R's are there in the word
| "strawberry" problem is very suggestive of the idea LLM
| systems cannot reason. If they could, the question is not
| difficult.
|
| The fact is humans may never have actually discussed that
| topic in any meaningful way captured in the training
| data.
|
| And because of that and how specific the question is, the
| LLM has no clear relationships to map into a response. It
| just does best case, whatever the math deemed best.
|
| Seems plausible enough to support the opinion LLMs
| cannot reason.
|
| What we do know is LLMs can work with anything expressed
| in terms of relationships between words.
|
| There is a ton of reasoning templates contained in that
| data.
|
| Put another way:
|
| Maybe LLM systems are poor at deduction, save for
| examples contained in the data. But there are a ton of
| examples!
|
| So this is hard to notice.
|
| Maybe LLM systems are fantastic at inference! And so
| those many examples get mapped to the prompt at hand very
| well.
|
| And we do notice that and see it like real thinking, not
| just some horribly complex surface containing a bazillion
| relationships...
| chongli wrote:
| The "how many R's are in the word strawberry?" problem
| can't be solved by LLMs specifically because they do not
| have access to the text directly. Before the model sees
| the user input it's been tokenized by a preprocessing
| step. So instead of the string "strawberry", the model
| just sees the integer tokens the word has been mapped to.
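|
| A minimal sketch of the point, assuming the tiktoken
| library (an open-source tokenizer used with several OpenAI
| models) purely as an illustration:
|
|     import tiktoken
|
|     # Inspect what a model receives in place of the raw
|     # string "strawberry".
|     enc = tiktoken.get_encoding("cl100k_base")
|     ids = enc.encode("strawberry")
|     print(ids)                             # integer token ids
|     print([enc.decode([i]) for i in ids])  # multi-letter chunks
|     # The model sees only the ids, so counting the letter
|     # "r" means recalling each token's spelling, not
|     # reading the text.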
| threeseed wrote:
| > It's not clear to me that LLMs sufficiently scaled won't
| achieve superhuman performance
|
| To some extent this is true.
|
| To calculate A + B you could for example generate A, B for
| trillions of combinations and encode that within the network.
| And it would calculate this faster than any human could.
|
| But that's not intelligence. And Apple's research showed that
| LLMs are simply inferring relationships based on the tokens
| they have access to. Which you can throw off by adding useless
| information or trying to abstract A + B.
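|
| A toy sketch of that contrast, with a deliberately small
| range standing in for the "trillions of combinations":
|
|     # Memorization: encode the answers for a finite range.
|     table = {(a, b): a + b
|              for a in range(100) for b in range(100)}
|     print(table[(12, 34)])   # 46 - instant lookup
|     # table[(12, 3400)]      # KeyError: outside the range
|
|     # Algorithm: generalizes to any inputs.
|     def add(a, b):
|         return a + b
|
|     print(add(12, 3400))     # 3412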
| Dylan16807 wrote:
| > To calculate A + B you could for example generate A, B
| for trillions of combinations and encode that within the
| network. And it would calculate this faster than any human
| could.
|
| I don't feel like this is a very meaningful argument
| because if you can do that generation then you must already
| have a superhuman machine for that task.
| shkkmo wrote:
| Sure, when humans use multiple skills to address a specific
| problem, you can sometimes outperform them by scaling a
| specific one of those skills.
|
| When it comes to general intelligence, I think we are trying
| to run before we can walk. We can't even make a computer with
| a basic, animal level understanding of the world. Yet we are
| trying to take a tool that was developed on top of a
| system that already had an understanding of the world and
| use it to
| work backwards to give computers an understanding of the
| world.
|
| I'm pretty skeptical that we're going to succeed at this. I
| think you have to be able to teach a computer to climb a tree
| or hunt (subhuman AGI) before you can create superhuman AGI.
| senand wrote:
| This seems quite reasonable, but I recently heard a podcast
| (https://www.preposterousuniverse.com/podcast/2024/06/24/280-...)
| arguing that LLMs are more likely to be very good at
| navigating what they have been trained on, but very poor at
| abstract reasoning and discovering new areas outside of
| their training. As a single human, you don't notice, because
| the training material is greater than everything you could
| ever learn.
|
| After all, that's what Artificial General Intelligence would
| at least in part be about: finding and proving new math
| theorems, creating new poetry, making new scientific
| discoveries, etc.
|
| There is even a new challenge that's been proposed:
| https://arcprize.org/blog/launch
|
| > It makes sense that the process of thinking and the process
| of translating those thoughts into and out of language would
| be distinct
|
| Yes, indeed. And LLMs seem to be very good at _simulating_
| the translation of thought into language. They don't actually
| do it, at least not like humans do.
| klabb3 wrote:
| > As a single human, you don't notice, as the training
| material is greater than everything we could ever learn.
|
| This bias is real. Current-gen AI works proportionally
| better the more well-known the topic is. The more
| training data, the better
| the performance. When we ask something very specific, we
| have the impression that it's niche. But there is tons of
| training data also on many niche topics, which essentially
| enhances the magic trick - it looks like sophisticated
| reasoning. Whenever you truly go "off the beaten path", you
| get responses that are (a) nonsensical (illogical) and (b)
| "pulls" you back towards a "mainstream center point" so to
| say. Anecdotally, of course...
|
| I've noticed this with software architecture discussions. I
| would have some pretty standard thing (like session-based
| auth) but I have some specific and unusual requirement
| (like hybrid device- and user identity) and it happily
| spits out good sounding but nonsensical ideas. Combining
| and interpolating entirely in the linguistic domain is
| clearly powerful, but ultimately not enough.
| NemoNobody wrote:
| What part of AI today leads you to believe that an AGI
| would be capable of self directed creativity? Today that is
| impossible - no AI is truly generating "new" stuff, no
| poetry is constructed creatively, no images are born from a
| feeling, inspiration is only part of AI generation if you
| consider it utilizing its training data, which isn't
| actually creativity.
|
| I'm not sure why everyone assumes an AGI would just
| automatically do creativity considering most people are not
| very creative, despite them quite literally being capable,
| most people can't create anything. Why wouldn't an AGI have
| the same issues with being "awake" that we do? Being
| capable of knowing stuff - as you pointed out, far more
| facts than a person ever could, I think an awake AGI may
| even have more "issues" with the human condition than us.
|
| Also - say an AGI comes into existence that is awake, happy
| and capable of truly original creativity - why tf does it
| write us poetry? Why solve world hunger - it doesn't
| hunger. Why cure cancer - what can cancer do to it?
|
| AGI as currently envisioned is a mythos of fantasy and
| science fiction.
| TheOtherHobbes wrote:
| Chess is essentially a puzzle. There's a single explicit,
| quantifiable goal, and a solution either achieves the goal or
| it doesn't.
|
| Solving puzzles is a specific cognitive task, not a general
| one.
|
| Language is a continuum, not a puzzle. The problem with LLMs
| is that testing has been reduced to performance on language
| puzzles, mostly with hard edges - like bar exams, or letter
| counting - and they're a small subset of general language
| use.
| shepherdjerred wrote:
| > What this tells us for AI is that we need something else
| besides LLMs.
|
| Humans not taking this approach doesn't mean that AI cannot.
| earslap wrote:
| Not only that but also LLMs "think" in a latent
| representation that is several layers deep. Sure, the first
| and last layers make it look like it is doing token
| wrangling, but what is happening in the middle layers is
| mostly a mystery. First layer deals directly with the tokens
| because that is the data we are observing (a "shadow" of the
| world) and last layer also deals with tokens because we want
| to understand what the network is "thinking" so it is a human
| specific lossy decoder (we can and do remove that translator
| and plug the latent representations to other networks to
| train them in tandem). There is no reason to believe that the
| other layers are "thinking in language".
| sidewndr46 wrote:
| I believe what this tells us is that thought requires blood flow
| in the brain of mammals.
|
| Stepping back a level, it may only actually tell us that MRIs
| measure blood flow.
| agentcoops wrote:
| For those interested in the history, this is in fact the Neural
| Network research path that predated LLMs. Not just in the sense
| that Hinton et al and the core of the "Parallel Distributed
| Processing"/Connectionist school were always opposed to
| Chomsky's identification of brain-thought-language, but that
| the original early 2000s NSF grant awarded to Werbos, Ng, LeCun
| et al was for "Deep Learning in the Mammalian Visual Cortex."
| In their research program, mouse intelligence was posited as
| the first major challenge.
| avazhi wrote:
| It's impossible to overstate how crude and useless blood flow
| MRI studies are, at least relative to the hype they receive.
|
| Spoiler alert: brains require a lot of blood, constantly, just
| to not die. Looking at blood flow on an MRI to determine neural
| circuitry has to deal with the double whammy of both an
| extremely crude tool and a correlation/causation fallacy.
|
| This article and the study are arguably useless.
| agumonkey wrote:
| The connectome and brain mapping efforts might be a better
| research path for the coming years I guess
| nickpsecurity wrote:
| The projects mapping the brain, combined with research on what
| areas do, should tell us what components are necessary for our
| design. Studying the behavior of their specialist structures
| will tell us how to make purpose-built components for these
| tasks. Even if not, just attempting to split up the global
| behavior in that many ways with specialist architecture might
| help. We can also imitate how the components synchronize
| together, too.
|
| An example was the problem of memory shared between systems. ML
| people started doing LLM's with RAG. I looked into neuroscience
| which suggested we need a hippocampus model. I found several
| papers with hippocampus-like models. Combining LLM's, vision,
| etc with hippocampus-like model might get better results. Rinse
| repeat for these other brain areas wherever we can understand
| them.
|
| I also agree on testing the architectures with small, animal
| brains. Many do impressive behaviors that we should be able to
| recreate in simulators or with robotics. Some are useful, too,
| like how geese are good at security. Maybe embed a trained
| goose brain into a camera system.
| yarg wrote:
| > What this tells us for AI is that we need something else
| besides LLMs.
|
| Perhaps, but the relative success of trained LLMs acting with
| apparent generalised understanding may indicate that it is
| simply the interface that is really an LLM post training;
|
| That the deeper into the network you go (the further from the
| linguistic context), the less things become about words and
| linguistic structure specifically and the more it becomes about
| things and relations in general.
|
| (This also means that multiple interfaces can be integrated,
| sometimes making translation possible, e.g.: image <=>
| tree<string>)
| necovek wrote:
| You seem to be conflating "different hardware" with proof that
| "language hardware" uses "software" equivalent to LLMs.
|
| LLMs basically become practical when you simply scale compute
| up, and maybe both regions are "general compute", but language
| ends up on the "GPU" out of pure necessity.
|
| So to me, these are entirely distinct questions: is the
| language region able to do general cognitive operations? What
| happens when you need to spell out "ubiquitous" or decline a
| foreign word in a language with declension (which you don't
| have memory patterns for)?
|
| I agree it seems obvious that for better efficiency (size of
| training data, parameter count, compute ability), human brains
| use a different approach than LLMs today (in a sibling comment, I
| bring up an example of my kids at 2yo having a better grasp of
| language rules than ChatGPT with 100x more training data).
|
| But let's dive deeper in understanding what each of these
| regions _can_ do before we decide to compare to or apply stuff
| from AI /CS.
| ninetyninenine wrote:
| >What this tells us for AI is that we need something else
| besides LLMs.
|
| No this is not true. For two reasons.
|
| 1. We call these things LLMs and we train it with language but
| we can also train it with images.
|
| 2. We also know LLMs develop a sort of understanding that goes
| beyond language EVEN when the medium used for training is
| exclusively language.
|
| The naming of LLMs is throwing you off. You can call it a Large
| Language Model but this does not mean that everything about
| LLMs is exclusively tied only to language.
|
| Additionally we don't even know if the LLM is even remotely
| similar to the way human brains process language.
|
| No such conclusion can be drawn from this experiment.
| agumonkey wrote:
| At times I had impaired brain function (lots of soft
| neurological issues, finger control, memory loss, balance
| issues) but surprisingly the core area responsible for
| mathematical reasoning was spared .. that was a strange
| sensation, almost schizophrenic.
|
| And yeah it seems that core primitives of intelligence exist
| very low in our brains. And with people like Michael Levin,
| there may even be a root besides nervous systems.
| ddingus wrote:
| We should look to the animals.
|
| Higher order faculties aside, animals seem like us, just
| simpler.
|
| The higher functioning ones appear to have this missing thing
| too. We can see it in action. Perhaps all of them do and it is
| just harder for us when the animal thinks very differently or
| maybe does not think as much, feeling more, for example.
|
| ----
|
| Now, about that thing... and the controversy:
|
| Given an organism, or machine for this discussion, is of
| sufficiently robust design and complexity that it can precisely
| differentiate itself from everything else, it is a being.
|
| This thing we are missing is an emergent property, or artifact
| that can or maybe always does present when a state of being
| also presents.
|
| We have not created a machine of this degree yet.
|
| Mother nature has.
|
| The reason for emergence is a being can differentiate sensory
| input as being from within, such as pain, or touch, and from
| without, such as light or motion.
|
| Another way to express this is closed loop vs open loop.
|
| A being is a closed loop system. It can experience cause and
| effect. It can be the cause. It can be the effect.
|
| A lot comes from this closed loop.
|
| There can be the concept of the self and it has real meaning
| due to the being knowing what is of itself and what is
| everything else.
|
| This may be what forms consciousness. Consciousness may require
| a closed loop, an organism of sufficient complexity to be able
| to perceive itself.
|
| That is the gist of it.
|
| These systems we make are fantastic pieces. They can pattern
| match and identify relationships between the data given in
| amazing ways.
|
| But they are open loop. They are not beings. They cannot
| determine what is part of them, what they even are, or anything
| really.
|
| I am both consistently amazed and dismayed at what we can get
| LLM systems to do.
|
| They are tantalizingly close!
|
| We found a piece of how all this works and we are exploiting
| the crap out of it. Ok fine. Humans are really good at that.
|
| But it will all taper off. There are real limits, because
| we will eventually find that the end goal is to map out the
| whole problem space.
|
| Who has tried computing that? It is basically all possible
| human thought. Not going to happen.
|
| More is needed.
|
| And that "more" can arrive at thoughts without having first
| seen a few bazillion to choose from.
| afiodorov wrote:
| > What this tells us for AI is that we need something else
| besides LLMs
|
| I am not convinced it follows. Sure LLMs don't seem complete
| however there's a lot of unspoken inference going on in LLMs
| that don't map into a language directly already - the inner
| layers of the deep neural net that operates on abstract
| neurons.
| reverius42 wrote:
| Interestingly though for AI, this doesn't necessarily mean we
| need a different model architecture. A single large multimodal
| transformer might be capable of a lot that an LLM is not
| (besides the multimodality).
| jll29 wrote:
| Most pre-deep learning architectures had separate modules like
| "language model", "knowledge base" and "inference component".
|
| Then LLMs came along, and ML folks got rather too excited that
| they contain implicit knowledge (which, of course, is required
| to deal with ambiguity). Then the new aspiration was "all in
| one" and "bigger is better", not analyzing what components are
| needed and how to orchestrate their interplay.
|
| From an engineering (rather than science) point of view, the
| "end-to-end black box" approach is perhaps misguided, because
| the result will be a non-transparent system by definition.
| Individual sub-models should be connected in a way that retains
| control (e.g. in dialog agents, SRI's Open Agent Architecture
| was a random example of such "glue" to tie components together,
| to name but one).
|
| Regarding the science, I do believe language adds to the power
| of thinking; while (other) animals can of course solve simple
| problems without language, language permits us to define layers
| of abstractions (by defining and sharing new concepts) that
| go beyond simple, non-linguistic thoughts. Programming
| languages (created by us humans somewhat in the image of human
| language) and the language of mathematics are two examples
| where we push this even further (beyond the definition of new
| named concepts, to also define new "DSL" syntax) - but all of
| these could not come into beying without human language: all
| formal specs and all axioms are ultimately and can only be
| formulated in human language. So without language, we would
| likely be stuck at a very simple point of development,
| individually and collectively.
|
| EDIT: 2 typos fixed
| djtango wrote:
| Is beying another typo?
|
| In my personal learning journey I have been exploring the
| space of intuitive learning which is dominant in physical
| skills. Singing requires extremely precise control of actions
| we can't fully articulate or even rationalise. Teaching those
| skills requires metaphors and visualising and a whole lot of
| feedback + trial & error.
|
| I believe that this kind of learning is fundamentally non
| verbal and we can achieve abstraction of these skills without
| language. Walking is the most universal of these skills and
| we learn it before we can speak but if you study it (or
| better try to program a robot to walk with as many degrees of
| freedom as the human musculoskeletal system) you will
| discover that almost all of us don't understand all the
| things that go into the "simple" task of walking!
|
| My understanding is that people who are gifted at sports or
| other physical skills like musical instruments have developed
| the ability to discover and embed these non verbal
| abstractions quickly. When I practise the piano and am
| working on something fast, playing semiquavers at anything
| above 120bpm is not really conscious anymore in the sense of
| "press this key then that key"
|
| The concept of arpeggio is verbal but the action is non
| verbal. In human thought, where do verbal and non-verbal
| start and end? It's probably a continuum.
| wh0knows wrote:
| I think it's not entirely accurate to say that we "learn"
| to walk from a zero state. It's clear that our DNA has
| embedded knowledge of how to walk and it develops our body
| appropriately. Our brains might also have preconditioning
| to make learning to walk much easier.
|
| Music or sports are more interesting to investigate (in my
| opinion) since those specific actions won't be
| preprogrammed and must be learned independently.
|
| The same way we build abstractions for language in order to
| perform "telepathy" it seems like for music or sports we
| build body-specific abstractions. They work similar to
| words within our own brain but are not something easily
| communicated since they're not tied to any language, it's
| just a feeling.
|
| I think it's an interesting point that quite often the best
| athletes or musicians are terrible coaches. They probably
| have a much more innate internal language for their body
| that cannot be communicated easily. Partly, I think, their
| bodies differ more from others', which helps them be
| exceptional. Or perhaps weaker athletes or musicians
| need to focus much more on lessons from others, so their
| body language gets tied much closer to human language and
| that makes it much easier for them to then communicate the
| lessons they learn to others.
| throwaway4aday wrote:
| I don't think motor skills are a good object to use in an
| argument about verbal vs non-verbal thinking. We have large
| regions of our brains primarily dedicated to motor skills
| and you can't argue that humans are any more talented or
| capable at controlling our bodies than other animals, we're
| actually rather poor performers in this area. You're right
| to say that you aren't conscious of the very highly trained
| movements you are making because they likely have only a
| tenuous connection with any part of your brain that we
| would recognize as possessing consciousness or thought,
| they are mostly learned reflexes and responses to internal
| and external stimuli at this point, like a professional
| baseball player who can automatically catch a ball flying
| at him before he's even aware of it.
| lolinder wrote:
| > I do believe language adds to the power of thinking; while
| (other) animals can of course solve simple problems without
| language, language permits us to define layers of
| abstractions (by defining and sharing new concepts) that go
| beyond simple, non-linguistic thoughts.
|
| Based on my experience with toddlers, a rather smart dog, and
| my own thought processes, I disagree that language is a
| fundamental component of abstraction. Of sharing
| abstractions, sure, but not developing them.
|
| When I'm designing a software system I will have a mental
| conception of the system as layered abstractions before I
| have a name for any component. I invent names for these
| components in order to define them in the code or communicate
| them to other engineers, but the intuition for the
| abstraction comes first. This is why "naming things" is one
| of the hard problems in computer science--because the name
| comes second as a usually-inadequate attempt to capture the
| abstraction in language.
| calf wrote:
| The conception here is that one's layered abstractions are
| basically an informal mathematics... which is formally
| structured... which is a formal grammar. It's your internal
| language, using internal symbols instead of English names.
|
| Remember in CS theory, a language is just a set of strings.
| If you think in pictures that is STILL a language if your
| pictures are structured.
|
| So I'm really handwaving the above just to suggest that it
| all depends on the assumptions that each expert is making
| in elucidating this debate which has a long history.
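|
| To make the CS-theory sense concrete, a minimal sketch: a
| language is just a set of strings, here defined by a
| membership test for the toy language { a^n b^n : n >= 0 }:
|
|     def in_language(s):
|         # Accept n copies of "a" followed by n of "b".
|         n = len(s) // 2
|         return s == "a" * n + "b" * n
|
|     print(in_language("aabb"))  # True
|     print(in_language("aba"))   # False
|
| Any structured system of internal "pictures" that can be
| serialized this way is a language in the same formal sense.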
| JumpCrisscross wrote:
| > _conception here is that one 's layered abstractions is
| basically an informal mathematics... which is formally
| structured... which is a formal grammar. It's your
| internal language, using internal symbols instead of
| English names_
|
| Unless we're getting metaphysical to the point of
| describing quantum systems as possessing a language, there
| are various continuous analog systems that can compute
| without a formal grammar. The language system could be
| the one that thinks in discrete 'tokens'; the conscious
| system something more complex.
| visarga wrote:
| > the "end to end black box" approach is perhaps misguided,
| because the result will be a non transparent system by
| definition
|
| A black box that works in human language and can be
| investigated with perturbations, embedding visualizations and
| probes. It explains itself as much or more than we can.
| dboreham wrote:
| Not sure about that. The same abstract model could be used for
| both (symbols generated in sequence). For language the symbols
| have meaning in the context of language. For non-language
| thought they don't. Nature seems to work this way in general:
| re-using/purposing the same underlying mechanism over and over
| at different levels in the stack. All of this could be a fancy
| version of very old hardware that had the purpose of
| controlling swimming direction in fish. Each symbol is a flick
| of the tail.
| exe34 wrote:
| I like to think of the non-verbal portions as the biological
| equivalents of ASICs. even skills like riding a bicycle might
| start out as conscious effort (a vision model, a verbal
| intention to ride and a reinforcement learning teacher) but
| is then replaced by a trained model to do the job without
| needing the careful intentional planning. some of the skills
| in the bag of tricks are fine tuned by evolution.
|
| ultimately, there's no reason that a general algorithm
| couldn't do the job of a specific one, just less efficiently.
| winwang wrote:
| I mean, the QKV part of transformers is like an "ASIC" ...
| well, for an (approximate) lookup table.
|
| (also important to note that NNs/LLMs operate on...
| abstract vectors, not "language" -- not relevant as a
| response to your post though).
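|
| The lookup-table analogy in code - a minimal scaled
| dot-product attention sketch in numpy (single head, no
| projections or masking):
|
|     import numpy as np
|
|     def attention(Q, K, V):
|         # Soft lookup: each query scores every key, and
|         # the output is a weighted average of the values.
|         scores = Q @ K.T / np.sqrt(K.shape[-1])
|         w = np.exp(scores - scores.max(-1, keepdims=True))
|         w = w / w.sum(-1, keepdims=True)  # softmax rows
|         return w @ V
|
|     Q = np.random.randn(4, 8)  # 4 queries
|     K = np.random.randn(6, 8)  # 6 keys
|     V = np.random.randn(6, 8)  # 6 values
|     print(attention(Q, K, V).shape)  # (4, 8)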
| xtrapol8 wrote:
| You highlight an expectation that the "truer intelligence" is a
| singular device that, once isolated, would mobilize ultimate AGI.
|
| All intelligence is the mitigation of uncertainty (the
| potential distributed problem.) if it does not mitigate
| uncertainty it is not intelligence, it is something else.
|
| Intelligence is a technology. For all life intelligence and the
| infrastructure of performing work efficiently (that whole
| entropy thing again) is a technology. Life is an arms race to
| maintain continuity (identity, and the very capacity of
| existential being.)
|
| The modern problem is achieving reliable behavioral
| intelligence (constrained to a specific problem domain.) AGI is
| a phantasm. What manifestation of intelligence appears whole
| and complete and is always right? These are the sorts of lies
| you tell yourself, the ones that get you into trouble. They
| distract from tangible real world problems, perhaps causing
| some of them. True intelligence is a well calibrated "scalar"
| domain specific problem (uncertainty) reducer. There are few
| pressing idempotent obstructions in the real world.
|
| Intelligence is the mitigation of uncertainty.
|
| Uncertainty is the domain of negative potential
| (what, where, why, how?)
|
| Mitigation is the determinant resolve of any constructive or
| destructive interference affecting (terminal resolve within)
| the problem domain.
|
| Examples of this may be piled together mountains high, and you
| may call that functional AGI, though you would be self-
| deceiving. At some point "good enough" may be declared for
| anything so passing as yourselves.
| erichocean wrote:
| > _This has been suspected for years, but now there 's an
| experimental result._
|
| You would think the whole "split-brain" thing would have been
| the first clue; apparently not.
| og_kalu wrote:
| You are getting derailed because of the name we've chosen to
| call these models but only the first few and last few layers of
| LLMs deal with tokens. The rest deal with abstract
| representations and models learnt during training. Language
| goes in and Language comes out but Language is not the in-
| between for either LLMs or Humans.
| erichocean wrote:
| > _So if someone figures out to do this, it will probably take
| less hardware than an LLM._
|
| We have, it's called DreamCoder. There's a paper and
| everything.
|
| Everything needed for AGI exists today, people simply have
| (incorrect) legacy beliefs about cognition that are holding
| them back (e.g. "humans are rational").
|
| https://arxiv.org/abs/2006.08381
| visarga wrote:
| > No idea how to do this
|
| We need to add the 5 senses, of which we now have image, audio
| and video understanding in LLMs. And for agentic behavior they
| need environments and social exposure.
| NemoNobody wrote:
| This is actually exactly what is needed. We think the dataset
| is the primary limitation to an LLM's capability, but in
| reality we are only developing one part of their
| "intelligence" - a functional and massive model isn't the end
| of their training - it's kinda just the beginning.
| layer8 wrote:
| > What this tells us for AI is that we need something else
| besides LLMs.
|
| Despite being an LLM skeptic of sorts, I don't think that
| _necessarily_ follows. The LLM matrix multiplication machinery
| may well be implementing an equivalent of the human non-
| language cognitive processing as a side effect of the training.
| Meaning, what is separated in the human brain may be mixed
| together in an LLM.
| taeric wrote:
| I'm curious why "simulation" isn't the extra thing needed? Yes,
| we need language to communicate ideas. But you can simulate in
| your mind things happening that you don't necessarily have
| words for, yet. Right?
| mountainriver wrote:
| Transformers are just sequence predictors, it doesn't need to
| be language, increasingly it's not
| codebolt wrote:
| > What this tells us for AI is that we need something else
| besides LLMs.
|
| An easy conclusion to jump to but I believe we need to be more
| careful. Nothing in these findings proves conclusively that
| non-verbal reasoning mechanisms equivalent to humans couldn't
| evolve in some part of a sufficiently large ANN trained on text
| and math. Even though verbal and non-verbal reasoning occurs in
| two distinct parts of the brain, it doesn't mean they're not
| related.
| yarg wrote:
| Is this not obvious?
|
| Language is a very poor substitute for freely flowing electrical
| information - it evolved to compensate for the bottlenecks to
| external communication - bottlenecks that are lacking an internal
| analogue.
|
| It's also a highly advanced feature - something as heavily
| optimised as evolved life would not allow something as vital as
| cognition to be hampered by a lack of means for high fidelity
| external expression.
| IIAOPSW wrote:
| It is not at all obvious that "freely flowing electrical
| information" isn't just language in a different medium, much
| the same as video on a cassette tape.
| yarg wrote:
| Yes it is.
|
| Language is designed to be expressible with low fidelity
| vibrating strings - it is very clear that the available
| bandwidth is in the order of bytes per second.
|
| Versus a fucking neural network with ~100 billion neurons.
|
| Come on man, seriously - the two communication modalities are
| completely incomparable.
| IIAOPSW wrote:
| Versus a fucking phone network with ~10 billion active
| numbers.
|
| Come on man, seriously - the two communication modalities
| are completely incomparable.
|
| Clearly the information traveling around on the phone
| network couldn't possibly be the same as the low bandwidth
| vibrating strings used in face to face communication.
| Obviously.
| yarg wrote:
| There's a major difference - the phone network takes in
| prerequisite constraints on the nature of the information
| that it's encoding; it is forced by its functionality to
| be a reflection of spoken language.
|
| The internal communications of the mind have no need for
| such constraints (and evolved hundreds of millions of
| years beforehand).
|
| Anyway, I don't know what you were actually trying to
| argue here: you just built a simulated brain out of
| people, and the massively multi-agent distributed nature
| of the language of that machine is (emergently)
| incomparable with vocalised language.
| mcswell wrote:
| "Is this not obvious?"
|
| No. But I'm going to stop there, because there are pages of
| comments saying the exact opposite (and of course some agreeing
| with you) above.
| slashdave wrote:
| Perhaps. But one could argue that the development of language
| (as necessary for communication, its original purpose) was the
| seed that led to the evolutionary development of deeper thinking.
| yarg wrote:
| 100% deeper thought - and to a large extent the capacity for
| linguistic categorisation of objects is incredibly helpful in
| developing a deeper cognitive understanding of the world
| around us...
|
| But the most fundamental boon that it offered was in terms of
| planning and organisation. Before language we'd point and
| grunt and go there and do the thing that we were gesturing
| that we were going to do.
|
| But that's a very crude form of planning - you're pretty much
| just going all in on Leeroy Jenkins.
|
| But actually (and horrifically) I think it's the gift of Cain
| that speaking and planning permitted; well organised humans
| (the smartest things on the planet) have been figuring out
| increasingly better ways to both kill each other and not to
| die themselves in a brutal feedback loop for a very long time
| now.
|
| It's brutal as fuck, but it's Darwinian gold.
| bmacho wrote:
| The claim in the title is indeed obvious.
|
| Also the title is editorialized for no reason. It makes
| searching, recognizing, citing etc. waaay harder, and is
| full of errors. I'll flag it.
| psychoslave wrote:
| >And in fact, most of the things that you probably learned about
| the world, you learned through language and not through direct
| experience with the world.
|
| Most things we know, we are probably not aware of. And for most
| of us, direct experience of everything that surrounds us in the
| world certainly exceeds by several orders of magnitude the best
| bandwidth we can ever dream to achieve through any human
| language.
|
| Ok, there are no actual data to back this, but authors of the
| article don't have anything solid either to back such a bold
| statement, from what is presented in the article.
|
| If most of what we know of the world were things we were
| told, it would obviously be mostly a large amount of phatic
| noise, lies and clueless random assertions that we would
| have no means to distinguish from the few stable, credible
| elements inferable by comparison with a far larger corpus
| of self-experiments with reality.
| dang wrote:
| All: please don't comment based on your first response to an
| inevitably shallow title. That leads to generic discussion, which
| we're trying to avoid on HN. _Specific_ discussion of what 's new
| or different in an article is a much better basis for interesting
| conversation.
|
| Since we all have language and opinions about it, the risk of
| genericness is high with a title like this. It's like this with
| threads about other universal topics too, such as food or health.
| WiSaGaN wrote:
| I think we need to distinguish between a language, e.g. the
| native language a person uses like English, and the concept
| of language. Information exchanged as binary messages over a
| PCI bus is also part of a language.
| jostmey wrote:
| Progress with LLMs would seem to support the title. The language
| abilities of LLMs do not seem to lead to higher thought, so
| there must be additional processes that are required for
| higher thought, or processes that don't depend on language.
| mcswell wrote:
| You may be right, but there is another hypothesis that would
| need to be rejected: at question is whether LLMs "do" language
| the same way we do. For certain they learn language much
| differently, with orders of magnitude more input data. It could
| be that they just string sentence fragments together, whereas
| (by hypothesis) we construct sentences hierarchically. The
| internal representation of semantics might also be different,
| more compositional in humans.
|
| If I had time and free use of an LLM, I'd like to investigate
| how well it understands constructional synonymy, like "the red
| car" and "the car that is red" and "John saw a car on the
| street yesterday. It was red." I guess models that can draw
| pictures can be used to test this sort of thing--surely someone
| has looked into this?
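|
| One cheap way to poke at this - a sketch assuming the
| sentence-transformers library, comparing embeddings of the
| synonymous constructions:
|
|     from sentence_transformers import SentenceTransformer
|     from sentence_transformers.util import cos_sim
|
|     model = SentenceTransformer("all-MiniLM-L6-v2")
|     phrases = ["the red car",
|                "the car that is red",
|                "the blue bicycle"]
|     emb = model.encode(phrases)
|     # If the model encodes shared meaning rather than
|     # surface form, the synonymous pair should score
|     # higher than the unrelated one.
|     print(cos_sim(emb[0], emb[1]))  # expect high
|     print(cos_sim(emb[0], emb[2]))  # expect lower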
| fjfaase wrote:
| As someone who has a disharmonic intelligence profile, this has
| been obvious for a very long time. In the family of my mother
| there are several individuals struggling with language while
| excelling in the field of exact sciences. I very strongly suspect
| that my non-verbal (performal) IQ is much higher (around 130)
| than my verbal IQ (around 100). I have struggled my whole life to
| express my ideas with language. I consider myself an abstract
| visual thinker. I do not think in pictures, but in abstract
| structures. During my life, I have met several people, especially
| among software engineers, who seem to be similar to me. I also
| feel that people who are strong verbal thinkers have the greatest
| resistance against the idea that language is not essential for higher
| cognitive processes.
| eliaspro wrote:
| Growing up, I never used words or even sentences for thinking.
|
| The abstract visualizations I could build in my mind were
| comparable to semi-transparent buildings that I could freely
| spin, navigate and bend to connect relations.
|
| In my mid-twenties, someone introduced me to the concept of
| people using words for mental processes, which was completely
| foreign to me up to this point.
|
| For some reason, this made my brain move more and more towards
| this language-based model and at the same time, I felt like I
| was losing the capacity for complex abstract thoughts.
|
| Still to this day I (unsuccessfully) try to revive this and
| unlearn the language in my head, which feels like it imposes a
| huge barrier and limits my mental capacity to the capabilities
| of what the language my brain uses at the given time (mostly
| EN, partially DE) allows me to express.
| ryandv wrote:
| This reminds me of my experiences working with a software
| developer transplanted from the humanities who was highly
| articulate and capable of producing language _about_
| programming, yet seemed to not be able to write many actual
| computer programs themselves.
|
| I think that I ultimately developed an obsessive need to cite
| all my ideas against the literature and formulate natural
| language arguments for my claims to avoid being bludgeoned
| over the head with wordcelry and being seen as inferior for
| my lesser verbal fluency despite having written software for
| years at that point, since early childhood, and even studied
| computer science.
| kerblang wrote:
| > During my life, I have met several people, especially among
| software engineers, who seem to be similar to me
|
| This begs a question though: Since programming is mostly done
| with language - admittedly primitive/pidgin ones - why isn't
| that a struggle? Not sure if you're a programmer yourself, but
| if so do you prefer certain programming languages for some
| sense of "less-verbalness" or does it even matter?
|
| Just wondering, not attacking your claim per se.
| alserio wrote:
| The idea that programming languages and natural languages are
| processed with the same wetware should be testable with
| something like the tests described in this submission. I
| don't expect it to be true, but only expecting something is
| not science
| dleeftink wrote:
| Some progress has been made in this area, see [0], [1], [2]
| and [3], observing both similarities and dissimilarities in
| terms of language processing:
|
| Siegmund, J., Kastner, C., Apel, S., Parnin, C., Bethmann,
| A., Leich, T. & Brechmann, A. (2014). Understanding
| understanding source code with functional magnetic
| resonance imaging. In Proceedings of the 36th International
| Conference on Software Engineering (pp. 378-389).
|
| Peitek, N., Siegmund, J., Apel, S., Kastner, C., Parnin,
| C., Bethmann, A. & Brechmann, A. (2018). A look into
| programmers' heads. IEEE Transactions on Software
| Engineering, 46(4), 442-462.
|
| Krueger, R., Huang, Y., Liu, X., Santander, T., Weimer, W.,
| & Leach, K. (2020). Neurological divide: An fMRI study of
| prose and code writing. In Proceedings of the ACM/IEEE 42nd
| International Conference on Software Engineering (pp.
| 678-690).
|
| Peitek, N., Apel, S., Parnin, C., Brechmann, A. & Siegmund,
| J. (2021). Program comprehension and code complexity
| metrics: An fMRI study. In 2021 IEEE/ACM 43rd International
| Conference on Software Engineering (ICSE) (pp. 524-536).
| IEEE.
|
| [0]: https://www.frontiersin.org/10.3389/conf.fninf.2014.18.00040...
|
| [1]: https://ieeexplore.ieee.org/abstract/document/8425769
|
| [2]: https://dl.acm.org/doi/abs/10.1145/3377811.3380348
|
| [3]: https://ieeexplore.ieee.org/abstract/document/9402005
| alserio wrote:
| thank you! fascinating reads
| branko_d wrote:
| Other than the word "language", programming languages and
| natural languages really have very little in common.
|
| Anecdotally, when I write code, I don't "talk in my head".
| The structures that I have in my brain are in fact
| difficult to put into words, and I can only vaguely
| describe them as interconnected 3D shapes evolving over
| time, or even just "feelings" and "instincts" in some
| cases.
|
| The code that comes out of that process does not, in fact,
| describe the process fully, even though it describes
| exactly what the computer should do. That's why reading
| someone else's code can be so difficult - you are accessing
| just the _end product_ of their thinking process, without
| seeing the process itself.
| alserio wrote:
| I do subjectively agree with this. I, too, don't "code by
| words". However, it's the first time someone has
| described their personal experience to me as
| interconnected 3d shapes. Really fascinating and really
| distant from my own experience. For the second part of
| your message, code comments are a possible place where
| you can store the process, via the medium of words, this
| time.
| superb_dev wrote:
| A programming language has a ton more rules and way less
| ambiguity than a spoken language.
| makeitdouble wrote:
| I see your general point on needing language proficiency to
| program, but I think it's just a very low requirement.
|
| Parent isn't saying they can't handle language (and we
| wouldn't have this discussion in the first place), just that
| they better handle complexity and structure in non verbal
| ways.
|
| To get back to programming, I think this does apply to most of
| us. Most of us probably don't think in ruby or JS, we have a
| higher vision of what we want to build and "flatten" it into
| words that can be parsed and executed. It's of course more
| obvious for people writing in, say, BASIC or assembly; some
| conversion has to happen at some point.
| tines wrote:
| > As some who has a dis-harmonic intelligence profile, this has
| been obvious for a very long time. In the family of my mother
| there are several individuals struggling with language while
| excelling in the field of exact sciences. I very strongly
| suspect that my non-verbal (performal) IQ is much higher
| (around 130) than my verbal IQ (around 100)
|
| I used to rationalize to myself along similar lines for a long
| time, then I realized that I'm just not as smart as I thought I
| was.
| NemoNobody wrote:
| I'm a brilliant genius according to IQ tests. Think me
| arrogant or conceited or whatever - that is literally the
| truth, fact - proven many times in the educational system (I
| was homeschooled and didn't follow any sort of curriculum and
| was allowed to do whatever I wanted bc I kept testing higher
| than almost everyone) and just for kicks also - the last time
| I took an IQ test I was in my late 20s and a friend and I had
| a bet about who could score higher completely stoned off of
| our ass. We rolled enough blunts apiece that we could be
| continuously smoking marijuana as we took the IQ test, which
| followed several bongs finished between the two of us. I was
| so high that I couldn't keep the numbers straight on one of
| the number pattern questions - it was ridiculous. I scored
| 124, my lowest "serious" attempt ever - all of this is 100%
| true. I don't need anyone to believe me - take this how you will
| but I have an opinion that is a bit different.
|
| I'm brilliant - I've read volumes of encyclopedias, my
| hobbies include comparative theology, etymology, quantum
| mechanics and predicting the future with high accuracy (I
| only mention stuff I'm certain of tho ;) but so much so it
| disturbs my friends and family.
|
| The highest I scored was in the 160s as a teenager but I
| truly believe they were over compensating for my age - only
| as an adult have I learned most children are stupid and they
| maybe in fact didn't over compensate. I am different than
| anyone else I've ever personally met - I fundamentally see
| the world different.
|
| All of that is true but that's a rather flawed way of
| assessing intelligence - fr. I'm being serious. The things we
| know can free us as much as they can trap us - knowledge
| alone doesn't make a man successful, wealthy, happy or even
| healthy - I'm living evidence of this. That doesn't cut it as
| a metric for prediction of much. There are other qualities
| that are far more valuable in the societal sense.
|
| Every Boss I've ever worked for has been dumber than me -
| each one I've learned invaluable stuff from. I was a boss
| once - in my day I owned and self taught/created an entire
| social network much like FB was a few years ago, mine
| obviously didn't take off and now I'm a very capable bum.
| Maybe someday something I'm tinkering with will make me
| millions but prolly not, for many reasons, I could write
| books if I wanted ;)
|
| At the end of the day, the facts are what they are - there is
| an optimal level of intelligence that is obviously higher
| than the bottom but is nowhere near the top tier, very likely
| near that 100 IQ baseline. What separates us all is our
| capabilities - mostly stuff we can directly control, like
| learning a trade.
|
| A Master Plumber is a genius plumber by another name and that
| can and obviously is most often, learned genius. What you sus
| about yourself is truth - don't doubt that. No IQ test ever
| told me I lacked the tenacity of the C average student that
| would employ me someday - they can't actually measure the
| extent of our dedicated capacity.
|
| I kno more than most people ever have before or rn presently
| - I don't know as much about plumbing as an apprentice with 2
| years of a trade school dedicated to plumbing and a year or
| two of experience in the field, that's the reality of it. I
| could learn the trade - I could learn most every trade, but I
| won't. That's life. I can tell you how the ancients
| plumbed bc that piqued my curiosity and I kno far more about
| Roman plumbing than I do how a modern city sewer system
| works. That's also life.
|
| It isn't what we kno or how fast we can learn it - it's what
| we do that defines us.
|
| Become more capable if you feel looked down on - this is the
| way bc even if what you hone your capabilities of can be
| replicated by others most won't even try.
|
| That's my rant about this whole intelligence perception we
| currently have as a society. Having 100 IQ is nowhere near
| the barrier that having 150 IQ is.
|
| Rant aside, to the article - how isn't this obvious? I mean
| feelings literally exist - not just the warm fuzzy ones, like
| the literal feeling of existence. Does a monkey's mind
| require words to interpret pain or pleasure for example. Do I
| need to know what "fire" or "hot" is in a verbal context to
| sufficiently understand "burn" - words exists to convey to to
| others what doesn't need to be conveyed to us. That's their
| function. Communication. To facilitate communication with our
| social brethren we adopt them fundamentally as our Lego
| blocks for understanding the world - we pretend that words
| comprising language are the ideas themselves. A banana is a -
| the word is the fruit, they are the same in our minds but if
| I erase the word banana and all its meaning of the fruit and
| I randomly encounter a banana - I still can taste it. No
| words necessary.
|
| Also, you can think without words, deliberately and
| consciously - even absentmindedly.
|
| And LLMs can't reason ;)
|
| Truthfully, the reality is that a 100 IQ normal human is far
| more capable than any AI I've been given access to - in
| almost every metric I attempted to asses I ultimately didn't
| even bother as it was so obvious that humans are functionally
| superior.
|
| When AI can reason - you, and everyone else, will kno it. It
| will be self evident.
|
| Anyways, tldr: ppl are smarter than given credit for, smarter
| and much more capable - IQ is real and matters but far less
| than we are led to believe. People are awesome - the epitome
| of biological life on Earth and we do a lot of amazing things
| and anyone can be amazing.
|
| I hate it when the Hacker News collective belittles itself -
| don't do that. I rant here bc it's one of the most
| interesting places I've found and I care about what all of
| you think far more than I care about your IQ scores.
| fallingknife wrote:
| > predicting the future with high accuracy
|
| You can't do this. It's not a matter of IQ, it's a matter
| of math. Higher-order effects are essentially impossible to
| predict because the level of detail at which you would need
| to know the initial conditions is unattainable, even in
| simple systems where all the rules are known, like a
| billiards table. Furthermore, if you could do this, you would
| be a billionaire by now just from trading the stock market.
| This claim alone makes me doubt the rest of your comment.
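|
| To make the sensitivity point concrete, here's a minimal sketch
| (a toy system, not a claim about any real market): two
| trajectories of the chaotic logistic map whose starting points
| differ by 1e-10.
|
|     # Two logistic-map trajectories starting 1e-10 apart.
|     x, y = 0.4, 0.4 + 1e-10
|     for _ in range(60):
|         x = 4 * x * (1 - x)  # r = 4 is the chaotic regime
|         y = 4 * y * (1 - y)
|     print(abs(x - y))  # order 0.1-1: the tiny error exploded
|
| The gap roughly doubles every step, so even perfect knowledge
| of the rules plus a microscopic measurement error ruins
| long-range prediction.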
| midtake wrote:
| Meta: I have noticed many comments written in this style
| lately. Long-winded with out-of-place internet shorthand.
| It is as if someone is deliberately trying to sound
| youthful while not really knowing how. It is the "how do
| you do, fellow kids" of writing styles.
|
| I am not sure what sort of LLM-powered bot is behind them,
| or whether it's one person with some sort of schizophrenia,
| but once you notice it you will see at least one of these
| per popular post.
| beepbooptheory wrote:
| I have noticed the style, hadn't thought to attribute it
| to "sounding youthful". To me it just sounds genuinely
| from a younger person.
|
| Fixation on "intelligence"/IQ is huge these days, I
| have found, among young men, and not just because of the AI
| stuff.
|
| And humans in general can still, for now, write and be
| passionate and maybe have some misplaced enthusiasm on
| internet forums!
| torginus wrote:
| _these days_
| GoblinSlayer wrote:
| Do you understand quantum mechanics? It's one thing to be
| smart, it's another to use your ability.
| beepbooptheory wrote:
| I think you just need to go further in your thought process
| here: if you recognize that your amazing IQ scores have
| only very local relevance, that they don't capture
| everything, why feel the need to have any investment in
| them at all? Would it be too much of a conspiracy to you if
| I told you that IQ is rather a lot of BS?
|
| If the category you are working with is the kind of thing
| that you have to construct such nuance, and circles, and
| "yes but also..."s around, perhaps you might question your
| category outright?
|
| Just to say, have you ever maybe thought that what we call
| "intelligence" is somewhat determined more by time and
| place than it is by our collective answers to multiple
| choice questions? Just maybe something to think about.
| dadarecit wrote:
| one of the better memes on hn well played
| torginus wrote:
| If you're not a native English speaker, it's normal to score
| lower on the (English) verbal tests.
| sva_ wrote:
| That was a difficult thing for me as well -- if you have such
| great ideas in your head but they fall apart once you try to
| bring them down on paper, maybe those ideas simply aren't
| that great.
| makeitdouble wrote:
| I think people who can manipulate complex structures but
| struggle with language tend to see language in a more formal
| way, putting more effort into understanding its structure and
| inner workings.
|
| Basically what to most people is so obvious that it becomes
| transparent ("air") isn't to us, which apparently is an
| incredible gift for becoming a language researcher. Or a
| programmer.
| bertylicious wrote:
| > I very strongly suspect that my non-verbal (performal) IQ is
| much higher (around 130) than my verbal IQ (around 100).
|
| I very strongly suspect that you're overestimating yourself.
| GoblinSlayer wrote:
| IME fast-talking people simply give half-assed formulations of
| half-assed ideas.
| segalord wrote:
| You'd still be reasoning using symbols, language is inherently
| an extension of symbols and memes. Think of a person
| representing a complex concept in their mind with a symbol and
| using it for further reasoning
| mmooss wrote:
| A concept in every human culture - i.e., created in every
| culture, not passed from one to some others - is _mentalese_ [0]:
| "A universal non-verbal system of concepts, etc., conceived of as
| an innate representational system resembling language, which is
| the medium of thought and underlies the ability to learn and use
| a language." [1]
|
| If you look up 'mentalese' you can find a bunch written about it.
| There's an in-depth article by Daniel Gregory and Peter Langland-
| Hassan, in the incredible Stanford Encyclopedia of Philosophy, on
| _Inner Speech_ (admittedly, I'm taking a leap to think they mean
| precisely the same thing). [2]
|
| [0] Steven Pinker, _The Blank Slate: The Modern Denial of Human
| Nature_ (2002)
|
| [1] Oxford English Dictionary
|
| [2] https://plato.stanford.edu/entries/inner-speech/
| andai wrote:
| When I was 13 or so, a friend asked me, "So, you speak three
| languages. Which one do you think in?" and the question left me
| speechless, because until that moment I hadn't considered that
| people think in words. It seemed a very inefficient way to go
| about things!
|
| Much later, I did begin to think mostly in words, and (perhaps
| for unrelated reasons?) my thinking became much less efficient.
|
| Also related, I experienced temporarily enhanced cognition while
| under the influence of entheogens. My thoughts, which normally
| fade within seconds, became stretched out, so that I could stack
| up to 7 layers of thought on top of each other and examine them
| simultaneously.
|
| I remember feeling greatly diminished, mentally, once that
| ability went away.
| etcd wrote:
| With the drugs, were you able to be more efficient - for
| example, code quicker - or was it more like better insights?
| Or perhaps both?
| andai wrote:
| It might be that my working memory was temporarily expanded.
| Research has found it's possible to massively increase it by
| disabling parts of the brain with electromagnets.
|
| What it seemed like subjectively though is that my thoughts
| themselves became "longer", imagine planks of wood. You can
| stack them (slightly offset, like a video timeline with
| layers), and the wider they are, the more ideas you can stack
| before it topples over.
|
| I have unfortunately been unable to replicate the experience.
| There were after-effects where my senses and cognition were
| markedly enhanced, but these faded after a few weeks.
|
| My main take-away here is "why are we trying to make machines
| smarter than humans, we should try to make humans smarter"!
| (I guess Neuralink _kinda_ does that, but it doesn 't
| actually make the human part smarter...)
| aniijbod wrote:
| Thought and language are intertwined in ways we don't fully
| grasp. The fact that certain cognitive tasks, like comprehension,
| can proceed without engaging traditional language-related brain
| regions doesn't mean thought doesn't use language--it just means
| we might not yet understand how it does. Thought could employ
| other forms of linguistic-like processes that Fedorenko's
| experiments, or even current brain-imaging techniques, fail to
| capture.
|
| There could be functional redundancies or alternative systems at
| play that we haven't identified, systems that allow thought to
| access linguistic capabilities even when the specialized language
| areas are offline or unnecessary. The question of what "language
| in thought" looks like remains open, particularly in tasks
| requiring comprehension. This underscores the need for further
| exploration into how thought operates and what role, if any,
| latent or alternative linguistic functionalities play when
| conventional language regions aren't active.
|
| In short, we may have a good understanding of language in
| isolation, but not necessarily in its broader role within the
| cognitive architecture that governs thought, comprehension, and
| meaning-making.
| dragonwriter wrote:
| > The fact that certain cognitive tasks, like comprehension,
| can proceed without engaging traditional language-related brain
| regions doesn't mean thought doesn't use language
|
| All other things being equal, it is a reason to provisionally
| reject the hypothesis that those kinds of thought use language
| as introducing entities (the ties between those kinds of
| thought and language) into the model of reality being generated
| that are not needed to explain any observed phenomenon.
| adrian_b wrote:
| Moreover, I believe that one should distinguish between
| "language" and "words".
|
| The parent article is mostly about thinking without "words",
| not necessary without a "language".
|
| Some thoughts might be completely different from sentences in a
| language, probably when they have a non-sequential nature, but
| other thoughts are exactly equivalent to a sentence in a
| language, except that they do not use the words.
|
| I can look and see two things that I recognize, e.g. A and B,
| and I can see that one is bigger than the other and I can think
| "A is bigger than B" without thinking of the words used in the
| spoken language, but nonetheless associating some internal
| concepts of "A", "B" and "is bigger than", exactly like when
| formulating a spoken sentence.
|
| I do not believe that such a thought can be considered as an
| example of thinking without language, but just as an example
| that for a subset of the words used in a spoken language there
| is an internal representation that is independent of the
| sequence of sounds or letters that compose a spoken or written
| language.
| jumping_frog wrote:
| I would like to propose that reasoning needs an intermediate
| representation for it to be effective. Consider the scene graph
| representation in computer graphics. This scene graph is the
| intermediate representation. The algorithm is not reasoning
| about the individual pixels of two objects interacting in the
| scene; it uses the IR. Now for some, that IR takes the form of
| language/words. For some it takes the form of visuals. For
| some, these are just abstract feelings.
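|
| A minimal sketch of that scene-graph idea (the names and
| numbers are purely illustrative): the reasoning query runs
| against the graph, never against pixels.
|
|     from dataclasses import dataclass, field
|
|     @dataclass
|     class Node:
|         name: str
|         offset: tuple  # position relative to the parent
|         children: list = field(default_factory=list)
|
|     def world_positions(node, origin=(0.0, 0.0)):
|         # Compose local offsets down the tree into absolute
|         # coordinates.
|         pos = (origin[0] + node.offset[0],
|                origin[1] + node.offset[1])
|         yield node.name, pos
|         for child in node.children:
|             yield from world_positions(child, pos)
|
|     scene = Node("room", (0, 0), [
|         Node("table", (3, 2), [Node("cup", (0.2, 0.9))]),
|         Node("door", (9, 0)),
|     ])
|
|     pos = dict(world_positions(scene))
|     cup, door = pos["cup"], pos["door"]
|     # "Is the cup near the door?" is answered in the IR:
|     print(((cup[0] - door[0]) ** 2
|            + (cup[1] - door[1]) ** 2) ** 0.5)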
| joelignaatius wrote:
| Is this the part where someone will attempt to have me poisoned
| so I can't hold an interior dialog anymore and take notes? And if
| I say anything this will definitely happen?
|
| You are now aware that just about every rat model described on
| pubmed is just an experiment done on someone the mafia doesn't
| like.
|
| Look up mellowsadistic on tumblr. Compare and contrast the number
| of articles about autism with hackernews and metafilter. If this
| is about someone else I don't care. I just don't want to be
| poisoned and tortured anymore.
|
| Everyone around me is hellbent on making the case that
| civilization isn't worth it because they want to play cowboys and
| indians and use the poor for medical experiments. How about not
| doing that.
|
| Language is my favorite thing. And I'm having everyone around me
| act psychotic on purpose while I'm being gaslit and drugged. This
| is in San Francisco. It's such a shit show. They're just all
| assholes. If I ever have any power or authority in any way
| whatsoever I'm just going to mail everyone in California a letter
| that says "you have the society you deserve" and a nickel. You're
| all assholes that torture people and you deserve each other.
|
| Fucking with someone's ability to read and write is a useful way
| to make it so someone can't read the novels you're fucking them
| over to. Oh let's see if we can gaslight this guy to the plot of
| this novel and so on.
|
| Don't come to San Francisco. They have _problems_.
|
| Oh and if the police don't like you they'll just turn the x-ray
| machines to high on coordination with the homeless and the
| shelter gang members. I may be dying of x-rays and I can't eat.
| _From going into a federal building to get a printout of my
| taxes_. They're fucking insane. Which means you can't get
| medical care unless you want to be murdered extra-judicially.
| This is from Latino and romani gangsters in coordination with
| sick crooked cops. I'm shitting water and my bones hurt. What am
| I supposed to do, walk across a hellscape where I'll be drugged
| on the way to a hospital where they'll (at best) give me another
| x-ray to check to see how sick I am.
|
| Their largest building is falling over because they're
| incompetent. I hate this place so fucking much. They wipe
| tryptamines on the books in the libraries. Just everything here
| wants to kill you because they're selfish ignorant pricks. You
| can't trust them with any amount of authority in any way
| whatsoever or they'll just use it to hurt people.
|
| Oh goody it's probably about this
|
| https://www.metafilter.com/205950/MIT-Researchers-Build-Sola...
|
| It's _pretty_ cool. I love how we're fucking up language
| (_kinda_) because some asswipes want to destroy the English
| language while gaslighting you. _I got you bro_. _WOW_. Yeah -
| neat. Sounds like a good way to piss off just about everyone and
| make them decide that Christianity is a shit show that propagates
| plagues and destroys culture. Or, you know, it's the French
| being pissy that Notre Dame burnt down and wanting to foist a
| cultural language institute crutch in the US like they have to
| have. This is fucking stupid.
|
| Have a good nice wonderful great day. And then be sure to say
| shit twice around me to attempt to modify my behavior or
| otherwise have me poisoned. You all suck.
|
| Oh and be sure to read Helstrom's Hive by Frank Herbert so when
| people use affected language around me because they're psychotic
| proto Christians that are pissed that no one wants to convert to
| their bullshit excuse for bad psychology and so are fucking with
| me to a science fiction book you'll be able to find out what
| chapter they're on. The stalking by ignorant assholes is just
| fucking sad. You'd think they'd work on making buildings not fall
| over or just smack themselves in the head with a brick or
| something.
|
| But hey you effectively bugged my phone with a stingray and
| followed me around the city acting stupid so I told everyone what
| book I'm reading. On that note if you get one of the free phones
| for the poor don't call poison control, the suicide hotline, or
| the FDA because they'll just route the call through a black site
| call center. Only if you create trouble and report when your food
| is poisoned of course. They suck so fucking badly.
|
| And then when I bitch about all this I get all the anarchist
| weenies that like to cross their legs and grunt at each other all
| riled up. It's a massive tire fire.
|
| Of course now that I've commented on this I'll have someone
| attempt to destroy my ability to read and write. Because everyone
| here is a psychotic nutjob. Please don't come to San Francisco.
| Anywhere else but not here. I wouldn't wish this place on the
| people that are poisoning me. Everyone here is just mentally ill.
|
| Case in point - at the shelter I'm staying at (555 Beule street)
| I'll have staff members or others find what I've written and then
| delete parts of speech so it makes it seem like "I'm a Russian
| scientist" that can't speak English. I'll then have my bags
| x-rayed on high when I go through security screenings. These
| people shouldn't have anything to do with running a shelter they
| should be in prison. And the cops participate - it's how they
| control the homeless population - they just find every way they
| can to poison them including giving them cancer via x-rays. They
| did a "sugar test" with a snip in an ambulance that may have just
| had polonium in it. They're fucked up, they can't just arrest
| drug users no they have to fucking poison them to death. I hate
| them all so fucking much.
|
| Go on then. Delete a bunch of adverbs and so forth to prove me
| right.
| joelignaatius wrote:
| Also the crow. Yes the stupid movie. Stalking with idiots
| quoting the movie or acting out parts of it. Neat. You're
| assholes. I already knew that and now I can't enjoy this movie.
| Gee whiz you're so fucking sophisticated. What a waste of
| talent. You could have been doing literally _anything_ else.
| Ugh.
|
| This would also explain making me repeatedly sick and then
| seeing if you can field test a cure on me and use that as a pay
| off not to have me murdered. Like tin tin from the movie.
| Wonderful. Am I going to be made sick and hurt to every death
| of a character in every film ever until I die?
|
| This is what they're doing rather than make buildings that
| don't fall over. _slow clap_.
|
| And the water at 555 Beule street in San Francisco has been
| making me sick so I've been drinking large amounts of milk and
| now I wonder if it has morphine in it. I'm in physical pain all
| the time. These people are just so fucking evil and shitty. If
| I had a billion dollars I'd just put it in front of the ferry
| building and burn it out of spite.
|
| They're probably just having sick homeless cough on my things
| when I'm not looking. They're a walking public health hazard.
| The best part is when idiots will cough on each other, throw a
| mask on so no one coughs on them, and then cough on someone
| else on the other side of the city. It's just so fucking sad.
| joelignaatius wrote:
| Oh and the black guy that was in Hackers was tin tin in The
| Crow. So what he "learned" was essentially bird flu. And milk
| makes you sick because birds don't drink milk. Did I mention
| there are swarms of crows flying all over the city? Also
| someone put a rusty spring in the bathroom and so now I'm
| wondering if the headaches are caused by iron filings in my
| food that respond to radiation that end up in my brain.
|
| Anyway. I'll keep writing this all out because A) based on
| this stupid fucking movie there may or may not be bird flu in
| San Francisco B) I don't like being poisoned and stalked
| across the city by idiots re enacting every type of media
| they can think of (oh look this "crow"/phylactery is the way
| you kill people good fucking job) C) it encourages people to
| stay the fuck out of San Francisco which is a dangerous hell
| hole D) no one is helping me and everyone actively is
| attempting to harm me F) it's the right fucking thing to do
| and if I can encourage as many people as I possibly can not
| to come here I will E) yes they did in fact say alphabetical
| order when they killed tin tin. Did I mention there's a stupid
| fucking Wizard of Oz play and security company?
|
| I just don't want to be sick. I'm going to write all this
| shit out so long as I'm ill and hurting.
|
| So. Bird flu. Fucking wonderful. It's probably nothing, the
| hundreds of crows on the embarcadero are just there for no
| reason. Is someone going to give me rabies? Are my kidneys
| going to fail now? When I write shit people don't like do
| they hack the free phone I have to fuck with the iron filings
| in my head?
|
| I'm sick and I hurt. 555 Beule street San Francisco.
| meiraleal wrote:
| > Language is my favorite thing. And I'm having everyone around
| me act psychotic on purpose while I'm being gaslit and drugged.
| This is in San Francisco. It's such a shit show. They're just
| all assholes.
|
| I think I have seen some of your hallucinated posts under
| another account name but the same history. That was a few
| months ago. Seems like nothing changed.
|
| I'm wondering: why do you fantasize such a big story and take
| such a long time to write it here when obviously nobody reads
| it all or cares about it, and the things you write make you
| look so bad, like you are in the middle of a psychotic break
| (for months)?
|
| In your crazy mind, am I part of the conspiracy now that I
| replied to you?
| mannyv wrote:
| Imo just like in computers, language can make certain thoughts
| easier to think.
| orwin wrote:
| I will add an anecdata, then ask a question.
|
| I could enter what we all here call the "Zone" quite often when I
| was young (once while doing math :D). I still can, but rarely on
| purpose, and rarely while coding. I have a lot of experience in
| this state, and I can clearly say that a marker of entering the
| zone is that your thoughts are no longer "limited" by language,
| plus an impression of clarity and really fast thinking. This is
| why I never thought that language was required for thinking.
|
| Now the question: would it be possible to scan the brains of
| people while they enter the zone? I know it isn't a state you can
| reach on command, but isn't it worth trying - to understand the
| mechanism of this state, and maybe understand where our thoughts
| start?
| toomuchtodo wrote:
| Also known as "Flow".
|
| https://en.wikipedia.org/wiki/Flow_(psychology)
| riiii wrote:
| Nice idea. In the zone, I don't think about the code. I am the
| code and the code is me.
|
| That is, until the code refuses to work. Then the code is a
| bitch and I need a break.
| AnimalMuppet wrote:
| Makes sense. If I am the code and the code is me, and the
| code doesn't work, then I'm done working too.
| Razengan wrote:
| Those without, do you feel "jealous" of people with a "mind's
| eye"?
|
| Or vice versa?
| HKH2 wrote:
| You seem to be conflating inner monologues and imagination.
|
| I don't use an inner monologue but my imagination is fairly
| good at creating new images.
| slashdave wrote:
| What? No. The idea that your thoughts have to be expressed in
| something as crude as language sounds very tedious and
| limiting.
| YeGoblynQueenne wrote:
| >> They're basically the first model organism for researchers
| studying the neuroscience of language. They are not a biological
| organism, but until these models came about, we just didn't have
| anything other than the human brain that does language.
|
| I think this is completely wrong-headed. It's like saying that
| until cars came about we just didn't have anything other than
| animals that could move around under its own power, therefore in
| order to understand how animals move around we should go and
| study cars. There is a great gulf of unsubstantiated assumptions
| between observing the behaviour of a technological artifact, like
| a car or a statistical language model, and thinking we can learn
| something useful from it about human or more generally animal
| faculties.
|
| I am really taken aback that this is a serious suggestion: study
| large language models as in-silico models of human linguistic
| ability. Just putting it down in writing like that rings alarm
| bells all over the place.
| upghost wrote:
| I've been trying to figure out to respond to this for a while.
| I appreciate the fact that you are pretty much the lone voice
| on this thread voicing this opinion, which I also share but
| tend to keep my mouth shut since it seems to be unpopular.
|
| It's hard for me to understand where my peers are coming from
| on the other side of this argument and respond without being
| dismissive, so I'll do my best to steelman the argument later.
|
| Machine learning models are function approximators and by
| definition do not have an internal experience distinct from the
| training data any more than the plus operator does. I agree
| with the sentiment that even putting it in writing gives more
| weight to the position than it should, bordering on absurdity.
|
| I suppose this is like the ELIZA phenomenon on steroids - that's
| the only explanation I can think of for why such notions are
| being entertained.
|
| However, to be generous, let's do some vigorous hand waving and
| say we could find a way to have an embodied learning agent
| gather sublinguistic perceptual data in an online reinforcement
| learning process, and furthermore that the (by definition) non-
| quantifiable subjective experience data could somehow be
| extracted, made into a training set, and fit to a nicely
| parametric loss function.
|
| The idea then is that you could find some architecture that
| would allow you to fit a model to the data.
|
| And voila, machine consciousness, right? A perfect model for
| sentience.
|
| Except for the fact that you would need to ignore that in the
| RL model gathering the data and the NN distilled from it, even
| with all of our vigorous hand waving, you are once again
| developing function approximators that have no subjective
| internal experience distinct from the training data.
|
| Let's take it one step further. The absolute simplest form of
| learning comes in the form of habituation and sensitization to
| stimuli. Even microbes have the ability to do this.
|
| LLMs and other static networks do not. You can attempt to
| attack this point by fiatting online reinforcement learning or
| dismissing it as unnecessary, but I should again point out that
| you would be attacking or dismissing the _bare minimum_
| requirement for _learning_, let alone a higher-order
| subjective internal experience.
|
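| For concreteness, a minimal sketch of that bare minimum (a toy,
| obviously): the response to a repeated stimulus decays, while a
| novel stimulus still gets the full response. A frozen function
| approximator, by contrast, maps the same input to the same
| output forever.
|
|     class Habituator:
|         """Decaying response to repeated stimuli."""
|         def __init__(self, decay=0.7):
|             self.decay = decay
|             self.strength = {}  # response strength per stimulus
|
|         def respond(self, stimulus):
|             s = self.strength.get(stimulus, 1.0)
|             self.strength[stimulus] = s * self.decay  # habituate
|             return s
|
|     h = Habituator()
|     print([round(h.respond("tap"), 2) for _ in range(4)])
|     # [1.0, 0.7, 0.49, 0.34] -- and a novel stimulus is fresh:
|     print(h.respond("light"))  # 1.0
|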
| So then the argument, proceeding from false premises, would
| claim that the compressed experience in the NN _could_ contain
| mechanical equivalents of higher order internal subjective
| experiences.
|
| So even with all the mighty vigorous hand waving we have
| allowed, you have _at best_ found a way to convert internal
| subjective processes to external mechanical processes fit to a
| dataset.
|
| The argument would then follow, well, what's the difference?
| And I could point back to the microbe, but if the argument
| hasn't connected by this point, we will be chasing our tails
| forever.
|
| A good book on the topic that examines this in much greater
| depth is "The Self Assembling Brain".
|
| https://a.co/d/1FwYxaJ
|
| That being said, I am hella jealous of the VC money that the
| grifters will get for advancing the other side of this
| argument.
|
| For enough money I'd probably change my tune too. I can't buy a
| loaf of bread with a good argument lol
| cognitif wrote:
| What does consciousness or subjective experience have to do
| with the relationship between language and cognition? I'm not
| following your argument.
| gibsonf1 wrote:
| The key to human intelligence is concepts. We just use whatever
| language we choose to symbolize the concepts.
| shsbdncudx wrote:
| When we eventually nail AGI, I think we will look at LLMs as
| nothing more than the interface to the AI - how we interact with
| it - but we won't consider them to be the AI itself.
| Tagbert wrote:
| I've been hearing/reading about people who don't have an inner
| monologue. Their experience of cognition is not verbally-based.
|
| https://www.cbc.ca/news/canada/saskatchewan/inner-monologue-...
| slashdave wrote:
| Honestly, for some of us the idea that all your thoughts have
| to filter through language sounds very tedious.
|
| I want to remind everyone that your experiences are unique and
| do not necessarily translate to all other people.
| crooked-v wrote:
| As one of those people most of the time (communicating with
| other people is the main exception), I still find it astounding
| that it's hard for some people to understand.
|
| Take riding a bike: I presume even people with an overactive
| inner monologue aren't constantly planning their actions
| (brakes, steering, turns) in words. Then just extend that out
| to most other stuff.
| chongli wrote:
| What about when reading and writing? My inner monologue
| internally voices the words as I'm reading and writing. Do
| you not do that?
| ryandv wrote:
| It's worth noting the precise and narrow sense in which the term
| "language" is used throughout these studies: it is those
| particular "word sequences" that activate particular regions in
| the brain's left hemisphere, to the exclusion of other forms of
| symbolic representation such as mathematical notation. Indeed, in
| two of the studies cited, [0] [1] subjects with language deficits
| or brain lesions in areas associated with the "language network"
| are asked to perform on various mathematical tasks involving
| algebraic expressions [0] or Arabic numerals [1]:
|
| > DA was impaired in solving simple addition, subtraction,
| division or multiplication problems, but could correctly simplify
| abstract expressions such as (bxa)/(axb) or (a+b)+(b+a) and make
| correct judgements whether abstract algebraic equations like b -
| a = a - b or (d/c)+a=(d+a)/(c+a) were true or false.
|
| > Sensitivity to the structural properties of numerical
| expressions was also evaluated with bracket problems, some
| requiring the computation of a set of expressions with embedded
| brackets: for example, 90 - [(3 + 17) x 3].
|
| Discussions of whether or not these sorts of algebraic or
| numerical expressions constitute a "language of mathematics"
| aside (despite them not engaging the same brain regions and
| structures associated with the word "language"); it may be the
| case that these sorts of word sequences and symbols processed by
| structures in the brain's left hemisphere are not _essential_ for
| thought, but can still serve as a useful psychotechnology or
| "bicycle of the mind" to accelerate and leverage its innate
| capabilities. In a similar fashion to how this sort of
| mathematical notation allows for more concise and precise
| expression of mathematical objects (contrast "the number that is
| thrice of three and seventeen less of ninety") and serves to
| amplify our mathematical capacities, language can perhaps be seen
| as a force multiplier; I have doubts whether those suffering from
| aphasia or an agrammatic condition would be able to rise to the
| heights of cognitive performance.
|
| [0] https://pubmed.ncbi.nlm.nih.gov/17306848/
|
| [1] https://pubmed.ncbi.nlm.nih.gov/15713804/
| upghost wrote:
| Well this comment is about the article not LLMs so I doubt it
| will have much in the way of legs, but this work has already been
| covered extensively and to a fascinating depth by Jaak Panksepp
| [1].
|
| His work explores the neuropsychology of emotions WAIT DON'T GO
| they are actually the _substrate of consciousness_, NOT the
| other way around.
|
| We have 7 primary affective processes (measurable hardware level
| emotions) and they are not what you think[2]. They are considered
| primary because they are _sublinguistic_. For instance,
| witnessing the color red is a primary experience: you cannot
| explain in words the color red to _someone who has not ever seen
| it before_.
|
| His work is a really fascinating read if you ever want to take a
| break from puters for a minute and learn how people work.
|
| PS the reason this sort of research isn't more widely known is
| because the behaviorist school was so incredibly dominant since
| the 1970s they made it completely taboo to discuss subjective
| experience in the realm of scientific discourse. In fact the
| emotions we are usually taught are not based on emotional states
| but on muscle contractions in the face! Not being allowed to talk
| about emotions in psychological studies or the inner process of
| the mind is kinda crazy when you think about it. So only recently
| with neuroimaging has it suddenly become ok to acknowledge that
| things happen in the brain independent of externally observable
| behavior.
|
| [1] https://a.co/d/6EYULdP
|
| [2] - seeking - fear - anxiety and grief - rage - lust - play!!!
| - caring
|
| [3] if this sounds familiar at all it's because Jordan Peterson
| cites Jaak Panksepp all the time. Well 50% of the time, the other
| 50% is CG Jung and the final 50% is the book of Exodus for some
| reason.
| sebmellen wrote:
| Fascinating comment and I'm glad I caught it! Thank you!
| ziofill wrote:
| I'm wondering about the "non-verbal language" that scientists use
| to communicate with people affected by aphasia. What makes a
| brain with aphasia understand it? Do brains have dedicated
| circuitry to process words? (as opposed to, say, sounds which are
| a more general concept)
| necovek wrote:
| While getting confirmation of this relationship (or lack of it)
| is exciting, none of this is surprising: language is a tool we
| "developed" further through our cognitive processes, but
| ultimately other primates use language as well.
|
| The one thing I wonder is if it's mostly "code duplication": iow,
| would we be able to develop language by using a different region
| of the brain, or do we actually do cognitive processes in the
| language part too?
|
| In other words, is this simply deciding to send language
| processing to the GPU even if we could do it with the CPU (to
| illustrate my point)?
|
| How would one even devise an experiment to prove or disprove
| this?
|
| To me it seems obvious that our language generation and
| processing regions really involve cognition as well, as languages
| are very much rule based (even if they came up in reverse: first
| language, then rules). Could we get both regions to light up in
| brain imaging when we hit tricky words that we aren't sure how
| to spell or how to adapt to context, like declensions of foreign
| words?
|
| > But you can build these models that are trained on only
| particular kinds of linguistic input or are trained on speech
| inputs as opposed to textual inputs.
|
| As someone from this side of the "fence" (mathematics and CS,
| though currently only a practicing software engineer), I don't
| think LLMs provide an opportunity that is in any way comparable
| to human minds.
|
| Comparing performance of small kids developing their language
| skills (I've only had two, but one is enough to prove by
| contradiction) to LLMs (in particular for Serbian), LLMs like
| ChatGPT had a much broader vocabulary, but kids were much better
| at figuring out complex language rules with a very limited number
| of inputs (noticed by them making mistakes on exceptions by
| following a "rule" at 2 years of age or younger).
|
| The amount of training input GenAI needs is multiple orders of
| magnitude larger compared to young kids.
|
| Though it's not a fair comparison: kids learn language by
| listening, imitation, watching, smelling, hearing and in context
| (you'll talk about bread at breakfast).
|
| So let's be careful in considering LLMs a model of a human
| language process.
| keepamovin wrote:
| When I was in junior high, I remember a friend saying to me "you
| can't think in images, you think in words." She insisted, and
| couldn't believe that I actually thought in images a lot of the
| time. She was pretty smart and creative.
|
| But I thought in images and I still do in part. So I don't think
| you need words to think.
|
| I thought the people who did were overly computerized, maybe
| thinking in an over-defined way.
| Peteragain wrote:
| I know a little about this area and there is certainly a movement
| (glacial) away from thinking that thinking uses symbols,
| distributed or not. The argument cannot be made in a popular
| science article and so such articles inevitably fall back on
| popular ideas of what thinking is. The alternatives: the embodied
| nature of reasoning is one direction and many talk of an
| "enacivist" approach. There are certainly some kinds of thinking
| that require symbols, but a surprisingly large and diverse range
| of intelligent behaviour can be done by just wiring stuff up.
| Interestingly, a significant amount seems amenable to a mechanism
| based on "glorified auto-complete" (cf Hinton) and I have written
| something on the sociological variant - something readable I hope
| - arxiv.org/abs/2402.08403
| adrian_b wrote:
| I can think without language about all the things that I have
| experienced directly through some of my senses, but there is a
| huge number of things that I have never experienced directly and
| about which I can think only using language.
|
| I doubt that this is different for other people. I believe that
| those people who claim that they never think using language are
| never thinking about the abstract or remote things about which I
| think using language.
|
| For instance, I can think about a CPU model without naming it,
| if it has been included in some of the many computers that I have
| used during the years, by recalling an image of the computer, or
| of its motherboard, or of the CPU package, or recalling some
| experiences when running programs on that computer, how slow or
| how responsive that felt, and so on.
|
| I cannot think about a CPU that I have never used, e.g. Intel
| 11900K, without naming it.
|
| Similarly, I can think without language about the planet Jupiter,
| which I have seen directly many times, or even about the planet
| Neptune, which I have never seen with my eyes, but I have seen in
| photographs, but I cannot think otherwise than with words about
| some celestial bodies that I have never seen.
|
| The same for verbs, some verbs name actions about which I can
| think by recalling images or sounds or smells or tactile feelings
| that correspond with typical results of those actions. Other
| verbs are too abstract, so I can think about the corresponding
| action only using the word that names it.
|
| For some abstract concepts, one could imagine a sequence of
| images, sounds etc. that would suggest them, but that would be
| like a pantomime puzzle and it would be a too slow way of
| thinking.
|
| I can look at a wood plank thrown over a precipice and I can
| conclude that it may be safe to walk on it without language, but
| if I were to design a bridge guaranteed to resist to the weight
| of some trucks passing on it, I could not do that design without
| thinking with language.
|
| Therefore I believe that language is absolutely essential for
| complex abstract thinking, even if there are alternative ways of
| thinking that may be sufficient even most of the time for some
| people.
| crooked-v wrote:
| > but there is a huge number of things that I have never
| experienced directly and about which I can think only using
| language.
|
| This makes me think of the Tao Te Ching, which opens with
| (translation dependent, of course):
|
|     The Tao that can be spoken is not the eternal Tao
|     The name that can be named is not the eternal name
| bmacho wrote:
| There is a _non-verbal me_ - e.g. the one who moves my limbs,
| feels the feelings (hunger, happiness, ...), and sometimes helps
| my _verbal me_ to think (in math or in chess the answer just
| appears for the verbal me); in sudden situations it takes over
| and makes decisions very fast.
|
| Since it controls my limbs, I consider it to be the _real me_. My
| inner monologue is a bit frustrated that it can't control my
| limbs, and it can't really communicate with whoever controls my
| limbs.
|
| Then there is my inner monologue, which does my thinking almost
| always, in an auditory way: imagine _the sound of_ spoken words
| in an ~5 sec long duration, and let the answer appear. I consider
| it as an auditory deducing thingy, and also an intelligence on
| its own.
|
| I am mostly fine with this, tho I am curious about my non-verbal
| me, and I wish I knew more about it.
| ryandv wrote:
| Julian Jaynes has written on this verbal/non-verbal dichotomy
| in _The Origin of Consciousness in the Breakdown of the
| Bicameral Mind_, in which he _literally defines god_ to mean
| those phenomena related to right hemispheric structures and
| activities in the brain that are communicated over the anterior
| commissures and interpreted by left hemispheric language
| centers as speech; hence the many mystical reports of "hearing
| the voice of god" as passed down through the aeons. Such
| phenomena have gone by many other names: gods, the genius, the
| higher self, the HGA... though this metaphysical and spiritual
| terminology is best understood as referring to non-verbal, non-
| rational, non-linear forms of cognition that are closer to free
| association and intuitive pattern matching (similar to
| Kahneman's "System 1" thinking). There even exist certain
| mystical traditions which purport to facilitate deeper
| connections with this subsystem of the mind; see for instance
| Eshelman's accounting of the western esoteric tradition in _The
| Mystical and Magical System of the A.'.A.'._ at [0] (currently
| defunct pending the restoration of the Internet Archive).
|
| [0] https://archive.org/details/a-a-the-mystical-and-magical-
| sys...
| zmmmmm wrote:
| Knowing someone with a brain injury, something that is hugely
| apparent is how much we take for granted "sequencing" - that is,
| the ability for the brain to hold a sequence of events, ideas or
| actions in a coherent order over a period of time. It's much more
| fragile than you would think. People with specific brain injuries
| suddenly can't work out whether to put their shoes on before
| their socks etc.
|
| Why I mention this is that I see both language and reasoning as
| rooted in this more fundamental cognitive ability of "coherent
| sequencing". This sits behind all kinds of planning and puzzling
| tasks where you have to project forward a sequence of theoretical
| actions and abstractly evaluate the outcome.
|
| Which is all to say, I don't think language and reasoning are
| _the same_, but I do think it is likely they stem from the same
| underlying fundamental mechanisms in our brain. And as a
| consequence, it's actually quite plausible that LLMs can
| reconstruct mechanisms of reasoning from language, in a
| regressive model kind of fashion. ie: just because there are
| other ways to reason doesn't exclude language as a path to it.
| mstipetic wrote:
| Man, brain is so weird. The weirdest brain injury symptom I
| can't wrap my head around is when people lose the ability to
| understand the number 0. Like everything else works but this is
| beyond their understanding. Like what's so special about this
| number?
| ajb wrote:
| One interesting corollary of this is the need to rethink the
| underpinnings of therapy. Eg, CBT is based around verbal thoughts
| and replacing bad ones with good ones. I've had CBT practitioners
| insist to me that thoughts _always_ include words. But once you
| recognise that there are kinds of thinking, both processing and
| "mental actions" , not linked to words, it's not so easy. How do
| you identify and replace a maladaptive mental process, if it's
| not linked to a verbalisation? If it is, does replacing the
| verbalisation really do anything?
|
| This I think is why so much popular psychology is so vacuous -
| the slogans are merely things that triggered some people to
| figure out how to improve their mental actions, but there's no
| strong linkage between the two.
| SecuredMarvin wrote:
| Thanks, dang.
|
| I think that using an LLM as the referred-to telepathy device
| for a Wolfram-Alpha/Mathematica-like general reasoning module is
| the way to AGI. The reasoning modules we have today are still
| much too narrow because their very broad and deep search trees
| explode in complexity. There is the need for a kind of
| pathfinder, which could come from the common knowledge already
| encoded in LLMs, like in o1: a system playing with real factual
| reasoning but exploring in directions coming from world
| knowledge.
|
| What is still missing is the dialectic between possible and
| right, a physics engine, the motivation of analysed agents, the
| effects of emergent behavior and a lot of other -isms. But they
| may be encoded in the reasoning-explorer. And of course loops,
| more loops, refinement, working hypotheses and escaping cul-de-
| sacs.
|
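| A minimal sketch of that pathfinder idea (a toy domain, and
| llm_prior is a stub standing in for a real model's judgement):
| an exact engine enumerates the legal steps, and a learned prior
| merely orders the frontier so the tree doesn't explode.
|
|     import heapq
|
|     def successors(state):
|         # Toy reasoning domain: build a target number from 1
|         # using +1 and *2 steps.
|         yield state + 1
|         yield state * 2
|
|     def llm_prior(state, goal):
|         # Stand-in for an LLM's "how promising is this branch"
|         # score; here just a cheap heuristic.
|         return abs(goal - state)
|
|     def guided_search(start, goal, budget=10000):
|         frontier = [(llm_prior(start, goal), start, [start])]
|         seen = set()
|         while frontier and budget > 0:
|             budget -= 1
|             _, state, path = heapq.heappop(frontier)
|             if state == goal:
|                 return path
|             if state in seen or state > 2 * goal:
|                 continue
|             seen.add(state)
|             for nxt in successors(state):
|                 heapq.heappush(frontier, (llm_prior(nxt, goal),
|                                           nxt, path + [nxt]))
|         return None
|
|     print(guided_search(1, 37))  # e.g. [1, 2, 4, 8, 16, 32,
|                                  #       33, 34, 35, 36, 37]
|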
| There are people with great language skills and next to no
| reasoning skills. Some of them have general knowledge. If you
| have ever talked to one, for at least an hour of freely
| meandering topics, you will know. They seem intelligent for a
| couple of minutes, but after a while you realise that they can
| recite facts, even interpret metaphors, but they will not find
| an elegant one, navigate abstraction levels, or even
| differentiate root cause from effect, or motivation and culture
| from cold logic. Some of them even ace IQ tests or can program,
| but none did math so far. They hate, fear or despise rational
| results violating their learned rules. Sorry - chances are, if
| you hate reading this, maybe you are one (or my English is
| annoyingly bad).
|
| I love talking to people outside my bubble. They have an
| incredible broad diversity in abilities and experiences.
| ilaksh wrote:
| One thing that always seemed important to these discussions is
| that the serial structure of language is probably not an
| optimization but just due to the reality that we can only handle
| uttering or hearing one sound at a time.
|
| In my mind there should be some kind of parallel/hierarchical
| model that comes after language layers and then optionally can be
| converted back to a series of tokens. The middle layers are
| trained on world models such as from videos, intermediary layers
| on mapping, and other layers on text, including quite a lot of
| transcripts etc. to make sure the middle layers fully ground the
| outer layers.
|
| I don't really understand transformers and diffusion transformers
| etc., but I am optimistic that as we increase the compute and
| memory capacity over the next few years it will allow more video
| data to be integrated with language data. That will result in
| fully grounded multimodal models that are even more robust and
| more general purpose.
|
| I keep waiting to hear about some kind of manufacturing/design
| breakthroughs with memristors or some kind of memory-centric
| computing that gives another 100X boost in model sizes and/or
| efficiency. Because it does seem that the major functionality
| gains have been unlocked through scaling hardware which allowed
| the development of models that took advantage of the new scale.
| For me large multimodal video datasets with transcripts and more
| efficient hardware to compress and host them are going to make AI
| more robust.
|
| I do wish I understood transformers better though because it
| seems like somehow they are more general-purpose. Is there
| something about them that is not dependent on the serialization
| or tokenization that can be extracted to make other types of
| models more general? Maybe they are tokens that have scalars
| attached which are still fully contextualized but are computed as
| many parallel groups for each step.
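|
| One concrete property is worth noting (shown in a minimal numpy
| sketch below: single head, no learned projections): self-
| attention by itself is order-agnostic. Permute the input tokens
| and the output rows permute right along with them, so the
| serial order only enters through the positional encodings that
| get added on top.
|
|     import numpy as np
|
|     def softmax(x):
|         e = np.exp(x - x.max(axis=-1, keepdims=True))
|         return e / e.sum(axis=-1, keepdims=True)
|
|     def self_attention(X):
|         # No positional encoding, identity Q/K/V projections.
|         scores = X @ X.T / np.sqrt(X.shape[1])
|         return softmax(scores) @ X
|
|     rng = np.random.default_rng(0)
|     X = rng.normal(size=(5, 8))  # 5 tokens, 8-dim embeddings
|     perm = rng.permutation(5)
|
|     print(np.allclose(self_attention(X)[perm],
|                       self_attention(X[perm])))  # True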
| smallerfish wrote:
| A little late to the thread, but this is obvious if you've done
| any reasonably serious mindfulness practice. When you are
| meditating, you can get to the point where the internal monolog
| (the yabbering of the "crazy monkey mind") is completely
| silenced. "You" are still present, and can direct your attention,
| and can observe all of the perceptions with full comprehension,
| without the verbal layer interpreting for you.
| zeroxfe wrote:
| Came here to say something similar. You also notice that before
| any verbal thoughts arise, there are "primordial" thoughts,
| which are "felt" (sometimes as emotions.) These can instigate
| huge chains of verbal, visual, auditory thought, in turn
| generating more emotions, causing a (occasionally vicious)
| feedback loop.
| ryandv wrote:
| This sounds similar to a fairly early realization in the
| practice of meditation. Daniel M Ingram refers to it as
| "Cause and Effect" in _Mastering the Core Teachings of the
| Buddha:_ [0]
|
| > In the stage of Cause and Effect, the relationships between
| mental and physical phenomena become very clear and sometimes
| ratchet-like. There is a cause, such as intention, and then
| an effect, such as movement. There is a cause, such as a
| sensation, and there is an effect, namely a mental
| impression.
|
| Trying to increase the frequency at which you oscillate
| between physical sensations and mental sensations is a
| fascinating exercise.
|
| [0] https://www.mctb.org/mctb2/table-of-contents/part-iv-
| insight...
| jumping_frog wrote:
| This "feeling" of full comprehension can be an illusion.
| Similar to how we think we are taking in 140 degrees of full
| visual information through our eyes. In truth, we can only take
| in accurate information from an area about the size of our thumb
| at arm's length - hence the constant saccades our eyes make.
| talkingtab wrote:
| A _very_ long time ago I took a programming aptitude test,
| supposedly from IBM. The test was essentially detecting pattern
| anomalies. Two straight lines, one crooked. Pick the crooked. The
| patterns became increasingly complex. I remember a little
| voice in my head verbalizing "two straight, one crooked". But at
| some point the voice stopped but I was sure which item broke the
| pattern.
|
| My takeaway is that language is secondary to thinking - aka
| intuitive pattern detection. Language is the Watson to Sherlock.
|
| The corollary is that treating language as primary in decision
| making is (sometimes) not as effective as treating it as
| secondary. At this point in my life (I'm old) I seem to have
| spent much of my life attempting to understand why my pattern
| matching/intuition made a choice that turned out to be so
| superior to my verbal language process.
| swayvil wrote:
| Does the act of assigning meaning to any thing count as language?
|
| What if the things are part of a set, chosen for uniqueness and
| distinguishability. Meanings determined by tradition?
|
| There's a lot of territory between the two.
| alok-g wrote:
| How specifically do you define 'meaning' and (the domain of)
| 'any thing'? Please consider if your definition of language would
| lead to an inference that most animals have language abilities.
| kensai wrote:
| Wasn't this known at least empirically for centuries? I mean
| obviously persons and animals without language capabilities
| (uneducated, deaf, mute) manage some cognitive processes that
| underlie thought. They might not be the brightest, but it's
| there.
|
| I guess this was the experiment that proved the point.
| orobus wrote:
| I'm not a neuroscience expert, but I do have a degree in
| philosophy. The Russell quote immediately struck me as misleading
| (especially without a citation). The author could show more
| integrity by including Russell's full quote:
|
| > Language serves not only to express thoughts, but to make
| possible thoughts which could not exist without it. It is
| sometimes maintained that there can be no thought without
| language, but to this view I cannot assent: I hold that there can
| be thought, and even true and false belief, without language. But
| however that may be, it cannot be denied that all fairly
| elaborate thoughts require words.
|
| > Human Knowledge: Its Scope and Limits by Bertrand Russell,
| Part II: Language, Chapter I: The Uses of Language, p. 60.
| Simon and Schuster, New York.
|
| Of course, that would contravene the popular narrative that
| philosophers are pompous idiots incapable of subtlety.
| usgroup wrote:
| I think it's a nicely summarised challenge to boot.
|
| It's doubtless to me that thinking happens without intermediary
| symbols; but it's also obvious that I can't think deeply
| without the waypoints and context symbols provide. I think it
| is a common sense opinion.
| Izkata wrote:
| "Language" is a subset of "symbols". I agree with what you
| said, but it's not representative of the quote in GP.
|
| Just a few days ago was "What do you visualize while
| programming?", and there's a few of us in the comments that,
| when programming, think symbolically _without_ language:
| https://news.ycombinator.com/item?id=41869237
| photochemsyn wrote:
| Is Russell aligned with Ludwig Wittgenstein's statement, "The
| limits of my language mean the limits of my world."? Is he
| talking about how to communicate his world to others, or is he
| saying that without language internal reasoning is impossible?
|
| Practically, I think the origins of fire-making abilities in
| humans tend to undermine that viewpoint. No other species is
| capable of reliably starting a fire with a few simple tools,
| yet the earliest archaeological evidence for fire (1 mya) could
| mean the ability predated complex linguistic capabilities.
| Observation and imitation could be enough for transmitting the
| skill from the first proto-human who successfully accomplished
| the task to others.
|
| P.S. This is also why _Homo sapiens_ should be renamed _Homo
| ignis_ IMO.
| karaterobot wrote:
| > British philosopher and mathematician Bertrand Russell answered
| the question with a flat yes, asserting that language's very
| purpose is "to make possible thoughts which could not exist
| without it." But even a cursory glance around the natural world
| suggests why Russell may be wrong.
|
| I don't know why Russell is catching strays. Saying language
| exists to make possible thoughts which could not exist without it
| does not in any way imply that you can't think without language.
| Geee wrote:
| The important question is: what is considered a language?
|
| > You can ask whether people who have these severe language
| impairments can perform tasks that require thinking. You can ask
| them to solve some math problems or to perform a social reasoning
| test, and all of the instructions, of course, have to be
| nonverbal because they can't understand linguistic information
| anymore.
|
| Surely these "non-verbal instructions" are some kind of language.
| Maybe all human action can be considered language.
|
| A contrarian example to this research might be feral children,
| i.e. people who have been raised away from humans.[0] In most
| cases they are mentally impaired, as in not having human-like
| intelligence. I don't think there is a good explanation why this
| happens to humans. And why it doesn't happen to other animals,
| which develop normally in species-typical way whether they are in
| the wild or in human captivity. It seems that most human behavior
| (even high-level intelligence) is learned / copied from other
| humans, and maybe this copied behavior can be considered
| language.
|
| If humans are "copy machines", there's also a risk of completely
| losing the "what's it like to be a human" behavior if children of
| the future are raised by AI and algorithmic feeds.
|
| [0] https://en.wikipedia.org/wiki/Feral_child
| wnmurphy wrote:
| I think the argument is in whether "thought" only applies to
| conscious articulation or whether non-linguistic, non-symbolic
| processes also qualify.
|
| We only consciously "know" something when we represent it with
| symbols. There are also unconscious processes that some would
| consider "thought", like driving a car safely without thinking
| about what you're doing, but I wouldn't consider those thoughts.
|
| I find an interesting parallel to Chain of Thought techniques
| with LLMs. I personally don't (consciously) know what I think
| until I articulate it.
|
| To me this is similar to giving an LLM space to print out
| intermediary thoughts, like a JSON array of strings. Language is
| our programming language, in a sense. Without representing
| something in a word/concept, it doesn't exist.
|
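| A sketch of that "space to think" idea (the prompt wording and
| the call_model() helper are placeholders, not any particular
| vendor's API):
|
|     import json
|
|     PROMPT = (
|         "Answer the question. Respond with a JSON object of "
|         'the form {"thoughts": ["..."], "answer": "..."} '
|         "and nothing else.\n\nQuestion: "
|     )
|
|     def call_model(prompt: str) -> str:
|         raise NotImplementedError  # your LLM client goes here
|
|     def answer(question: str):
|         parsed = json.loads(call_model(PROMPT + question))
|         # The "thoughts" array is the space to think; the model
|         # conditions on its own intermediate tokens before the
|         # final answer.
|         return parsed["thoughts"], parsed["answer"]
|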
| "Ich vermute, dass wir nur sehen, was wir kennen." - Nietzsche,
| where "know" refers to labeling something by projecting a
| concept/word onto it.
| stevebrown wrote:
| Language plays a role similar to that of paper and pen in solving
| certain math problems. As a tool, it aids deeper thinking. It
| serves two key functions: facilitating communication and
| enhancing thought processes. This is why "chain of thought"-type
| intermediate language prompts improve reasoning in OpenAI's o1
| model.
___________________________________________________________________
(page generated 2024-10-20 23:01 UTC)