[HN Gopher] The Norvig - Chomsky debate (2017)
___________________________________________________________________
The Norvig - Chomsky debate (2017)
Author : rrampage
Score : 69 points
Date : 2023-02-19 13:09 UTC (9 hours ago)
(HTM) web link (web.cse.ohio-state.edu)
(TXT) w3m dump (web.cse.ohio-state.edu)
| dang wrote:
| Related. Others?
|
| _Debunking Statistical AI - Noam Chomsky, Gary Marcus, Jeremy
| Kahn [video]_ - https://news.ycombinator.com/item?id=33857543 -
| Dec 2022 (19 comments)
|
| _Noam Chomsky: Where Artificial Intelligence Went Wrong (2012)_
| - https://news.ycombinator.com/item?id=30937760 - April 2022 (2
| comments)
|
| _On Chomsky and the Two Cultures of Statistical Learning (2011)_
| - https://news.ycombinator.com/item?id=16489828 - March 2018 (12
| comments)
|
| _On Chomsky and the Two Cultures of Statistical Learning (2011)_
| - https://news.ycombinator.com/item?id=11951444 - June 2016 (102
| comments)
|
| _Norvig vs. Chomsky and the Fight for the Future of AI_ -
| https://news.ycombinator.com/item?id=5318292 - March 2013 (2
| comments)
|
| _Noam Chomsky on Where Artificial Intelligence Went Wrong_ -
| https://news.ycombinator.com/item?id=4729068 - Nov 2012 (177
| comments)
|
| _Norvig vs. Chomsky and the Fight for the Future of AI_ -
| https://news.ycombinator.com/item?id=4290604 - July 2012 (147
| comments)
|
| _Norvig vs. Chomsky and the Fight for the Future of AI_ -
| https://news.ycombinator.com/item?id=2710733 - June 2011 (4
| comments)
|
| _On Chomsky and the Two Cultures of Statistical Learning_ -
| https://news.ycombinator.com/item?id=2591154 - May 2011 (107
| comments)
| hackandthink wrote:
| The debate is mostly about:
|
| Are opaque probabilistic models scientific?
|
| David Mumford's stance:
|
| "This paper is a meant to be a polemic which argues for a very
| fundamental point: that stochastic models and statistical
| reasoning are more relevant i) to the world, ii) to science and
| many parts of mathematics and iii) particularly to understanding
| the computations in our own minds, than exact models and logical
| reasoning"
|
| https://www.dam.brown.edu/people/mumford/beyond/papers/2000b...
| jsenn wrote:
| Interesting paper, thanks for linking. I think this is slightly
| tangential to the Norvig-Chomsky controversy though. What
| Mumford is saying is that probability and statistics are a more
| useful basis for modelling natural phenomena _than classical
| logic_. I don't think Chomsky would disagree with this! What
| he disagrees with is the idea (also raised by Mumford) that
| merely reproducing surface-level aspects of a given natural
| phenomenon (like human language) with no insight or
| understanding is sufficient for a scientific theory. It isn't
| sufficient: one also has to show that the theory cannot produce
| phenomena that _are not_ natural, and give some insight into
| what's going on. In fact, it's not even necessary: Galileo
| advanced physics by imagining a frictionless plane. This
| doesn't and can't exist in the real world, but it helps us
| understand what _does_ happen in the real world.
|
| As an example, Mumford talks about how particle filters work
| much better than any "classical AI" technique for certain
| tracking tasks. This is true, and particle filters are still
| important in engineering for this reason, but it's not a theory
| of how humans accomplish this for the simple reason that
| particle filters can just as happily do other things that
| humans don't/can't.
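|
| (If you haven't seen one, a bootstrap particle filter is
| only a few lines. Here's my own minimal 1-D sketch in
| Python, not Mumford's example: a random-walk motion model
| and Gaussian observation noise, with all names and numbers
| made up for illustration.)
|
| import math
| import random
|
| # Minimal bootstrap particle filter for 1-D tracking:
| # propagate, weight by likelihood, resample.
| N = 1000                        # number of particles
| MOTION_STD, OBS_STD = 0.5, 1.0
|
| def step(particles, z):
|     # 1. propagate through the stochastic motion model
|     particles = [p + random.gauss(0, MOTION_STD)
|                  for p in particles]
|     # 2. weight by the observation likelihood p(z | x)
|     w = [math.exp(-0.5 * ((z - p) / OBS_STD) ** 2)
|          for p in particles]
|     total = sum(w)
|     w = [wi / total for wi in w]
|     # 3. resample with replacement, proportional to weight
|     return random.choices(particles, weights=w, k=N)
|
| particles = [random.uniform(-10, 10) for _ in range(N)]
| for z in [0.2, 0.9, 1.7, 2.4, 3.1]:   # fake noisy track
|     particles = step(particles, z)
|     print(sum(particles) / N)          # posterior mean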
| foobarqux wrote:
| > The debate is mostly about: Are opaque probabilistic models
| scientific?
|
| No, the debate is about whether probabilistic models are
| scientific _when applied to the human language faculty_ (they
| aren't).
|
| Probabilistic models are scientific when they tell you
| something about the natural world. In some cases they do and in
| others they don't.
| hackandthink wrote:
| Yes, the scope of the Norvig-Chomsky debate is the human
| language faculty.
|
| But I think Mumford's more general discussion is relevant.
| And Mumford explicitly refers to speech:
|
| "This approach denies that statistical inference can have
| anything to do with real thought ...
|
| The new applications of Bayesian statistics to vision,
| speech, expert systems and neural nets have now started an
| explosive growth in these ideas."
| foobarqux wrote:
| It isn't relevant. There isn't any evidence that the human
| language faculty is a statistical process at its core; in
| fact, there is evidence that it isn't. Vision, etc. are
| different processes, but in any case you have to show
| evidence of biology using a statistical process, not simply
| postulate it.
|
| There is lots of talk like "these statistical models could
| yield information about how the mind or some other system
| works", but they never do. In fact, no one even tries. People
| don't really care about doing science, but because science is
| high-prestige they want to make sure to be classified that
| way.
| candiodari wrote:
| What's wrong with "humans/biologists have no evidence of how
| human language works. AI practitioners have one
| important piece of evidence: they can demonstrate
| somewhat-human-like language processing using statistical
| techniques, therefore they have the best available
| evidence"? Nobody else can demonstrate it.
|
| But of course, this is simply the non-religious version
| of "humans have a soul, and machines, by definition
| don't". If necessary people drag quantum physics into
| that argument ...
|
| The truly forbidden argument is that we don't have any
| definition of a soul, and in fact plenty of evidence
| humans don't have a soul, such as large "soul"/character
| changes occurring with physical damage to the
| neocortex.
|
| This also means the discussion is moot: people are now
| using LLMs to pass the Turing test on a large scale for
| all sorts of purposes. From scamming people to management
| (let's assume there's a difference there). From
| programming to teaching ugly bags of mostly water new
| languages. The point where people discover their
| wife/kids/... have more of a relationship with an LLM
| speaking for them than with them personally is now just a
| matter of time.
| foobarqux wrote:
| AI practitioners have no evidence because they can't
| relate the thing they produced to how humans work. And
| it's false that scientists have no evidence: they don't
| have much, but some of the little evidence they do have
| demonstrates that humans don't use statistical processes
| as the core of human language.
| candiodari wrote:
| ChatGPT, someone gave me this answer. It's about you and
| the algorithm you run:
|
| "AI practitioners have no evidence because they can't
| relate the thing they produced to how humans work. And
| it's false that scientists have no evidence: they don't
| have much, but some of the little evidence they do have
| demonstrates that humans don't use statistical processes
| as the core of human language."
|
| Do you think he's right?
|
| > While it is true that the workings of artificial
| intelligence algorithms and the workings of the human
| brain are not identical, this does not necessarily mean
| that AI practitioners have no evidence to support their
| work. In fact, there is a large body of research that has
| been done on the effectiveness and limitations of
| different machine learning algorithms, as well as on
| their applications in various domains.
|
| > Additionally, while it is true that human language
| processing is not solely based on statistical processes,
| it is also not accurate to say that humans do not use
| statistical processes as part of their language
| processing. There is evidence to suggest that humans do
| rely on statistical learning to some extent in order to
| acquire and use language.
|
| > Overall, it is important to approach these topics with
| nuance and recognize that the relationship between AI and
| human cognition is complex and multifaceted.
|
| This blows anything biological researchers can do to
| reproduce human behavior out of the water by a margin
| from here to Proxima Centauri and back. Therefore I'll
| believe the model behind this is a _far_ closer
| approximation to human behavior than anything ever to
| come out of any other field of research not using humans
| themselves. Hell, I would comfortably declare this
| algorithm (far) more intelligent than our closest living
| relatives, primates.
| foobarqux wrote:
| Science isn't trying to best mimic human output; it is
| trying to understand how it works.
| 0x486566e7279 wrote:
| And you would probably be wrong, as we have underestimated
| the intelligence of animals.
|
| You are also allowed to believe what you want, but that's
| not science.
| mistermann wrote:
| Neither is probability calculated using heuristics, not
| to be too serious or anything and ruin the fun.
| fock wrote:
| The weirdest thing to me is that a person with a high personal,
| financial involvement in the subject went and took that quote
| from an old man (which to your and my understanding only states
| that these things are not linguistic "science", but they solve
| problems alright), then created a strawman from thin air (points
| A-E) to then say "oh, all my arguments are void, statistical
| models are great, don't you dare criticize me, you old fool".
|
| And then he went and took "Science", aka the epitome of publish
| or perish academia and tried to argue that all this looks the
| same as the thing he does. Oh well, who would have guessed...
|
| The optics of this are weird, even more so as GPT nowadays works
| wonders but still doesn't help explain why and how language
| evolved (which seems to be the point of linguistics, no?).
| LudwigNagasena wrote:
| Have you read the original question by Pinker and the response
| by Chomsky
| (http://languagelog.ldc.upenn.edu/myl/PinkerChomskyMIT.html)?
| It doesn't look like a strawman, though it's a bit hard to get
| what he was gesturing at as the answer was impromptu.
| dang wrote:
| I'm not sure who you're slurring worse here but can you please
| make your substantive points without personal attack and
| generally not post in the flamewar style? We're trying to avoid
| that here.
|
| We detached this subthread from
| https://news.ycombinator.com/item?id=34857541.
| hackandthink wrote:
| Norvig has strong arguments, but this is in bad faith:
|
| "Chomsky has a philosophy based on the idea that we should
| focus on the deep whys and that mere explanations of reality
| don't matter. In this, Chomsky is in complete agreement with
| O'Reilly." (O'Really stands for mythmaking, religion or
| philosophy)
|
| Chomsky is no mystic - he is an old-fashioned scientist
| looking for a parsimonious theory. Maybe there is no simple
| theory of language with great explanatory power, but there
| should be some people looking for it.
| foobarqux wrote:
| > Norvig has strong arguments
|
| What was one of those arguments? I didn't see any.
| hackandthink wrote:
| "We all saw the limitations of the old tools, and the
| benefits of the new"
|
| Probabilistic models work incredibly well, much better than
| transformational-generative grammars.
| foobarqux wrote:
| > Probabilistic models work incredibly well, much better
| than transformational-generative grammars.
|
| You've missed everything Chomsky said, even though it is
| repeated in the article: probabilistic models can be
| useful tools, but they tell you nothing about the human
| language faculty (i.e. they are not science).
| visarga wrote:
| This kind of top-down approach misses the real hero:
| it's not the model, it's the data. 500GB of text can
| transform a randomly initialised neural net into a
| chatting, problem-solving AI. And language turns babies
| into modern, functional adults. It doesn't matter how the
| model is implemented; they all learn more or less the
| same, and the real hero is the text. Let's talk about it
| more.
|
| It would have been interesting if Chomsky's approach
| could have predicted at what size of text we see the
| emergence of AI that passes the Turing test. Or even if
| it predicted that there is an emergent process in there.
| YeGoblynQueenne wrote:
| I'm not well-informed on the subject, but I seem to
| remember that Chomsky's point was exactly on the data:
| his hypothesis about the human language faculty being
| innate (a "universal grammar", or "linguistic endowment"
| as he's been calling it more recently) was about the so-
| called "poverty of the stimulus". Meaning that human
| infants learn human languages while being exposed to
| pitiably insufficient amounts of data.
|
| Again, to my recollection, he based this on E. Mark
| Gold's result about language identification in the limit,
| which, simplifying, is that classes of languages richer
| than the finite languages (in the Chomsky hierarchy, the
| regular languages and up) are not identifiable in the
| limit from positive examples alone, no matter how many;
| the learner also needs _negative_ examples. And those are
| labelled examples, labelled by an oracle. Since human
| language is usually considered to be at least context-
| free, and since infants see limited data and few if any
| negative examples of their maternal languages, there must
| be some other element that allows them to learn such a
| language, and Chomsky called that a "universal grammar"
| etc.
|
| Still from memory, Chomsky's proposition also took
| account of data that showed that human parents do not
| give negative examples of language to their children,
| they only correct by giving positive examples (e.g. a
| parent would correct a child's grammar by saying
| something along the lines of "we don't say 'eated', we
| say 'ate'"; so they would label a grammar rule learned
| by the child as incorrect -the rule that produced
| 'eated'- but they wouldn't give further negative examples
| of the same, or other rules, that produced similarly
| wrong instances, only a positive example of the correct
| rule. That's my interpretation anyway).
|
| Again all this is from memory, and probably half-
| digested. Wikipedia has an article on Gold's famous
| result:
|
| https://en.wikipedia.org/wiki/Language_identification_in_
| the...
|
| Incidentally, Gold's result, derived in the context of
| the field of Inductive Inference, a sort of precursor to
| modern machine learning, caused a revolution in machine
| learning itself. That very negative result led Leslie
| Valiant to develop his PAC-learning setting, which
| basically loosens Gold's strong requirement of exact
| identification in the limit, and so justified modern
| machine learning research's focus on approximate, and
| efficient, learning. But that's another story.
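|
| To make the easy half of Gold's result concrete, here is
| a tiny Python sketch of my own (as hedged as the rest of
| this from-memory summary): the class of _finite_ languages
| is identifiable in the limit from positive examples alone,
| because guessing "exactly the strings seen so far"
| eventually stops changing and is correct.
|
| # Toy Gold-style learner for the class of finite
| # languages: guess exactly the set of sentences seen so
| # far. On any presentation of a finite language L the
| # guess converges to L and never changes again. Gold's
| # theorem is that no learner manages this for, e.g., the
| # class of regular languages from positive data alone.
| def learner(presentation):
|     guess = set()
|     for sentence in presentation:  # positive examples
|         guess.add(sentence)
|         yield frozenset(guess)     # current hypothesis
|
| L = {"aa", "ab", "ba"}             # target language
| text = ["aa", "ab", "aa", "ba", "ab"]
| for g in learner(text):
|     print(sorted(g), "<- correct" if g == L else "")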
| morelisp wrote:
| A child needs about 100KB of "text" a day to learn a
| language. If anything, the data requirements of LLMs are
| proof positive that they can't bear any relation to the human
| language faculty.
| foobarqux wrote:
| No, again it's missing the point: None of this explains
| how the human language faculty works.
| YeGoblynQueenne wrote:
| >> Chomsky is no mysticist - he is an old fashioned scientist
| looking for a parsimonious theory.
|
| When did that become old fashioned?
| mkoubaa wrote:
| I don't know but I hope that Chomsky is less wrong. Because if
| statistical methods reach an asymptote, we will have no choice
| but to try to better understand the principles and foundations.
|
| If statistical methods do not reach an asymptote, I don't think
| we will have the incentive to reach a deeper understanding.
| rfrey wrote:
| This reminds me of something I heard Geoff Hinton say: that
| it was a shame and a sorrow that neural networks as currently
| used worked so well.
| LudwigNagasena wrote:
| I think it should be marked as being from 2017. Also, I don't see
| much point in this article. It just butchers Norvig's article
| into a bunch of quotes even though his article is quite
| accessible and not very long.
| hackandthink wrote:
| I agree. Start with reading:
|
| https://norvig.com/chomsky.html
| pafje wrote:
| Actually this seems to be from 2013.
|
| https://github.com/joshuaeckroth/cse3521-website/blob/master...
| bsaul wrote:
| Not sure about the article itself, but from an epistemological
| point of view, I believe this debate will remain one of the
| most famous of the 21st century (provided NNs keep giving
| fantastic new results and don't stop there).
|
| Never in history did we manage to achieve so much on a given
| complex problem (let's say producing meaningful text) while at
| the same time understanding so little (ChatGPT hasn't provided
| a single useful result to the linguistics department).
|
| If this approach spreads to other fields, it will lead to an
| immense scientific crisis.
| narag wrote:
| Actually this explains a lot. Language might be _the stuff
| dreams are made of_, but not the stuff consciousness is made
| of: a specialized form of perception, a layer on top of
| visuals and hearing, with just another layer of logic on top
| of it.
|
| We still have a coarse understanding of brain processes, but
| if they use parallelism, they could be probabilistic in
| nature, so these language models would be more similar to
| ours than it seems.
| wodenokoto wrote:
| I feel like that point was equally made with "colorless
| green ideas sleep furiously"
|
| Moreover, in linguistics 101, students are often introduced
| to case studies of people with aphasia and similar issues,
| which illustrate how humans can produce coherent and
| grammatical speech without meaning (just like ChatGPT) and
| how people can lose the understanding of certain classes of
| words.
|
| Lastly, NNs are often seen as a way to model functions
| (again, students are often asked to produce different logic
| gates by hand, to convince themselves that NNs can be Turing
| complete; see the sketch below), so rather than language
| being inherently probabilistic, ChatGPT might just have
| reasonably inferred the rules of language.
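|
| The exercise looks roughly like this (a minimal sketch of
| my own in Python; the weights are hand-picked, and NAND
| being universal is the usual route to the Turing-
| completeness claim):
|
| # A single threshold unit with hand-picked weights
| # computes AND, OR and NAND; NAND alone is universal.
| def unit(w1, w2, bias):
|     return lambda x1, x2: int(w1*x1 + w2*x2 + bias > 0)
|
| AND  = unit( 1,  1, -1.5)
| OR   = unit( 1,  1, -0.5)
| NAND = unit(-1, -1,  1.5)
|
| for x1 in (0, 1):
|     for x2 in (0, 1):
|         print(x1, x2, AND(x1, x2), OR(x1, x2),
|               NAND(x1, x2))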
| narag wrote:
| Thank you, I didn't know about that sentence.
|
| Anyway my point is not that language is inherently
| probabilistic, but that the way our brains implement
| language could be. Or more precisely, one layer of
| language could be this way, with another "watchdog" layer
| on top of it filtering when the less rigorous layer goes
| rogue and spits nonsense.
|
| The base layer could be the graphical one, middle layer
| language, top layer logic. Between layers, the jokes.
| vouwfietsman wrote:
| This is cool, especially because it's already 6 years old and I
| think not much has changed. Can anyone here speak to the current
| SOTA of explaining what's going on inside a neural net?
|
| If we could go: problem -> nnet solution -> explanation of nnet
| -> insight into problem, that would still be very significant to
| the scientific process.
| nuc1e0n wrote:
| I think the thing to note about today's large language models is
| that they aren't purely statistical. The topology of the neural
| networks behind them has been explicitly defined by the creators
| of those systems. They are not 'tabula rasa' as some might
| suppose them to be.
| bsaul wrote:
| Yet their structure is generic across all kinds of problems,
| so it doesn't tell much in itself about the things the model
| managed to "understand". Much like studying Einstein's brain
| biology can't teach you much about general relativity.
| nuc1e0n wrote:
| But none of them do have a generic structure. For example,
| GPT-3 can't produce images from text prompts, and stable
| diffusion cannot generate language. The possible
| relationships of words are written into GPT-3's code, in
| Python, by its developers. In a way, all this proves is
| that written language can convey meaning to people.
| Ologn wrote:
| > GPT-3 can't produce images from text prompts
|
| Me: Give me the entire hexadecimal format of an example
| PNG. Only give me the hexadecimal format.
|
| GPT-3:
|
| 89504E470D0A1A0A0000000D49484452000000640000006408020000000
| 065238226000000014944415478DAECFD07780D44204C60F81EADAEF777
| F7E7E62F1BDE7DEBDED710EC15C7AC81CEEC17069C59B99A1698BEE7A48
| 4D68FDE782A7C41A8A0E7D2A2C9B00A99F32FBCED
| nuc1e0n wrote:
| That's not a valid PNG. It's just plausible hex tokens.
| GPT-3 is confidently wrong yet again.
| Ologn wrote:
| $ echo 89504E470D0A1A0A0000000D49484452000000640000006408
| 020000000065238226000000014944415478DAECFD07780D44204C60F
| 81EADAEF777F7E7E62F1BDE7DEBDED710EC15C7AC81CEEC17069C59B9
| 9A1698BEE7A484D68FDE782A7C41A8A0E7D2A2C9B00A99F32FBCED
| |xxd -r -p > output.png
|
| $ file output.png
|
| output.png: PNG image data, 100 x 100, 8-bit/color RGB,
| non-interlaced
|
| $ eog output.png
|
| Fatal error reading PNG image file: IHDR: CRC error
|
| It seems you are correct.
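|
| For anyone who wants to see exactly where it goes wrong: a
| PNG is an 8-byte signature followed by chunks of
| length/type/data/CRC, where the CRC-32 covers the type and
| data. A quick Python sketch of my own (paste the hex in
| yourself):
|
| import struct
| import zlib
|
| def check_png(data):
|     # fixed signature: 89 50 4E 47 0D 0A 1A 0A
|     print("signature ok:",
|           data[:8] == b"\x89PNG\r\n\x1a\n")
|     pos = 8
|     while pos + 12 <= len(data):
|         (length,) = struct.unpack(">I", data[pos:pos+4])
|         ctype = data[pos+4:pos+8]
|         if pos + 12 + length > len(data):
|             print(ctype, "truncated")
|             break
|         body = data[pos+8:pos+8+length]
|         (crc,) = struct.unpack(
|             ">I", data[pos+8+length:pos+12+length])
|         # the CRC covers chunk type + data, not the length
|         ok = (zlib.crc32(ctype + body) & 0xFFFFFFFF) == crc
|         print(ctype.decode("latin1"), "crc ok:", ok)
|         pos += 12 + length
|
| hex_str = ""  # paste GPT-3's hex string here
| if hex_str:
|     check_png(bytes.fromhex(hex_str))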
| nuc1e0n wrote:
| Yeah, I did something similar before replying. Now maybe
| GPT-3 could be modified to make PNGs, but someone would
| have to go do that.
| theGnuMe wrote:
| You can go from an image to text and vice versa. People
| have done it.
| nuc1e0n wrote:
| Yeah, with specifically crafted models.
| WithinReason wrote:
| There are generic models that can do both
| nuc1e0n wrote:
| _have been made_ to do both.
| visarga wrote:
| DALL-E 1 used a GPT approach to generate both text and
| images. Images are divided into patches, about 1024 patches
| for one image. Each patch is like a text token.
|
| > We describe a simple approach for this task based on a
| transformer that autoregressively models the text and image
| tokens as a single stream of data.
|
| https://arxiv.org/abs/2102.12092
|
| The moral: you can just stream together text and images
| into a GPT-style model.
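|
| The framing is simpler than it sounds; something like this
| toy Python sketch (the 8192-code image vocabulary is from
| the paper, everything else here is made up for
| illustration):
|
| # DALL-E 1's "single stream": offset image-patch codes
| # past the text vocabulary so both live in one id space,
| # then autoregressively predict the next id as usual.
| TEXT_VOCAB = 16384   # assumed BPE text vocabulary size
| IMAGE_VOCAB = 8192   # dVAE codebook size from the paper
|
| def single_stream(text_ids, image_codes):
|     assert all(t < TEXT_VOCAB for t in text_ids)
|     assert all(c < IMAGE_VOCAB for c in image_codes)
|     return text_ids + [TEXT_VOCAB + c for c in image_codes]
|
| print(single_stream([17, 942, 3001], [5, 4096, 8191]))
| # -> [17, 942, 3001, 16389, 20480, 24575]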
| nuc1e0n wrote:
| I don't mean to say it can't be done. Only that it has to
| be made to be done.
| jsenn wrote:
| This is true, and interesting, but doesn't address Chomsky's
| concerns. While an LLM has structure, it's still not as
| structured--or structured in the same way--as the human
| language faculty. This is easy to see by observing that LLMs
| can and do just as easily learn to produce things that are
| _not_ human language as things that are. For something to count
| as a model of human language, it has to be able to produce
| language _and not produce non-language_.
| [deleted]
| alecst wrote:
| What do LLMs produce that counts as non-language?
| jsenn wrote:
| My understanding is that they work well as arbitrary
| sequence predictors. For example, they can write HTML
| markup or C++ code just as easily as they can write English
| sentences. If you trained them on character sequences other
| than text from the internet, they would likely perform just
| as well on that data.
| nuc1e0n wrote:
| HTML literally has language in the name, and C++ is a
| programming language.
| jsenn wrote:
| Sure, but the type of "language" that includes HTML and
| C++ is very different from the type of "language" that
| includes English and French. Chomsky's point is that
| there's something special about human brains that makes
| it very easy for them to learn English and French, even
| with very sparse and poorly-defined inputs, but doesn't
| necessarily help them learn to produce other types of
| structured sequences. For example, a baby raised next to
| a forest will effortlessly learn to speak their parents'
| native language (you couldn't _stop_ them from doing
| that!) but won't learn to produce the birdsong they hear
| coming from the forest. This indicates that there's
| something special about our brains that leads us to
| produce English and not birdsong.
|
| Similarly, it's true that some humans can, with lots of
| cognitive effort, produce HTML and C++, but some can't.
| Even the ones that can don't do it the same way that they
| produce English or French.
| nuc1e0n wrote:
| Orphaned humans raised by animals can never learn to
| speak natural languages either. But yeah, they won't
| produce birdsong. There's no utility to that. I guess
| it's a matter of environment. And btw for me writing HTML
| is effortless, but then I've spent a lot of time around
| other programmers.
| jsenn wrote:
| > But yeah they won't produce birdsong. There's no
| utility to that. I guess it's a matter of environment.
|
| This is the crux of the issue. GPT-3 _would_ happily
| learn birdsong instead of human language, just like it
| has learned to produce snippets of code or a sequence of
| chess moves or various other things found in its training
| data. For that reason, it's not by itself useful as a
| model of human cognition. Which is not to say it isn't
| interesting, or that studying LLMs won't lead to
| interesting insights into the human mind--I suspect it
| will!
| ribit wrote:
| What's really interesting is that the entire Chomskyan syntax
| worldview is fairly pseudo-scientific in nature. Most of these
| papers are about taking an essentially Turing-complete
| computational system and tweaking it until it can solve a
| specific riddle. Rinse and repeat. Most of the arguments (like
| the poverty of stimulus) are pure arguments from authority as
| well.
| foobarqux wrote:
| Chomsky has already addressed Norvig's objections before he even
| made them, as anyone who has actually listened to what Chomsky
| has said would know.
|
| > Norvig's reply: He agrees, but engineering success often
| facilitates scientific success.
|
| There has been virtually no progress in understanding the
| human language faculty from probabilistic models or LLMs.
|
| > Norvig's reply: Science is both description and explanation;
| you can't have one without the other; in the history of science,
| the laborious accumulation of data is the usual mode of
| operation.
|
| Probabilistic models don't describe anything in a way that leads
| to understanding (as shown by the fact that no progress in
| understanding has been made).
|
| > people actually generate and understand language in some rich
| statistical sense (maybe with statistical models several layers
| deep, like the modern AI models of speech recognition).
|
| They do not; there are studies which Chomsky cites involving
| trying to learn "impossible" non-structural languages that
| give strong evidence that this is not the case.
|
| > Norvig's reply: Certain advances in statistical learning
| methods provide reason to believe that such learning methods will
| be able to do the job.
|
| It has nothing to do with the human language faculty.
|
| > My conclusion is that 100% of these articles and awards are
| more about "accurately modeling the world" than they are about
| "providing insight," although they all have some theoretical
| insight component as well.
|
| If you have a black box machine and you write a paper that says
| the black box reproduces some natural phenomenon with a billion
| of its knobs turned to these specific settings, you have wasted
| everyone's time.
|
| > Norvig illustrates that rules about language do not always
| capture the right phenomena. (i before e)
|
| The fundamental character of human language has nothing to do
| with spelling.
|
| > [Norvig]: so any valid criticism of probabilistic models would
| have to be because they are too expressive, not because they are
| not expressive enough.
|
| Yes, Chomsky has explicitly said this. Any model that accepts
| virtually everything is a bad model.
|
| I don't have time to go through the rest.
| morelisp wrote:
| As a connoisseur of the Chomsky-Foucault debate, best summarized
| as one intellectual giant completely missing the other's point,
| it's quite funny to see Chomsky in the opposite role here. The
| more operationalist you are, the more depressingly wrong you
| get - but also it seems the more likely you are to win debate
| club antics.
| voidhorse wrote:
| Unfortunately I think the person who presents the simpler
| theory, even if it's less correct or less powerful, typically
| wins debates on theory.
|
| In some sense, theorists are working at different levels of
| abstraction across temporal dimensions. Foucault was
| concerned about deeper structures and a longer time horizon
| than Chomsky was at the time they debated, Chomsky is
| concerned about deeper issues and a longer time horizon than
| Norvig. This is why they wind up talking past each other.
| alecst wrote:
| > They do not; there are studies which Chomsky cites involving
| trying to learn "impossible" non-structural languages that
| give strong evidence that this is not the case.
|
| I was looking for these studies. I found some similar stuff by
| Jennifer Culbertson, along these lines
| https://www.annualreviews.org/doi/pdf/10.1146/annurev-
| lingui..., but didn't quite know what to Google. Can you point
| me to something?
| foobarqux wrote:
| I believe Chomsky mentioned two studies in one of the four
| episodes in the "Closer to Truth" series; you'll have to
| search the transcript for the exact timestamp.
|
| The first is an fMRI study that shows that the brain doesn't
| engage the language processing centers when trying to
| understand a made-up non-structural language (i.e. a
| "statistical language") but does when trying to learn a made-
| up structural language.
|
| The second is about a man who had brain damage except in the
| language processing centers. A similar study showed that he
| could learn made-up structural languages but not
| "statistical" languages.
|
| Poverty of stimulus arguments might also be relevant. There
| might be an energy argument in his book "Why Only Us" as
| well.
___________________________________________________________________
(page generated 2023-02-19 23:01 UTC)