[HN Gopher] Is my toddler a stochastic parrot?
       ___________________________________________________________________
        
       Is my toddler a stochastic parrot?
        
       Author : zwieback
       Score  : 156 points
       Date   : 2023-11-15 20:17 UTC (2 hours ago)
        
 (HTM) web link (www.newyorker.com)
 (TXT) w3m dump (www.newyorker.com)
        
       | atleastoptimal wrote:
       | I wonder when the rate of improvement of SOTA benchmarks will
       | exceed the rate of improvement of irl early childhood cognitive
       | development
        
         | 082349872349872 wrote:
         | Jared Diamond mentions several attributes which make a species
         | suitable for domestication. Humans fit all but one: we are
         | almost too altricial to be suitable for economic exploitation.
         | 
         | Imagine the contribution to GNP if we in practice, like
         | Bokanovsky* (by Ford!) in fiction, could greatly reduce the
         | 10-30 year lead time currently required to produce new
         | employees...
         | 
         | Edit: * cf
         | https://www.depauw.edu/sfs/notes/notes47/notes47.html
        
       | carlossouza wrote:
       | Great essay; impressive content.
       | 
       | The fact that it's very unlikely for any of the current models to
       | create something that even remotely resembles this article tells
       | me we are very far away from AGI.
        
         | atleastoptimal wrote:
         | don't underestimate exponentials
        
           | breuleux wrote:
            | Let's not see exponentials everywhere either. The fact
            | that things seem to be progressing very fast doesn't mean
            | exponentials are involved. More often than not they are
            | logistic curves.
        
           | dekhn wrote:
           | or the power of sigmoids to replace exponentials when the
           | exponential peters out.
        
         | pcthrowaway wrote:
         | Did you miss the disclaimer at the bottom that both visual
         | artwork and prose were produced by a combination of generative
         | AI tools and creative prompting? The entire seamless watercolor
         | style piece was just a long outpainting
        
           | carlossouza wrote:
           | hahaha that's why I love HN :)
        
           | iwanttocomment wrote:
           | I read the article, read your comment, went back to review
           | the article, and there was no such disclaimer. If this is /s,
           | whoosh.
        
       | LASR wrote:
        | Often the "stochastic parrot" line is used as a reductive
        | take on what an LLM truly is.
       | 
       | I firmly believe that LLMs are stochastic parrots and also that
       | humans are too. To the point where I actually think even
       | consciousness itself is a next-token predictor.
       | 
        | Where the industry is headed is multi-modal models. This, I
        | think, is the remaining frontier of LLM <> Human parity.
       | 
        | I also have a 15-month-old son. It's totally obvious to me
        | that he's learning by repetition. But his sources of
        | training data are much higher bandwidth than whatever we're
        | training our LLMs on.
       | 
       | It's been a couple of years since GPT-3. It's time to abandon
        | this notion of "stochastic parrot" as a derogatory label. Anyone stuck
       | in this mindset really is going to be hindered from making
       | significant progress in developing utility from AI.
        
         | Probiotic6081 wrote:
         | > I firmly believe that LLMs are stochastic parrots and also
         | that humans are too. To the point where I actually think even
         | consciousness itself is a next-token predictor.
         | 
         | Almost every time I'm on hackernews I end up baffled by
          | software engineers feeling entitled to have an unfounded opinion
         | on scientific disciplines outside of their own field of
         | expertise. I've literally never encountered that level of
         | hubris from anyone else. It's always the software people!
         | 
         | Consciousness is far from being fully understood but having a
         | body and sensorimotor interactions with the environment are
         | already established as fundamental preconditions for cognition
         | and in turn consciousness.
         | 
          | Margaret Wilson's paper from 2002 is a good read:
         | https://link.springer.com/content/pdf/10.3758/BF03196322.pdf
         | 
         | peace
        
           | oh_sigh wrote:
            | I suspect you don't know what OP's field of expertise is. I
            | also doubt OP would disagree with the statement that the
           | only conscious things we know of have a body and sensorimotor
           | interactions with the environment.
        
           | Chabsff wrote:
           | The befuddlement goes even farther for me. LLMs are,
           | effectively, black-box systems that interface with the world
           | via a stochastic parrot interface.
           | 
           | You'd _think_ that software engineers would be a group that
           | easily understands how making radical assumptions about
           | implementation details when looking at nothing but an
           | interface is generally misguided.
           | 
           | I'm not saying that there isn't a strong case to be made
           | against LLMs being intelligent. It's pointing at the
            | stochastic parrot as evidence enough in and of itself that
           | confuses me.
        
             | lainga wrote:
             | Ah, but HN is a platform for not just any software
             | engineer, but the entrepreneurial type.
        
             | throw0101a wrote:
              | > You'd _think_ that [...]
             | 
             | As a stochastic parrot I'm unable to do that.
        
           | mirekrusin wrote:
              | Are you saying that, e.g., paralyzed people don't have
           | consciousness?
        
             | bena wrote:
             | First of all, paralyzed people do have bodies. And
             | sensorimotor functions.
             | 
             | Second of all, it wouldn't matter if individually they did
             | or did not. The species does and now our species has
             | developed consciousness. It's part of the package.
             | 
             | If you wanted a counterexample, you should look to plant
             | life. There is some discussion on whether or not plant
              | systems have a form of consciousness. But, then again,
             | plants have bodies at the very least.
        
             | zhynn wrote:
             | There is a spectrum between conscious and unconscious. You
             | could say that under general anesthesia you are a 0/10 on
             | the conscious scale, asleep is 1 or 2, just woken from
             | sleep is maybe 3.... and up to a well rested, well fed,
             | healthy sober adult human near the top of the scale. These
             | are common sense notions of objective consciousness and
             | they are testable and noncontroversial outside of a
             | philosophy argument.
             | 
             | Does this make sense as a rebuttal to your reductio
             | argument?
        
           | fragmede wrote:
           | To be fair, it's any of the exalted professions that the
           | blessed extend their expertise to. Doctors, lawyers, software
            | engineers. They (we) start with the notion that "I'm a smart
            | person, so from first principles, I can conquer the world,"
            | never mind that there's an existing body of work, with its
            | own practitioners to build off of.
        
           | danielmarkbruce wrote:
           | Yeah, it's only software people. No one else has unfounded
           | opinions.
           | 
           | But... a parrot has a body. And sure, you'll say "they don't
           | literally mean parrot".. but it's a vague term when you
           | unpack it, and people saying "we are stochastic parrots" are
           | also making a pretty vague comment (they clearly don't mean
           | literally). Anyone who has a small child and understands LLMs
            | is shocked by how similar they seem to be when it comes
           | to producing output.
        
           | dekhn wrote:
           | Some of us who believe that humans are at least partly
           | statistical parrots have PhDs in relevant fields- for
           | example, my PhD is in Biophysics, I've studied cognitive
           | neuroscience and ML for decades, and while I think embodiment
           | may very well be a necessary condition to reproduce the
           | subjective experience of consciousness, I don't think "having
           | a body and sensorimotor interactions are established as
           | fundamental preconditions for cognition and in turn
           | consciousness". Frankly I think that's an impractical
           | question to answer.
           | 
           | Instead, I work with the following idea: it seems not
           | unlikely that we will, in the next decade or so, create non-
           | embodied machine learning models which simply can't be told
           | apart from a human (through a video chat-like interface). If
           | you can do that, who really cares about whether it's
           | conscious or not?
           | 
           | I don't really think philosophy of the mind is that important
           | here; instead, we should treat this as an engineering problem
           | where we assume brains are subjectively conscious, but that's
           | not a metric we are aiming for.
        
           | nix0n wrote:
          | > software engineers feeling entitled to have an unfounded
           | opinion on scientific disciplines outside of their own field
           | of expertise
           | 
           | There's an XKCD about this behavior[0]. The title is actually
           | "Physicists", but I also have seen it on HN (especially with
           | psychology).
           | 
           | [0]https://xkcd.com/793/
        
             | User23 wrote:
             | Well with psychology it's more fair. Thanks to the
             | replication crisis we can say with a straight face that
             | psychologists aren't even experts on psychology. As usual
             | the demonstrable expertise in the field lies with the
             | pragmatic types. For psychology that means salesmen, pickup
             | artists, advertisers, conmen, propagandists, high school
             | queen bees, and so on.
        
             | dekhn wrote:
             | This is known as the "Why don't you just assume a spherical
             | cow?"
        
           | hackinthebochs wrote:
           | Embodiment is an intellectual dead end in explaining
            | consciousness/sentience. Sure, it's relevant to
            | understanding human cognition, as we are embodied
            | entities, but it's not especially relevant to
            | consciousness as such. The fact that some
           | pattern of signals on my perceptual apparatus is caused by an
           | apple in the real world does not mean that I have knowledge
           | or understanding of an apple in virtue of this causal
           | relation. That my sensory signals are caused by apples is an
           | accident of this world, one we are completely blind to. If
           | all apples in the world were swapped with fapples (fake
           | apples), where all sensory experiences that have up to now
           | been caused by apples are now caused by fapples, we would be
           | none the wiser. The wide content of our perceptual
           | experiences is irrelevant to literally everything we know and
           | how we interact with the world. Our knowledge of the world is
           | limited to our sensory experiences and our deductions,
           | inferences, etc derived from our experiences. Our
           | situatedness in the world is only relevant insofar as it
           | entails the space of possible sensory experiences.
           | 
           | Our sensory experience is the medium by which we learn about
           | the external world. We learn of apples not because of the
           | redness of the sensory experience, but because the pattern of
           | red/not-red experience entails the shape of apples. Conscious
           | experience provides the medium, modulations of which provide
           | the information about features of the external world. It is
           | analogous to how modulations of electromagnetic waves
            | provide information about some distant information source.
           | Understanding consciousness is an orthogonal matter to one's
           | situatedness in the world, just like understanding
           | electromagnetic waves is orthogonal to understanding the
           | information source being modulated into them.
        
         | function_seven wrote:
         | I'm in the same boat. It feels wrong to contemplate that our
         | consciousness might not be a magical independent agent with
         | supernatural powers, but is rather an emergent property of
         | complex-but-deterministic actions and reactions.
         | 
         | Like it somehow diminishes us. Reduces us to cogs and levers
         | and such.
         | 
          | I can't imagine how it could be otherwise, though. I'm
         | still baffled by the existence of qualia, phenomenology, etc.
         | Awareness. But bafflement on that front isn't a good reason to
         | reject the possibility that the only thing that separates me
         | from a computer is the level of complexity. Or the structure of
         | the computation. Sometimes things are just weird.
        
           | jocaal wrote:
           | > emergent property of complex-but-deterministic actions and
           | reactions
           | 
           | I think you mean non-deterministic. The last century of
           | physics was dominated by work showing how deterministic
           | systems emerged from non-deterministic foundations. It seems
           | that probability and statistics were the branches of maths
           | behind everything. Who would have thought.
        
             | function_seven wrote:
             | Thanks. I did actually mean to use "deterministic", but
             | only as it sits in opposition to "free will". Is there a
             | better word for what I meant?
             | 
             | Of course there is randomness as well. So, yeah, I should
             | clarify: We don't impose upon the world any kind of
             | "uncaused cause", even if it feels like we do. Everything
             | we think and do is a direct result of some other action.
             | Sometimes that action can trace its lineage to a random
             | particle decay. (Maybe--ultimately--all of them can?) Maybe
             | we even have a source of True Randomness inherent in our
             | minds. But even so, that doesn't lend any support to the
             | common notion of our minds and consciousnesses as being
             | somehow separate from the physical world or the chains of
             | information that run through everything.
        
               | jocaal wrote:
               | I get what you are saying. I was just thinking about the
               | stochastic parrot analogy for consciousness, but I see
               | your comment is more about there not being special sauce
                | to consciousness. But hey, the fact that such behaviour
               | can emerge from simple processes is still pretty damn
               | cool.
        
             | dale_glass wrote:
             | I think "deterministic" is the more correct option.
             | 
              | Yes, of course deep down there are unimaginable numbers of
             | atoms randomly bumping around. But so far everything
             | suggests that biological organisms do their damnedest to
             | abstract that away and operate on a deterministic basis.
             | 
              | Think of how you keep changing -- gaining and losing
             | cells, changing chemical balance, and on the whole it takes
             | a lot of damage, even brain damage, to produce measurable
             | results.
        
           | hiAndrewQuinn wrote:
           | I don't think we're ever going to arrive at a satisfactory
           | answer for where qualia comes from, for much the same reason
           | it's impossible to test the quantum suicide hypothesis
           | without actually putting yourself through it enough times to
           | convince yourself statistically.
           | 
           | The only "real" evidence of qualia you have is your own
           | running stream of it; you can try to carefully pour yourself,
           | like a jug of water, into the body of another, and if you go
           | carefully enough you may even succeed to carry that qualia-
           | stream with you. But it's also possible that you pour too
           | quickly, or that you get bumped in the middle of the pouring,
           | and poof - the qualia is just gone. You're left with a
           | p-zombie.
           | 
           | Or maybe not. Maybe it comes right back as soon as the
           | transfer is done, like the final piece of a torrented movie.
           | The important part is you won't know unless you try - and
           | past success never guarantees future success. Maybe you just
           | got lucky on the first pour.
        
           | somewhereoutth wrote:
           | Indeed our consciousness is an emergent property from a
           | complex system, however the complexity of that system - the
           | human brain - is almost unfathomably beyond the complexity of
           | anything we might make in silicon now or in the foreseeable
           | future.
           | 
            | For example, it is possible to write down a
            | (natural/whole) number that completely describes the
            | state of an LLM - its connections and weights etc. - say,
            | by simply taking the memory space and converting it into
            | a number as a very long sequence of 1s and 0s. The number
            | will be very big, but still indescribably smaller than a
            | number that could perfectly describe the state of a human
            | brain - not least as _that_ number is likely to lie on
            | the real line, or beyond, even if it were calculable. See
            | Cantor for a discussion on the various infinities and
            | their respective cardinalities.
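            | 
            | A toy sketch of that encoding (assuming numpy, with a
            | tiny random array standing in for an LLM's actual
            | weights):
            | 
            |     import numpy as np
            | 
            |     # a stand-in for the model's memory space
            |     weights = np.random.rand(4).astype(np.float32)
            |     # read those bytes as one (very big) natural number
            |     n = int.from_bytes(weights.tobytes(), "big")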
        
             | BlueTemplar wrote:
             | Under the assumption that human brains are limited by our
             | current understanding of physics, their state is finite -
             | infinities are forbidden by definition.
             | 
             | ( See the relationship between information theory and
             | Heisenberg's uncertainty principle that results in the law
             | of paraconservation of information :
             | http://www.av8n.com/physics/thermo/entropy-more.html#sec-
             | pha... )
        
         | otabdeveloper4 wrote:
          | LLMs don't create new information; they only compress
          | existing complexity in their training and inference data
          | sets.
         | 
         | Humans definitely create new information. (Well, at least some
         | humans do.)
        
           | Gringham wrote:
           | Do they though? Or do humans just combine things they have
           | learned about the world?
        
           | danielmarkbruce wrote:
           | How are you defining "create new information" ?
        
           | Zambyte wrote:
           | Lossy compression + interpolated decompression = new
           | information
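            | 
            | A minimal illustration of that equation (toy Python, not
            | any real codec): keep every other sample of a signal
            | (lossy), then linearly interpolate the gaps back in.
            | 
            |     xs = [1.0, 4.0, 2.0, 8.0, 5.0]
            |     kept = xs[::2]                   # lossy: [1.0, 2.0, 5.0]
            |     out = []
            |     for a, b in zip(kept, kept[1:]):
            |         out += [a, (a + b) / 2]      # interpolated midpoints
            |     out.append(kept[-1])
            |     # out = [1.0, 1.5, 2.0, 3.5, 5.0]; 1.5 and 3.5 appear
            |     # nowhere in xs: "new" values decoded from old data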
        
           | wongarsu wrote:
           | Do we?
           | 
            | Humans can _observe_ new information, but that's obviously
           | not that unique. We can reason about existing information,
           | creating new hypotheses, but that is arguably a compression
           | of existing information. When we act on them and observe the
           | effects they become information, but that's not _us_ creating
           | the information (and LLMs can both act on the environment and
            | have the effects fed back to their input to observe, so it's
           | not really unique).
           | 
           | There is this whole field of art, but art is constantly going
           | through a mental crisis whether anyone is creating anything
           | new, or if it's all just derivations of what has come before.
           | Same with dreams, which appear "novel" but might just be an
           | artefact of our brain's process that compresses the
           | experiences of the day.
        
             | zhynn wrote:
             | Life creates novelty. Humans are a member of that category,
             | but all life is producing novelty.
             | 
             | More information:
             | https://www.nature.com/articles/s41586-023-06600-9
        
             | chx wrote:
             | let's say you wanted to count the number of coins on a
             | table
             | 
             | you organize them into piles of ten
             | 
             | this created new information
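              | 
              | a toy sketch of that bookkeeping, with 47 as a made-up
              | coin count:
              | 
              |     piles, loose = divmod(47, 10)  # 4 piles of ten, 7 loose coins
              |     # the arrangement itself now encodes the count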
        
               | BlueTemplar wrote:
               | More like you converted information you had about energy
               | powering your muscles into that one - resulting in less
               | total information for you in the end.
        
           | chrbr wrote:
           | I agree. A thought experiment I had recently:
           | 
           | Let's say we could somehow train an LLM on all written and
           | spoken language from the western Roman civilization (Republic
           | + Western Empire, up until 476 AD/CE, just so I don't muddy
           | the experiment with near-modern timelines). Would it, without
           | novel information from humans, ever be able to spit out a
           | correct predecessor of modern science like atomic theory?
           | What about steam power, would that be feasible since Romans
           | were toying with it? How far back do we have to go on the
            | tech tree for such an LLM to be able to "discover" something
           | novel or generate useful new information?
           | 
           | My thought is that the LLM would forever be "stuck" in the
           | knowledge of the era it was trained in. Something in the
           | complexity of human brains working together is what drives
           | new information. We can continue training new LLMs with new
           | information, and LLMs might be able to find new patterns in
           | data that humans can't see and can augment our work, but the
           | LLM's capability for novelty is stuck on a complexity
           | treadmill, rooted in its training data.
           | 
           | I don't view this ability of humans as some magic
           | consciousness, just a system so complex to us right now that
           | we can't fully understand or re-create it. If we're
            | stochastic parrots, we seem to be ones that are orders of
            | magnitude more powerful and unpredictable than current LLMs, and maybe
           | even constructed in a way that our current technology path
           | can't hope to replicate.
        
         | davedx wrote:
         | Those who don't understand a concept are doomed to reduce it to
         | concepts they do understand.
         | 
         | I'm currently reading I Am A Strange Loop, a pretty extensive
         | dive into the nature of consciousness. I'm reserving final
         | judgment on how much I agree with the author, but I find it
         | laughable to claim consciousness itself is on the same level as
         | an LLM.
        
         | IanCal wrote:
          | I disagree that they're stochastic parrots; I find
          | Othello-GPT very convincing evidence that these models can
          | create world models and respond accordingly.
        
         | gardenhedge wrote:
         | Did you teach your child to crawl? To laugh? To get excited?
        
         | meindnoch wrote:
         | That's a pretty bold statement, coming from someone with the
         | subjective experience of consciousness.
        
         | mo_42 wrote:
         | > I firmly believe that LLMs are stochastic parrots and also
         | that humans are too. To the point where I actually think even
         | consciousness itself is a next-token predictor.
         | 
         | I agree with the first sentence but not with the second one.
         | Consciousness most probably does not arise from just a next-
         | token predictor. At least not from an architecture similar to
         | current LLMs.
         | 
         | Both humans and LLMs basically learn to predict what happens
         | next. However, LLMs only predict when we ask them. In contrast,
         | humans predict something all the time. Even when we don't have
         | any sensory input, our brain plays scenarios. Maybe
         | consciousness arises because the result of our thinking is fed
         | back as input. In that sense, we simulate a world that includes
         | us acting and communicating in that world.
         | 
         | Also noteworthy, the human brain handles a variety of sensory
          | information and its output is not only language. LLMs are
         | restricted to only language. But to me it seems like it's
         | enough for consciousness if we can give it the self-referential
         | property.
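          | 
          | A toy sketch of that feedback loop (predict() here is a
          | stub standing in for any next-token predictor):
          | 
          |     def predict(context):
          |         # stub: a real model would generate the next thought
          |         return context + " ..."
          | 
          |     thought = "no external input"
          |     for _ in range(3):
          |         thought = predict(thought)  # output fed back as input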
        
           | throwaway4aday wrote:
           | In order to predict what happens next we need to create a
           | model of the world. We exist as part of the world so we need
           | to model ourselves within it. We also have to model our mind
           | for it to be complete, including the model of the world it
           | contains. Oops, I just created an infinite loop.
        
             | mo_42 wrote:
             | Not necessarily infinite. It stops when there's reasonable
             | accuracy. Similar to how we would implement this in
             | software.
        
         | esjeon wrote:
         | > Anyone stuck in this mindset really is going to be hindered
         | from making significant progress in developing utility from AI.
         | 
         | I think this specific line shouts out that this is a typical
         | tribalism comment. Once people identify themselves as a part of
         | something, they start to translate the value of that something
         | as their own worth. It's a cheap trick that even young kids
         | play, but can LLM do this? No.
         | 
         | Some might say multi-modal this, train on that-thing, but it
          | already takes tens of thousands of units of the most advanced hardware
         | and gigawatts of energy to push around numbers to reach where
         | it is. TBH, I don't see it going anywhere, considering ROI on
         | research will decrease as we dig deeper into the same paradigm.
         | 
         | What I want to say is that today's LLM is certainly not the
         | last stop of AI technology, but a lot of advocates tend to
          | consider it as the final form of _intelligence_. It's
         | certainly a case of extrapolation, and I don't think LLM can do
         | that.
        
       | barbazoo wrote:
       | People are not gonna like that they have to scroll so much here
       | :)
        
         | ale42 wrote:
          | Usually I don't, but this one I enjoyed... also because
         | it's just pure scrolling, no strange animations that change
         | while you scroll.
        
       | Mattasher wrote:
       | Humans have a long history of comparing ourselves, and the
       | universe, to our latest technological advancement. We used to be
       | glorified clocks (as was the universe), then we were automatons,
        | then computers, then NPCs, and now AIs (in particular LLMs).
       | 
       | Which BTW I don't think is a completely absurd comparison, see
       | https://mattasher.substack.com/p/ais-killer-app
        
         | ChuckMcM wrote:
          | I always enjoyed the stories of 'clockwork' people (robots).
        
         | MichaelZuo wrote:
         | Each successive comparison is likely getting closer and closer
         | to the truth.
        
           | beezlebroxxxxxx wrote:
           | Or each successive comparison is just compounding and
           | reiterating the same underlying assumption (and potentially
           | the same mistake) whether it's true or not.
        
             | bigDinosaur wrote:
             | The jump to 'information processing machines' seems far
              | more correct than anything that came before; I'm curious
              | how you would argue against that. Yes, there are more
              | modern and other interesting theories (e.g. predictive
              | coding) but they seem much closer to cognitive psychology
              | than, say, the human brain working like a clock.
        
           | beepbooptheory wrote:
           | Very curious to know what the telos of "truth" is here for
           | you? A comparison is a comparison, it can get no more "true."
           | If you want to say: the terms of the comparisons seem to
           | verge towards identity, then you aren't really talking about
           | the same thing anymore. Further, you would need to assert
           | that our conceptions of ourselves have remained static
           | throughout the whole ordeal (pretty tough to defend), and you
           | would also need to put forward a pretty crude idea of
           | technological determinism (extremely tough to defend).
           | 
            | It's way more productive and way less woo-woo to understand
           | that humans have a certain tendency towards comparison, and
           | we tend to create things that reflect our current values and
           | conceptions of ourselves. And that "technological progress"
           | is not a straight line, but a labyrinthine route that traces
           | societal conceptions and priorities.
           | 
           | The desire for the llm to be like us is probably more
           | realistically our desire to be like the llm!
        
         | ImHereToVote wrote:
          | Except LLMs are built on neural networks, which are based
          | on how neurons work. The first tech that actually copies
          | aspects of us.
        
           | TaylorAlexander wrote:
           | _sigh_
           | 
           | Neural networks are not based on how neurons work. They do
           | not copy aspects of us. They call them neural networks
           | because they are sort of conceptually like networks of
           | neurons in the brain but they're so different as to make
           | false the statement that they are based on neurons.
        
             | Terr_ wrote:
             | *brandishes crutches*
             | 
             | "Behold! The Mechanical Leg! The first technology that
             | actually copies aspects of our very selves! Think of what
             | wonders of self-discovery it shall reveal!" :p
             | 
             | P.S.: "My god, it is stronger in compression rather than
             | shear-stresses, how eerily similar to real legs! We're on
             | to something here!"
        
             | robwwilliams wrote:
             | If you study retinal synaptic circuitry you will not sigh
             | so heavily and you will in fact see striking homologies
             | with hardware neural networks, including feedback between
             | layers and discretized (action potential) outputs via the
             | optic nerve.
             | 
              | I recommend reading Synaptic Organization of the Brain
              | or, if you are brave, getting into the primary
              | literature on retinal processing of visual input.
        
               | smokel wrote:
               | The book "The Synaptic Organization of the Brain" appears
               | to be from 2003. Is it still relevant, or is there
               | perhaps a more recent book worth checking out?
        
             | martindbp wrote:
             | Sigh... Everyone knows artificial neurons are not like
             | biological neurons. The network is the important part,
             | which really is analogous to the brain, while what came
             | before (SVMs and random forests) are nothing like it.
        
               | mecsred wrote:
               | Sigh... Every man knows the mechanisms of the mind are
               | yet unlike the cogs and pinions of clockwork. It remains
               | the machinery, the relation of spring and escapement,
               | that is most relevant. Hitherto in human history, I
               | think, such structure has not been described.
        
             | renewiltord wrote:
             | Doesn't really matter to modern CS, but Rosenblatt's
             | original perceptron paper is a good read on this. ANNs were
             | specifically inspired by Natural NNs and there were many
             | attempts to build ANNs using models of how the human brain
             | works, specifically down to the neuron.
        
               | dekhn wrote:
                | I'm sure you know, but one of the best ways to get neuro
               | folks worked up is to say anything about neural networks
               | being anything like neurons in brains.
               | 
                | (IMHO, Rosenblatt is an underappreciated genius; he had a
                | working shallow computer vision computer in hardware long
                | before people even appreciated what an accomplishment
                | that was. The hardware was fascinating - literally self-
                | turning potentiometer knobs to update weights.)
        
         | Tallain wrote:
         | Not just technological advancements; we have a history of
         | comparing ourselves to that which surrounds us, is relatively
         | ubiquitous, and easily comprehended by others when using the
         | metaphor. Today it's this steady march of technological
         | advancement, but read any older work of philosophy and you will
          | see our selves (particularly, our minds) compared to monarchs,
         | cities, aqueducts.[1]
         | 
         | I point this out because I think the idea of comparing
         | ourselves to recent tech is more about using the technology as
         | a metaphor for self, and it's worth incorporating the other
         | ways we have done so historically for context.
         | 
         | [1]: https://online.ucpress.edu/SLA/article/2/4/542/83344/The-
         | Bra...
        
         | cmrdporcupine wrote:
         | Step A: build a machine which reflects a reduced and simplified
         | model of how some part of a human works
         | 
         | Step B: turn it on its head "the human brain is nothing more
         | than... <insert machine here.>"
         | 
         | It's a bit tautological.
         | 
        | The worry is that there's a Step C: humans actually start to
        | behave as simply as said machine.
        
       | oh_sigh wrote:
        | "Stochastic parrot" was coined by Bender, Gebru, et al., but it was
       | probably "borrowed" from Regina Rini's "statistical parrot",
       | coined 6 months before stochastic parrots hit the scene.
       | 
       | https://dailynous.com/2020/07/30/philosophers-gpt-3/#rini
        
       | readams wrote:
       | I suspect people will argue about whether AIs are truly conscious
       | or just stochastic parrots even as all human-dominated tasks are
       | replaced by AIs.
       | 
       | The AIs will be creating masterwork art and literature
       | personalized for each person better than anything Shakespeare
        | ever wrote or Michelangelo ever sculpted, but we'll console
       | ourselves that at least we're _really_ conscious and they're just
       | stochastic parrots.
        
         | 101008 wrote:
          | A masterwork is not independent of its time, context, etc.
          | It's hard to say AI will be creating masterworks,
          | especially ones focused on each person.
        
         | lancesells wrote:
         | > The AIs will be creating masterwork art and literature
         | personalized for each person better than anything Shakespeare
          | ever wrote or Michelangelo ever sculpted
         | 
         | I don't think it will. Art is truth and AI is a golem
         | replicating humanity.
        
           | esafak wrote:
            | It depends on what you value in art. Some value only the
            | artifact per se; others the story around it. Unfortunately, I
           | don't see too many complaints that image generators have no
           | human artist behind them.
        
       | cperciva wrote:
       | When my toddler was first learning to talk, we had some pictures
       | of felines hanging on the walls; some were cats and others were
       | kittens.
       | 
       | She quickly generalized; henceforth, both of them were "catten".
        
         | teaearlgraycold wrote:
         | Good toddler
        
         | djmips wrote:
         | My toddler understood that places to buy things were called
         | stores and understood eating - so restaurants were deemed
          | 'eating stores'. And we just went with that for a long time,
          | and now that they are grown we still call them eating
          | stores for fun
         | sometimes. :)
        
       | jddj wrote:
       | Very nice.
       | 
       | I don't personally follow her all the way to some of those
       | conclusions but the delivery was awesome
        
       | xianshou wrote:
       | Pour one out for the decline of human exceptionalism! Once you
       | get material-reductionist enough and accept yourself as a pure
       | function of genetics, environment, past experience, and chance,
       | this conclusion becomes inevitable. I also expect it to be the
       | standard within a decade, with AI-human parity of capabilities as
       | the key driver.
        
         | dekhn wrote:
         | We'll see! I came to this conclusion a long time ago but at the
         | same time I do subjectively experience consciousness, which in
          | itself is something of a mystery in the material-reductionist
         | philosophy.
        
           | idle_zealot wrote:
           | I hear this a lot, but is it really a mystery/incompatible
           | with materialism? Is there a reason consciousness couldn't be
           | what it feels like to be a certain type of computation? I
           | don't see why we would need something immaterial or some
           | undiscovered material component to explain it.
        
             | kaibee wrote:
             | Well the undiscovered part is why it should feel like
             | anything at all. And this is definitely relevant because
             | consciousness clearly exists enough that we exert physical
                | force about it, so it's gotta be somewhere in physics. But
             | where?
        
               | idle_zealot wrote:
               | Why does it need to be in physics? What would that look
               | like, a Qualia Boson? It could be an emergent property
               | like life itself. Physics doesn't "explain" life, it
               | explains the physical mechanisms behind the chemical
               | interactions that ultimately produce life by virtue of
               | self-replicating patterns spreading and mutating. There
                | is no Life in physics, and yet we see it all around us
               | because it emerges as a complex result of fundamental
               | rules. My expectation is that experience is emergent from
               | computation that controls an agent and models the world
               | around that agent, and that experience isn't a binary. A
               | tree is more aware than a rock, an ant more than a tree,
               | a squirrel more than an ant, and so on until you reach
               | humans, but there may not be any real change in kind as
               | the conditions for experience become more developed.
               | 
               | I guess my real point is that I don't view experience or
               | consciousness as something special or exceptional. My
               | guess is that over time we come to understand how the
               | brain works better and better but never find anything
               | that definitively explains consciousness because it's
               | totally subjective and immeasurable. We eventually
               | produce computers complex enough to behave totally
               | convincingly human, and they will claim to be conscious,
               | maybe even be granted personhood, but we'll never
               | actually know for sure if they experience the world, just
               | like we don't know that other humans do.
        
               | robwwilliams wrote:
               | Toward the end of Hofstadter's Godel, Escher, Bach there
               | is a simple and plausible recursive attentional
               | explanation, but without any serious neuroscience. The
               | thalamo-cortico-thalamic system is a massively recursive
                | CNS system. Ray Guillery's book The Brain As A Tool is a
               | good introduction to the relevant neuroscience with a hat
               | tip to consciousness written long after GEB.
        
             | dekhn wrote:
             | I mean, if you're a compatibilist, there's no mystery. In
             | that model, we live in a causally deterministic universe
             | but still have free will. I would say instead "the
             | _subjective experience of consciousness_ is an _emergent
              | property_ of _complex systems with certain properties_". I
             | guess that's consistent with "the experience of
             | consciousness is the feeling of a certain type of
             | computation".
             | 
             | Personally these sorts of things don't really matter to me-
             | I don't really care if other people are conscious, and I
             | don't think I could prove it either way- I just assume
             | other people are conscious, and that we can make computers
             | that are conscious.
             | 
             | And that's exactly what I'm pushing for: ML that passes
             | every Turing-style test that we can come up with. Because,
             | as they say "if you can't tell, does it really matter?"
        
             | beezlebroxxxxxx wrote:
             | Why is consciousness computation? What does it even mean to
             | say something feels like being "a type of computation"?
             | 
             | The concept of consciousness is wildly more expansive and
             | diverse than computation, rather than the other way around.
             | A strict materialist account or "explanation" of
             | consciousness seems to just end up a category error.
             | 
              | I take it as no surprise that a website devoted to
              | computers and software often _insists_ that this is the
             | only way to look at it, but there are entire philosophical
             | movements that have developed fascinating accounts of
             | consciousness that are far from strict materialism, nor are
             | they  "spiritual" or religious which is a common rejoinder
             | by materialists, an example is the implications of Ludwig
             | Wittgenstein's work from _Philosophical Investigations_ and
             | his analysis of language. And even in neuroscience there is
             | far from complete agreement on the topic at all.
        
           | throwaway4aday wrote:
           | It makes perfect sense if you can disprove your own existence
        
         | jancsika wrote:
         | I'm not convinced that this material-reductionist view wouldn't
         | just be functionally equivalent to the way a majority of
         | citizens live their lives currently.
         | 
         | Now: a chance encounter with someone of a different faith leads
         | a citizen to respect the religious freedom of others in the
         | realm of self-determination.
         | 
         | Future: a young hacker's formative experience leads to the idea
         | that citizens should have the basic right to change out their
         | device's recommendation engine with a random number generator
         | at will.
         | 
         | Those future humans will still think of themselves as
         | exceptional because the AI tools will have developed right
         | alongside the current human-exceptionalist ideology.
         | 
         | Kinda like those old conservative couples I see in the South
         | where the man is ostensibly the story teller and head of
         | household. But if you listen long enough you notice his wife is
         | whispering nearly every detail of importance to help him
         | maintain coherence.
        
       | throw0101a wrote:
       | > _But he could understand so much more than he could say. If you
       | asked him to point to the vacuum cleaner, he would._
       | 
       | Perhaps worth noting that it is possible to teach infants (often
       | starting at around 9 months) sign language so that they can more
       | easily signal their desires.
       | 
       | Some priority recommended words would probably be:
       | 
       | * hungry/more
       | 
       | * enough/all done (for when they're full)
       | 
       | * drink (perhaps both milk/formula and water+ gestures)
       | 
       | See:
       | 
       | * https://babysignlanguage.com/chart/
       | 
       | * https://www.thebump.com/a/how-to-teach-baby-sign-language
       | 
       | These are not (AFAICT) 'special' symbols for babies, but the
        | regular ASL gestures for the word in question. If you're not
       | native English-speaking you'd look up the gestures in your
       | specific region/language's sign language:
       | 
       | * https://en.wikipedia.org/wiki/List_of_sign_languages
       | 
       | * https://en.wikipedia.org/wiki/Sign_language
       | 
       | + Another handy trick I've run across: have different coloured
       | containers for milk and water, and consistently put the same
       | contents in each one. That way the infant learns to grab a
       | particular colour depending on what they're feeling like.
        
         | yojo wrote:
         | FWIW, I tried this with both my sons. They both started using
         | the gestures the same day they started actually talking :-/
         | 
         | I have friends who had much more success with it, but the value
         | will largely depend on your child's relative developmental
         | strengths. A friend's son with autism got literally years'
         | benefit out of the gestures before verbal speech caught up.
        
           | kuchenbecker wrote:
           | My kids both picked it up, but my younger was similar. Being
           | able to sign "please" and "all done" helps anyway because
           | "eeess" and "a ya" are what she actually says.
        
           | throw0101a wrote:
           | > _FWIW, I tried this with both my sons. They both started
           | using the gestures the same day they started actually talking
           | :- /_
           | 
            | Could still be useful: instead of shouting across the playground
           | on whether they have to go potty you can simply make the
           | gesture with minimal embarrassment. :)
        
             | toast0 wrote:
             | Yeah, a handful of signs is useful for adults in many
             | situations where voice comms don't work. And, at least in
             | my circles, there's a small shared vocabulary of signs that
             | there's a good chance will work. Potty, ouch, sleep, eat,
             | maybe a couple more.
        
             | vel0city wrote:
             | I also usually had success with signs when the child was
              | otherwise too emotional to verbalize their desire. When
              | they're really upset and crying hard it is hard to talk,
              | especially when talking clearly is already a challenge, but
             | signing "milk" or "eat" or "hurt" or "more" can come
             | through easily.
        
             | petsfed wrote:
             | Tread carefully: the sign for poop looks close enough to a
             | crude gesture (cruder than just shouting "poop" at a
             | playground, as it turns out) that an ignorant bystander
             | might take it significantly wrongly.
        
           | thealfreds wrote:
           | Same with my nephew. He also has autism and the first thing
           | the speech therapist did when he was 3 was teach him simple
           | sign language. It became such a great catalyst for
            | communication. He's nowhere near his age (now 6)
           | developmentally but within ~6 weeks he went from completely
           | non-verbal to actually vocalizing the simple words he learned
           | the sign language for.
        
           | ASalazarMX wrote:
           | There's probably variation among babies. One of my nephews
            | would examine his feet if you asked him where his shoes were,
           | even before walking. He got so proficient with signs that it
           | delayed talking; he preferred signaling and grunting :/
        
         | brainbag wrote:
         | I had heard about this before my son was born. We didn't try to
          | teach him anything; anytime we remembered (which was sporadic)
         | we just used the gestures when talking to him. I was amazed at
         | how quickly he picked up on it, and he was able to communicate
         | his needs to us months before he was able to verbalize.
         | 
         | It took very minimal effort on our part, and was very rewarding
         | for him; certainly a lot better than him crying with the hope
         | that we could guess what he wanted. Definitely recommended for
         | any new parents.
         | 
         | The best moment was when he was sitting on the floor, and
         | looked up at his mom and made the "together" sign, it was heart
         | melting.
        
           | esafak wrote:
           | In other words, you can invent your own sign language because
           | your child won't need to use it with other people.
        
         | EvanAnderson wrote:
         | Every kid is different. YMMV. We did some ASL gestures/words
         | with our daughter and it worked very well. I'd encourage
         | everyone to at least give it a try. She took to it and was
         | "talking" to us (mainly "hungry" and "milk", but we got
         | "enough" sometimes too) pretty quickly.
         | 
         | I can't remember exact ages and timeframes-- that time of my
         | life is "blurry". I wish I could remember all the gestures we
         | used. (The only ones I can remember now are "milk", "apple",
         | and "thank you".) As she became verbal she quickly transitioned
         | away from them.
        
         | pamelafox wrote:
         | My toddler learnt "more" and now uses it to get me to
         | repeatedly sing the same song OVER AND OVER again. They haven't
         | used the word yet, though they do speak other words.
         | 
         | I wish I'd learnt sign language before having kids so I just
         | already knew how to do it, it's so cool. Props to the Ms.
         | Rachel videos for including so many signs.
        
         | Izkata wrote:
         | My mom taught us some words somewhere around 5-8 years old, so
         | we could signal things to each other instead of interrupting
         | conversations. The three in particular I remember are "hungry",
         | "bored", and "thank you" (so she could remind us to say it
         | without the other person realizing).
        
         | petsfed wrote:
         | One of the funniest interactions I had with my eldest daughter
        | was the day we baked cookies together, when she was not yet 2. She
         | was verbalizing a lot, but also signing "milk" and "more" quite
         | a bit. And when she bit into her very first chocolate chip
         | cookie of her entire life, she immediately signed "more" and
         | said as much _through_ the mouthful of cookie.
        
       | karmakurtisaani wrote:
       | Nice article, great presentation.
       | 
       | However, it's a bit annoying that the focus of the AI anxiety is
       | how AI is replacing us and the resolution is that we embrace our
        | humanity. Fair enough, but at least to me the main focus of my
        | AI anxiety is that it will take my job - honestly, I don't
        | really care about it doing my shitty art.
        
         | nsfmc wrote:
         | here's another piece in the issue that addresses your concern
         | https://www.newyorker.com/magazine/2023/11/20/a-coder-consid...
        
         | ryandrake wrote:
         | More specifically, I think we're worried about AI taking our
         | _incomes_ , not our jobs. I would love it if an AI could do my
         | entire job for me, and I just sat there collecting the income
         | while the AI did all the "job" part, but we know from history
         | (robotics) that this is not what happens. The owners of the
         | robots (soon, AI) keep all the income and the job goes away.
         | 
         | An enlightened Humanity could solve this by separating the
         | income from the job, but we live in a Malthusian Darwinian
         | world where growth is paramount, "enough" does not exist, and
         | we all have to justify and earn our living.
        
           | ketzo wrote:
           | I mean, I definitely hear (and feel) a lot of worry about AI
           | taking away work that we find meaningful and interesting to
           | do, outside of the pure money question.
           | 
           | I really like programming to fix things. Even if I weren't
           | paid for it, even if I were to win the lottery, I would want
           | to write software that solved problems for people. It is a
           | nice way to spend my days, and I love feeling useful when it
           | works.
           | 
           | I would be very bummed - perhaps existentially so - if there
           | were no _practical_ reason ever to write software again.
           | 
           | And I know the same is true for many artists, writers,
           | lawyers, and so on.
        
           | magneticnorth wrote:
           | At some point someone said to me, "How badly did we fuck up
           | as a society that robots taking all our jobs is a bad thing."
           | 
           | And I think about that a lot.
        
         | djmips wrote:
         | Your job is your art.
        
       | BD103 wrote:
       | Ignoring the topic of the article, the artwork and design was
       | fantastically done! Props to whoever designed it :)
        
       | armchairhacker wrote:
        | Toddlers can learn; LLM "learning" is very limited (fixed-size
        | context windows and expensive fine-tuning).
        
       | xigency wrote:
        | Regarding dismissals of LLMs on 'technical' grounds:
       | 
       | Consciousness is first a word and second a concept. And it's a
       | word that ChatGPT or Llama can use in an English sentence better
       | than billions of humans worldwide. The software folks have made
       | even more progress than sociologists, psychologists and
       | neuroscientists to be able to create an artificial language
       | cortex before we understand our biological mind comprehensively.
       | 
       | If you wait until conscious sentient AI is here to make your
       | opinions known about the implications and correct policy
       | decisions, you will already be too late to have any input. ChatGPT
       | can already tell you a lot about itself (showing awareness) and
       | will gladly walk you through its "thinking" if you ask politely.
       | Given the huge amount of data it contains about Homo sapiens
       | and its ability to emulate intelligent conversation, you could
       | even call it sapient.
       | 
       | Having any kind of semantic argument over this is futile because
       | a character AI that is hypnotized to think it is conscious, self-
       | aware and sentient in its emulation of feelings and emotion would
       | destroy most people in a semantics debate.
       | 
       | The field of philosophy is already rife with ideas from hundreds
       | of years ago that an artificial intelligence can use against
       | people in debates of free will, self-determination and the nature
       | of existence. This isn't the battle to pick.
        
         | dsign wrote:
         | Exactly. To that I'm going to add that the lifeblood of our
         | civilization is culture (sciences, technology, arts,
         | traditions). The moment there is something better at it than
         | humans, we have our "horse moment".
        
       | Retr0id wrote:
       | This is a really beautiful article, and while there are certainly
       | fundamental differences between how a toddler thinks and learns,
       | and how an LLM "thinks", I don't think we should get too
       | comfortable with those differences.
       | 
       | Every time I say to myself "AI is no big deal because it can't do
       | X", some time later someone comes along and makes an AI that does
       | X.
        
         | alexwebb2 wrote:
         | There's a well-documented concept called "God of the gaps"
         | where any phenomenon humans don't understand at the time is
         | attributed to a divine entity.
         | 
         | Over time, as the gaps in human knowledge get filled, the god
         | "shrinks" - it becomes less expansive, less powerful, less
         | directly involved in human affairs. The definition changes.
         | 
         | It's fascinating to watch the same thing happen with human
         | exceptionalism - so many cries of "but AI can't do <thing
         | that's rapidly approaching>". It's "human of the gaps", and
         | those gaps are rapidly closing.
        
       | sickcodebruh wrote:
       | One of my favorite experiences from my daughter's earliest years
       | was the realization that she was able to think about, describe,
       | and deliberately do things much earlier than I realized. More
       | plainly: once I recognized she was doing something deliberately,
       | I often went back and realized she had been doing that same thing
       | for days or weeks prior. We encountered this a lot with words but
       | also physical abilities, like figuring out how to make her
       | BabyBjorn bouncer move. We had a policy of talking to her like
       | she understood on the off-chance that she could and just couldn't
       | communicate it. She just turned 5 and continues to surprise us
       | with the complexity of her inner world.
        
         | marktangotango wrote:
         | We did this, and I'd add that repeating what they say back to
         | them so they get that feedback is important too. It's startling
         | to see the difference between our kids and their classmates,
         | whose parents don't talk to them (I know this from observing
         | them at countless birthday parties, school events, and sports
         | events). Talking to kids is like watering flowers: they bloom
         | into beautiful beings.
        
       | raytopia wrote:
       | Really sweet story.
        
       | dekhn wrote:
       | I think the position of Gebru, et al, can best be expressed as a
       | form of "human exceptionalism". As a teenager, my english teacher
       | shared this writing by Faulkner, which he delivered as his Nobel
       | Prize speech. I complained because the writing completely ignores
       | the laws of thermodynamics about the end of the universe.
       | 
       | """I believe that when the last ding-dong of doom has clanged and
       | faded from the last worthless rock hanging tideless in the last
       | red and dying evening, that even then there will still be one
       | more sound: that of man's puny, inexhaustible, voice still
       | talking! ...not simply because man alone among creatures has an
       | inexhaustible voice, but because man has a soul, a spirit capable
       | of compassion, sacrifice and endurance."""
        
       | dekhn wrote:
       | If you wonder about toddlers, wait til you have teenagers.
       | 
       | In fact I've observed (after working with a number of world-class
       | ML researchers) that having children is the one thing that
       | convinces ML people that learning is both much easier and much
       | harder than what they do in computers.
        
       | calf wrote:
       | I think it's reductivism to assume that neural networks cannot
       | emergently support/implement a non-stochastic computational model
       | capable of explicit logical reasoning.
       | 
       | We already have an instance of emergent logic. Animals engage in
       | logical reasoning. Corollary: humans and toddlers are not merely
       | super-autocompletes or stochastic parrots.
       | 
       | It has nothing to do with "sensory embodiment" and/or "personal
       | agency" arguments like in the article. Nor the clever solipsism
       | and reductivism of "my mind is just random statistics of neural
       | firing". It's about finding out what the models of computation
       | actually are.
        
       | og_kalu wrote:
       | Paraphrasing and summarizing parts of this article,
       | https://hedgehogreview.com/issues/markets-and-the-good/artic...
       | 
       | Some 72 years ago, in 1951, Claude Shannon published his
       | "Prediction and Entropy of Printed English", still an extremely
       | fascinating read today.
       | 
       | The paper begins with a game. Claude pulls a book down from the
       | shelf (a Raymond Chandler detective novel), concealing the title
       | in the process. After selecting a passage at random, he
       | challenges his wife, Mary, to guess its
       | contents letter by letter. The space between words will count as
       | a twenty-seventh symbol in the set. If Mary fails to guess a
       | letter correctly, Claude promises to supply the right one so that
       | the game can continue.
       | 
       | In some cases, a corrected mistake allows her to fill in the
       | remainder of the word; elsewhere a few letters unlock a phrase.
       | All in all, she guesses 89 of 129 possible letters correctly--69
       | percent accuracy.
       | 
       | Discovery 1: It illustrated, in the first place, that a
       | proficient speaker of a language possesses an "enormous" but
       | implicit knowledge of the statistics of that language. Shannon
       | would have us see that we make similar calculations regularly in
       | everyday life--such as when we "fill in missing or incorrect
       | letters in proof-reading" or "complete an unfinished phrase in
       | conversation." As we speak, read, and write, we are regularly
       | engaged in prediction games.
       | 
       | Discovery 2: Perhaps most striking of all, Claude argues that
       | a complete text and the corresponding "reduced text" consisting
       | of letters and dashes "actually...contain the same information"
       | under certain conditions. How?? (Surely the complete text
       | contains more information!) The answer depends on the peculiar
       | notion of information that Shannon had hatched in his 1948 paper
       | "A Mathematical Theory of Communication" (hereafter "MTC"), the
       | founding charter of information theory.
       | 
       | He argues that transfer of a message's components, rather than
       | its "meaning", should be the focus for the engineer. You ought to
       | be agnostic about a message's "meaning" (or "semantic aspects").
       | The message could be nonsense, and the engineer's problem--to
       | transfer its components faithfully--would be the same.
       | 
       | In MTC's terms, a highly predictable message contains less
       | information than an unpredictable one. More information is at
       | stake in "villapleach, vollapluck" than in "Twinkle, twinkle".
       | 
       | Does "Flinkle, fli- - - -" really contain less information than
       | "Flinkle, flinkle" ?
       | 
       | Shannon concludes then that the complete text and the "reduced
       | text" are equivalent in information content under certain
       | conditions because predictable letters become redundant in
       | information transfer.
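       | 
       | To make that concrete, here is a minimal sketch (mine, not
       | Shannon's) of the usual entropy formula H = -sum(p * log2(p)),
       | using raw character frequencies as a crude order-0 proxy for
       | predictability; Shannon's own estimates drew on much longer-
       | range statistics:
       | 
       |     from collections import Counter
       |     from math import log2
       | 
       |     def entropy_per_char(s: str) -> float:
       |         # Empirical order-0 entropy: H = -sum(p * log2(p)),
       |         # where p is each character's relative frequency.
       |         counts = Counter(s.lower())
       |         n = len(s)
       |         return -sum((c / n) * log2(c / n) for c in counts.values())
       | 
       |     print(entropy_per_char("Twinkle, twinkle"))         # ~3.1 bits/char
       |     print(entropy_per_char("villapleach, vollapluck"))  # ~3.4 bits/char
       | 
       | The repetitive phrase really does score lower, bit for bit.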
       | 
       | Fueled by this, Claude then proposes an illuminating thought
       | experiment: Imagine that Mary has a truly identical twin (call
       | her "Martha"). If we supply Martha with the "reduced text," she
       | should be able to recreate the entirety of Chandler's passage,
       | since she possesses the same statistical knowledge of English as
       | Mary. Martha would make Mary's guesses in reverse.
       | 
       | Of course, Shannon admitted, there are no "mathematically
       | identical twins" to be found, _but_ and here 's the reveal, "we
       | do have mathematically identical computing machines."
       | 
       | Those machines could be given a model for making informed
       | predictions about letters, words, maybe larger phrases and
       | messages. In one fell swoop, Shannon had demonstrated that
       | language use has a statistical side, that languages are, in turn,
       | predictable, and that computers too can play the prediction game.
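       | 
       | A toy version of the twin experiment is easy to write down. The
       | sketch below is my own illustration with a made-up corpus, not
       | Shannon's setup: it trains an order-2 character model, has
       | "Mary" record how many ranked guesses each symbol takes, and
       | has "Martha" - the identical model - replay those counts to
       | recover the text exactly:
       | 
       |     import string
       |     from collections import Counter, defaultdict
       | 
       |     ALPHABET = string.ascii_lowercase + " "  # Shannon's 27 symbols
       |     ORDER = 2
       | 
       |     def train(corpus):
       |         # For each 2-char context, rank next symbols by frequency.
       |         ctx = defaultdict(Counter)
       |         for i in range(len(corpus) - ORDER):
       |             ctx[corpus[i:i + ORDER]][corpus[i + ORDER]] += 1
       |         overall = Counter(corpus)
       |         def ranked(prefix):
       |             # Ranked guesses for the next symbol: context stats
       |             # first, then overall frequency, then the alphabet.
       |             guesses = [s for s, _ in ctx[prefix].most_common()]
       |             guesses += [s for s, _ in overall.most_common()
       |                         if s not in guesses]
       |             guesses += [s for s in ALPHABET if s not in guesses]
       |             return guesses
       |         return ranked
       | 
       |     def encode(text, ranked):
       |         # "Mary": the reduced text is the guess count per symbol.
       |         return [ranked(text[max(0, i - ORDER):i]).index(ch) + 1
       |                 for i, ch in enumerate(text)]
       | 
       |     def decode(counts, ranked):
       |         # "Martha": the identical model replays the guesses.
       |         out = ""
       |         for k in counts:
       |             out += ranked(out[max(0, len(out) - ORDER):])[k - 1]
       |         return out
       | 
       |     corpus = "twinkle twinkle little star how i wonder what you are " * 5
       |     ranked = train(corpus)
       |     reduced = encode("twinkle twinkle", ranked)
       |     print(reduced)  # low counts: most symbols guessed on try one
       |     assert decode(reduced, ranked) == "twinkle twinkle"
       | 
       | The list of guess counts is the "reduced text": it round-trips
       | losslessly precisely because the shared model makes most of the
       | letters redundant.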
        
       | birdman3131 wrote:
       | Interesting article. However, it becomes very close to unreadable
       | just after the cliffhanger. Given that they use the word "vertigo"
       | around the point where it all straightens back up, I assume it is
       | a deliberate choice, but it feels like a very bad one for anybody
       | with any sort of reading disorder: very wavy lines of text along
       | with side-by-side paragraphs and little to differentiate them
       | from each other.
        
       | westurner wrote:
       | Language acquisition > See also:
       | https://en.wikipedia.org/wiki/Language_acquisition
       | 
       | Phonological development:
       | https://en.wikipedia.org/wiki/Phonological_development
       | 
       | Imitation > Child development:
       | https://en.wikipedia.org/wiki/Imitation#Child_development
       | 
       | https://news.ycombinator.com/item?id=33800104 :
       | 
       | > _" The Everyday Parenting Toolkit: The Kazdin Method for Easy,
       | Step-by-Step, Lasting Change for You and Your Child"
       | https://www.google.com/search?kgmid=/g/11h7dr5mm6&hl=en-US&q... _
       | 
       | > _" Everyday Parenting: The ABCs of Child Rearing" (Kazdin,
       | Yale) https://www.coursera.org/learn/everyday-parenting _
       | 
       | > _Re: Effective praise and Validating parenting_ [and parroting]
        
       | gumballindie wrote:
       | Probably, if they suffer from developmental issues. People are
       | anything but, no matter how much deities of the gullible want to
       | make you think they are.
        
       | johnea wrote:
       | Wow, it really seems to be a trend that people are unable to
       | understand/contemplate reality outside of an internet meme.
       | 
       | Believe it or not, reality exists outside of anything you ever
       | read in a whitepaper...
        
       | dsQTbR7Y5mRHnZv wrote:
       | https://archive.is/AUOPt
        
       ___________________________________________________________________
       (page generated 2023-11-15 23:00 UTC)