[HN Gopher] On the dangers of seeing human minds as predictive m...
___________________________________________________________________
On the dangers of seeing human minds as predictive machines
Author : dangerman
Score : 74 points
Date : 2021-02-01 21:46 UTC (1 day ago)
(HTM) web link (aeon.co)
(TXT) w3m dump (aeon.co)
| abellerose wrote:
| Rene Descartes ruined how mental illness is perceived and
| treated. Patients and even a few doctors would be more informed
| if they understood that the symptoms of mental illness occur
| because of physical changes in the brain. Instead, a misinformed
| belief in a "chemical imbalance" persists and is assumed to be
| true by some physicians & nurses.
|
| Society is basically brainwashed into believing everyone has
| free will. The result is that people with the most capital
| prosper, and I assume the foregoing wouldn't be the case if
| everyone were a determinist.
| solipsism wrote:
| > if they understood that the symptoms of mental illness occur
| because of physical changes in the brain. Instead, a misinformed
| belief in a "chemical imbalance" persists
|
| What's the difference? And how does that difference impact how
| people are treated by physicians and nurses?
| danaliv wrote:
| I read "physical changes" as structural changes in neurons
| and neural connections themselves, as opposed to the (at
| least popular) thinking that mental illness is down to
| imbalances in neurotransmitters. There's at least some
| research around this related to addiction, namely that
| overexpression of ΔFosB produces changes to neurons in the
| reward pathways.
| psyc wrote:
| Mentally ill person here. I haven't heard a mental health
| professional use the phrase 'chemical imbalance' or anything
| similar in 20 years. It's my understanding it has been
| deprecated. It's mostly repeated by laypeople in ignorance.
| It's hard to flush something like that out of the popular
| lexicon, once established.
|
| There's a lot of maddeningly persistent misinformation about
| mental health on Internet forums.
| dboreham wrote:
| I'm fairly sure I've seen tv drug ads in the past few years
| use the phrase "chemical imbalance".
| psyc wrote:
| They have a clear incentive to exaggerate a) the
| effectiveness of the drugs, and b) how well they understand
| how they work. There is an important distinction between
| knowing how drugs affect brain chemistry, vs knowing how
| they alleviate symptoms. The latter is still more empirical
| than theoretical.
| Gibbon1 wrote:
| The widespread and increasing use of psychotropic drugs says
| that 'chemical imbalance' still serves as the foundation.
| abellerose wrote:
| Person with gender dysphoria here. I've still heard the
| phrase from nurses & staff in Canada, and heard it several
| years ago from doctors & staff when living in the USA. I'm
| unsure why you think I was referring to internet forums; I
| thought those had been deprecated since 2009.
| psyc wrote:
| https://www.google.com/search?q=chemical+imbalance+theory
| abellerose wrote:
| I'm unsure why you're linking that. Do you not realize from
| my first comment that I'm describing the theory as nonsense,
| in contrast to what I wrote? edit: ah, thanks for the
| clarification.
| psyc wrote:
| My intent was to support your comment, not contradict it,
| by reassuring anyone who might read this that the
| chemical imbalance angle has rightly fallen out of favor.
| SummerlyMars wrote:
| > Society is basically brainwashed into believing everyone has
| free will. The result is that people with the most capital
| prosper, and I assume the foregoing wouldn't be the case if
| everyone were a determinist.
|
| Why do you assume this? If everyone believed in determinism
| rather than free will, couldn't those with the most capital
| (deterministically) say "Well, that's just the way it should
| be. They can't choose to be different"?
|
| I'm not inclined to think that will is all that free, but I
| can't seem to see the connection between that and capitalism.
| abellerose wrote:
| Understanding that free will is an "illusion" opens a few
| doors for approaching life, one of them being how society is
| structured with regard to healthcare, housing, finances,
| education...
|
| Anyway, people cast their votes by their beliefs as well, and
| currently we're living in social systems designed around the
| belief that we have free will: the idea that someone earned
| what they have, in contrast to someone worse off, and that
| people aren't just destined by their life circumstances to
| end up homeless. In reality, genetics, environmental factors
| and every succeeding moment follow from the preceding forces.
|
| Well, when you realize the foregoing belief in free will is
| untrue, and you really take the time to adapt your thinking to
| the understanding that free will is an illusion, I assume you
| become more compassionate, because you're actually observing
| reality for what it is: truly awful to some people, who had no
| control over their misfortune. I know from my own life that it
| took a few years to truly get "it", but afterwards I feel far
| more empathetic, and disgusted by the current systems that
| refuse people the medical help they need, or shelter & food.
|
| Everyone is just assigned a life at birth without any say, and
| the same goes for everything after: no real control exists to
| alter your destiny. So a nihilist can say: well, so what?
| Everything is just destined. But that doesn't mean we should
| keep stalling people from being educated about how reality
| happens to be, or from designing better social systems that
| adapt to the true reality of the universe. Anyway, that's my
| long rant/suggestion on it.
| pdonis wrote:
| _> when you realize the foregoing belief in free will is
| untrue, and you really take the time to adapt your thinking
| to the understanding that free will is an illusion, I assume
| you become more compassionate_
|
| You assume wrong. The most brutal totalitarian governments
| in history have been built on the same understanding of
| humans that you describe. So that understanding can go
| either way: it can make you more compassionate, _or_ it can
| make you much _less_ so.
|
| Free will is best understood not as a "fact" but as a
| right. Every person has the _right_ to make their own
| choices instead of someone else making those choices for
| them. And the most dehumanizing thing you can tell a person
| is that they are "destined by circumstances" (your phrase)
| to be in the situation they are in, instead of having the
| power to change it by the choices they make.
|
| Sure, the power to make one's own choices is not unlimited.
| We can't choose to not be affected by gravity. We can't
| choose to be omnipotent or omniscient. And, most important,
| we can't choose how _other_ people will make their own
| choices (more on that below). But that doesn't change the
| fact that people do make choices, and can change their
| situation by doing so. The proper role of compassion and
| charity is to help empower people to make better choices
| for themselves.
|
| And the proper understanding of situations where some
| people are deprived of basic necessities through no fault
| of their own is not that it was just "destined by
| circumstances", but that _other people_ made choices that
| _created_ those situations. Trying to hand-wave that away
| and pretend that things like famines and homelessness are
| just accidents of nature, instead of products of deliberate
| choices made by particular people in power--the whole "how
| society is structured" that you slide by without really
| looking at where it comes from--only makes those problems
| worse.
| abellerose wrote:
| Feel free to email me for further discussion. I have the
| impression you don't understand my definition of free
| will compared to your own definition. I will express here
| that drawing a conclusion from past history isn't fair,
| or even comparable to what I could argue against people
| doing under the belief that people have free will. Anyway,
| I'm not convinced by what you've expressed against my
| views and would appreciate a longer discussion by email
| if you're up to it. I fundamentally think it's morally
| wrong to keep someone in the dark about reality by
| deceiving them about their will or life outcome,
| especially if that person is homeless or suicidal for
| example.
| pdonis wrote:
| _> Feel free to email me for further discussion._
|
| Why not just have the discussion here?
|
| _> I have the impression you don't understand my
| definition of free will compared to your own definition._
|
| I think you are trading on the ambiguity in the term
| "free will" to avoid having to confront the actual issues
| involved. That's why I used the less ambiguous term
| "making choices".
|
| If you don't think people can ever make choices that make
| a difference in their situation, then you and I have a
| fundamental disagreement that I don't think will get
| resolved by any discussion. Also, if that's your belief,
| I think you are being inconsistent; you talk about
| "designing better social systems", but that very process
| involves people making choices that will make a
| difference in their situation (as well as the situation
| of many, many other people).
|
| If you just think the _amount_ of difference a person can
| make in their situation by making choices varies with the
| situation, of course I agree with that. But that's not a
| problem that can be fixed by "designing better social
| systems". It can only be fixed by being willing to call a
| spade a spade when people in power make choices that
| disempower others, so that people in power can be
| _stopped_ from doing that. The biggest barrier to people
| being able to change their situation by making choices is
| restrictions put on them by other people, not some
| abstract claim about free will being an illusion.
| "Designing social systems" makes that problem worse, not
| better.
|
| _> I fundamentally think it's morally wrong to keep
| someone in the dark about reality by deceiving them about
| their will or life outcome_
|
| I think you are confusing your opinions with "reality".
| Telling people they don't have free will, or that free
| will is an illusion, is just as much of an opinion as
| telling them they _do_ have free will. Neither is a
| statement of "reality". That's why I say free will is
| best viewed as a right: because in my opinion, believing
| that people have free will is respecting their right to
| make their own choices, and believing that people don't
| have free will is _not_ respecting that right--which just
| means arrogating to yourself the power to make choices
| that disempower them. Respecting people's right to make
| choices is not a factual claim about people; it's a
| policy, which I think should be adopted because it will
| end up helping people.
|
| _> especially if that person is homeless or suicidal for
| example_
|
| I don't see how it's any help to a person who is homeless
| or suicidal to tell them free will is an illusion. Nor
| would it be any help to tell them it isn't. A person who
| is homeless or suicidal has much more pressing things to
| think about than whether or not free will is an illusion.
| And helping such a person has nothing at all to do with
| your own opinions or beliefs, much less foisting them on
| others in the guise of "telling them about reality".
| pdonis wrote:
| _> what I could argue against people doing under the
| belief that people have free will_
|
| What sorts of terrible things do you think people do
| under the belief that people have free will?
| Layke1123 wrote:
| I fully believe my decisions and actions are largely
| determined long before I am fully aware of what is taking
| place.
|
| This doesn't make me despair or not care to do anything,
| but makes me extremely resilient and adaptable to
| changing situations. I can acknowledge that no matter
| what I want to do, I cannot stop my leg from twitching
| when someone hits the nerve underneath my kneecap.
|
| I also accept that if I make bad decisions because of an
| addiction or deficient reasoning process, I would willingly
| accept a mechanism to correct said process or
| improve my deficiency.
|
| It's not that free will is necessary to understand
| reality. It's that free will is necessary for YOUR
| reality. Some of us get along just peachy without it.
| mongol wrote:
| I agree with almost everything, except to say that famine
| and homelessness can be caused both by human choices and by
| accidents or events outside of human control.
| pdonis wrote:
| _> famine and homelessness can be caused both by human
| choices and by accidents or events outside of human control_
|
| Yes, I agree that accidents or events outside human
| control can cause bad things to happen. I would also
| point out, though, that _how_ bad those things get has
| far more to do with human choices. There are many choices
| that people can make to be better prepared for accidents
| and events outside their control if and when they happen.
| Layke1123 wrote:
| They will think it's justified, just like the masses will
| justify taking all the hoarded wealth. Fairly distributing
| capital and power is the only reasonable solution in a world
| that is so obviously deterministic.
| wizzwizz4 wrote:
| Never underestimate the adaptability of rhetoric.
| Layke1123 wrote:
| While cute, it misses the point. If I say a car is
| coming, it is then up to the person to decide whether
| they step into the street or not. This is not a
| rhetorical argument.
| smolder wrote:
| They're flawed, exploitable predictive machines.
| 0thgen wrote:
| The author writes about how describing humans as predictive
| engines will 'encourage us to reduce our fellow humans to mere
| pieces of machinery'
|
| ^ this is pretty much a fluff statement. it's basically a hand-
| wavey slippery slope argument that can be made about any theory
| in animal/human research.
|
| i could also make the claim that thinking about humans as
| predictive learning engines will encourage us to consider our
| deterministic nature and have more compassion for each other <--
| both my and the author's claims lack any substantial evidence
| passivate wrote:
| A few non-fluff sentences:
|
| "Consequential decisions in law enforcement, military and
| financial contexts are increasingly influenced by automated
| assessments spat out by proprietary predictive engines."
|
| "These prediction engines have primed us to be receptive to the
| idea of the predictive brain. So too has the science of
| psychology itself, which has been concerned since its founding
| with the prediction and control of human beings. 'All natural
| sciences aim at practical prediction and control, and in none
| of them is this more the case than in psychology today.'"
|
| "Advertising agencies would fund their own experiments by
| researchers such as Watson to test the laws of the consumer-
| machines that they targeted, rationalising their understandings
| of phenomena such as habitual product use, targeted messaging
| and brand loyalty."
|
| " Simulmatics has become, in Lepore's words:
| the mission of nearly every corporation. Collect data. Write
| code: if/then/else. Detect patterns. Predict behaviour. Direct
| action. Encourage consumption. Influence elections. "
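|
| (As a toy illustration of that pipeline, with every field,
| threshold and action invented for the example, the if/then/else
| step is literally this kind of code:
|
|     # Toy sketch of "collect data -> if/then/else -> predict
|     # behaviour -> direct action". All names are invented.
|     def predict_behaviour(user):
|         if user["purchases"] > 10 and user["ad_clicks"] > 3:
|             return "likely_buyer"
|         elif user["session_minutes"] > 30:
|             return "browser"
|         return "unlikely_buyer"
|
|     def direct_action(prediction):
|         return {"likely_buyer": "show premium ads",
|                 "browser": "send a discount email",
|                 "unlikely_buyer": "retarget later"}[prediction]
|
|     user = {"purchases": 12, "ad_clicks": 5, "session_minutes": 8}
|     print(direct_action(predict_behaviour(user)))
|
| Crude, but that is the entire loop the quote describes.)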
|
| "Scientists might believe that they are simply building
| conceptual and mechanical tools for observation and
| understanding - the telescopes and microscopes of the
| neuroscientific age. But the tools of observation can be
| fastened all too easily to the end of a weapon and targeted at
| masses of people. If predictive systems began as weapons meant
| to make humans controllable in the field of war and in the
| market, that gives us extra reason to question those who wield
| such weapons now"
|
| >" thinking about humans as predictive learning engines will
| encourage to consider our deterministic nature and have more
| compassion for eachother"
|
| OK, let us consider that too. What would be the underlying
| mechanism of this? How would we be primed/conditioned into
| adopting this behavior? Examples? Proposed models/theories/etc?
| [deleted]
| wombatmobile wrote:
| The author wants society to be more compassionate and less
| exploitative. That's all.
|
| Why?
|
| To make life more enjoyable and less challenging for
| everybody.
|
| Imagine a world in which the resources of e.g. Facebook were
| dedicated towards increasing happiness and usefulness through
| connecting people to sources of emotional satisfaction,
| rather than increasing shareholder wealth through targeted
| advertising.
|
| How?
|
| The author doesn't say.
|
| The challenge is to think of an economic system in which
| vendors can act selfishly but consumers benefit virtuously.
| One example is the car industry. Before Henry Ford brought
| mass-produced cars and higher wages to society, individuals
| had less freedom, fewer choices, and more dependency on feudal
| overlords.
|
| The problem so far with the digital economy is that it hasn't
| yielded the same transition in society, at least, not so
| obviously. In many ways it's the opposite. More engagement
| leads to more frustration and unhappiness.
|
| The rise of surveillance capitalism is negative for personal
| development because it is based on an economic model of
| exploitation, i.e. spying to make ads sell more stuff. To
| change this, advertising would have to change so that the KPIs
| are not just clicks, conversions, and blind revenue. The KPIs
| should be consumer satisfaction, personal development, and
| societal development. That's not impossible. It just
| hasn't been done yet.
| rck wrote:
| Many words, few ideas. I was hoping for an argument based on ...
| well, anything. There's no real criticism of the scientific
| support for the idea that human minds are predictive. Just some
| vague connections to economics and a vocabulary that suggests
| anti-capitalist sympathies. Your attention is better spent
| elsewhere.
| Layke1123 wrote:
| If our attention is better spent elsewhere, why then did you
| spend effort and attention reading it and then trying to relate
| anti-capitalist rhetoric here? Is it possible that you have an
| overly attached fondness for the term capitalism and can't
| handle any rebuke of it?
| Layke1123 wrote:
| Can someone point out why I'm being downvoted? Or respond to
| my comment rather than just shutting down discourse?
| wizzwizz4 wrote:
| Perhaps your criticism of the bad comment was too
| rhetorical.
| meowkit wrote:
| My first note is that you write dismissively of capitalism.
|
| Capitalism isn't the boogeyman people make it out to be. It's
| an optimization criterion: increase financial capital over
| other forms of capital (natural, labor, etc). What people
| usually object to is corporatocracy.
|
| https://en.wikipedia.org/wiki/Corporatocracy
| Layke1123 wrote:
| Is writing dismissively of capitalism not OK? Can you
| make the argument that capitalism was and is always a
| force for good in the world? Is plutocracy considered a
| good and righteous thing?
| r34 wrote:
| More and more I'm convinced that, of all possible search
| spaces, the most important is the pleasure-driven one. But a
| well-constructed knowledge skeleton lets one start exploring it
| at a higher level.
|
| That's how I see it: build the scaffolding on ratio (reason),
| and start flying from there. But it's always important to be
| able to land back and not get lost completely, because the
| outer space is simply uncomfortably unknown.
|
| That's why, amongst all writers (or more generally, creators),
| I most enjoy those who have both solid knowledge and brave
| imagination.
| Sparkyte wrote:
| Is it really predictive, or just conformative? It's bad
| nomenclature to assume that an outcome is guaranteed. We assess
| and weigh circumstances; that's not predicting, that's coping.
| We as humans cope within a conformed environment, and that's
| how we think. Failure to act accordingly brings punishment, and
| acting correctly brings reward. All learned coping behaviors.
| Veedrac wrote:
| If I've understood this article correctly, the thesis is that we
| should avoid trying to understand the mind, because understanding
| the mind is inevitably about controlling and subverting human
| behaviour to worse ends, and that it also legitimizes those ends
| by dehumanizing people.
|
| I am, mind, not totally sure I have understood the article,
| because the article is not written very clearly, as the point is
| never stated without vagueness and allusion. Perhaps this comes
| from the author's worries about the negative effects of
| understanding things.
| SllX wrote:
| Well, depends on your moral strictures.
|
| A functional understanding of how someone else thinks, whether
| on a personal basis or more generally or categorically, is
| useful and actionable information.
|
| What you do with that is up to you, but if you're not using
| that information, someone else is.
| cmehdy wrote:
| This reminds me of the typical criticisms that Daniel
| Dennett[0] has faced throughout the decades as he tried to
| argue that we can talk about consciousness with tools similar
| to those of computation (amongst other things)
| and that there isn't a reason to put humans on a magical
| pedestal of immunity from further understanding.
|
| There's some anchored belief that humans are somehow sacred
| beyond understanding and that logic that we happily apply to a
| cricket or a star has to be thrown out the window when we try
| to touch the sanctity of humanity. I recall Dennett pointing
| out how, starting from Descartes, the whole meme of the person
| within your brain doing things (present-day example: see
| Pixar's Inside Out) is a way to put a wall around the thing
| that scientists want to poke at for more understanding. The guy
| turning switches in the brain is God, and you just can't test
| it and should accept it as gospel, i.e. the "don't question
| God" type of thing.
|
| Making the unknown a thing from beyond is how we (used to)
| avoid being scared of death too. I've always been wary of that
| attitude, and the tech world is not immune to that by the way
| (these days a good one is "don't question the absolute power of
| technology to solve the climate crisis").
|
| [0]
| https://en.wikipedia.org/wiki/Daniel_Dennett#Philosophy_of_m...
| yters wrote:
| It could also be the case that the human mind transcends
| computation, i.e. that it is a transfinite, yet limited,
| halting oracle, and as such is logically impossible to reduce
| to a Turing machine. To insist this is not the case merely on
| the basis of materialism is begging the question. It is a very
| simple and formally rigorous point, but it gets obscured by
| high-falutin philosophical language like "reductionism".
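|
| To make that concrete: a "halting oracle" is exactly the thing
| Turing proved no program can be. A minimal Python sketch of the
| classic diagonalization (the halts() function is hypothetical;
| the argument shows no correct implementation of it can exist):
|
|     def halts(prog, arg):
|         # Suppose this returned True iff prog(arg) halts.
|         raise NotImplementedError("impossible in general")
|
|     def paradox(prog):
|         if halts(prog, prog):  # told "you will halt"...
|             while True:        # ...then loop forever
|                 pass
|         return "halted"        # told "you will loop": halt
|
|     # paradox(paradox) halts exactly when halts() says it
|     # doesn't, so no total, correct halts() can exist.
|
| So the claim is that human minds reliably do something provably
| beyond any Turing machine's reach.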
| Thiez wrote:
| Is there any reason to believe that the human mind has
| these amazing computation-transcending powers? What would
| be the mechanism behind it?
|
| Unless there is evidence for these suggested special powers
| of the human mind, it looks to me like another instance of
| Russell's teapot.
| yters wrote:
| Programming is one of the most obvious. Mathematics is
| another. Human success in both is easiest to explain if
| we are halting oracles of some variety: not perfect, but
| still transfinite.
| Thiez wrote:
| So you don't have any concrete examples, just broad
| topics that you have a vague feeling about?
|
| I guess I am not truly human because I can't solve the
| halting problem in the general case. I'm probably also
| subject to Gödel's incompleteness theorem.
| MaxBarraclough wrote:
| > I recall Dennett pointing out how starting from Descartes
| the whole meme of the person within your brain doing things
|
| I like Dennett's coinage for this error of reasoning: the
| _Cartesian theater_. If a theory of mind relies upon some
| _inner_ mind, it hasn't explained the mind at all.
|
| _edit_ I see it has its own Wikipedia article:
| https://en.wikipedia.org/wiki/Cartesian_theater
| thisiszilff wrote:
| A particularly choice quote from Dijkstra here:
|
| "The question of whether machines can think is about as
| relevant as the question of whether submarines can swim."
|
| In this case I guess it's the other way around, but the
| sentiment still stands.
| mannykannot wrote:
| Dijkstra, who valued rigor, was apparently greatly
| irritated by loose talk of computers as thinking machines
| and the use of teleological language in discussing them (in
| the sense of attributing purposes to programs, rather than
| their creators), so this quote could just be a reaction to
| that. If, on the other hand, it was offered in the spirit
| of "(artificial) machines could not possibly think, because
| thinking is something only (human) animals can do", that
| would just be a way of avoiding the question.
| ilaksh wrote:
| "..it becomes all too easy for us to slip into adversarial and
| exploitative framings of the human"
|
| That is inherent in the structure of society and,
| unfortunately, supported by many worldviews.
|
| Personally I think that there are a couple of problems. One is
| with worldviews that oversimplify along the dimension of
| cooperative versus competitive. The second part which reinforces
| the first is that it's actually quite difficult to make a
| framework that really works without being overly competitive or
| cooperative or tending towards extreme centralization.
|
| My worldview is centered on technology and so of course I think
| that is a key part of the solution. Money and government must
| become high technologies. Decentralized approaches have the
| potential to provide not only the freedom to evolve but the
| capacity to operate holistically at the same time.
| zoomablemind wrote:
| > _"...Human beings aren't pieces of technology, no matter how
| sophisticated. But by talking about ourselves as such, we
| acquiesce to the corporations and governments that decide to
| treat us this way. When the seers of predictive processing hail
| prediction as the brain's defining achievement, they risk giving
| groundless credibility to the systems that automate that act -
| assigning the patina of intelligence to artificial predictors, no
| matter how crude or harmful or self-fulfilling their forecasts."_
|
| This seems to summarize the main thesis.
|
| Modeling/simulating human effects by applying various
| technological approximations may be useful within the intended
| scope. However, expanding that scope and projecting the models
| onto the whole phenomenon of the human brain is false, if not
| dangerous. The only purpose this sort of unwarranted projection
| serves is to justify the applied methods and the resulting
| treatment of humans.
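|
| For concreteness, the "predictive brain" framing usually cashes
| out as some form of prediction-error minimization. A toy sketch
| in Python (all numbers and names invented for illustration):
|
|     # Bare skeleton of a predictive-processing loop: predict
|     # the next input, measure the error, nudge the internal
|     # model to shrink that error.
|     def predictive_loop(observations, learning_rate=0.1):
|         belief = 0.0                     # internal estimate
|         for obs in observations:
|             error = obs - belief         # prediction error
|             belief += learning_rate * error
|         return belief
|
|     print(predictive_loop([1.0] * 50))  # converges toward 1.0
|
| Useful as an approximation of some neural processes, but a long
| way from the whole phenomenon of the human brain.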
|
| And why all these attempts to 'crack' the brain? The author
| posits that the main goal is control.
|
| But why would one need such control? To subjugate, to make the
| world a better place, to live forever, or just make a quick buck
| and squirrel it away...? What do predictions tell us about the
| ultimate intent?
| UnFleshedOne wrote:
| Like any technology, I'm sure such understanding will be used
| for all of the above and then some.
| PeterStuer wrote:
| Best line: "If software has eaten the world, its predictive
| engines have digested it, and we are living in the society it
| spat out."
___________________________________________________________________
(page generated 2021-02-02 23:01 UTC)