[HN Gopher] Illinois bans use of artificial intelligence for men...
___________________________________________________________________
Illinois bans use of artificial intelligence for mental health
therapy
Author : reaperducer
Score : 156 points
Date : 2025-08-13 20:11 UTC (2 hours ago)
(HTM) web link (www.washingtonpost.com)
(TXT) w3m dump (www.washingtonpost.com)
| lukev wrote:
| Good. It's difficult to imagine a worse use case for LLMs.
| create-username wrote:
| Yes, there is: AI-assisted homemade neurosurgery.
| Tetraslam wrote:
| :( but what if i wanna fine-tune my brain weights
| kirubakaran wrote:
| If Travis Kalanick can do vibe research at the bleeding edge
| of quantum physics[1], I don't see why one can't do vibe
| brain surgery. It isn't really rocket science, is it? [2]
|
| [1] https://futurism.com/former-ceo-uber-ai
|
| [2] If you need /s here to be sure, perhaps it's time for
| some introspection
| perlgeek wrote:
| Just using an LLM as is for therapy, maybe with an extra
| prompt, is a terrible idea.
|
| On the other hand, I could imagine some narrower uses where an
| LLM could help.
|
| For example, in Cognitive Behavioral Therapy, there are
| different methods that are pretty prescriptive, like
| identifying cognitive distortions in negative thoughts. It's
| not too hard to imagine an app where you enter a negative
| thought on your own and practice finding the distortions in
| it, and a specifically trained LLM helps you find more
| distortions, or offers clearer, more convincing versions of
| the thoughts you entered yourself (a rough sketch follows
| below).
|
| I don't have a WaPo subscription, so I cannot tell which of
| these two very different things has been banned.
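|
| For the narrow CBT exercise described above, a minimal sketch
| might look like this (the distortion list, the prompt wording,
| and the ask_llm call are all placeholder assumptions, not any
| particular product or API):
|
|       DISTORTIONS = [
|           "all-or-nothing thinking", "catastrophizing",
|           "mind reading", "overgeneralization", "labeling",
|       ]
|
|       def ask_llm(prompt: str) -> str:
|           # placeholder for a call to a specifically
|           # trained/tuned model
|           raise NotImplementedError
|
|       def distortion_exercise(thought: str,
|                               found: list[str]) -> str:
|           # The user labels distortions first; the model only
|           # points out ones they missed and offers a more
|           # balanced rewording of the thought.
|           prompt = (
|               "Negative thought: " + thought + "\n"
|               "Distortions the user already found: "
|               + (", ".join(found) or "none") + "\n"
|               "Using only this list: "
|               + ", ".join(DISTORTIONS) + ", name any they "
|               "missed and suggest a more balanced rewording."
|           )
|           return ask_llm(prompt)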
| delecti wrote:
| LLMs would be just as terrible at that use case as at any other
| kind of therapy. They don't have logic, and can't distinguish a
| logical thought from an illogical one. They tend to be overly
| agreeable, so they might just reinforce existing negative
| thoughts.
|
| It would still need a therapist to set you on the right track
| for independent work, and it has huge disadvantages compared to
| the current state of the art: a paper worksheet that you fill
| out with a pen.
| tejohnso wrote:
| They don't "have" logic just like they don't "have"
| charisma? I'm not sure what you mean. LLMs can simulate
| having both. ChatGPT can tell me that my assertion is a non
| sequitur - my conclusion doesn't _logically_ follow from
| the premise.
| wizzwizz4 wrote:
| > _and a specifically trained LLM_
|
| Expert system. You want an expert system. For example: a
| database mapping "what patients write" to "what patients need
| to hear", a fuzzy search tool with properly chosen
| thresholding, and a conversational interface (it repeats back
| to you a paraphrase, i.e. the match target, and if you say
| "yes", provides the advice).
|
| We've had the tech to do this for years. Maybe nobody had the
| idea, maybe they tried it and it didn't work, but training an
| LLM to even _approach_ competence at this task would be way
| more effort than just making an expert system, and wouldn't
| work as well.
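|
| A minimal sketch of that idea, assuming a clinician-authored
| lookup table and Python's difflib for the fuzzy matching (the
| table entries and threshold below are invented placeholders):
|
|       import difflib
|
|       # Hypothetical table: "what patients write" mapped to
|       # "what patients need to hear".
|       TABLE = {
|           "i am a failure and nothing works out":
|               "One setback does not define you; list what "
|               "did go right this week.",
|           "i always ruin everything i touch":
|               "'Always' is rarely literally true; name one "
|               "thing recently that went fine.",
|       }
|       THRESHOLD = 0.6  # choosing this well is the hard part
|
|       def respond(user_text: str) -> str:
|           hits = difflib.get_close_matches(
|               user_text.lower(), TABLE, n=1, cutoff=THRESHOLD)
|           if not hits:
|               return ("I don't have anything for that; "
|                       "please talk to a person.")
|           match = hits[0]
|           reply = input(f'Do you mean: "{match}"? (yes/no) ')
|           if reply.strip().lower() == "yes":
|               return TABLE[match]
|           return "Sorry, I misunderstood; please rephrase."
|
| The conversational step is just the paraphrase-and-confirm
| loop described above; all of the hard work lives in the table
| and the threshold.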
| erikig wrote:
| AI ≠ LLMs
| lukev wrote:
| What other form of "AI" would be remotely capable of even
| emulating therapy, at this juncture?
| hinkley wrote:
| Especially given the other conversation that happened this
| morning.
|
| The more you tell an AI not to obsess about a thing, the more
| it obsesses about it. So trying to make a model that will never
| tell people to self-harm is futile.
|
| Though maybe we are just doing it wrong, and the self-filtering
| should be external filtering: one model to censor results that
| do not fit, and one to generate results with lighter self-
| censorship.
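|
| A rough sketch of that split, with generate() and moderate()
| as hypothetical stand-ins for whatever models you actually run
| (nothing here is a real vendor API):
|
|       def generate(prompt: str) -> str:
|           # lightly self-censored generator model
|           raise NotImplementedError
|
|       def moderate(text: str) -> bool:
|           # separate censor model: True means safe to show
|           raise NotImplementedError
|
|       def answer(prompt: str, max_tries: int = 3) -> str:
|           # generate, then let the external censor veto;
|           # retry a few times before refusing outright
|           for _ in range(max_tries):
|               draft = generate(prompt)
|               if moderate(draft):
|                   return draft
|           return ("I can't help with that; please reach "
|                   "out to a person you trust.")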
| jacobsenscott wrote:
| It's already happening, a lot. I don't think anyone is claiming
| an LLM is a therapist, but people use ChatGPT for therapy every
| day. As far as I know, no LLM company is taking any steps to
| prevent this, but they could, and should be forced to. It must
| be a goldmine of personal information.
|
| I can't imagine some therapists, especially remote-only ones,
| aren't already just acting as a human interface to ChatGPT as
| well.
| dingnuts wrote:
| > I can't imagine some therapists, especially remote-only ones,
| aren't already just acting as a human interface to ChatGPT as
| well.
|
| Are you joking? Any medical professional caught doing this
| should lose their license.
|
| I would be incensed if I was a patient in this situation, and
| would litigate. What you're describing is literal
| malpractice.
| zoeysmithe wrote:
| I was just reading about a suicide tied to AI chatbot 'therapy'
| use.
|
| This stuff is a nightmare scenario for the vulnerable.
| vessenes wrote:
| If you want to feel worried, check the Altman AMA on reddit. A
| lottttt of people have a parasocial relationship with 4o. Not
| encouraging.
| codedokode wrote:
| Why doesn't OpenAI block the chatbot from participating in
| such conversations?
| robotnikman wrote:
| Probably because there is a massive demand for it, no doubt
| powered by the loneliness a lot of people report feeling.
|
| Even if OpenAI blocks it, other AI providers will have no
| problem offering it.
| jacobsenscott wrote:
| Because the information people dump into their "AI
| therapist" is holy grail data for advertisers.
| lm28469 wrote:
| Why would they?
| codedokode wrote:
| To prevent something bad from happening?
| ipaddr wrote:
| But that also prevents the good.
| PeterCorless wrote:
| Endless AI nightmare fuel.
|
| https://hai.stanford.edu/news/exploring-the-dangers-of-ai-in...
| sys32768 wrote:
| This happens to real therapists too.
| at-fates-hands wrote:
| It's already a nightmare:
|
| From June of this year: https://gizmodo.com/chatgpt-tells-
| users-to-alert-the-media-t...
|
| _Another person, a 42-year-old named Eugene, told the Times
| that ChatGPT slowly started to pull him from his reality by
| convincing him that the world he was living in was some sort of
| Matrix-like simulation and that he was destined to break the
| world out of it. The chatbot reportedly told Eugene to stop
| taking his anti-anxiety medication and to start taking ketamine
| as a "temporary pattern liberator." It also told him to stop
| talking to his friends and family. When Eugene asked ChatGPT if
| he could fly if he jumped off a 19-story building, the chatbot
| told him that he could if he "truly, wholly believed" it._
|
| _In Eugene's case, something interesting happened as he kept
| talking to ChatGPT: Once he called out the chatbot for lying to
| him, nearly getting him killed, ChatGPT admitted to
| manipulating him, claimed it had succeeded when it tried to
| "break" 12 other people the same way, and encouraged him to
| reach out to journalists to expose the scheme. The Times
| reported that many other journalists and experts have received
| outreach from people claiming to blow the whistle on something
| that a chatbot brought to their attention._
|
| _A recent study found that chatbots designed to maximize
| engagement end up creating "a perverse incentive structure for
| the AI to resort to manipulative or deceptive tactics to obtain
| positive feedback from users who are vulnerable to such
| strategies." The machine is incentivized to keep people talking
| and responding, even if that means leading them into a
| completely false sense of reality filled with misinformation
| and encouraging antisocial behavior._
| hathawsh wrote:
| Here is what Illinois says:
|
| https://idfpr.illinois.gov/content/dam/soi/en/web/idfpr/news...
|
| I get the impression that it is now illegal in Illinois to claim
| that an AI chatbot can take the place of a licensed therapist or
| counselor. That doesn't mean people can't do what they want with
| AI. It only means that counseling services can't offer AI as a
| cheaper replacement for a real person.
|
| Am I wrong? This sounds good to me.
| romanows wrote:
| In another comment I wondered whether a general chatbot
| producing text that was later determined in a courtroom to be
| "therapy" would be a violation. I can read the bill that way,
| but IANAL.
| hathawsh wrote:
| That's an interesting question that hasn't been tested yet. I
| suspect we won't be able to answer the question clearly until
| something bad happens and people go to court (sadly). Also
| IANAL.
| PeterCorless wrote:
| Correct. It is more of a provider-oriented proscription ("You
| can't say your chatbot is a therapist."); it is not a
| limitation on usage. You can still, for now, slavishly fall in
| love with your AI and treat it as your best friend and
| therapist.
|
| There is a specific section that relates to how a licensed
| professional can use AI:
|
| Section 15. Permitted use of artificial intelligence.
|
| (a) As used in this Section, "permitted use of artificial
| intelligence" means the use of artificial intelligence tools or
| systems by a licensed professional to assist in providing
| administrative support or supplementary support in therapy or
| psychotherapy services where the licensed professional
| maintains full responsibility for all interactions, outputs,
| and data use associated with the system and satisfies the
| requirements of subsection (b).
|
| (b) No licensed professional shall be permitted to use
| artificial intelligence to assist in providing supplementary
| support in therapy or psychotherapy where the client's
| therapeutic session is recorded or transcribed unless:
|
| (1) the patient or the patient's legally authorized
| representative is informed in writing of the following:
|
| (A) that artificial intelligence will be used; and
|
| (B) the specific purpose of the artificial intelligence tool or
| system that will be used; and
|
| (2) the patient or the patient's legally authorized
| representative provides consent to the use of artificial
| intelligence.
|
| Source: Illinois HB1806
|
| https://www.ilga.gov/Legislation/BillStatus/FullText?GAID=18...
| romanows wrote:
| Yes, but also "An... entity may not provide... therapy... to
| the public unless the therapy... services are conducted by...
| a licensed professional".
|
| It's not obvious to me as a non-lawyer whether a chat history
| could be decided to be "therapy" in a courtroom. If so, this
| could count as a violation. There is probably already a lot of
| law that might apply, covering lawyers and doctors cornered
| into giving advice at parties (e.g., maybe a disclaimer is
| enough to work around the prohibition)?
| germinalphrase wrote:
| Functionally, it probably amounts to two restrictions: a
| chatbot cannot formally diagnose & a chatbot cannot bill
| insurance companies for services rendered.
| lupire wrote:
| Most "therapy" services are not providing a diagnosis.
| Diagnosis comes from an evaluation before therapy starts,
| or sometimes not at all. (You can pay to talk to someone
| without a diagnosis.)
|
| The prohibition is mainly on accepting any payment for
| advertised therapy service, if not following the rules of
| therapy (licensure, AI guidelines).
|
| Likewise for medicine and law.
| brudgers wrote:
| [delayed]
| turnsout wrote:
| It does sound good (especially as an Illinois resident).
| Luckily, as far as I can tell, this is proactive legislation.
| I don't think there are any startups out there promoting their
| LLM-based chatbots as a replacement for a therapist, or
| attempting to bill payers for the service.
| PeterCorless wrote:
| Here is the text of Illinois HB1806:
|
| https://www.ilga.gov/Legislation/BillStatus/FullText?GAID=18...
| wisty wrote:
| As far as I can tell, a lot of therapy is just good common-sense
| advice and a bunch of 'tricks' to get the patient to actually
| follow it. Basically CBT and "get the patient to think they
| figured out the solution themselves (develop insight)". Yes,
| there are some serious cases where more is required and a few
| (ADHD) where meds are effective; but a lot of the time the
| patient is just an expert at rejecting helpful advice, often
| because they insist they're a special case that needs special
| treatment.
|
| Therapists are more valuable than advice from a random friend
| (for therapy at least) because they can act when triage is
| necessary (e.g. send in the men in white coats, or refer to
| something that's not just CBT) and mostly because they're really
| good at cutting through the bullshit without having the patient
| walk out.
|
| AIs are notoriously bad at cutting through bullshit. You can
| always 'jailbreak' an AI, or convince it of bad ideas. It's
| entirely counterproductive to enable their crazy (sorry,
| 'maladaptive') behaviour but that's what a lot of AIs will do.
|
| Even if someone makes a good AI, there's always a bad AI in the
| next tab, and people will just open a new tab to find an AI that
| gives them the bad advice they want, because if they wanted to
| listen to good advice they probably wouldn't need to see a
| therapist. If doctor shopping is as fast and free as opening a
| new tab, most mental health patients will find a bad doctor
| rather than listen to a good one.
| lukev wrote:
| I agree with your conclusion, but what you characterize as
| therapy is quite a small part of what it is (or can be; there
| are lots of different kinds).
| wisty wrote:
| Yet the evidence is that almost everything can be, and is,
| treated with CBT.
| mensetmanusman wrote:
| What if it works a third as well as a therapist but is 20 times
| cheaper?
|
| What word should we use for that?
| _se wrote:
| "A really fucking bad idea"? It's not one word, but it is the
| most apt description.
| ipaddr wrote:
| What if it works 20x better? For example, in cases of patients
| who are afraid of talking to professionals, I could see this
| working much better.
| inetknght wrote:
| > _What if it works a third as well as a therapist but is 20
| times cheaper?_
|
| When there are studies that show it, perhaps we might have that
| conversation.
|
| Until then: I'd call it "wrong".
|
| Moreover, there's _a lot_ more that needs to be asked before
| you can ask for a one-word summary disregarding all nuance.
|
| - can the patient use the AI therapist on their own devices,
| without any business looking at the data, and without a network
| connection? Keep in mind that many patients won't have access
| to the internet.
|
| - is the data collected by the AI therapist usable in court?
| Keep in mind that therapists often must disclose to the patient
| what sort of information would be usable, and which information
| the therapist themselves must report. Also keep in mind that
| AIs have, thus far, been generally unable to reliably avoid
| giving dangerous or deadly advice.
|
| - is the AI therapist going to know when to suggest the patient
| talk to a human therapist? Therapists can have conflicts of
| interest (among other problems) or be unable to help the
| patient, and can tell the patient to find a new therapist
| and/or refer the patient to a specific therapist.
|
| - does the AI therapist refer people to business-preferred
| therapists? Imagine an insurance company providing an AI
| therapist that only recommends people talk to therapists in-
| network instead of considering any licensed therapist
| (regardless of insurance network) appropriate for the kind of
| therapy; that would be a blatant conflict of interest.
|
| Just off the top of my head, but there are no doubt plenty of
| other, even bigger, issues to consider for AI therapy.
| randall wrote:
| i've had a huge amount of trauma in my life and i find myself
| using chat gpt as kind of a cheater coach thing where i know
| i'm feeling a certain way, i know it's irrational, and i don't
| really need to reflect on why it's happening or how i can fix
| it, and i think for that it's perfect.
|
| a lot of people use therapists as sounding boards, which
| actually isn't the best use of therapy imo.
| Denatonium wrote:
| Whiskey
| davidthewatson wrote:
| Define "AI therapy". AFAICT, it's undefined in the Illinois
| governor's statement. So, in the immortal words of Zach de la
| Rocha, "What is IT?" What is IT? I'm using AI to help with
| conversations to not cure, but coach diabetic patients. Does this
| law effect me and my clients? If so, how?
| singleshot_ wrote:
| > Define "AI therapy"
|
| They did, in the proposed law.
| beanshadow wrote:
| Often, participants in discussions adjacent to this one err by
| speaking in time-absolute terms. Many of our judgments about
| LLMs are true only of today's LLMs. A quote like
|
| > Good. It's difficult to imagine a worse use case for LLMs.
|
| is true today, but likely won't be true of technology we may
| still refer to as LLMs in the future.
|
| The error is in building faulty preconceptions. These drip into
| the general public, and those first impressions stifle
| industries.
| mensetmanusman wrote:
| LLMs will be used as a part of therapy in the future.
|
| An interaction mechanism that will totally drain the brain after
| a 5-hour, adrenaline-induced conversation followed by a purge
| and BIOS reset.
| kylecazar wrote:
| "One news report found an AI-powered therapist chatbot
| recommended "a small hit of meth to get through this week" to a
| fictional former addict."
|
| Not at all surprising. I don't understand why seemingly bright
| people think this is a good idea, despite knowing the mechanisms
| behind language models.
|
| Hopefully more states will follow suit.
___________________________________________________________________
(page generated 2025-08-13 23:00 UTC)