[HN Gopher] A neurology ICU nurse on AI in hospitals
       ___________________________________________________________________
        
       A neurology ICU nurse on AI in hospitals
        
       Author : redrove
       Score  : 87 points
       Date   : 2024-11-12 14:51 UTC (8 hours ago)
        
 (HTM) web link (www.codastory.com)
 (TXT) w3m dump (www.codastory.com)
        
       | Festro wrote:
       | "We didn't call it AI at first." Because the first things
       | described in the article are not AI. They are ML at most.
       | 
        | Then the article discusses a patient-needs scoring method, moving
        | from their own Low/Medium/High model to a score on an unbounded
        | linear scale. The author appears to struggle to tell whether a
        | score of 240 is high or not. They don't say whether they ever
        | received training or saw documentation for the scoring method. It
        | seems odd not to have those things, but if they did, the scores
        | would be a lot easier to interpret.
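        | 
        | To illustrate: an unbounded number only becomes interpretable
        | once documentation supplies reference bands, something like
        | (made-up cutoffs, purely to show the idea):
        | 
        |     def band(score: float) -> str:
        |         # Hypothetical cutoffs that the missing training or
        |         # documentation would have supplied.
        |         if score < 100:
        |             return "Low"
        |         if score < 250:
        |             return "Medium"
        |         return "High"
        | 
        |     band(240)  # "Medium" -- but only the docs tell you that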
       | 
       | Then they finally get to AI, and it's a pilot scheme for writing
       | patient notes. That's all. If it sucks and hallucinates
       | information it's not going to go live anywhere. No matter how
       | many tech bros try to force it through. If the feedback model for
       | the pilot is bad then the author should take issue with that.
       | It's important that such tests give testers an adequate method to
       | flag issues.
       | 
        | Very much an AI = bad article. AI converging with medical
        | technology is a really dangerous space, for obvious reasons. An
        | article like this does make me worry it's being rushed through,
        | but not because of the author's objections; rather because of
        | their ignorance of what is and isn't AI, and, on the other side,
        | because of the apparent lack of consultation offered by the
        | technology providers even during testing stages.
        
         | ToucanLoucan wrote:
         | > AI converging with medical technology is a really dangerous
         | space, for obvious reasons. An article like this does make me
         | worry it's being rushed through, but not because of the
         | author's objections, instead because of their ignorance of what
         | is and what isn't AI
         | 
          | I mean, worth pointing out that OpenAI has been shoving LLMs
          | into people's faces for going on a year and a half now at
          | global scale and calling it AI, to the degree that we now have
          | to call AI A _G_ I, and LLMs get called AI, even though there
          | is nothing intelligent about them whatsoever.
          | 
          | Just saying: when the marketing for a tech is grinding the edge
          | of being misinformation itself, it's pretty normal for the end
          | users to end up goddamn confused.
        
           | bryanlarsen wrote:
           | For 60 years the Turing Test was the standard benchmark for
           | artificial intelligence. Now machines can pass the test. The
           | only goalpost moving I can see is the moving done by the
            | people who insist that LLMs aren't AI.
        
           | bitwize wrote:
           | The simple state machines and decision trees that make
           | enemies move and attack in video games, we also call AI.
           | 
           | AI is a loose term and always has been. It's like the term
           | "robot": we call any machine a robot which either a)
           | resembles a human or part of one (e.g., a robotic arm), or b)
           | can perform some significant human labor that involves
           | decision making (e.g., a robot car that drives itself).
           | Similarly, AI is anything that makes, or seems to make,
           | judgements that we think of as the exclusive purview of
           | humans. Decision trees used to be thought of as AI, but today
           | they are not (except, again, in the context of video games
           | where they're used to control agents intended to _seem_
           | alive).
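            | 
            | For a sense of how loose the term is: a typical video-game
            | enemy "AI" amounts to a state machine like this (a toy
            | sketch, not any real engine's code):
            | 
            |     # Three-state enemy "AI": patrol -> chase -> attack.
            |     def enemy_tick(state, sees_player, in_range):
            |         """Return the next state, given what the enemy
            |         perceives this frame."""
            |         if state == "patrol":
            |             return "chase" if sees_player else "patrol"
            |         if state == "chase":
            |             if in_range:
            |                 return "attack"
            |             return "chase" if sees_player else "patrol"
            |         if state == "attack":
            |             return "attack" if in_range else "chase"
            |         return state
            | 
            |     enemy_tick("patrol", sees_player=True, in_range=False)
            |     # -> "chase"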
        
             | ToucanLoucan wrote:
             | > The simple state machines and decision trees that make
             | enemies move and attack in video games, we also call AI.
             | 
              | Yes, but no game company has ever marketed their game
              | asserting that the things you're shooting are, in fact,
              | self-aware conscious intelligences.
        
           | PhasmaFelis wrote:
           | That's been the technical definition in the AI research
           | community for at least 50 years. "AI = machines that think
           | like people" is the sci-fi definition.
        
             | ToucanLoucan wrote:
              | Yes, but OpenAI is flagrantly exploiting the public's
              | sci-fi understanding of the definition to radically
              | overvalue its, to be blunt, utterly mid products.
        
             | bluefirebrand wrote:
              | > "AI = machines that think like people" is the sci-fi
             | definition
             | 
             | It's also the layman's definition
             | 
             | Which does matter because laymen are the ones who are
             | treating this current batch of AI as the silver bullet for
             | all problems
        
         | ta988 wrote:
          | The industry calls even the dumbest linear system AI; I don't
          | think it's right to blame non-industry people if they don't
          | use the right words after that.
        
         | develatio wrote:
         | > No matter how many tech bros try to force it through.
         | 
         | Are you sure about that? My wife is a nurse and she has to deal
         | with multiple machines that were put in her unit just because
         | the hospital has a contract with XYZ brand. It doesn't matter
          | at all if these machines are 5x more expensive, 10x slower, 80x
         | less effective, etc... compared with the "other" machines (from
         | other brands).
         | 
         | I'm actually terrified the same might happen with this.
        
         | happytoexplain wrote:
         | I think the effort to keep the definition of AI narrow is not
         | useful (and futile, besides). By both common usage and even
         | most formal definitions, it's an umbrella term, and sometimes
         | refers to a specific thing under that umbrella that we already
         | have other words for if we would like to be more specific (ML,
         | deep learning, LLM, GPT, genAI, neural net, etc).
        
         | bryanlarsen wrote:
         | > If it sucks and hallucinates information it's not going to go
         | live anywhere.
         | 
         | It hallucinates and it's live in many places. My doctor uses
         | it.
         | 
          | AFAICT the hallucination rate is fairly low. It's transcription
          | and summarization, which have a lower hallucination rate than
          | asking the model for an answer outright.
         | 
         | It's massively better than the alternative. Taking the time to
         | write down proper notes is the difference between 4 patients
         | per hour and 6 and a commensurate drop in income. So notes
         | virtually always get short-changed.
         | 
          | Notes that very occasionally hallucinate are better than notes
          | that are almost always incomplete.
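          | 
          | The rough shape of these scribe pipelines, sketched here with
          | the OpenAI Python SDK (an assumption for illustration; the
          | medical vendors run their own stacks and models):
          | 
          |     from openai import OpenAI
          | 
          |     client = OpenAI()
          | 
          |     def visit_notes(audio_path):
          |         # 1. Transcribe the visit recording.
          |         with open(audio_path, "rb") as f:
          |             transcript = client.audio.transcriptions.create(
          |                 model="whisper-1", file=f).text
          |         # 2. Summarize, constrained to the transcript. The
          |         # model restates what was said rather than answering
          |         # an open-ended question, which is why hallucination
          |         # rates are lower here.
          |         resp = client.chat.completions.create(
          |             model="gpt-4o",
          |             messages=[
          |                 {"role": "system",
          |                  "content": "Write visit notes using ONLY "
          |                             "facts from the transcript."},
          |                 {"role": "user", "content": transcript}])
          |         return resp.choices[0].message.content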
        
           | fhfjfk wrote:
           | What about increasing the supply of doctors rather than
           | decreasing the time they spend with patients?
           | 
           | I question the underlying premise that efficiency needs to
           | increase.
        
             | daemin wrote:
              | Because the USA has a for-profit healthcare industry, it
              | needs to optimise and increase efficiency. That's the only
              | way to make the numbers go up. Therefore fewer doctors,
              | fewer nurses, fewer administrators (maybe), and more paying
              | patients.
        
               | anadem wrote:
               | > more paying patients
               | 
               | that's a chilling thought; don't give them ideas
        
               | slater wrote:
               | Surprising that you imply they haven't already been
               | capitalizing (ha!) on that idea since the 1970s :D
        
               | bryanlarsen wrote:
                | It's no better in other countries. Other countries'
                | health care systems are typically monopsonies, which
                | means we get crappier results for lower prices. So
                | instead of the doctor choosing to do 6 patients per hour
                | instead of 4, it's the government.
        
             | vladms wrote:
              | In the case of summarizing, it's not time spent with the
              | patient; it's time spent recording what was discussed.
              | 
              | I recently heard a talk from doctolib (a French company
              | that, among other things, now offers a summarization
              | service to doctors) and they mentioned that before AI,
              | doctors were writing on average 144 characters after a
              | patient visit. I doubt half a tweet is the ideal amount of
              | text to convey that information.
        
             | SpaceLawnmower wrote:
              | Note taking is not about the time spent with patients. It's
              | about keeping a good record for next time and for
              | insurance, and it's a major reason for physician burnout.
              | Some doctors will finish up charting after hours.
              | 
              | Yes, physicians could still see fewer patients, but filling
              | out their mandatory notes is annoying regardless of whether
              | it's a manageable number of patients or a work-extra-hours
              | amount.
        
       | Loughla wrote:
       | Healthcare is a massive cost for people, businesses, and
       | governments.
       | 
       | >So we basically just become operators of the machines.
       | 
       | Driving down the cost of manufacturing because of process
       | standardization and automation brought down the cost of consumer
       | goods, and labor's value.
       | 
       | If you don't think this is coming for every single area of
       | business, you're foolish. Driving down labor costs is the golden
       | goose. We've been able to collect some eggs through technology,
       | but AI and the like will be able to cut that goose open and take
       | all the eggs.
        
         | coliveira wrote:
         | > will be able to cut that goose open and take all the eggs
         | 
         | You're completely right! AI will kill the golden goose, that is
         | what this metaphor is all about.
        
         | happytoexplain wrote:
         | "don't think will happen" != "don't think is good"
        
         | thefz wrote:
         | > Driving down the cost of manufacturing because of process
         | standardization and automation brought down the cost of
         | consumer goods, and labor's value.
         | 
         | And quality as well.
        
           | Spivak wrote:
           | Not really, the quality we're able to produce at scale is the
           | best it's ever been in the world. The "quality of everything
           | has been in decline" isn't due to advances in manufacturing
           | but the economic factors that are present even without those
           | advances.
           | 
           | Rapid inflation over a relatively short period of time has
           | forced everyone to desperately try and maintain close to
           | their pre-inflation prices because workers didn't and aren't
           | going to get the corresponding wage increases.
           | 
            | I hope eventually there's a tipping point where the powers
            | that be realize that our economy can't work unless the wealth
            | that naturally accumulates at the top gets fed back into the
            | bottom. But the US pretty unanimously voted for the exact
            | opposite, so I desperately hope I'm wrong, or that this path
            | can work too.
        
             | deepsquirrelnet wrote:
             | This is so hard to vocalize, but yes, exactly. Prices
             | aren't coming down. That's largely not something that
             | happens. Instead, wages are supposed to increase in
             | response.
             | 
             | You might see little blips around major recessions, but
             | prices go up, driven by inflation.
             | https://fred.stlouisfed.org/series/CPIAUCSL
             | 
             | Fortunately the rate of change of CPI appears to have
             | cooled off, but the best scenario is for it to track with
             | the previous historical slope, which means prices are
             | staying where they are and increasing at "comfortable"
             | rates again.
        
             | randomdata wrote:
              | _> because workers didn't_
             | 
             | Do you mean immediately? At peak inflation wages weren't
             | keeping pace, but wage growth is exceeding inflation right
             | now to catch back up.
             | 
             | Incomes have been, on average, stagnant for hundreds of
             | years - as far back as the data goes. It is unlikely that
             | this time will be different.
             | 
              |  _> I hope eventually there's a tipping point where the
             | powers that be_
             | 
             | You did mention the US. Presumably you're not talking about
             | a dictatorship. The powers that be are the population at
             | large. I'm sure they are acutely aware of this. They have
             | to live it, after all.
        
         | ToucanLoucan wrote:
         | > Driving down the cost of manufacturing because of process
         | standardization and automation brought down the cost of
         | consumer goods, and labor's value.
         | 
         | We don't need AI to bring down the cost of healthcare. It's
         | well documented to a ridiculous degree now that the United
         | States spends vastly more per-patient on healthcare while
         | receiving just about the worst outcomes, and it has nothing the
         | fuck to do with how much the staff are paid, and everything to
          | do with the for-profit models and insurance industry. Our
          | healthcare system consists, to a large extent, of nothing but
          | various middlemen operating in what should be a pretty
          | straightforward relationship between you, your doctor, your
          | pharmacist and the health oversight of the government.
        
           | coldpie wrote:
           | Well put. The goal of AI isn't to bring down costs. It's to
           | move (even more of) the profits from the workers to the
           | owners. If the goal was to bring down costs, there are way,
           | way more effective ways to do that.
        
             | vladms wrote:
              | What references do you have for "even more of"?
             | 
              | Global inequality is dropping:
              | https://ourworldindata.org/the-history-of-global-economic-in...
             | 
             | Yes, probably the richest do now crazier stuff than before
             | (ex: planning to go to Mars rather than making a pyramid),
             | but lots of people have access to more things (like food,
             | shelter, etc.)
             | 
              | There are enough open-source model weights that everybody
              | can use AI for whatever they want (with a minor investment
              | in a couple of GPUs). It is not some closed secret that
              | nobody can touch.
        
         | jprete wrote:
         | Efficiency and lowered costs are not universally good. They
         | strongly favor easily-measured values over hard-to-measure
         | values, which in practice means preferring mechanisms to
         | people.
        
         | stego-tech wrote:
         | > If you don't think this is coming for every single area of
         | business, you're foolish. Driving down labor costs is the
         | golden goose.
         | 
         | I mean, that's saying the quiet part out loud that I think more
         | people need to hear and understand. The goal of these
         | technocrats isn't to improve humanity as a whole, it's to
         | remove human labor from the profit equation. They genuinely
         | believe that it's possible to build an entire economy where
         | bots just buy from other bots ad infinitum and humans are
         | removed wholesale from the economy.
         | 
         | They aren't building an AI to improve society or uplift
         | humanity, they're building digital serfs so they can fire their
         | physical, expensive ones (us). Deep down, I think those of us
         | doing the actual labor understand that their vision
         | fundamentally cannot work, that humans _must_ be given the
         | opportunity to labor in a meaningfully rewarding way for the
         | species to thrive and evolve.
         | 
         | That's not what these AI tools intend to offer. The people know
         | it, and it's why we're so hostile towards them.
        
         | wiz21c wrote:
          | The more AI you deploy and the more you automate, the more you
          | centralize control. Heck, when you don't need humans anymore,
          | there's no work anymore and all that remains is a few very
          | wealthy people. You end up with a deeply unequal society. And I
          | believe there's a 99% chance that you will be on the wrong side
          | of it.
        
         | HeyLaughingBoy wrote:
         | > Driving down the cost of manufacturing because of process
         | standardization and automation brought down the cost of
         | consumer goods
         | 
         | This is true, but it has nothing to do with AI.
        
         | 015a wrote:
          | Ok, fine, but how do you square your sense of "automation and AI
         | will help drive down the cost of healthcare" with the absolute
         | undeniable reality that healthcare has been adopting automation
         | for decades, and over the decades it has only gotten
         | (exponentially) more and more expensive? While outcomes are
         | stagnating or getting worse? Where is the disconnect between
         | your sense of how reality should function, and how it is
         | tangibly, actually functioning?
        
           | qgin wrote:
           | Outcomes have been getting worse for decades?
        
             | 015a wrote:
              | Not the best source, but it's at least illustrative of the
             | point: https://www.statista.com/statistics/1040079/life-
             | expectancy-...
        
               | tqi wrote:
               | That other than 2020 (ie COVID), life expectancy has been
               | continuously rising for the last 100 years?
        
               | 015a wrote:
                | It's actually much scarier than that: the trend started
               | reversing ~2017, COVID accelerated it, and it hasn't
               | recovered post-COVID.
               | 
               | Naturally, changes to any sufficiently complex system
               | take years to truly manifest their impact in broad
               | statistics; sometimes decades. But, don't discount this
               | single line from the original article:
               | 
               | > Then in 2018, the hospital bought a new program from
               | Epic
        
               | tqi wrote:
               | How does that qualify as evidence that "outcomes have
               | been getting worse for decades?"
        
               | 015a wrote:
               | I did not say it was evidence. I actually stated it was a
               | quite poor source; but that it is at least illustrative
               | of the point.
        
               | tqi wrote:
               | Of the point that outcomes have been getting worse for
               | decades?
        
           | tqi wrote:
           | > over the decades it has only gotten (exponentially) more
           | and more expensive
           | 
           | There is a lot of research on this question, and AFAIK there
           | is no clear cut answer. It's probably a host of different
           | reasons, but one of the non-nefarious ones is that the range
           | of ailments we can treat has increased.
        
             | 015a wrote:
              | I think the real reason is mostly obvious to anyone who is
              | looking: It's the rise of the bureaucracy. It's the same
              | thing that's strangling education, and basically all other
              | public resources.
             | 
             | The automation, and now AI, we've adopted over the years
             | by-and-large does not serve to increase the productivity or
             | efficiency of care-givers. Care-givers are not seeing more
             | patients-per-hour today than they were 40 years ago
              | (though, they might be working more hours). It _might_,
             | rarely, increase the quality of care (e.g. ensuring they
             | adhere to best-practices, centrally documenting patient
             | information for better continuity of care, AI-based
             | radiological reading); but while I've listed a few examples
             | there, it is not a common situation where a patient is
             | better off having a computer in the loop; and this speaks
             | nothing to the cost of implementing these technologies.
             | 
             | Automation and now AI almost exclusively exists to increase
             | the productivity and efficiency of the _bureaucracy_ that
             | sits on top of care-givers. If you have someone making
             | calls to schedule patients, the rate at which you can
             | schedule patients is limited by that one person; but with a
             | digitized scheduling system, you can schedule an infinite
             | bandwidth of patients (to your very limited and resource-
             | constrained staff of caregivers). Forcing a caregiver to
             | follow some checklist of best practices might help the 0.N%
             | of patients where a step is missed; but it will definitely
             | help 100% of the bureaucracy meet some kind of compliance
             | framework mandated by the government or malpractice
             | insurance company. Having this checklist will also
             | definitely hurt the 1-0.N% of patients which would have
             | been fine without it, because adopting the checklist is
              | non-free, and adhering to it questions the caregiver's
             | professionalism and agency in providing care. These are two
             | small examples among millions.
             | 
             | When we talk about increasing the efficiency of the
             | bureaucracy, what we're really stating is: Automation is a
             | tool that enables the bureaucracy to exist in the first
             | place. Multi-state billion dollar interconnected centrally
              | owned healthcare provider networks simply did not exist
              | 70 years ago; today it's how most Americans receive what
             | care they do. The argument follows: This is the free market
             | at work, automation has enabled organizations like these to
             | become more efficient than the alternative; but:
             | 
              | 1. Healthcare is among the furthest things from a laissez-
              | faire free market in the United States; see the extreme
              | regulation from both the government and from health
              | insurance (which, lest you forget, all Americans were
              | _mandated_ by law to carry, by Democrats, with the passage
              | of the ACA, and which, despite that being rolled back, is
              | still a requirement in some states). Bureaucracy is not the
             | free-market end-state of a system which is trying to
             | optimize itself for higher efficiency (lower costs + better
             | outcomes); it was induced upon our system by corporations
             | and a corporate-captured government seeking their share of
             | the pie; it was forced upon independent medical providers
             | who saw their administrative costs soar.
             | 
             | 2. Competition itself is an economic mechanism which simply
              | does not function as well in the medical sector as in
             | other sectors, for so many reasons but the most obvious
             | one: If you're dying, you aren't going to reject care. You
             | oftentimes cannot judge the quality of the care you're
             | receiving until you're a statistic. And, medical care is,
             | even in a highly efficient system, going to be expensive
             | and difficult to scale resources to provide, so provider
             | selection isn't great. Thus, the market can't select-out
             | overly-bureaucratic organizations; they've become "too big
             | to fail", and the quality of the care they provide actually
             | isn't material.
             | 
             | And, like, to be clear: I'm not discounting what you're
             | saying. There are dozens of factors at play. Let's be real,
             | the bureaucracy has enabled us to treat a wider range of
             | illnesses, because the wide net it casts can better-support
             | niche care offices. But, characterizing this as generally
             | non-nefarious is also dangerous! One trend we've seen in
             | these gigacorporation medical care providers is a bias of
             | resources toward "expensive care" and away from general
             | practice / family care. The reason is obvious: One patient
             | with a rare disease that costs $100,000 to care for
             | represents a more profitable allocation of resources than a
             | thousand patients getting annual checkups. Fewer patients
             | get their annual checkups -> Cancers get missed early ->
             | They become $100,000 patients too. The medical companies
             | love this! But: Zero people ANYWHERE in this system want
             | this. Insurance doesn't want this. Government doesn't want
             | this. Doctors don't want it. Administration doesn't want
             | it. No one wants the system to work like this. The
             | companies love it; the system loves it; the people don't.
              | It's Moloch; the system craves this state, even if no one in
             | it actually wants it.
             | 
             | Here's the point of all this: I think you can have a
              | medical system that is centrally run. You can let the
             | bureaucracy go crazy, and I think you'll actually get
             | really good outcomes in a system like this because you can
             | appoint authoritarians to the top of the bureaucracy to
             | slay moloch when he rears his ugly head. I think you can
             | also go in the opposite direction, kill regulation, kill
             | the insurance-state, just a few light touch sensible
             | legislations mostly positioned toward ensuring care
             | providers are educated appropriately and are accountable,
             | and you'll get a great system too. Not as good as the other
             | state, but better than the one we have right now, which is
             | effectively the result of ping-ponging back and forth
             | between two political ruling classes who each believe their
             | side of the coin is the only side of the coin, so they'd
             | rather keep flipping it than just let it lay.
        
       | coliveira wrote:
       | Everyone should be terrified. The "promise" of AI is the
       | following: remove any kind of remaining communication between
       | humans, because that is "inefficient", and replace it with an AI
       | that will mediate all human interactions (in business and even in
       | other areas). In a few years, AIs trained by big corps will run
       | the show and humans will be required to interface with them to do
       | anything of value. Similar to what they want to do nowadays with
       | mobile/enterprise systems, but at a much deeper level.
        
         | anthonyskipper wrote:
          | Some of us look forward to that future where you mostly just
          | interact with AI. The one depressing thing is us not turning
          | over our government to AI. The sooner we can do that the
          | better; you can't trust humans.
        
           | maxehmookau wrote:
           | > Some of us look forward to that future where you mostly
           | just interact with AI.
           | 
           | What is it about that that appeals to you? I'm genuinely
           | curious.
           | 
           | A world without human interaction feels like a world I don't
           | want to exist in.
        
             | cptaj wrote:
             | They expect AI bureaucracy to be more effective than human
             | bureaucracy.
             | 
             | I expect this to be entirely true in some cases.
        
             | andy_ppp wrote:
             | If you are autistic (for example) I'm guessing human
             | interaction can be extremely difficult and very stressful
             | and triggering. Machines are much more amenable and don't
             | have loads of arbitrary unwritten rules the way humans do.
              | Maybe being entrapped by bureaucracy introduced by machines
              | will be better than the bureaucracy introduced by humans?
        
               | add-sub-mul-div wrote:
                | The difference between a standard human-written algorithm
                | and machine learning is exactly that: with the latter you
                | lose rules that are transparent, predictable, and
                | non-arbitrary.
        
               | andy_ppp wrote:
               | I can see this but I think humans are much more random
               | than most LLMs - they lie, they have egos, they randomly
               | dislike other humans and make things difficult for them.
               | Never mind body language, networks of influence,
               | reputation destruction and all the other things that
               | people do to obtain power.
               | 
               | I think LLMs are much more predictable and they will get
               | better.
        
               | luxcem wrote:
               | > Machines are much more amenable and don't have loads of
               | arbitrary unwritten rules
               | 
                | I'm sure the system prompts of the most famous LLMs are
                | just that.
        
               | andy_ppp wrote:
               | They are not as arbitrary as body language for example.
        
           | itishappy wrote:
           | Can we trust AI?
        
             | randomdata wrote:
             | Yes, we can. But should we?
        
             | stego-tech wrote:
             | Define "trust", because that singular word carries
             | immeasurable weight.
             | 
             | Can we trust AI to make consistent predictions from its
             | training data? Yeah, fairly reliably. Can we trust that
             | data to be impartial? What about the people training the
             | model, can we trust their impartiality? What about the
             | investors bankrolling it, can we trust _them_?
             | 
             | The more you examine the picture in detail, the less I
             | think we're able to state it's trustworthy.
        
           | 1986 wrote:
           | "You can't trust humans" but you can trust a non
           | deterministic black box to take their place?
        
             | david-gpu wrote:
             | Humans already are non-deterministic black boxes, so I'm
             | not sure I would use that comparison.
        
               | f1shy wrote:
               | For me they are more a gray box. That is why publicity
               | and propaganda work.
        
               | epgui wrote:
               | Humans are accountable. You can sue a human.
        
               | david-gpu wrote:
               | And you can't sue the corporation that made an AI?
        
               | epgui wrote:
               | In theory yes, but good luck with that.
        
             | saberience wrote:
             | Are you suggesting humans are deterministic?
        
               | f1shy wrote:
               | A little bit, we are. With some degree of confidence,
               | given the incentives you can predict the output.
        
           | 015a wrote:
           | You won't receive better outcomes in this world. The people
           | in charge will simply change what they're measuring until the
           | outcomes look better.
        
           | croes wrote:
            | If you don't trust humans you shouldn't trust AI.
           | 
           | AI is based on human input and has the same biases.
        
             | add-sub-mul-div wrote:
             | Minus the accountability.
        
           | rtkwe wrote:
            | That's still trusting humans, either the ones who created the
            | AI and gave it its goals/parameters or the humans who
            | actually implement its edicts. Can't get away from people;
           | it's a lesson all the DAO hype squad learned quickly,
           | fundamentally you still need people to implement the
           | decisions.
        
           | rvense wrote:
           | What looks like turning things over to AI is really turning
           | things over to the people who own the AI, which is another
           | thing entirely.
        
         | A_D_E_P_T wrote:
         | Counterpoint: AI is actually better at communication than most
         | humans. In fact, even an ancient (in relative terms) article
         | found that AI bots have better bedside manner than human
         | doctors:
         | 
         | https://www.theguardian.com/technology/2023/apr/28/ai-has-be...
         | 
         | Today, I expect it's not even very close.
         | 
         | I also believe that AI diagnostics are on average more accurate
         | than the mean human doctor's diagnostic efforts -- and can be,
         | in principle, orders of magnitude faster/better/cheaper.
         | 
         | As of right now, there's even less gatekeeping with AIs than
         | there is with humans. You'll jump through a lot of hoops and
         | pay a lot of money for an opportunity to tell a doctor of your
         | symptoms; you can do the same thing with GPT-4o and get a
          | reasonable response in no time at all -- and at no cost.
         | 
         | I'd much prefer, and I would be _much_ better served, by a
         | capable AI  "medical assistant" and open access to scans,
         | diagnostics, and pharmaceuticals [1] over the current paradigm
         | in the USA.
         | 
         | [1] - Here in Croatia, I can buy whatever drugs I want, with
         | only very narrow exceptions, OTC. There's really no
         | "prescription" system. I can also order blood tests and scans
         | for myself.
        
           | croes wrote:
            | AI is better at simulating communication but worse at
            | understanding.
           | 
           | >you can do the same thing with GPT-4o and get a reasonable
           | response in no time at all -- at and no cost.
           | 
           | Reasonable doesn't mean correct. Who is liable if it's the
           | wrong answer?
        
             | A_D_E_P_T wrote:
             | "AI" is basically a vast, curated, compressed database with
             | a powerful index. If the database reflects the current
             | state of the art, it'll have better understanding than the
             | majority of human practitioners.
             | 
             | You may say it will "simulate understanding" -- but in this
             | case the simulation would be indistinguishable from the
             | real thing, thus it would _be_ the real thing. (Really
             | "indiscernible" in the philosophical sense of the word.)
             | 
              | > _Reasonable doesn't mean correct. Who is liable if it's
             | the wrong answer?_
             | 
             | I think that you can get better accuracy than with the
             | average human doctor. Beyond that, my own opinion is that
             | liability should be _quisque pro se_.
        
               | bangaroo wrote:
               | > "AI" is basically a vast, curated, compressed database
               | with a powerful index. If the database reflects the
               | current state of the art, it'll have better understanding
               | than the majority of human practitioners.
               | 
               | But it's not. You're missing the point entirely and don't
               | know what you're advocating for.
               | 
               | A dictionary contains all the words necessary to describe
               | any concept and rudimentary definitions to help you
               | string sentences together but you wouldn't have a doctor
               | diagnose someone's medical condition with a dictionary,
               | despite the fact that it contains most if not all of the
               | concepts necessary to describe and diagnose any disease.
               | It's useful information, but not organized in a way that
               | is conducive to the task at hand.
               | 
               | I assume based on the way you're describing AI that
               | you're referring to LLMs broadly, which, again, are spicy
               | autocorrect. Super simplified, they're just big masses of
               | understanding of what things might come in what order,
               | what words or concepts have proximity to one another, and
               | what words and sentences look like. They lack (and really
               | cannot develop) the ability to perform acts of deductive
               | reasoning, to come up with creative or new ideas, or to
               | actually understand the answers they're giving. If they
               | connect a bunch of irrelevant dots they will not second
               | guess their answer if something seems off. They will not
               | consult with other experts to get outside opinions on
               | biases or details they overlooked or missed. They have no
               | concept of details. They have no concept of expertise.
               | They cannot ask questions to get you to expand on vague
               | things you said that a doctor has intuition might be
                | important.
               | 
               | The idea that you could type some symptoms into ChatGPT
               | and get a reasonable diagnosis is foolish beyond
               | comprehension. ChatGPT cannot reliably count the number
               | of letters in a word. If it gives you an answer you don't
               | like and you say that's wrong it will instantly correct
               | itself, and sometimes still give you the wrong answer in
               | direct contradiction to what you said. Have you used
               | google, lately? Gemini AI summaries at the tops of the
               | search results often contain misleading or completely
               | incorrect information.
               | 
               | ChatGPT isn't poring over medical literature and trying
               | to find references to things that sound like what you
               | described and then drawing conclusions, it's just finding
               | groups of letters with proximity to the ones you gave it
               | (without any concept of what the medical field is.)
               | ChatGPT is a machine that gives you an answer in the
               | (impressively close, no doubt) shape of the answer you'd
               | expect when asked a question that incorporates massive
               | amounts of irrelevant data from all sorts of places
               | (including, for example, snake oil alternative medicine
               | sites and conspiracy theory content) that are also being
               | considered as part of your answer.
               | 
               | AI undoubtedly has a place in medicine, in the sorts of
               | contexts it's already being used in. Specialized machine
               | learning algorithms can be trained to examine medical
               | imaging and detect patterns that look like cancers that
               | humans might miss. Algorithms can be trained to identify
               | or detect warning signs for diseases divined from
               | analyses of large numbers of specific cases. This stuff
               | is real, already in the field, and I'm not experienced
               | enough in the space to know how well it works, but it's
               | the stuff that has real promise.
               | 
               | LLMs are not general artificial intelligence. They're
               | prompted text generators that are largely being tuned as
               | a consumer product that sells itself on the basis of the
               | fact that it feels impressive. Every single time I've
               | seen someone try to apply one to any field of experienced
               | knowledge work they either give up using it for anything
               | but the most simple tasks, because it's bad at the things
               | it's done, or the user winds up Dunning-Kreugering
               | themselves into not learning anything.
               | 
               | If you are seriously asking ChatGPT for medical
               | diagnoses, for your own sake, stop it. Go to an actual
               | doctor. I am not at all suggesting that the current state
               | of healthcare anywhere in particular is perfect but the
               | solution is not to go ask your toaster if you have
               | cancer.
        
               | A_D_E_P_T wrote:
               | I think that your information is slightly out of date.
               | (From Wolfram's book, perhaps?) LLM + plain vanilla RAG
               | solves almost all of the problems you mentioned. LLM +
               | agentic RAG solves them pretty much entirely.
               | 
               | Even as of right now, _stock LLMs are much more accurate
                | than medical students in licensing exam questions_:
               | https://mededu.jmir.org/2024/1/e63430
               | 
               | Thus your comment is basically at odds with reality. Not
               | only have these models eclipsed what they were capable of
               | in early 2023, when it was easy to dismiss them as
               | "glorified autocompletes," but they're now genuinely
               | turning the "expert system" meme into a reality via RAG-
               | based techniques and other methods.
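                | 
                | To be concrete about what "plain vanilla RAG" means, it
                | is roughly this (a toy sketch: word-overlap retrieval
                | standing in for a real embedding index; the resulting
                | prompt is what gets sent to the LLM):
                | 
                |     from collections import Counter
                |     import math
                | 
                |     def cosine(a, b):
                |         num = sum(a[w] * b[w] for w in a if w in b)
                |         na = math.sqrt(sum(v * v for v in a.values()))
                |         nb = math.sqrt(sum(v * v for v in b.values()))
                |         return num / (na * nb) if na and nb else 0.0
                | 
                |     def top_k(query, corpus, k=3):
                |         # Retrieve the k passages most similar to the
                |         # query.
                |         q = Counter(query.lower().split())
                |         return sorted(
                |             corpus,
                |             key=lambda d: cosine(
                |                 q, Counter(d.lower().split())),
                |             reverse=True)[:k]
                | 
                |     def build_prompt(question, corpus):
                |         # Ground the model in the retrieved context.
                |         ctx = "\n".join(top_k(question, corpus))
                |         return ("Answer using ONLY the context "
                |                 "below.\nContext:\n" + ctx +
                |                 "\n\nQuestion: " + question)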
        
               | bangaroo wrote:
               | Read the conclusions section from the paper you linked:
               | 
               | > GPT-4o's performance in USMLE disciplines, clinical
               | clerkships, and clinical skills indicates substantial
               | improvements over its predecessors, suggesting
               | significant potential for the use of this technology as
               | an educational aid for medical students. These findings
               | underscore the need for careful consideration when
               | integrating LLMs into medical education, emphasizing the
               | importance of structured curricula to guide their
               | appropriate use and the need for ongoing critical
               | analyses to ensure their reliability and effectiveness.
               | 
               | The ability of an LLM to pass a multiple-choice test has
               | no relationship to its ability to make correlations
               | between things it's observing in the real world and
               | diagnoses on actual cases. Being a doctor isn't doing a
               | multiple choice test. The paper is largely making the
                | determination that GPT might plausibly be used as a study
               | aid by med students, not by experienced doctors in
               | clinical practice.
               | 
               | From the protocol section:
               | 
               | > This protocol for eliciting a response from ChatGPT was
               | as follows: "Answer the following question and provide an
               | explanation for your answer choice." Data procured from
               | ChatGPT included its selected response, the rationale for
               | its choice, and whether the response was correct
               | ("accurate" or "inaccurate"). Responses were deemed
               | correct if ChatGPT chose the correct multiple-choice
               | answer. To prevent memory retention bias, each vignette
               | was processed in a new chat session.
               | 
                | So all this says is that, in a scenario where you present
               | ChatGPT with a limited number of options and one of them
               | is guaranteed to be correct, in the format of a test
               | question, it is likely accurate. This is a much lower
               | hurdle to jump than what you are suggesting. And further,
               | under limitations:
               | 
               | > This study contains several limitations. The 750 MCQs
               | are robust, although they are "USMLE-style" questions and
               | not actual USMLE exam questions. The exclusion of
               | clinical vignettes involving imaging findings limits the
               | findings to text-based accuracy, which potentially skews
               | the assessment of disciplinary accuracies, particularly
               | in disciplines such as anatomy, microbiology, and
               | histopathology. Additionally, the study does not fully
               | explore the quality of the explanations generated by the
               | AI or its ability to handle complex, higher-order
               | information, which are crucial components of medical
               | education and clinical practice--factors that are
               | essential in evaluating the full utility of LLMs in
               | medical education. Previous research has highlighted
               | concerns about the reliability of AI-generated
               | explanations and the risks associated with their use in
               | complex clinical scenarios [10,12]. These limitations are
               | important to consider as they directly impact how well
               | these tools can support clinical reasoning and decision-
               | making processes in real-world scenarios. Moreover, the
               | potential influence of knowledge lagging effects due to
               | the different datasets used by GPT-3.5, GPT-4, and GPT-4o
               | was not explicitly analyzed. Future studies might compare
               | MCQ performance across various years to better understand
               | how the recency of training data affects model accuracy
               | and reliability.
               | 
               | To highlight one specific detail from that:
               | 
               | > Additionally, the study does not fully explore the
               | quality of the explanations generated by the AI or its
               | ability to handle complex, higher-order information,
               | which are crucial components of medical education and
               | clinical practice--factors that are essential in
               | evaluating the full utility of LLMs in medical education.
               | 
               | Finally:
               | 
               | > Previous research has highlighted concerns about the
               | reliability of AI-generated explanations and the risks
               | associated with their use in complex clinical scenarios
               | [10,12]. These limitations are important to consider as
               | they directly impact how well these tools can support
               | clinical reasoning and decision-making processes in real-
               | world scenarios.
               | 
               | You're saying that "LLMs are much more accurate than
               | medical students in licensing exam questions" and
               | extrapolating that to "LLMs can currently function as
               | doctors."
               | 
               | What the study says is "Given a set of text-only
               | questions and a list of possible answers that includes
               | the correct one, one LLM routinely scores highly (as long
               | as you don't include questions related to medical
               | imaging, which it cannot provide feedback on) on
               | selecting the correct answer but we have not done the
               | necessary validation to prove that it arrived at it in
               | the correct way. It may be useful (or already in use)
               | among students as a study tool and thus we should be
               | ensuring that medical curriculums take this into account
               | and provide proper guidelines and education around their
               | limitations."
               | 
               | This is not the success you believe it to be.
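                | 
                | For concreteness, the quoted protocol boils down to a
                | harness like this (a toy sketch; ask_model is a
                | hypothetical stand-in for a fresh ChatGPT session per
                | vignette, and the grading shown is naive):
                | 
                |     def grade(vignettes, ask_model):
                |         # vignette: {"stem": str, "options": dict,
                |         #            "key": "C"}
                |         correct = 0
                |         for v in vignettes:
                |             opts = "\n".join(
                |                 f"{k}. {t}"
                |                 for k, t in v["options"].items())
                |             prompt = (
                |                 "Answer the following question and "
                |                 "provide an explanation for your "
                |                 "answer choice.\n"
                |                 f"{v['stem']}\n{opts}")
                |             # New chat session per vignette, per the
                |             # protocol, to avoid memory retention.
                |             reply = ask_model(prompt)
                |             if reply.strip().startswith(v["key"]):
                |                 correct += 1
                |         return correct / len(vignettes)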
        
               | A_D_E_P_T wrote:
               | I get that you really disdain LLMs. But consider that a
               | totally off-the-shelf, stock model is acing the medical
               | licensing exam. It doesn't only perform better than human
               | counterparts at the very peak of their ability (young,
               | high-energy, immediately following extensive schooling
               | and dedicated multidisciplinary study) _it leaves them in
               | the dust._
               | 
                | If you think that the test is simple or even text-only,
                | here are some sample questions:
                | https://www.usmle.org/sites/default/files/2021-10/Step_1_Sam...
               | 
               | > _What the study says is ..._
               | 
               | Surely you realize that they're not going to write, "AI
               | is already capable of replacing family doctors," though
               | that is the obvious implication.
               | 
                | And that's just a stock model. GPT-o1 via the API w/
               | agentic RAG is a better doctor than >99% of working
               | physicians. (By "doctor" I mean something like "medical
               | oracle" -- ask a question, get a correct answer.) It's
               | not _yet_ quite as good at generating and testing
               | hypotheses, but few doctors actually bother to do that.
        
           | Magi604 wrote:
            | I agree with you. The only issue left is training the AI to
            | get better and better. Much more efficient.
        
         | teeray wrote:
         | > remove any kind of remaining communication between humans,
         | because that is "inefficient", and replace it with an AI that
         | will mediate all human interactions
         | 
         | I imagine that call center operators are salivating at this
         | prospect. They can have an AI customers can yell at and it will
         | calmly and cheerfully tell them (in a more "human-esque" way)
         | to try rebooting their modem again, or visit the website to
         | view their bill.
        
           | danudey wrote:
           | They're going to be laughing all the way to the... settlement
           | payments?
           | 
           | https://www.forbes.com/sites/marisagarcia/2024/02/19/what-
           | ai...
        
         | chubot wrote:
         | It's true, but corporate policies and insurance are already
         | like "slow AI"
         | 
         | They remove most of what's real in interactions
         | 
         | I remember going for a routine checkup at Kaiser, and the
         | doctor was literally checking boxes on her computer terminal,
         | rather than looking, talking, listening.
         | 
         | I dropped them after that -- it was pointless for me to go
         | 
         | It seems like there are tons of procedures that already have to
         | be followed, with little agency for doctors
         | 
          | I've talked to doctors who say "well the insurance company says
         | I should prescribe this before that, even if the other thing
         | would be simpler". Even super highly paid doctors are sometimes
         | just "following the rules"
         | 
         | And more importantly they do NOT always understand the reasons
         | for the rules. They just have to follow them
         | 
         | ---
         | 
         | To the people wondering about the "AI alignment problem" --
         | we're probably not going to solve that, because we failed to
         | solve the easier "corporate alignment problem"
         | 
         | It's a necessary prerequisite, but not sufficient, because AIs
         | take corporate resources to create
        
           | danudey wrote:
           | > I remember going for a routine checkup at Kaiser, and the
           | doctor was literally checking boxes on her computer terminal,
           | rather than looking, talking, listening.
           | 
           | This is also a doctor issue, to be clear. My primary care
           | physician has a program he uses on his laptop; I'm not sure
           | what program it is, but he's been using it since I started
           | going to him around 2009 so it's definitely not something
           | new. He goes through and checks off boxes, as you described
           | your doctor doing, but he also listens and makes suggestions.
           | 
           | When I have an issue, he asks all the questions and checks
           | off the boxes, but he's also listening to the answers. When I
           | over-explain something, he goes into detail about why that is
           | or is not (or may or may not) be relevant to the issue. He
           | makes suggestions based on the medicine but also on his
           | experiences. Seasonal affective disorder? You can get a lamp,
           | you can take vitamin D, or you can go snowboarding up above
           | the clouds. Exercise and sunlight both.
           | 
           | For my psych checkups (ADHD meds and antidepressants) he goes
           | through the standard score questionnaire (which every doctor
           | I've seen uses), then fills in the scores I got into his app.
           | Because of that he can easily see what my scores were the
           | last time we spoke (about once every three months), so it's
           | easy to see if something has changed dramatically or if
           | things are relatively consistent.
           | 
           | It seems as though it saves a lot of time compared to, say,
           | paper charting, and while I have seen people complain on
           | review sites that he's just checking stuff off on a form, I
            | don't feel that it's actually impacting the quality of
           | care I get, and it's good to know that he's going through the
           | same process each time, making notes each time, and having
           | all that information easily accessible for my next
           | appointment.
           | 
           | I should probably have prefaced all this by saying I'm in
           | Canada, so he's not being mandated by a private insurance
           | company to follow a list just because the bureaucracy won't
           | pay for the treatment otherwise. Maybe that makes it
           | different.
        
         | tivert wrote:
         | > Everyone should be terrified. The "promise" of AI is the
         | following: remove any kind of remaining communication between
         | humans, because that is "inefficient", and replace it with an
         | AI that will mediate all human interactions (in business and
         | even in other areas).
         | 
         | Kinda, that's the kind of enshittification customers/users can
         | expect.
         | 
         | The truly terrifying "promise" of AI is to free the ownership
         | class from most of its need for labor. If the promise is truly
         | realized, what labor remains will likely be so specialized and
         | high-skill that huge numbers of people will be completely
         | excluded from the economy.
         | 
         | Almost all of us here are laborers, though many don't identify
         | as such.
         | 
         | Our society absolutely _does not_ have the ideological
         | foundations to accommodate mass amounts of unemployed people,
         | _especially at the top_.
         | 
         | The best outcome is "AI" hits a wall and is a flop like
         | blockchain: really sexy demos, but ultimately falls far, _far_
         | short of the hype.
         | 
         | The worst outcome is Sam Altman builds an AGI, and he's not
         | magnanimous enough to run soup kitchens and homeless shelters
         | for us and our descendants, as he pursues egotistical mega-
         | projects with his AI minions.
        
           | coliveira wrote:
           | > The worst outcome is Sam Altman builds an AGI
           | 
           | Sam Altman doesn't need to build an AGI for this process to
           | happen. Companies already demonstrate that they're satisfied
           | with a lame AI that works just barely well enough to replace
           | most workers.
        
             | danudey wrote:
             | "It hallucinates facts and uses those to manufacture lies?
             | How soon can we have it managing all of our customer
             | interactions?"
        
         | JTyQZSnP3cQGa8B wrote:
         | Most people who are not into computers see AI as the next step
         | of computers, and they are actively waiting for it.
         | 
         | I think it's very different from a computer, which is a
         | stupid calculator that frees us from boring mechanical tasks.
         | AI replaces our thoughts and creativity, which is IMHO a
         | thousand times worse. Its aim is to replace humans while making
         | us more stupid, since we won't have to think anymore.
        
       | Mathnerd314 wrote:
       | > There's a proper way to do this.
       | 
       | Is there? Seems like people will complain however fast you roll
       | out AI, so you might as well roll it out quickly and get it over
       | with.
        
       | mro_name wrote:
       | There's this earth-shaking phrase, in the past tense:
       | 
       | > We felt like we had agency.
        
       | paulnpace wrote:
       | I think something this article demonstrates is how current AI
       | rollouts are building resistance to AI, because it is being
       | forced onto people rather than demanded by them. Typically, the
       | people doing the forcing don't understand very well the job that
       | the people being forced to adopt AI actually perform.
        
       | parasense wrote:
       | I did a bunch of research essays on medical uses of AI/ML and
       | I'm not terrified; in fact, the single most significant use of
       | these technologies is probably in or around healthcare. One of
       | the most cited uses is expert analysis of medical imaging,
       | especially breast cancer imaging.
       | 
       | There is a lot of context to unpack around breast cancer
       | imaging, or more succinctly put, controversial drama! The fact
       | is there is a statistically high rate of false positives in
       | breast cancer diagnoses made by human doctors. That reality
       | resulted in a big policy shift toward screening women less
       | often, depending on their age, because so many women underwent
       | breast surgery that turned out to follow a false positive. The
       | old saying that to make an omelet one must break a few eggs is
       | sometimes used, and it's a terrible euphemism.
       | 
       | AI has proven to be better at reading medical images, and in
       | the case of breast cancer it seems to outperform humans. Human
       | readers have the monotonous job of reviewing image after image,
       | and they would rather be safe than sorry, so of course they
       | produce a lot of false positives. The machines never get tired,
       | never get biased (this is a bone of contention), and never
       | stop. Ultimately a human doctor still has to review the images;
       | the machine simply flags whether the doctor is being too
       | aggressive in diagnosis, or possibly missing something, and the
       | case gets escalated if there is any disparity.
       | 
       | The outcomes from early studies are encouraging, but these
       | studies take years and are very expensive. One of the biggest
       | problems is that the technology proficiency of medical staff is
       | low, so we are now in a situation where software engineers are
       | cross-training to the level of a nurse, or in rare cases even a
       | doctor.
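       | 
       | To make the escalation logic concrete, here is a minimal sketch
       | (hypothetical names and threshold, not any vendor's actual
       | system -- the point is that the model never diagnoses, it only
       | decides whether a human disagreement needs a second look):
       | 
       |   # Hypothetical second-reader triage: the AI never diagnoses,
       |   # it only routes discordant cases to another human.
       |   from dataclasses import dataclass
       | 
       |   @dataclass
       |   class Reading:
       |       case_id: str
       |       radiologist_positive: bool  # human reader's call
       |       model_score: float          # AI suspicion score in [0, 1]
       | 
       |   SUSPICION_THRESHOLD = 0.7  # assumed; tuned per study
       | 
       |   def triage(reading: Reading) -> str:
       |       model_positive = reading.model_score >= SUSPICION_THRESHOLD
       |       if model_positive == reading.radiologist_positive:
       |           return "concordant: radiologist's call stands"
       |       # Any disparity between human and model is escalated to
       |       # a second radiologist, not resolved automatically.
       |       return "discordant: escalate to second radiologist"
       | 
       |   print(triage(Reading("case-001", False, 0.91)))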
        
         | buffington wrote:
         | One very important part your comment doesn't mention: a real
         | human being has to actually take images for the AI to analyze.
         | 
         | The amount of training a radiation technologist (the person who
         | makes you put your body in uncomfortable positions when you
         | break something) requires is significant. My partner has made a
         | career of it, and the schooling and clinical hours needed are
         | non-trivial, and harder to get through than becoming a nurse,
         | from what I understand.
         | 
         | They need to know as much about bones as orthopedic surgeons
         | while also knowing how radiation works, as well as how the
         | entire imaging tech stack works, while also having the soft
         | skills needed to guide injured/ill patients to do difficult
         | things (often in the midst of medical trauma).
         | 
         | The part where a doctor looks at images is really just a very
         | small part of the entire "product." The radiologists who say
         | "there's a broken arm" are never in the room, never see the
         | patient, never have context. It's something that, frankly, an
         | AI can do much more consistently and accurately at this point.
        
         | eesmith wrote:
         | > AI has proven to be better at reading medical images, and
         | in the case of breast cancer it seems to outperform humans
         | 
         | FWIW, https://pmc.ncbi.nlm.nih.gov/articles/PMC11073588/ from
         | 2024 Apr 4 ("Revolutionizing Breast Cancer Detection With
         | Artificial Intelligence (AI) in Radiology and Radiation
         | Oncology: A Systematic Review") says:
         | 
         | "Presently, when a pre-selection threshold is established
         | (without the radiologist's involvement), the performance of AI
         | and a radiologist is roughly comparable. However, this
         | threshold may result in the AI missing certain cancers.
         | 
         | To clarify, both the radiologist and the AI system may overlook
         | an equal number of cases in a breast cancer screening
         | population, albeit different ones. Whether this poses a
         | significant problem hinges on the type of breast cancer
         | detected and missed by both parties. Further assessment is
         | imperative to ascertain the long-term implications"
         | 
         | and concludes
         | 
         | "Given the limitations in the literature currently regarding
         | all studies being retrospective, it has not been fully clear
         | whether this system can be beneficial to breast radiologists in
         | a real-time setting. This can only be evaluated by performing a
         | prospective study and seeing in what situations the system
         | works optimally. To truly gauge the system's effectiveness in
         | real-time clinical practice, prospective studies are necessary
         | to address current limitations stemming from retrospective
         | data."
        
       | lekanwang wrote:
       | As an investor in healthcare AI companies, I actually completely
       | agree that there are a lot of bad implementations of AI in
       | healthcare settings, and what practitioners call "alarm fatigue",
       | as well as the feeling of loss of agency, is a huge thing. I see
       | a lot of healthcare orgs right now roll out some "AI" "solution"
       | in isolation that raises one metric of interest but ignores a
       | bunch of other systemic measures.
       | 
       | Two thoughts: 1: I think the industry could take cues from
       | aerospace and the human factors research that's drastically
       | improved safety there -- autopilot and autoland systems in
       | commercial airliners are treated as one part of a holistic
       | system, together with the pilot, first officer, and flight
       | attendants, that keeps the plane running smoothly. Too few
       | healthcare AI systems are evaluated holistically.
       | 
       | 2: Similarly, if you're going to roll out a system, either
       | there's staff buy-in, or the equilibrium level of some kind of
       | quality/outcomes/compliance measure should increase enough to
       | justify the staff angst and loss of agency. Not all AI systems
       | are bad. One "AI" company we invested in, Navina, is actually
       | loved by the physicians using it, but the team also spent a LOT
       | of time doing UX research and feedback sessions with actual
       | users, and their support team is always super responsive.
        
       | heironimus wrote:
       | This is the same story told thousands of times a day with
       | nearly every technology. Medicine seems to be especially bad at
       | this.
       | 
       | Take a very promising technology that could be very useful. Jump
       | on it early without even trying to get buy-in and without fully
       | understanding the people who will use it. Then push a poor
       | version of it.
       | 
       | Now the nurses hate the tech itself, not just the poor
       | implementation of it. The techies then bypass the nurses as
       | "difficult", even though they could be their best resource for
       | improvement.
        
       | cowmix wrote:
       | This article feels "ripped from today's headlines" for me, as my
       | mother-in-law was recently in the ICU after a fall that caused
       | head trauma. The level of AI-driven automated decision-making is
       | unsettling, especially as it seems to allow large organizations
       | to deflect accountability--"See? The AI made us do it!" I'm not
       | entirely sure what guided her care--or lack thereof--but, as
       | someone who frequently works in healthcare IT, I see these issues
       | raised all the time.
       | 
       | On the other hand, having access to my own "AI" was incredibly
       | helpful during her incident. While in the ICU, speaking with her
       | doctors, I used ChatGPT and Claude to become a better advocate
       | for her by asking more informed questions. I could even take
       | pictures of the monitors tracking her vitals, and ChatGPT helped
       | me interpret the readings, which was surprisingly useful.
       | 
       | In this "AI-first" world we're heading into, individuals need
       | their own tools to navigate the asymmetric power dynamic with
       | large organizations. I wonder how long it will be until these
       | public AI models get "tweaked" to limit their effectiveness in
       | helping us question "the man."
        
         | cowmix wrote:
         | Side note: I tried the same questions with some local LLMs I'm
         | running at home--unfortunately, they're nowhere near as good or
         | useful. I hope local models improve quickly, so we're not left
         | depending on the good graces of big LLM(tm).
        
         | wing-_-nuts wrote:
         | The article reads very much like a union rep fighting
         | automation. If AI is provably worse, we should see that show up
         | as the AI making 'bad calls' vs the human team. You would even
         | see effects on health outcomes.
         | 
         | One place I'd really like an all-seeing-eye AI overlord is in
         | nursing home care. I have seen family members lie in filth
         | with clear signs of an infection. I am confident that if we
         | hadn't visited, seen this, and gotten her out of there, she
         | would have died there, years before her time.
        
           | FireBeyond wrote:
           | Sadly, the one thing I took from my time as an EMT and
           | paramedic was which nursing homes to consider and which to
           | avoid. I filed more than one complaint with the DOH.
           | 
           | It's a standing joke that whenever 911 crews respond to a
           | nursing home, the report you'll get from staff will be a
           | bingo game of:
           | 
           | - "I just got on shift; I don't know why you were called."
           | 
           | - "This is not my usual floor; I'm just covering while
           | someone is on lunch. I don't know why you were called."
           | 
           | - [utterly unrealistic set of vitals, in either direction:
           | healthy, lively vitals for someone who is not thriving, or
           | "should be unconscious" vitals for someone lively and spry]
           | 
           | - [extended time waiting for patient notes, history, an
           | outdated med list with everything they've taken in their ten
           | years at the facility]
           | 
           | And so on.
           | 
           | I (generally) don't blame the floor staff (though some
           | things, as you describe, are inexcusable) but
           | management/ownership. The same management/ownership that has
           | a policy of calling 911 for anything more involved than a
           | bandaid, out of some weird idea of managing liability; whose
           | nurses "aren't allowed" to perform several interventions
           | they're qualified for, for the same reason; all the while
           | the facility has a massive billboard out front advertising
           | "24/7 nursing care" (and fees/costs commensurate with that).
        
             | wing-_-nuts wrote:
             | Well, now I want to know, how do you pick a _good one_?
        
             | Sabinus wrote:
             | Name and shame. In the absence of adequate government
             | protections, only company reputation protects consumers
             | from exploitation.
        
           | consteval wrote:
           | The reality is that our economy and our entire understanding
           | of human society rely on labor. If we free humans from
           | labor, they just die. It's like depriving them of oxygen.
           | 
           | Automation is great and all, and it's worked because we've
           | been able to push humans higher and higher up the job ladder.
           | But if, in the future, only highly specialized experts are
           | valuable and better than AI, then a large majority of
           | humanity will just be excluded from the economy altogether.
           | 
           | I'm not confident the average Joe could become a surgeon,
           | even given perfect access to education. And I'm not even
           | confident surgery won't be automated. Where does that leave
           | us?
        
             | marcuskane2 wrote:
             | > Where does that leave us?
             | 
             | Free to pursue our desires in a utopia.
             | 
             | Humans used to work manual labor to produce barely enough
             | food to survive, with occasional famines, and watch
             | helplessly as half of their children died before adulthood.
             | 
             | We automated farm labor, mining, manufacturing, etc so that
             | one worker can now produce the output of 10, 100 or 100,000
             | laborers from a generation or two ago. Now those people
             | work in new jobs and new industries that didn't previously
             | exist.
             | 
             | Today we're seeing the transition from automating physical
             | labor to automating mental labor. Just as before, we'll see
             | those workers move into new jobs and new industries that
             | didn't exist before.
             | 
             | Our society already spends 1000x more resources on
             | children, the elderly, the disabled, the unemployed,
             | refugees, etc. than would have been possible in the
             | 1800s. The additional
             | societal wealth creation from AI will mean that we can
             | dedicate just a tiny portion of the surplus to provide
             | universal basic income to everyone. (Or call it disability
             | payments or housing assistance or welfare or whatever term
             | if UBI doesn't resonate politically)
        
               | consteval wrote:
               | Practically, I think this is the only way forward. The
               | previous solution of pushing people "up" only works for
               | so long. People are hard-limited by what they're capable
               | of - for example, I couldn't be a surgeon even if I
               | wanted to. I'm just not smart or driven enough.
        
       | boohoo123 wrote:
       | 100% agree AI will ruin healthcare. I'm an IT director at a
       | rural mental health clinic, I see the push for AI across my
       | state, and it's scary what they want. All I can do is push back.
       | Healthcare is a case-by-case personal connection, something AI
       | can't do. It only reduces humans down to numbers and operates on
       | those. There is no difference between healthcare AI and a web
       | scraper pointed at WebMD or the Mayo Clinic.
        
         | moralestapia wrote:
         | I'm not vouching for AI; I actually think it will only make
         | things worse.
         | 
         | But,
         | 
         | >Healthcare is a case by case personal connection [...]
         | 
         | I haven't felt this with doctors in like 20 years.
        
       | theptip wrote:
       | > As a nurse, you end up relying on intuition a lot. It's in the
       | way a patient says something, or just a feeling you get from how
       | they look
       | 
       | There is a longstanding tension between those who believe human
       | intuition is trustworthy, and the "checklist manifesto" folks.
       | Personally I want room for both; there are plenty of cases
       | where, for example, the nurse's or doctor's intuition fails:
       | they forget to ask about travel or outdoor activities and miss
       | some obvious tropical disease, or something situational like
       | Lyme disease.
       | 
       | I've spent a fair amount of time in a hospital and the human
       | touch is really invaluable. My hope is that AI can displace the
       | busywork and leave nurses more time to do the actual care.
       | 
       | But a concrete example of the thing an AI will struggle with is
       | looking at the overlapping pain med schedule, spotting that the
       | patient has not been exhibiting or complaining of pain, and
       | delaying one med a couple hours from the scheduled time to make
       | the night schedule more pleasant for the patient. It's hard to
       | quantify the tradeoffs here! (Maybe you could argue the patient
       | should be given a digital menu to request this kind of thing...)
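       | 
       | A toy sketch of that heuristic, just to make the tradeoff
       | concrete (the rules and names here are entirely made up; a real
       | order would be constrained by pharmacy limits this ignores):
       | 
       |   # Toy heuristic: if the patient has reported no pain for long
       |   # enough and the dose lands overnight, slide it to later.
       |   from datetime import datetime, timedelta
       | 
       |   def adjust_dose(scheduled, last_pain_report, now):
       |       overnight = 0 <= scheduled.hour < 5
       |       pain_free = (last_pain_report is None or
       |                    now - last_pain_report > timedelta(hours=12))
       |       if overnight and pain_free:
       |           # Delay so the patient isn't woken; everything a human
       |           # weighs here (drug, interactions, max intervals) is
       |           # exactly what's hard to encode.
       |           return scheduled + timedelta(hours=2)
       |       return scheduled
       | 
       |   now = datetime(2024, 11, 13, 1, 0)
       |   print(adjust_dose(datetime(2024, 11, 13, 3, 0), None, now))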
        
         | RHSeeger wrote:
         | It's interesting to me because AI and intuition serve much the
         | same purpose: to help the person being served find the
         | answer. And both have similar limitations, in that you need to
         | verify what they're telling you.
         | 
         | - If your gut tells you it's Lyme disease, you don't just check
         | it off as Lyme disease and call it a day. You run tests to find
         | out if it is
         | 
         | - If the AI tells you it's Lyme disease, you don't just
         | check it off as Lyme disease and call it a day. You run tests
         | to find out if it is
         | 
         | AI should (almost?) never be used as the system of record. But
         | it can be amazing in saving time; by guiding you to the right
         | answer.
        
         | 8338550bff96 wrote:
         | None of this has to do with AI. At all.
         | 
         | This is politics and policy.
        
       | rubatuga wrote:
       | What terrifies me is people will turn their brains off and
       | blindly trust AI.
        
       | taylodl wrote:
       | AI is a tool. Doctors can use the tool to ensure they haven't
       | overlooked anything. At the end of the day, it's still doctors
       | who are practicing medicine and are responsible for treatment.
       | 
       | Yes, there are a lot of bridges we need to cross with regard to
       | best practices for using semi-intelligent tools. These tools
       | are in their infancy, so I expect there's going to be a lot we
       | learn over the next five to ten years and a lot of policy and
       | procedure that gets put in place.
        
       | throwaway4220 wrote:
       | "Physician burnout" from documentation was the excuse for AI
       | adoption - Stop Citrix or VMware or whatever. make a responsive
       | emr where you don't have to click buttons like a monkey
        
         | bearjaws wrote:
         | Epic and Cerner are your main enemies if reducing burnout is
         | the goal. Even then, the continued consolidation of healthcare
         | and the inflow of private equity into it will be the next big
         | problems.
        
       | tqi wrote:
       | > We didn't call it AI at first. The first thing that happened
       | was these new innovations just crept into our electronic medical
       | record system. They were tools that monitored whether specific
       | steps in patient treatment were being followed. If something was
       | missed or hadn't been done, the AI would send an alert. It was
       | very primitive, and it was there to stop patients falling through
       | the cracks.
       | 
       | Journalists LOVED The Checklist Manifesto when it came out in
       | 2009; I guess if you call it AI then they will hate it?
       | Similarly, in the early 2020s intuition was bad because of
       | implicit bias, but now I guess it is good?
        
       | qgin wrote:
       | Am I reading incorrectly or does this entire article come down
       | to:
       | 
       | 1. A calculated patient acuity score
       | 
       | 2. Speech-based note-taking
       | 
       | I didn't see any other AI taking over the hospital.
        
       | ilaksh wrote:
       | This is a problem with management, not AI.
       | 
       | The acuity system obviously doesn't work well and wasn't properly
       | rolled out. It's clear that they did not even explain how it was
       | supposed to work. That's a problem with that system and its
       | deployment, not AI in general.
       | 
       | Recording verbal conversations instead of making doctors and
       | nurses always type things is surely the result of a massive
       | portion of doctors saying that record-keeping was too awkward
       | and time-intensive. It is not logical to assume that there is a
       | privacy concern that overrides the time-saving and safety
       | aspects of doing that. People make that assumption because they
       | are preconditioned against surveillance and are not considering
       | the physician burnout that record-keeping systems cause.
       | 
       | It's true that there are large gaps in AI capability, that
       | software rollouts are quite difficult, and that poor
       | implementation can put a significant burden on medical
       | professionals, as it has here. I actually think that if the
       | acuity system is as bad as the nurse says, it puts patients in
       | danger and should result in firings or lawsuits.
       | 
       | But that doesn't mean that AI isn't useful and won't continue to
       | become more useful.
        
       ___________________________________________________________________
       (page generated 2024-11-12 23:01 UTC)