[HN Gopher] I'm an ER doctor: Here's what I found when I asked Chat...
___________________________________________________________________
I'm an ER doctor: Here's what I found when I asked ChatGPT to
diagnose my patients
Author : blago
Score : 20 points
Date : 2023-04-05 22:07 UTC (52 minutes ago)
(HTM) web link (inflecthealth.medium.com)
(TXT) w3m dump (inflecthealth.medium.com)
| chimeracoder wrote:
| > So after my regular clinical shifts in the emergency department
| the other week, I anonymized my History of Present Illness notes
| for 35 to 40 patients -- basically, my detailed medical narrative
| of each person's medical history, and the symptoms that brought
| them to the emergency department -- and fed them into ChatGPT.
|
| It's quite wild that the doctor would openly admit to violating
| HIPAA in such a brazen way.
|
| HIPAA is _incredibly_ broad in its definition of protected health
| information - if it's possible to identify an individual from the
| data, even through statistical methods involving other data that a
| third party might already possess, it's considered protected.
| It's inconceivable that the doctor would be able to sufficiently
| anonymize the data to that standard and still provide enough
| detail for individual diagnoses.
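|
| (An illustrative sketch, not from the article: even with names
| removed, a handful of quasi-identifiers can single a patient out
| when joined against data a third party already holds - the
| re-identification risk described above. The records below are
| invented.)
|
|   from collections import Counter
|
|   # Hypothetical de-identified rows: (age, ZIP3, sex, complaint)
|   records = [
|       (34, "941", "F", "abdominal pain"),
|       (34, "941", "F", "headache"),
|       (71, "606", "M", "chest pain"),
|       (29, "100", "F", "ectopic pregnancy"),  # rare and specific
|   ]
|
|   # How many rows share each quasi-identifier combination?
|   groups = Counter((age, z, sex) for age, z, sex, _ in records)
|
|   for rec in records:
|       k = groups[rec[:3]]
|       label = "UNIQUE -> re-identifiable" if k == 1 else f"k={k}"
|       print(rec, label)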
| suddenclarity wrote:
| It does worry me what data people are sharing, seemingly without
| much thought. He claims it's anonymised, but I'm a bit sceptical
| when you input the medical history of 40 people. It's easy to
| slip up.
| PaulKeeble wrote:
| But it also gets around the common misdiagnoses for chronic
| conditions. It has a great description of Long Covid and ME/CFS,
| for example, whereas your typical primary care physician is going
| to dismiss that patient with a psychological diagnosis, as is
| happening daily across the entire western world. It's less
| biased, but it's not going to find the rare things, especially
| where the patient has missed something important.
|
| It's a mixed bag, just like it is with software. If you ask it to
| solve something simple it often does a decent job, but give it
| something complex and it's confidently wrong. It doesn't show the
| self-doubt of expertise that it needs to be a reliable tool, yet
| it still requires that the user have that expertise in order to
| save time using it.
| gamesbrainiac wrote:
| Which version though? 3.5 or 4? The article does not state this
| explicitly. There is a world of difference between 3.5 and 4.
| maherbeg wrote:
| Ah yes, the "everyone lies" House M.D. problem
| SketchySeaBeast wrote:
| Not even. ChatGPT, being an engine that figures out what's right
| by finding out what is average, is bad at understanding the
| atypical.
| ChatGTP wrote:
| It's funny because it's almost the exact same problem I have with
| using it professionally for writing software.
| throwbadubadu wrote:
| Seconding.
| zzzeek wrote:
| "She had an ectopic pregnancy, in which a malformed fetus
| develops in a woman's fallopian tube, and not her uterus.
| Diagnosed too late, it can be fatal -- resulting in death caused
| by internal bleeding. Fortunately for my patient, we were able to
| rush her into the operating room for immediate treatment."
|
| because this doctor is not practicing in Texas, where such a
| procedure might get you arrested.
| Turskarama wrote:
| I'm not really sure what he expected here: ChatGPT was not
| trained to be a doctor; it is far more general than that. Asking
| ChatGPT for medical advice is like asking someone who is very
| well-read but has no experience as a doctor, and in that context
| it's doing very well.
|
| He also brings up one of the most salient points without really
| exploring it enough: ChatGPT does not ask for clarification,
| because it is not a knowledge base trying to find an answer. All
| it does is figure out which token is statistically most likely
| to come next; it has no heuristic to know that there is a task
| it hasn't fully completed.
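|
| (A minimal caricature of that decoding loop - my sketch, not
| OpenAI's code. The only stopping condition is an end-of-sequence
| token; nothing checks whether the task is actually done.)
|
|   import random
|
|   def next_token(context):
|       # Stand-in for a real model: pick any plausible next token.
|       vocab = ["the", "patient", "likely", "has", "migraine",
|                "<eos>"]
|       return random.choice(vocab)
|
|   tokens = ["Diagnosis", ":"]
|   while tokens[-1] != "<eos>" and len(tokens) < 50:
|       tokens.append(next_token(tokens))  # no completion check
|
|   print(" ".join(t for t in tokens if t != "<eos>"))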
|
| This is the same reason ChatGPT cannot yet write programs by
| itself: in order to do so you'd need to specify the entire
| program up front (which is exactly what code is).
|
| As soon as we have agents that can run a proper feedback loop,
| querying an LLM repeatedly until some heuristic is reached, the
| kind of AI that doctors are looking for will emerge.
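|
| (A sketch of that feedback loop, under stated assumptions:
| ask_llm is a hypothetical stand-in - any chat-completion API
| could back it; here it is faked with canned replies so the loop
| itself runs.)
|
|   _canned = iter([
|       "QUESTION: Any fever or recent travel?",
|       "DIAGNOSIS: likely viral syndrome; supportive care.",
|   ])
|
|   def ask_llm(prompt: str) -> str:
|       return next(_canned)  # replace with a real LLM call
|
|   def diagnose(case_notes: str, answer_question) -> str:
|       transcript = case_notes
|       for _ in range(5):  # bound the number of round trips
|           reply = ask_llm(
|               "If information is missing, reply QUESTION: <q>. "
|               "Otherwise reply DIAGNOSIS: <dx>.\n" + transcript
|           )
|           if reply.startswith("DIAGNOSIS:"):
|               return reply  # the stopping heuristic is reached
|           # Feed the clarifying answer back in and ask again.
|           transcript += "\n" + reply
|           transcript += "\nA: " + answer_question(reply)
|       return "no diagnosis within the iteration budget"
|
|   print(diagnose("34F, two days of headache and nausea.",
|                  lambda q: "No fever, no travel."))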
| frognumber wrote:
| There are two types of medical conditions
|
| 1) Those you see a doctor for
|
| 2) Those you don't
|
| The line depends on where you live. In a poor village, 100% might
| be the latter, while an executive in San Francisco will see a
| doctor for anything serious but might not if they cut themselves
| with a kitchen knife.
|
| What's underrated is the ability to have basic medical care and
| information everywhere, all the time, for free.
|
| That can be casual injuries below the threshold of visiting a
| doctor (am I better off heating or icing? immobilizing or
| stretching?), or settings where there are no doctors at all.
|
| What's more, doctors (like AIs) make mistakes, and it's often
| helpful to have a second opinion.
| jhgg wrote:
| I am curious if GPT-4 would have performed better.
| preommr wrote:
| It's amazing that it was that effective...
|
| - It's a generalized language model; imagine how much more
| effective it would be with a specialized AI that used a variety
| of techniques better suited to logic and reasoning, while using
| LLMs to interact with patients.
|
| - It cost an order of magnitude less than a visit to a doctor.
|
| - The potential to constantly monitor a patient - a point made
| in the post.
___________________________________________________________________
(page generated 2023-04-05 23:00 UTC)