[HN Gopher] "AI promised to revolutionize radiology but so far i...
       ___________________________________________________________________
        
       "AI promised to revolutionize radiology but so far it's failing"
        
       Author : macleginn
       Score  : 342 points
       Date   : 2021-06-07 13:53 UTC (9 hours ago)
        
 (HTM) web link (statmodeling.stat.columbia.edu)
 (TXT) w3m dump (statmodeling.stat.columbia.edu)
        
       | tomrod wrote:
       | AI and data science dev here, in the trenches.
       | 
        | My rule of thumb is that the set of jobs AI can replace
        | overlaps almost entirely with the set of jobs RPA can replace.
       | 
       | The successful AI projects I've encountered usually either build
       | something totally new or augment the existing workforce.
       | 
       | It can be hard for us technologists to appreciate, but the
       | inefficiencies of technology, policy, and people configurations
       | can't always be resolved by technology alone.
        
         | ___luigi wrote:
         | > .. but the inefficiencies of technology .. can't always be
         | resolved by technology alone
         | 
          | Technology will always resolve the inefficiencies of
          | technology. Compare the mobile in your pocket with a mobile
          | from the '90s.
        
           | tomrod wrote:
           | I can appreciate your confusion and apologize for my lack of
           | clarity. Note that my comment was on the interplay of the
           | three domains, not the technology domain in a vacuum.
        
       | fsloth wrote:
       | These guys - MVision - seem to promise automated segmentation
        | using AI. Is this failing, or more of a step in the correct direction?
       | https://www.mvision.ai/
        
       | scythe wrote:
       | Medical physics student here. I work for a hospital that pays
       | $silly per annum to use a type of expensive treatment planning
       | software for radiation oncology. The software comes with a built-
       | in automatic contouring based on "AI".
       | 
       | One of our units covered contouring and the role of the medical
        | physicist vis-à-vis contouring, which is generally to act as a
       | double check layer behind the radiologist. We received about an
       | hour of instruction on how to contour. After that, the instructor
       | and the class unanimously agreed that every single student had
       | learned to beat the software at recognizing the parotid gland.
       | And not by a small margin.
       | 
       | Why is it so bad? Security is a big reason. The software that can
       | be installed on hospital computers is tightly controlled. Our
       | research group is currently hamstrung by IT after they got mad at
       | us for _using PowerShell scripts to rename files_. This was
       | itself a workaround for the limitations of above-mentioned
       | software. In turn, we tend to end up with a few exorbitantly
       | priced omnibus programs rather than a lot of nice Unixy utilities
       | that do one thing and do it well, because it lowers the IT
       | approval overhead and the market has gone that way.
       | 
       | Even though my personal situation is frustrating, I obviously
       | recognize that you can't simply allow hospital faculty to install
       | whatever executables they please in the age of ransomware.
       | Commenters hoping for a quick fix are wrong. Almost all meme
       | alternatives have downsides that won't be obvious at first.
       | 
       | (But I still wish every day that Windows would go away.)
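        [A minimal sketch of that kind of renaming workaround, in Python
        rather than PowerShell; the `.dat` extension and `scan_` prefix
        are invented for illustration, not taken from the comment:]

```python
import tempfile
from pathlib import Path

def normalize_names(folder: Path, prefix: str = "scan") -> list:
    """Rename exported files to a consistent zero-padded pattern."""
    renamed = []
    for i, f in enumerate(sorted(folder.glob("*.dat")), start=1):
        target = folder / f"{prefix}_{i:04d}.dat"
        f.rename(target)
        renamed.append(target.name)
    return renamed

# Demo on a throwaway directory full of messy export names.
tmp = Path(tempfile.mkdtemp())
for messy in ["EXPORT (3).dat", "IMG_final2.dat", "export copy.dat"]:
    (tmp / messy).touch()
print(normalize_names(tmp))  # ['scan_0001.dat', 'scan_0002.dat', 'scan_0003.dat']
```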
        
       | jl2718 wrote:
       | Radiologists are not even trying. They treat their methods like
       | FDA-approved medical devices. Even basic image segmentation to
       | help with 3D structure recognition is off-limits. The benefits of
       | neural networks in diagnostic radiology will not occur in one
       | shot, and I don't think it will happen at all in the United
       | States until people start sending their data elsewhere. But good
        | luck getting it. I just got my CT scan from a total misdiagnosis
       | that resulted in an unnecessary surgery. It came on a CD that
       | won't read in the only drive I have access to. And even that is
       | just images. It's not possible to get the actual data. This is
       | not a failure of DNN. This is active AMA hostility toward
       | technology, and not just in radiology. Just watch, people are
       | going to start going elsewhere for medical care. They will do
       | everything they can to get insurance requirements, subsidies, and
       | laws against it, but they will lose. They are a dishonest Luddite
       | cartel, and they're hurting people.
        
         | mikesabbagh wrote:
          | The problem is money and power. Doctors will never hand over
          | their income to a data scientist. This type of invention will
          | never start in the US; it will start in a communist country
          | where leaders can move mountains, or in Africa, where there
          | is a severe shortage of doctors.
        
           | visarga wrote:
           | I don't buy this, doctors are not one united body. Doctors
           | with an AI tool will be more efficient than doctors without
           | it. If the tool has a measurable positive impact on patients
           | (outside of cost reduction) then it will become necessary to
           | have the AI in order to get the patients.
        
           | ska wrote:
           | US based companies have been shipping AI/ML tools for nearly
           | 30 years at this point, which undermines your argument.
           | 
           | The biggest problems are data access (big) and data
           | quality/labelling quality (bigger).
           | 
           | Medical conservatism is a real issue, but nowhere near as big
           | as those. There isn't a big cabal trying to keep AI out, it
           | just hasn't worked very well so far.
           | 
           | FDA is reasonably responsive (for an agency like that) these
           | days, and has been doing planning for more of this sort of
           | tech: https://www.fda.gov/medical-devices/software-medical-
           | device-...
        
         | nradov wrote:
         | You have the legal right to obtain a copy of your medical data.
         | Providers can require you to pay reasonable administrative fees
         | for making data copies but they do have to give it to you. If
         | they don't comply then you can file a formal complaint.
         | 
         | https://www.hhs.gov/hipaa/for-professionals/privacy/guidance...
        
         | jcims wrote:
         | Also in the US, I had a similar 'awakening' if you will after
         | being at the side of a loved one for a little over two years of
         | intensive medical intervention. I've been left quite bitter and
         | ultimately distrustful about where things stand today.
         | 
         | That said I do recognize that I have the advantage of not
          | making life and death decisions and have no idea what it's like
         | to weigh the advantages of innovation against the risk of
         | untimely death or significant impairment/expense that comes
         | with advancing the frontiers of medicine.
        
         | kspacewalk2 wrote:
         | Radiologists most definitely are trying. Our institute's entire
         | medical imaging research arm is driven by several very
         | motivated practicing radiologists. You just misunderstand what
         | it is that they do, fundamentally. Diddling with some pics and
         | publishing papers is just not in the same league as making
         | medical diagnoses. A lot is riding on their understanding every
         | little artifact of the algorithm/approach that gives them a
         | modified image to interpret. They will never accept black-box
         | automagic, and they will always evaluate the benefits of novel
         | algorithms together with the drawbacks of having to get used to
         | their quirks and opaque artifacts, possibly with outcomes
         | impacted and/or lives lost in the process. Where the
         | risk/benefit analysis is clear, they do adopt plenty of common-
         | sense automation tools for a very simple reason - they get paid
         | per scan read, so their time is (lots of) money, to them.
        
           | caddemon wrote:
           | I don't think the blame falls on practicing radiologists, but
           | the OP is absolutely correct that medical data is way too
           | inaccessible. It is often impossible to get your own raw
           | data, and even worse it is sometimes impossible to share that
           | data with another doctor. Two large hospitals in major US
           | cities apparently can't share EEG data because they use
           | different software to read it. Guess who wins when all your
           | prior data gets essentially thrown out? It's not the
           | insurance companies, and it's certainly not you - it's the
           | new hospital.
           | 
           | How realistic it is to have ML involved in reading radiology
           | results in theory I don't know, but the larger point is that
           | in practice it is sure as hell not going to happen until
           | patients have real access to do what they please with their
           | own data. Not only am I pissed I can't have my own EEG data,
           | but I also would gladly contribute it to a database for
           | development of new tools, or any other research study that
           | asked. But there is essentially no way to even do that, at
           | least at either institution I've asked. Just think of all the
           | data that is being utterly wasted right now!
        
             | an_opabinia wrote:
             | > patients have real access to do what they please with
             | their own data... contribute it to a database...
             | 
             | Misconception #2 is that there's some "data moat" or
             | whatever.
        
               | caddemon wrote:
               | I am aware there isn't, what I'm saying is there should
               | be - particularly for dense datatypes like EEG that we
               | probably aren't fully leveraging at the moment.
        
             | visarga wrote:
             | >I also would gladly contribute it to a database for
             | development of new tools, or any other research study that
             | asked
             | 
             | This should be a standard question in the medical file like
             | those related to organ donation.
        
             | RobertDeNiro wrote:
             | The number of patients that are interested in viewing or
             | accessing their own data has to be negligible. Last time I
             | got an Xray they actually gave me a DVD of the imaging
             | itself. I remember looking at it, I thought it was neat,
                | but ultimately there was little use in it for me. I
                | don't know what % of patients have bothered to look at
                | it.
        
               | vmception wrote:
               | The data will get better
               | 
               | Healthkit ftw
        
               | deeviant wrote:
                | It's not about the patient reviewing their own data so
                | much as it is about the patient having easy access to
                | their data and being able to easily share it with other
                | consumers of it (i.e. some AI-based interpretation
                | service).
        
               | phobosanomaly wrote:
               | 'Easy access' is scary for hospitals because it means
               | increased possibility of HIPAA violations.
        
               | xkjkls wrote:
               | Viewing their own raw data may be negligible, but sharing
               | between medical professionals is a relatively common and
               | necessary practice. Currently it is extremely difficult
               | to get one doctor to share medical information with
               | another, and it shouldn't be.
        
               | nradov wrote:
               | Provider organizations are understandably reluctant to
               | accept removable media from unknown sources due to the
               | risk of malware. Many of the computers that doctors use
               | don't have DVD drives or they're disabled for security.
        
         | ivalm wrote:
         | That's complicated. I work at a very large health care org that
         | employs ~1000 radiologists. We definitely would love to have a
         | good solution, but there just aren't good enough vendor
         | solutions and even working with vendors it's hard to get things
         | to an acceptable state. In that sense I have more hope in derm.
        
         | brnt wrote:
         | I left radiotherapy for this reason: they haven't the faintest
         | idea what they are doing, and are constrained by whatever
         | manufacturers managed to get the FDA to approve, which is 20
         | year old tech at best. Not that radiotherapists/logists know
         | how to deal with modern tech... A small hint: our software repo
         | was migrated TO svn in 2014.
         | 
          | When people do 'AI' in radiology, they mean making little
          | scripts in Tensorflow. Sure, it's a beginning, but I've seen
         | entire institutes being unable to get to grips and move past
         | that stage. You wouldn't be able to tell from their slides, of
         | course.
        
           | mikekij wrote:
           | @brnt It's great to see someone else from radiotherapy on HN.
           | I'd love to chat if you're down. Shoot me an email.
        
             | brnt wrote:
             | I've left the field for greener pastures ;) Looked at your
             | website, interesting, but curious how you deal with all the
             | legacy software. Java was considered quite recent at one of
             | the places I worked, they were rolling Delphi in 2019 and
             | were not planning to switch!
        
               | mikekij wrote:
               | It is an interesting market, for sure. The fact that
               | these vendors are reluctant to upgrade tech stacks
               | actually creates a bunch of opportunities. (Although we
               | wish they'd just upgrade their Windows version:)
        
           | TeMPOraL wrote:
           | Not sure how modern AI is going to help here.
           | 
            | > _they haven't the faintest idea what they are doing, and
           | are constrained by whatever manufacturers managed to get the
           | FDA to approve (...)_
           | 
           | If you take this layer of proprietary magic nobody
           | understands, and add DNNs to mix, you'll get... two layers of
           | proprietary magic which nobody understands (and possibly
           | owned by two different parties).
        
             | brnt wrote:
             | If only there were two layers of proprietary magic ;)
        
         | umvi wrote:
         | What do you mean it's "off limits"? There are several companies
         | developing ML models that process CT scans, for example:
         | https://www.radformation.com/autocontour/autocontour
        
           | markus92 wrote:
            | But how many have commercially available/successful products?
        
         | kyouens wrote:
         | Couple of points: The 21st Century Cures Act has recently
         | expanded rules for information portability, which will make it
         | much easier to get access to your data in the future. The
         | challenge here has nothing to do with _radiologists_ hoarding
         | your data. The lack of interoperability typically stems from
          | limitations of electronic health information systems. Most
          | radiologists would love to be able to look at your scans from
          | the multiple hospitals where you were previously imaged, but
          | technical barriers currently make that difficult.
        
           | caddemon wrote:
           | I don't blame the practicing radiologist, but I also don't
           | buy this is purely a technical issue. The hospital is quite
           | literally incentivized to have you repeat tests. I highly
           | doubt they care to make data accessibility/portability a
           | priority. Hopefully these new rules will force their hand.
        
             | kyouens wrote:
             | You are right, to an extent. Health systems and EHR vendors
             | both have historically had an economic disincentive to
             | share data. Think "ecosystem lock-in". My impression is
             | that things are gradually changing for the better.
        
         | bigbillheck wrote:
         | > a CD that won't read in the only drive I have access to.
         | 
         | You can get an external cdrom drive for about 20 bucks.
        
         | samuel wrote:
          | There is an IHE standard profile that specifies how data has
          | to be laid out on portable media (USB, CD). All PACS systems
          | follow it, and it's basically a non-problem today.
          | 
          | I don't doubt your anecdote, but I don't think it's a very
          | common occurrence.
         | 
         | https://wiki.ihe.net/index.php/Cross-enterprise_Document_Med...
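        [For the curious: the DICOM Part 10 files that the PDI profile
        lays out on the media carry a fixed magic number, so a reader
        can find them without trusting extensions. A minimal sketch of
        scanning a mounted disc for them; the filenames are invented,
        and real PDI media also carry a DICOMDIR index, which this
        ignores:]

```python
import tempfile
from pathlib import Path

# DICOM Part 10 files start with a 128-byte preamble followed by "DICM".
PREAMBLE_LEN = 128
DICOM_MAGIC = b"DICM"

def is_dicom(path: Path) -> bool:
    """Identify a Part 10 DICOM file by its magic number, not its name."""
    with open(path, "rb") as f:
        f.seek(PREAMBLE_LEN)
        return f.read(4) == DICOM_MAGIC

def find_dicom_files(root: Path) -> list:
    """Walk a mounted CD/USB export and collect every DICOM file on it."""
    return sorted(p for p in root.rglob("*") if p.is_file() and is_dicom(p))

# Demo on a throwaway directory: one fake DICOM file, one stray text file.
media = Path(tempfile.mkdtemp())
(media / "IM000001").write_bytes(b"\x00" * PREAMBLE_LEN + DICOM_MAGIC + b"...")
(media / "README.TXT").write_text("not an image")
print([p.name for p in find_dicom_files(media)])  # ['IM000001']
```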
        
       | SneakyTornado29 wrote:
       | AI isn't here to replace radiologists. It is here to augment
       | them.
        
       | varelse wrote:
        | If you take career advice from the brainfarts of thought leaders
        | in any field besides the one you intend to join, you're going to
        | have a bad time.
       | 
       | And even then, thought leaders rarely build, but they sure love
       | the sound of their own impotent voices and the disproportionate
       | influence platforms like TED provide them to virtue signal and
       | other buzzwords the dystopic tech hivemind conjured into
       | existence to stay relevant.
       | 
       | Caveat Emptor...
        
       | paxys wrote:
       | Has AI revolutionized anything other than driving up user
       | engagement/addiction on shitty websites?
        
         | pantulis wrote:
         | Have you ever heard about Alexa?
        
           | kilnr wrote:
            | I don't know, the NSA had been capable of that level of
            | mass surveillance long before Amazon was.
        
         | mustafa_pasi wrote:
          | Speech transcription and language translation have come a
          | very long way. Still not perfect, but almost at human level
          | in some instances.
        
         | The_rationalist wrote:
         | Asking such a question is only proof of your own ignorance. I
         | invite you to discover the state of the art and its scopes
         | https://paperswithcode.com/sota
        
           | Der_Einzige wrote:
           | While this comment is a good start, we should remember that
           | for some scores, SOTA is only loosely correlated with
           | improvements in downstream performance. This is true in
           | things like summarization with ROUGE scores (which suck and
           | everyone hates them)
        
       | kn1ght wrote:
        | I worked in a small research company that had a method
        | (segmentation + CNNs, etc.) a few years back. We had some
        | exciting stuff on masking effects too, but as soon as we got
        | into SaMD and the main revenue stream (grants) dried up,
        | engineering closed down.
        
       | ChicagoBoy11 wrote:
       | The metric of radiology jobs as a sign of the lack of AI
        | revolution in the field seems poor to me. Sadly, much of our
        | medical infrastructure (and the jobs it creates) has only a
        | very tenuous relationship to the actual care and the quality it
        | delivers. Rather, most of the infrastructure tries to optimize
        | for billing and the legislation that surrounds it.
       | 
        | One of the ways to immediately see this is for us, a technical
        | crowd, to puzzle over why Moore's law doesn't seem to affect
        | medical technology... AT ALL. Some of the same procedures
       | using the same machines from decades ago today cost A LOT more
       | than they used to, for instance.
       | 
       | This isn't to say that this AI revolution in radiology hasn't
       | been underwhelming; I just think that using this job metric is a
       | poor indicator of the technology's capability.
        
         | swyx wrote:
         | to be fair, Geoff Hinton invited this comparison when he made
         | that quote, which has been repeated ad infinitum in the past 5
         | years and probably brought a lot of existential dread to
          | radiologists. The AI field should have repudiated it harder;
          | instead, it embraced it because it was flattering.
        
         | azalemeth wrote:
         | The role of the radiologist isn't just mapping image[range_x,
         | range_y, range_z] to disease -- it's _also_ including a vast
          | amount of Bayesian priors on the _rest_ of the patient's
         | notes, and their referring colleagues' hints of an indication.
         | 
         | For example, often the question isn't just "does this person
         | have mitral valve regurgitation yes/no", it's more along the
         | lines of "is there evidence from this cardiac MRI scan that
         | their mitral valve regurgitation is significant and able to
         | explain the symptoms that we have -- and if so, is it amenable
         | to treatment". That's a _totally_ different question -- are the
          | symptoms beyond _what would be expected for the patient_; and
          | _is there a plausible mechanism_ -- are all second-level
          | radiological questions, well beyond the level of "please
         | classify this stack of images into either healthy, cyst, or
         | tumour". Another random example would be the little old lady
         | who comes in with breathlessness at rest: she may well have a
         | large, slow-growing lung cancer (that any AI algorithm would
         | easily diagnose) that she may well die with or of, but the
          | acute dyspnoea could be down to an opportunistic LRTI that
         | remains treatable with a course of antibiotics (and visible on
         | a plain film chest x-ray). Capturing _that_ sort of information
         | is a lot, lot harder.
         | 
         | You're also forgetting that the cost of an expensive imaging
         | modality like MRI or CT is amortised over 10 years, and that --
         | by _far_ -- the biggest cost of running the service is the
         | staff. The doctors do more than push buttons. In many services,
          | actually, they don't acquire the scans or interact with
         | patients at all.
        
           | sergers wrote:
            | Agreed, that's why most AI/ML in radiology is limited to
            | critical findings -- identifying acute areas for the
            | radiologist to review -- not making a diagnosis itself.
           | 
           | And on the EHR/history side, there's ML starting to be used
           | to organize and highlight relevant info so the rad doesn't
           | have to go searching for it.
           | 
            | These are both tools that help the radiologist interpret
            | exams faster and more accurately.
           | 
           | It's not taking them out of the picture.
           | 
           | Eventually likely to happen... But not where "AI" is today
        
           | nairoz wrote:
           | Agreed but some questions are also a lot easier and a lot
           | more common. In x-ray, looking for a bone fracture is a
           | single task that requires no information about patient and
           | can be done by an algorithm.
        
             | azalemeth wrote:
             | Indeed. And in my country, nurse-practitioners can diagnose
             | and manage simple uncomplicated fractures, for example.
        
       | kevinalexbrown wrote:
       | The QZ article is narrowly correct but widely misleading. It
       | almost willfully ignores the momentum and direction.
       | 
       | In reality, radiologists will not be summarily replaced one day.
       | They will get more and more productive as tools extend their
       | reach. This can occur even as the number of radiologists
       | increases.
       | 
       | Here's a recent example where Hinton was right in concept: recent
       | AI work for lung cancer detection made radiologists perform
       | better in an FDA 510k clearance.
       | 
       |  _20 readers reviewed all of 232 cases using both a second-reader
       | as well as a concurrent first reader workflows. Following the
       | read according to both workflows, five expert radiologists
       | reviewed all consolidated marks. The reference standard was based
       | on reader majority (three out of five) followed by expert
       | adjudication, as needed. As a result of the study's truthing
       | process, 143 cases were identified as including at least one true
       | nodule and 89 with no true nodules. All endpoints of the analyses
       | were satisfactorily met. These analyses demonstrated that all
       | readers showed a significant improvement for the detection of
       | pulmonary nodules (solid, part-solid and ground glass) with both
       | reading workflows._
       | 
       | https://www.accessdata.fda.gov/cdrh_docs/pdf20/K203258.pdf
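        [The quoted truthing rule -- reader majority of three out of
        five, then expert adjudication as needed -- can be sketched as
        below; the function name and signature are illustrative, not the
        study's actual pipeline:]

```python
def reference_label(reader_calls, expert_call=None):
    """Reference standard: majority of the reader panel (e.g. 3 of 5),
    with expert adjudication overriding the vote when invoked."""
    if expert_call is not None:
        return expert_call
    return sum(reader_calls) * 2 > len(reader_calls)

# Three of five readers call a nodule: the majority rule says True.
print(reference_label([True, True, True, False, False]))  # True
# Adjudication, when needed, trumps the vote.
print(reference_label([True, True, True, False, False], expert_call=False))  # False
```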
       | 
       | (I am proud to have worked with others on versions of the above,
       | but do not speak for them or the approval, etc)
       | 
       | The AI revolution in medicine is here. That is not in dispute by
       | most clinicians in training now, nor, from all signs, by the FDA.
       | Not everyone is making use of it yet, and not all of it is
       | perfect (as with radiologists - just try to get a clean training
        | set). But the idea that machine learning/AI is overpromising is
        | like criticizing Steve Jobs in 2008 for overpromising the iPhone
        | by saying it hasn't totally changed your life yet. Ok.
        
         | ska wrote:
         | > The AI revolution in medicine is here.
         | 
         | There were limited scope CADe results showing improvements over
         | average readers 20 years ago, and people calling it a
         | 'revolution' then. I'm not sure anything has really shifted;
         | the real problems in making clinical impact remain hard.
        
         | robk wrote:
         | There are indeed areas where it's being used to complement
         | radiologists as a second review and reduce the recall rate
         | https://www.kheironmed.com/news/press-release-new-results-sh...
        
           | ramraj07 wrote:
            | This is how it needs to be approached: AI and rule-based
            | systems that work together with clinicians to enhance their
            | decision-making ability instead of replacing them.
        
       | andrewtbham wrote:
        | There seem to be a lot of startups in this space.... from a
       | google search:
       | 
       | https://www.medicalstartups.org/top/radiology/
       | 
       | I know of one local startup personally...
       | 
       | https://www.aimetrics.com/
       | 
       | Does anyone know if any of them are getting any traction?
        
         | rcpt wrote:
         | It's not a bad idea tbh.
         | 
         | Currently nurse practitioners (think: nursing degree then two
         | years of online college) are winning the right to run their own
         | independent medical practices all over the place. You can get a
         | Xanax prescription after your first 15min zoom call in much of
         | the US right now.
         | 
         | The political consensus is that doctors are overeducated and
         | overpriced so I think an AI replacement could still win
         | licensing even if it doesn't match their accuracy.
        
           | Workaccount2 wrote:
           | What gets me about doctors, and maybe I'm just
           | unlucky/haven't seen enough doctors, is that I never get that
           | "expert" vibe from them.
           | 
           | You know when you're talking to someone who does, say,
            | database management, and they have been at it for 15 years,
            | have a bunch of accreditations, and are well compensated for
            | their work. You just get the impression that you can pull
            | out the most esoteric question about databases, and they'll
            | go on for 45 minutes about all its nuances. No matter how
            | hard you try, you, with a mild understanding of databases,
            | would never be able to pin them.
           | 
           | I just have never gotten that vibe from a doctor. I always
           | felt like I was only a question or two away from them
           | shrugging, me googling, and me finding the answer.
        
             | phobosanomaly wrote:
             | I think a lot of it has to do with the fact that the
             | database guy actually implements things in an environment
             | that can be manipulated at will.
             | 
             | Doctors don't implement things in an environment that they
             | control. Patients come to them with a chief complaint, and
             | the doctor tries to resolve or manage it to the best of
             | their ability with a minimal intervention according to a
             | set of guidelines someone else wrote down.
             | 
             | A doctor can't sit there and play with the
             | diagnostic/treatment process in the same way a database guy
             | can go play with the database software. At best the doctor
             | can sit there with a textbook or medical journal and try to
             | memorize more facts, or take notes, but it's not the same
             | as pulling apart code, running it in different ways, and
             | seeing how it behaves.
             | 
             | Medical school is a continuous process of memorizing shit
             | off of flash cards culled from a textbook. You don't
             | actually build anything, or implement anything, or _do_
             | anything in a real-world sense that would make you an
             | expert in the same way as someone who was working with a
             | system that they were able to take apart and play with and
              | manipulate. There's no real way to develop the kind of
             | deep knowledge you're talking about in that environment.
             | 
              | A diesel mechanic can pull apart an engine. Hold every
             | part in their hand. Drive a diesel-engine vehicle. Observe
             | all the things that go wrong. Simulate, and innovate. An
             | individual doctor can't really do any of that. Medical
             | schools are even axing dissections, so med students are
             | lucky if they get to see what the hell peritoneum actually
             | looks like.
        
       | ErikVandeWater wrote:
       | I don't think it's Luddites holding AI back as some comments have
       | suggested. In the medical field, indemnity is the name of the
       | game. An expert will always have to sign off on whatever the AI
       | suggests.
        
       | Ensorceled wrote:
       | With digital lightboxes, 3d imaging, automated segmentation and
       | other workflow improvements, a lot of the "low hanging fruit" has
       | already been removed from the process.
       | 
       | When my doctor thought I might have a stress fracture, I went for
       | x-rays. _I_, with my stint in medical imaging ten years
       | previous, could tell at a glance I didn't have a stress
       | fracture. The radiologist's report was a brief "does not present
       | with stress fracture nor any other visible issue". AI is not
       | going to eliminate this; a radiologist is still going to need to
       | spend 30 seconds to "sign" the diagnosis, so it's not going to
       | take much out of
       | the system.
       | 
       | The real work in radiology is the hard cases, ones that require
       | diagnosis and consulting with other specialists. If AI can help
       | an orthopaedic surgeon plan surgery for a car accident victim,
       | then it will start replacing radiologists.
        
       | rscho wrote:
       | If you want to replace radiologists, then start by understanding
       | what radiologists do. If your only answer to that is 'they
       | describe what they see' then you'll have to think a lot harder
       | than that.
        
       | nairoz wrote:
       | Surprised to read this. Having worked in the field, I see a
       | growing interest in AI from the radiology community, as attested
       | by the RSNA's new AI journal. It's not about replacing radiologists
       | but helping them in their daily work, as a safety net (double
       | check) or as a prioritization tool.
        
       | rdudekul wrote:
       | In my 'biased' view, AI has already revolutionized more fields
       | than most people will 'ever' recognize. However, unfounded fears
       | and insecurities around jobs are keeping its real potential at
       | bay.
       | 
       | My bet is the actual impact will first be realized in poorest
       | countries (India included) and then will spread to more advanced
       | countries (US/G7).
        
       | fny wrote:
       | Title should read "Peak of Inflated Expectations Reached".
       | 
       | I've worked on many health AI/ML projects. The last decade has
       | produced tremendously powerful prototypes which lay clear paths
       | forward for productization.
       | 
       | Sure, assay software that no one cares enough about hasn't been
       | updated, but you better believe the medical apparatus as a whole
       | will welcome any tool that increases throughput and increases
       | margins.
       | 
       | Automating radiology or facilitating radiology does just that.
       | Sure, radiologists might not like it, _but radiologists do not
       | operate health systems; MBAs do._
       | 
       | For some perspective, medical devices take 3-7 years for
       | approval. Most are not game-changing technologies. The ImageNet
       | moment came in 2012. How can we reasonably expect to have
       | functioning automated radiology only a decade from when we
       | realized deep nets could classify cats and dogs?
        
         | austinjp wrote:
         | > radiologists do not operate health systems; MBAs do.
         | 
         | This is _highly_ dependent on your geographical location.
        
       | jonplackett wrote:
       | ai promised to revolutionize ________ but so far its failing
        
       | btilly wrote:
       | I believe that the problem is psychological.
       | 
       | Many years ago, back in the 1990s, wavelet based algorithms were
       | able to outperform humans on detecting tumors. The thing is that
       | the algorithms were better on the easy parts of the mammogram,
       | and worse on the hard parts. So researchers thought that humans
       | plus software should do better still, because the humans would
       | focus on the hard parts that they did better and the software
       | would catch the rest.
       | 
       | Unfortunately according to a talk that I was at, it didn't work
       | that way. It turns out that radiologists already spend most of
       | their time on the hard parts. So they quickly dismissed the
       | mistakes of theirs that they found as careless errors, and
       | focused on the mistakes of the algorithm in the hard part as
       | evidence that the algorithm didn't work well. And the result was
       | that the radiologists were so resistant to working with the
       | software that it never got deployed.
       | 
       | For the same psychological reason I expect radiologists to never
       | voluntarily adopt AI. And they will resist until we reach a point
       | that the decision is taken out of their hands because hospitals
       | face malpractice suits for not having used superior AI.
        
       | incrudible wrote:
       | If 95% is good enough for you, machine learning will probably get
       | you there rather easily.
       | 
       | With many of the really valuable use-cases, it's just not good
       | enough. If 100% of the time you need an expert to tell if a
       | sample falls within the 95% of successes or 5% failures, you're
       | not adding any value.
       | 
       | Even if you're bulk processing stuff that would've otherwise been
       | ignored, _somebody_ will have to deal with those signals. The net
       | effect is _more work_, not less.
       | 
       | In other words, would-be radiologists ought to stay in school.
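The sign-off arithmetic here can be sketched with purely hypothetical numbers (the 1000-case volume and 95%-accurate model below are assumptions for illustration, not data from the thread):

```python
# Back-of-envelope sketch: if an expert must sign off on every AI
# output, total expert workload does not shrink. All numbers are
# made up for illustration.
cases = 1000
ai_accuracy = 0.95

# Without AI: the expert reads every case once.
reads_without_ai = cases

# With AI plus mandatory sign-off: the expert still reviews every
# case, and additionally reworks the AI's failures.
reads_with_ai = cases + round(cases * (1 - ai_accuracy))

print(reads_without_ai, reads_with_ai)  # 1000 1050
```

Under these assumptions the "net effect is more work" claim falls straight out of the arithmetic.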
        
       | MattGaiser wrote:
       | I would be happy with it being a 2nd opinion clinic. Not
       | replacing radiologists, but "hey doc, have you considered X, Y,
       | and Z that make the model think it is actually A instead of B?"
        
         | ska wrote:
         | That's traditionally how most ML has been used in radiology
         | systems (where it is).
        
       | mcguire wrote:
       | " _What happened? The inert AI revolution in radiology is yet
       | another example of how AI has overpromised and under
       | delivered..._ "
       | 
       | Isn't this how all of the previous AI Winters started?
        
       | newyankee wrote:
       | The reason a clean alternative to radiologists in the form of AI
       | is not available is the inertia of the medical system. Due to
       | its innately conservative nature, successful beta testing in a
       | third-world country will be the only pathway for it to be
       | adopted by richer countries with stricter medical systems.
       | I feel AI in medicine is a boon for developing countries if used
       | properly. Especially diagnostics.
        
         | dailybagel wrote:
         | Why should medical "beta testing" happen in a third-world
         | country? Is there some reason the higher risk of an
         | experimental procedure is more acceptable there than (say)
         | Boston or Dallas?
        
           | newyankee wrote:
           | AI assisted virtual Doctor > No Doctor
        
           | gbear605 wrote:
           | Unfortunately doctors are a lot less prevalent in a lot of
           | developing countries, especially in sub-Saharan Africa. If an
           | AI can do a third of the things that a doctor can do, that
           | means that many more people can be treated. In the US, it
           | just means that the appointments can be cheaper. So the
           | developing countries have a lot more to gain from things like
           | AI. The US and other developed countries should be doing more
           | to help the situation, including training and paying doctors
           | to work in those countries, but AI can potentially save a lot
           | of lives there in the meantime.
           | 
           | Of course, AI can only save lives if it works reliably, which
           | doesn't seem to be the case yet, but that hopefully can be
           | overcome.
        
             | querulous wrote:
             | you think doctors are scarce in subsaharan africa but mri
             | machines and xrays and ultrasounds are plentiful?
        
               | Google234 wrote:
               | It's much easier and faster to buy a machine than to
               | train a doctor.
        
               | nradov wrote:
               | It really isn't. Some countries can produce a trained
               | physician for less than the cost of a new MRI machine.
               | And beyond the capital expense those machines are
               | expensive to operate due to technicians, maintenance,
               | consumable supplies, power, etc.
        
               | Fomite wrote:
               | This is in contradiction to my experience working on
               | medicine in Africa, where there are often very well
               | trained people coping with a water system that doesn't
               | always work.
        
           | tomp wrote:
           | 90% reliable doctor in Switzerland > 10% reliable AI > 0%
           | reliable no doctor in Africa
        
         | qayxc wrote:
         | > Especially diagnostics.
         | 
         | I don't think so. Maybe I'm just too old, but I remember
         | vividly that the same was said about expert systems back in the
         | late 90s and early 2000s.
         | 
         | 20 years later and no one is even considering expert systems
         | for automated diagnosis anymore. The problem with current
         | machine learning models is their black-box character.
         | 
         | You cannot query the system for why a diagnosis was made and
         | verify its "reasoning". Tests rely on using the systems as
         | oracles instead, and in medical diagnosis a patient's medical
         | history is just as important as the latest lab results.
         | 
         | No amount of ML (in its current form) will manage to
         | interview patients accordingly. It might work as a tool for
         | assisting professionals, but it's nowhere near a state that
         | warrants its use for automated diagnosing of patients.
        
           | Der_Einzige wrote:
           | Wtf - many ML models today are either full on white boxes or
           | are directly interpretable in various ways (e.g. LIME
           | algorithm). Even neural networks have good interpretability
           | tools (e.g. captum).
           | 
           | ML is not the black box nightmare that I see it described as
           | on here. You can figure out feature contributions and can
           | quite easily (and accurately) verify its reasoning. If you
           | really need these kinds of models, look into various kinds of
           | tree based ML models like random forests or boosted trees...
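As a toy illustration of the "white box" point, a one-rule decision stump's entire learned model is a single readable threshold (the data and the `fit_stump` helper below are hypothetical, not any library's API):

```python
# A decision stump is the smallest tree model: one threshold rule.
# Its "reasoning" is fully inspectable. Toy data, hypothetical helper.

def fit_stump(xs, ys):
    """Learn a rule 'x > t' minimizing misclassifications on (xs, ys)."""
    best_t, best_errs = None, None
    for t in sorted(set(xs)):
        errs = sum((x > t) != y for x, y in zip(xs, ys))
        if best_errs is None or errs < best_errs:
            best_t, best_errs = t, errs
    return best_t

# Made-up "lesion size" feature vs. "malignant" label
sizes = [1.0, 1.5, 2.0, 4.0, 4.5, 5.0]
labels = [False, False, False, True, True, True]
threshold = fit_stump(sizes, labels)
print(f"learned rule: malignant if size > {threshold}")  # size > 2.0
```

A random forest is an ensemble of deeper trees, so it is less directly readable than this, but per-feature importances can still be extracted from it.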
        
             | qayxc wrote:
             | > look into various kinds of tree based ML models like
             | random forests or boosted trees...
             | 
             | Those are the expert systems that fell out of favour over
             | a decade ago, so thanks, but no thanks.
             | 
             | > many ML models today are either full on white boxes or
             | are directly interpretable in various ways
             | 
             | Sources please, and remember to use relevant sources only,
             | i.e. interpretation of medical image analysis like here
             | [1].
             | 
             | Notably activation maps told researchers precisely
             | _nothing_ about what the neural net was actually basing its
             | conclusions on:
             | 
             | > For other predictions, such as SBP and BMI, the attention
             | masks were non-specific, such as uniform 'attention' or
             | highlighting the circular border of the image, suggesting
             | that the signals for those predictions may be distributed
             | more diffusely throughout the image.
             | 
             | So much for "fully white boxes" and "direct
             | interpretability"...
             | 
             | [1] https://storage.googleapis.com/pub-tools-public-
             | publication-...
        
         | sambe wrote:
         | As I said above, it seems fairly clear if you go to the
         | original article that there is a real problem with the existing
         | systems adapting poorly to different setups - they are trained
         | on one system/hospital and then don't generalise well.
        
       | [deleted]
        
       | dm319 wrote:
       | I've just been in an MDT meeting that was meant to have a
       | radiologist in it, but due to annual leave we didn't have anyone.
       | I think people in tech don't have much of an idea of what
       | radiologists do - the conclusion from a scan depends very much on
       | the clinical context. In an MDT setting there is significant
       | discussion about the relevance and importance of particular
       | findings.
        
       | PaulHoule wrote:
       | "Expert Systems" that could diagnose and treat disease were
       | technically successful in the 1970s; see
       | 
       | https://en.wikipedia.org/wiki/Mycin
       | 
       | This technology never made it to market because of various
       | barriers; at that time you didn't have computer terminals in a
       | hospital or medical practice.
       | 
       | Docs want to keep their feeling of autonomy despite much medical
       | knowledge being rote memorization and rule-based.
       | 
       | The vanguard of medicine is "patient centered" and tries to feed
       | back statistics to help in decisions like "what pill do I
       | prescribe this patient for high blood pressure?" -- the kind of
       | 'reasoning with uncertainty' that an A.I. can do better than you.
       | 
       | As for radiology the problem is that images are limited in what
       | they can resolve. Tumors can hide in the twisty passages of the
       | abdomen and imaging by MRI is frequently inconclusive in common
       | sorts of pain such as back pain, knee pain, shoulder pain, neck
       | pain and ass pain.
        
         | jquaint wrote:
         | > The vanguard of medicine is "patient centered" and tries to
         | feed back statistics to help in decisions like "what pill do I
         | prescribe this patient for high blood pressure?" -- the kind of
         | 'reasoning with uncertainty' that an A.I. can do better than
         | you.
         | 
         | I think this illustrates why AI in medicine is a hard problem.
         | I'm not actually sure this is a clear cut AI/Statistics
         | problem.
         | 
         | Mainly because "what pill do I prescribe this patient for high
         | blood pressure?" has lots of hidden questions.
         | 
         | AI solves "what pill will statistically lead to a higher
         | survival rate", but that is not the only consideration.
         | 
         | Often doctors have to balance side effects and other
         | treatments.
         | 
         | What is easier for the patient: a lifestyle change to reduce
         | blood pressure, or enduring the side effects of the pill?
         | 
         | This type of question is quite difficult for our AIs to answer
         | at the moment.
         | 
         | Most drugs have side effects that are hard to objectively
         | measure the impact of.
        
           | nradov wrote:
           | There are also coverage rules to consider. Payers often
           | require providers to try less expensive treatments first and
           | will only authorize more expensive pills if the patient
           | doesn't respond well.
        
         | kspacewalk2 wrote:
         | "AI" can't even perform as well as humans (despite plenty of
         | promises) in a field like radiology. The idea of an AI family
         | doc system or ER doc system actually making diagnoses (instead
         | of being a glorified productivity tool*) is downright
         | hilarious. Lots and lots of luck interpreting barely coherent,
         | contradictory and often misleading inputs from patients,
         | dealing with lost records or typos, etc.
         | 
         | Doctors don't get paid the big bucks for rule-based solutions
         | based on rote memorization. They get paid the big bucks to
         | understand when it's inappropriate to rely on them.
         | 
         | * which IS a worthy goal to aspire to and actually helpful
        
           | IdiocyInAction wrote:
           | > "AI" can't even perform as well as humans (despite plenty
           | of promises) in a field like radiology. The idea of an AI
           | family doc system or ER doc system actually making diagnoses
           | (instead of being a glorified productivity tool*) is
           | downright hilarious. Lots and lots of luck interpreting
           | barely coherent, contradictory and often misleading inputs
           | from patients, dealing with lost records or typos, etc.
           | 
           | I think the future of that might be with wearables like the
           | Apple Watch. While it probably won't replace doctors
           | wholesale, applying ML to the data gathered from various
           | sensors continuously seems like a much better promise to me.
        
           | visarga wrote:
           | > They get paid the big bucks to understand when it's
           | inappropriate to rely on them.
           | 
           | An automated system could record and analyze more outcome and
           | biometric data than a group of doctors, over time obtaining
           | more experience about when to apply the various medical
           | rules and when not to. Human experience doesn't scale like a
           | dataset
           | or a model.
           | 
           | I bet some diagnostics could be correctly predicted by a
           | model that a human can't understand, especially if they
           | require manipulating more information than a human can hold
           | at once.
        
         | mcguire wrote:
         | AI Winter (https://en.wikipedia.org/wiki/AI_winter).
         | 1966: failure of machine translation
         | 1970: abandonment of connectionism
         | Period of overlapping trends:
         |   1971-75: DARPA's frustration with the Speech Understanding
         |   Research program at Carnegie Mellon University
         |   1973: large decrease in AI research in the United Kingdom in
         |   response to the Lighthill report
         |   1973-74: DARPA's cutbacks to academic AI research in general
         | 1987: collapse of the LISP machine market
         | 1988: cancellation of new spending on AI by the Strategic
         |   Computing Initiative
         | 1993: resistance to new expert systems deployment and
         |   maintenance
         | 1990s: end of the Fifth Generation computer project's original
         |   goals
         | 
         | I got my bachelors in 1990, and took a lot of classes in AI
         | around that time. Have you ever worked with an expert system
         | like Mycin? It is really quite difficult to pull out an
         | expert's knowledge, rules of thumb, and experience-based
         | intuitions. Difficult and expensive. Those that were not
         | tightly focused on a limited domain were also generally not
         | satisfactory, and those that were, failed hilariously if any one
         | parameter was outside the system's model.
         | 
         | Yes, doctors have a lot of cultural baggage that reduces their
         | effectiveness. But there's a completely different reason why AI
         | has not replaced them. After many, many attempts.
        
           | PaulHoule wrote:
           | Connectionism is back with a vengeance. It still struggles
           | with text but vision problems like 'detect pedestrian with
           | camera and turn on the persistence-of-vision lightstrip at
           | the right time' are solved.
           | 
           | Many expert systems were based on "production rules" and it's
           | a strange story that we have production rules engines that
           | are orders of magnitude more scalable than what we had in the
           | 1980s. Between improved RETE and "look it up in the
           | hashtable" it has been a revolution, but production rules have
           | not escaped beyond a few special applications such as banking.
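For readers unfamiliar with the term, a production-rule engine just forward-chains if-then rules over a set of facts. A naive sketch (nothing like an optimized RETE network, and the medical facts below are invented):

```python
# Naive forward-chaining over production rules: keep firing any rule
# whose conditions hold until no new facts appear. Illustrative only.
facts = {"fever", "cough"}
rules = [
    ({"fever", "cough"}, "suspect_infection"),
    ({"suspect_infection"}, "order_xray"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(sorted(facts))
# ['cough', 'fever', 'order_xray', 'suspect_infection']
```

RETE's contribution is avoiding this loop's re-testing of every rule against every fact on every pass.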
           | 
           | Talk to a veteran of a "business rules" project and about 50%
           | of the time they will tell you it was a success, the other
           | 50% of the time they made mistakes up front and went into the
           | weeds.
           | 
           | Machine learners today repeat the same experiments with the
           | same data sets... That doesn't get you into commercially
           | useful terrain.
           | 
           | Cleaning up a data set and defining the problem such that it
           | can be classified accurately is painful in the exact same way
           | extracting rules out of the expert is painful. It's closely
           | related to the concept of "unit test" but it is still a
           | stretch to convince financial messaging experts to publish a
           | set of sample messages for a standard with a high degree of
           | coverage. You can do neat things with text if you can get
           | 1000 to 20,000 labeled samples, but most people give up
           | around 10.
        
         | [deleted]
        
         | [deleted]
        
         | dm319 wrote:
         | | Docs want to keep their feeling of autonomy despite much
         | medical knowledge being rote memorization and rule-based.
         | 
         | There is so much arrogance and ignorance in this thread.
        
       | chsasank wrote:
       | Let me start with declaring conflict of interest: I work in one
       | of the aforementioned AI startups, qure.ai. Bear with my long
       | comment.
       | 
       | AI _is_ starting to revolutionise radiology and imaging, just not
       | in the ways we think. You would imagine radiologists getting
       | replaced by some automatic algorithm and we stop training
       | radiologists thereafter. This is not gonna happen anytime soon.
       | Besides, there's not much to gain by doing that. If there are
       | already trained radiologists in a hospital, it's pretty dumb to
       | replace them with AI IMO.
       | 
       | AI instead is revolutionising imaging in a different way.
       | Whenever we imagine AI for radiology, you probably imagine a dark
       | room, scanners and films. I appeal to you to imagine the patient
       | instead, and the point of care. Imaging is one of the best
       | diagnostics out there: non invasive and you can actually _see_
       | what is happening inside the body without opening it up. Are we
       | training enough radiologists to support this diagnostic panacea?
       | In other words, is imaging limited by the growth of radiologists?
       | 
       | Data does suggest a lack of radiologists, especially in lower-
       | and middle-income countries.[1] Most of the world's population
       | lives in these countries. In these countries, hospitals can
       | afford CT or X-Ray scanners (at least the pre-owned ones) but
       | can't afford having a radiologist on premise. In India, there are
       | roughly 10 radiologists per million.[2] (For comparison, the US has ~
       | 10x more radiologists.) Are enough imaging exams being ordered by
       | these 10 radiologists? What point is there to 'enhance' or
       | 'replace' these 10 radiologists?
       | 
       | So, coming to my point: AI will create _new_ care pathways and
       | will revolutionize imaging by allowing more scans to be ordered.
       | And this is happening as we speak. In March 2021, WHO released
       | guidelines saying that AI can be used as an alternative to human
       | readers for X-Rays in the tuberculosis (TB) screening [3]. It
       | turns out AI is both more sensitive and specific than a human
       | reader (see table 4 in [3]). Because TB is not a 'rich country
       | disease', nobody noticed this, likely including the author. Does this
       | directive hurt radiologists? Nope, because there are none to be
       | hurt: Most of the TB cases are in rural areas and no radiologist
       | will travel to a random nowhere village in Vietnam. This means more
       | X-rays can be ordered, more patients treated, all without taking
       | on the burden of training ultra-specialists for 10 years.
       | 
       | References:
       | 
       | 1. https://twitter.com/mattlungrenMD/status/1382355232601079811
       | 
       | 2.
       | https://health.economictimes.indiatimes.com/news/industry/th...
       | 
       | 3.
       | https://apps.who.int/iris/bitstream/handle/10665/340255/9789...
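For reference, the two metrics in that comparison are computed like this (the counts below are invented for illustration, not the WHO table's numbers):

```python
# Sensitivity: fraction of true TB cases the reader flags.
# Specificity: fraction of healthy scans the reader clears.
# All counts are hypothetical.

def sens_spec(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

# Made-up screening results: 1000 X-rays, 100 with TB
ai_sens, ai_spec = sens_spec(tp=90, fn=10, tn=855, fp=45)
hu_sens, hu_spec = sens_spec(tp=85, fn=15, tn=810, fp=90)

print(f"AI    sensitivity={ai_sens:.2f} specificity={ai_spec:.2f}")
print(f"Human sensitivity={hu_sens:.2f} specificity={hu_spec:.2f}")
```

Being better on both axes at once, as the WHO table reports for AI readers, means strictly fewer missed cases and fewer false alarms.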
        
       | wheresvic4 wrote:
       | It's very interesting to see this on HN because we're actively
       | working in this space, albeit on building a training platform, but
       | the long-term goal is to generate models that can outperform the
       | current ones that require a lot of expert input.
       | 
       | Shameless plug: https://www.rapmed.net
        
       | mrfusion wrote:
       | Isn't this simply a case of over regulation?
        
       | TaupeRanger wrote:
       | How could it? We find most things we're capable of finding.
       | Medicine needs more treatments, cures, and prevention techniques,
       | not more diagnosis.
        
       | blackvelvet wrote:
       | Radiologist here with an interest in this topic. I think the
       | problem with most AI applications in radiology thus far is that
       | they simply don't add enough value to the system to gain
       | widespread use. If something truly revolutionary comes along, and
       | it causes a clinical benefit, healthcare systems will shift to
       | adopt it in a few years. AI just hasn't lived up to its
       | promise, and I agree it's because most of the people involved
       | don't get that the job of a radiologist is way more complex than
       | they think it is.
       | 
       | Every time I open a journal, I see more examples of either
       | downright AI nonsense ('We used AI to detect COVID by the sounds
       | of a cough') or stuff that's just cooked up in a lab somewhere
       | for a publication ('Our algorithm can detect pathology X with an
       | accuracy of 95%, here's our AUC').
       | 
       | Hyperbolic headlines - Geoff Hinton saying in 2016 that it's time
       | to stop training radiologists springs to mind - feed the
       | overpromise of AI, which then shoots itself in the foot
       | when it underdelivers.
       | 
       | Earlier discussions about radiologists being self interested in
       | sabotaging AI is tinfoil hat stuff - if I had an AI algorithm in
       | the morning that could sort out the 20 lung nodules in a scan, or
       | tell me which MS plaque is new in a field of 40, I'd be able to
       | report twice as many scans and make twice as much money.
       | 
       | Companies come along every month promising their AI pixie dust is
       | going to improve your life. It probably will, but 10 years from
       | now, not today. The AI Rad companies are caught in an endless
       | hype cycle of overpromising and under delivering.
        
         | ska wrote:
         | > self interested in sabotaging AI is tinfoil hat stuff
         | 
         | Agree this is nonsense. Not a radiologist but have worked with
         | many.
         | 
         | The big barriers to AI impact in radiology are a) translation
         | is a lot harder than people think, b) access to enough high
         | quality data with good cohort characteristics, c) good labeling
         | (most of the interesting problems aren't really amenable to
         | unsupervised), and d) generalization, as always.
         | 
         | It doesn't help that for the most part medical device companies
         | aren't good at algorithms and algorithms companies aren't good
         | at devices, lots of rookie mistakes made on both sides.
        
           | blackvelvet wrote:
           | Also PACS isn't designed to implement algorithms. PACS is
           | legacy software that is, by and large, terrible.
        
             | ska wrote:
             | > Also PACS isn't designed to implement algorithms.
             | 
             | That doesn't really matter too much from the implementing-
             | ML point of view, you can just use it as a file store.
             | DICOM files themselves are annoying too (especially if they
             | bury stuff in private tags), as are HL7 (and EMR
             | integrations) but .. that's mostly just work.
             | 
             | Agree the viewers lack flexibility but that's a lot more
             | solvable than say the morass of EMR. If you are just
             | looking at image interpretation, visualizing things isn't so
             | bad, if you had the models to visualize.
        
       | carbocation wrote:
       | I think that the role of radiologists in the medical system is
       | misunderstood. Radiologists are consultants. Yes, in some cases -
       | many cases, even - you just want an answer to a
       | specific, common question from an imaging study. And in those
       | cases, I am sure that deep learning-based readings will do a fine
       | job. But for more diffuse inquiries, or for times when there is
       | disagreement or uncertainty over a reading, radiologists are
       | wonderful colleagues to engage in discussion.
       | 
       | I'm not super interested in predicting employment trends, but
       | it's hard to imagine a world where the radiologist-as-consultant
       | disappears.
        
         | gowld wrote:
         | > Yes, in some cases - many cases, even - you just want an
         | answer to a specific, common question from an
         | imaging study
         | 
         | And these questions are already outsourced to India.
        
         | koheripbal wrote:
         | It's an oversimplification to say AI will replace xyz job.
         | 
         | It seems more likely that AI will simply sift through the data
         | more thoroughly and look holistically to catch things a
         | radiologist might miss.
         | 
         | A radiologist, for example, might miss spotting a small tumor
         | in an x-ray taken for an unrelated hip injury.
         | 
         | AI has a lot of complementary value.
        
           | cloverich wrote:
           | It is a good point but I think one part is a bit backward in
           | an ironic way. One reason AI hasn't replaced radiologists is
           | because radiologists are typically very good physicians, and
           | specifically do not look at images in isolation but review
           | the record, talk with the physicians, sometimes the patients
           | too, etc. So it's actually backwards (in some cases) -- AI
           | struggles because it looks at the image in isolation, while
           | the radiologist is looking at the patient more holistically.
        
             | koheripbal wrote:
             | Radiologists don't talk to patients (usually), so there's
             | no reason why AI cannot be given all the same patient data.
             | ...although reading doctors' notes is probably a whole
             | 'nother AI program.
        
               | ska wrote:
               | > ...although reading doctors' notes is probably a whole
               | 'nother AI program.
               | 
               | yep. One that also has a long history, and a lot of
                | current players - and nobody has really good traction
               | there either.
        
           | pharmakom wrote:
           | I don't see radiology work decreasing either. Instead, I
           | think it will serve more people but at lower cost per person.
           | No one will skip medical services if they can afford them,
           | but currently prices are high. Imagine a future where a
           | radiologist serves 10x customers as before by leveraging
           | smart technologies, for similar overall compensation.
        
           | dx034 wrote:
           | But aren't we already over-diagnosing some cancers? Spotting
           | more tiny tumors in unrelated images might do more harm
           | (through procedures/treatment) than ignoring them. I'm not
           | sure if we're really better off detecting every anomaly in
           | someone's body.
        
             | vecter wrote:
             | Why would we not want to know about a tumor in the body? I
             | assume competent doctors will assess the risk of such a
             | thing, but knowing about it is better than not.
        
               | nradov wrote:
               | Everyone gets cancer eventually, it's inevitable if you
               | live long enough. There's no point in knowing that a
               | small, slow growing tumor will kill you in 10 years if a
               | heart attack is going to kill you in 5 years anyway.
               | Knowing about the tumor just creates more psychological
               | stress and potentially extra unnecessary medical
               | treatments for no benefit.
        
               | vharuck wrote:
               | Doctors will optimize for patient outcomes, usually by
               | doing all they can. Sometimes, this doesn't scale well.
                | For example, the US Preventive Services Task Force
               | stopped recommending routine PSA screening among
               | asymptomatic patients to detect prostate cancer in 2012.
               | They based their decision on a careful review of medical
               | research, noting the screening didn't have much of an
               | effect on mortality but could cause stress or invasive
               | follow-up tests. Urologists generally opposed the
               | decision. The USPSTF has since walked it back to, "Talk
               | about the risks and benefits." I've looked at survey
               | results for my state, and the numbers indicate a good
               | proportion of men are told the benefits of a PSA but not
                | any risks.
               | 
               | Patients are even less reasonable. If you tell somebody
               | they have a tumor, they will now have a constant stress.
               | If you say "cancer," they'll likely undergo expensive and
               | potentially harmful treatment, even if "watch and wait"
               | was a totally valid choice (e.g., slow-developing
               | prostate cancer for very old men). Remember how Angelina
               | Jolie had a double mastectomy after being told she had a
               | good chance of developing breast cancer? That behavior
                | would lead to a lot of unnecessary pain, debt, and
                | lower-quality lives if it became normal.
               | 
                | It'd be hard if not impossible to ask doctors not to
                | share knowledge of a tumor with patients. But in some cases
               | we intentionally ask them not to go looking for tumors
               | because the expected value of a positive result is a
               | negative impact.
        
             | carbocation wrote:
             | Because modalities like MRI are non-ionizing and therefore
             | not intrinsically harmful, I think it is reasonable to
             | consider a wild extreme: in some future, what if a large
             | group of people underwent imaging every month or every
             | year. It's possible to imagine gaining a very good
             | understanding of which lesions have malignant potential and
             | which ones don't.
             | 
             | The transition period that we are in now - where we are
             | gaining information but not yet sure how to act on all of
             | it - is painful. There are a lot of presumably unnecessary
             | follow-up procedures. But it's possible that at some future
             | point, we'll understand that 0.8mm nodules come and go
             | throughout a lifetime and don't merit any action, whereas
             | {some specific nodule in some specific location} almost
             | always progresses to disease.
             | 
             | Obviously what I'm describing is research, and so I'm not
             | saying that we should treat clinical protocols differently
             | right now. But I think it's not too hard to imagine that we
             | can get to a point where we have a very good idea about
             | which lesions to follow/treat and which lesions to leave
             | be.
        
           | carbocation wrote:
           | I agree with you and with the sibling comment!
        
         | newyankee wrote:
         | I really respect the intensity of training all medical
         | practitioners have and the responsibility society puts on them.
          | However, I think there is an urgent need to reform medical
         | systems to leverage all the new trends in a responsible manner.
         | Augmenting the capabilities of Doctors is one way, but better
         | frictionless anonymised data sharing can also be very useful.
          | However, besides resistance from incumbents, what prevents a
          | new approach from succeeding is that it is difficult to
          | determine the winners & losers of new approaches, and larger
          | players likely have better chances of success than many
          | smaller ones.
        
       | screye wrote:
       | " _ML promises to revolutionize 'X' because of the explosion of
       | data in the modern era._"
       | 
       | Outside of some singularity whack-jobs, that's always been the
       | promise. The explosion of data in the field is a necessary
       | requirement.
       | 
        | Healthcare fields make it nigh impossible to access data in a way
       | that will allow for fast prototyping or detailed experimentation.
       | This isn't just about privacy either. Each Hospital treats even
       | anonymized samples as a prospective source of income and a
       | competitive advantage. I understand why they do it from a profit
       | motive perspective, but it is certainly being traded off against
       | prospective decreases in healthcare prices and significantly
       | improved diagnostics.
       | 
        | ML revolutionized Vision because of ImageNet and COCO. ML
       | revolutionized Language when Google scraped the entire internet
       | for BERT. Graph neural networks have started working now that
        | they're being run on internet sized knowledge graphs. Even self-
        | driving companies know that the key to autonomous driving
       | lies in the data and not the models. (Karpathy goes into
       | intricate detail here during his talks)
       | 
       | If a field wishes to claim that ML has failed to revolutionize
       | it, I would ask it to first meet the one requirement ML needs
        | satisfied: Large-scale, publicly(-ish) available labelled data.
       | The sad thing is that Healthcare is not incapable of providing
       | this. It's just that the individual players do not want to
       | cooperate to make it happen.
        
       | mpreda wrote:
       | Spelling: "it's failing" instead of "its failing", in the title
       | no less.
        
       | AngeloAnolin wrote:
       | The intrinsic uniqueness of human physiology and the differing
       | assessments made by health practitioners make this area of
       | medicine quite challenging.
       | 
       | This is compounded by the fact that different device
       | manufacturers in the field of radiology each has their own
       | proprietary technology that delivers different medical imaging
       | analysis.
       | 
        | While a lot of headway has been made in terms of data
        | interchange, the race among the multitude of players in this
        | area of medicine means that each will always try to proclaim
        | itself more revolutionary and innovative than the rest.
        
       | ziofill wrote:
       | and so is grammar
        
       | new299 wrote:
       | There was some interesting work recently published in nature on
       | augmenting therapy selection:
       | 
       | https://www.nature.com/articles/s41591-021-01359-w
       | 
       | "Overall, 89% of ML-generated RT plans were considered clinically
       | acceptable and 72% were selected over human-generated RT plans in
       | head-to-head comparisons."
       | 
        | This seems like it could be a way forward, where AI is used to
        | propose alternatives and improve patient outcomes.
        
         | boleary-gl wrote:
         | That's the way it is used today - for instance in mammography
         | there is Computer aided detection (CAD):
         | 
         | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1665219/
         | 
         | That's been in use for some time. But like many parts of
         | radiology it really only can be a second look tool that as you
         | mentioned proposes alternatives or suggests things. The false
         | positive rate for CAD is substantially higher than for humans
         | because of the human ability to see symmetry and patterns in
         | very diverse tissue sets like one sees in screening
         | mammography.
         | 
         | And the nature of screening tests like mammography means that
          | actually percentages like "89%" aren't really good enough. You
         | have to be more specific and sensitive than that to have a
         | successful program, and I'm not sure that ML will ever be able
         | to get there...there's a lot of experience and human intuition
         | involved at some point that would be hard to replicate...and I
         | know that because people have been trying to do that for
         | decades.
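The base-rate point above can be made concrete with Bayes' rule: at the low prevalence of a screening population, even a reader that is 89% sensitive and 89% specific produces mostly false positives. A quick illustrative sketch (the prevalence figure is an assumption for the example, not a number from the article or the paper):

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(disease | positive result), via Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Screening mammography turns up very roughly 5 cancers per 1000
# exams, i.e. a prevalence around 0.5% (illustrative figure).
ppv = positive_predictive_value(0.89, 0.89, 0.005)
print(f"{ppv:.1%}")  # → 3.9%
```

In other words, at screening prevalence roughly 24 of every 25 positive calls would be false alarms, which is why accuracy-style percentages alone say little about whether a screening program is viable.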
        
           | ska wrote:
           | The comparisons are pretty tricky to do right, especially
           | with systems that have been trained with the assumption that
           | they are operating as "a second check". For what it's worth,
           | that language was popularized by the first such system
           | approved to market by FDA, in the mid-late 90s. It had,
           | amongst other things, a NN stage.
           | 
           | Even at that time, such systems were better than some
           | radiologists at most tasks, and most radiologists at some
           | tasks - but breadth was the problem, as was generalization
           | over a lot of different set ups.
           | 
           | I think this is more a data problem than an algorithmic one.
           | With something as narrow as screening mammo CAD (very
           | different than diagnostic), it's quite plausible that it
           | could become a more effective "1st pass" tool than humans _on
            | average_, but to get there would involve unprecedented data
           | sharing and access (that 1st system was trained on a few
           | thousand 2d sets, nothing like enough to capture sample
           | variability)
        
       | helsinkiandrew wrote:
       | > Geoffrey Hinton ... in 2016, "We should stop training
       | radiologists now, it's just completely obvious within five years
       | deep learning is going to do better than radiologists."
       | 
       | >... Indeed, there is now a shortage of radiologists that is
       | predicted to increase over the next decade.
       | 
       | I hope those two aren't related. It's ok to bet your wealth on
        | some technology or business existing or not in 5 years' time, but
       | people's lives and health are more important
        
       | Inhibit wrote:
       | Am I missing something or does this look like poor uptake for
       | (possibly) reasons other than performance? The article cites a
       | lack of use as justification for the assumption that the model
       | doesn't work. It might just take time.
       | 
       | Especially in medical. A century (+) ago hand-washing took time
        | to adopt, unless I'm misremembering.
        
         | username_my1 wrote:
         | but the article is right in saying that AI advocates said no
         | need for any new radiologists.
         | 
          | I know this from the field of vehicle damage assessment: AI
          | is good but not good enough to take over... and that has
          | been the case for a long while, yet every now and then a new
          | company / product comes along saying that the future is AI
          | and there's no need for human effort.
        
           | Inhibit wrote:
           | True. I was internalizing "no new need" as hyperbole because
           | it didn't make sense (given the reality of medical) but
           | that's my mistake.
           | 
           | On re-reading, a highly specialized and entrenched workforce
           | having a 30% uptake on a new technology in only a few years
           | seems phenomenal.
        
         | koheripbal wrote:
         | Advancements in Medicine happen in spurts because of the
          | regulatory review process and risk aversion.
         | 
         | The advance must either be VERY significantly better to warrant
         | the approval process, OR an extremely low risk incremental
         | change.
         | 
         | So what we end up with is this sputtering of tiny and big
         | advances.
        
         | sambe wrote:
         | Both that and "people refuse to use technology that makes them
         | redundant" indeed seem to be hinted at. However, one of the
         | quotes says that poor performance is the reason and if you
         | click through to the original article, it seems fairly clear
         | that there is a problem with the existing systems adapting
         | poorly to different setups.
        
           | kspacewalk2 wrote:
           | It's not about "poor performance", in the same way as poor
           | IDE performance isn't really the root cause of my laptop's
           | stubborn inability to write good Python code. Radiology is
           | about using a generalist medical education to diagnose (or be
           | instrumental in diagnosing) patients. Pattern matching or
           | statistical information are a rather modest subset of that
           | skillset.
        
       | tlear wrote:
        | A few years back I contracted for an AI startup. Long story
        | short: we ran a simple test comparing one annotator radiologist
        | with 15 years of experience against another (similar amount of
        | experience) over 50 or so CT scans. They agreed only about 60%
        | of the time, lol - and I mean easily spottable "things" that
        | one annotated as potentially malignant nodules and the other
        | not at all because they were "scars".
        | 
        | That's when I knew we did not know wtf we were doing.
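A 60% raw agreement rate overstates how well two readers align, since some agreement happens by chance. Cohen's kappa corrects for that; here is a minimal sketch with made-up labels (illustrative only, not the actual annotation data from the anecdote):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of cases where the raters agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    p_e = sum(freq_a[k] * freq_b[k] for k in freq_a) / n**2
    return (p_o - p_e) / (1 - p_e)

# Toy example: 10 scans labelled "nodule"/"scar" by two readers
# who agree on 6 of 10 (60%), as in the anecdote above.
a = ["nodule"] * 5 + ["scar"] * 5
b = ["nodule", "nodule", "nodule", "scar", "scar",
     "scar", "scar", "scar", "nodule", "nodule"]
print(round(cohens_kappa(a, b), 2))  # → 0.2
```

With balanced label frequencies like these, 60% raw agreement works out to a kappa of only 0.2, conventionally read as "slight" agreement - weak ground truth for training any model.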
        
         | ska wrote:
         | Reader variability is one of the many things that make this
          | stuff a lot harder than it looks from the outside.
        
       | yurlungur wrote:
       | At the end of the day it's the human's duty to provide a
       | diagnosis. The lack of a complete product solution sure isn't
       | helping, but even if there's a company that provides that, most
        | people still wouldn't trust it if it's purely driven by AI. The
       | way AI should go into these fields is to first provide tools for
       | the specialists already in the field, to increase their
       | productivity.
        
       | apercu wrote:
       | The biggest reason corporations want AI to succeed is so they no
       | longer have to share even a meagre percentage of revenue with you
       | any longer.
        
         | gpm wrote:
         | The biggest reason humans want AI to succeed however is so that
         | they can stop spending their time serving other humans in
         | repetitive, mundane, and boring work.
         | 
         | Wealth distribution is a problem that can be solved better or
         | worse with or without AI. Better with AI looks like things
         | along the lines of low working hours with good pay, UBI, or so
         | on, things we simply can't afford without automating the work.
         | Worse without AI looks like things like slavery, something we
         | don't even have incentive to resort to if we do automate the
         | work.
         | 
         | Let's not confuse our current political issues with how we
         | distribute wealth with issues with AI. They're issues with our
         | institutions, some of those issues are exacerbated by better
         | technology, but they can and should be solved.
        
       | ska wrote:
       | This discussion is a lot older than "deep" models. While
       | statements like Hinton's quoted one are obviously silly (often
       | the case with naive takes from deep learning maximalists) there
       | is clearly a lot of room for more impact from algorithms, but I
       | think it's mostly limited by data quality and access.
       | 
       | This is not an easy problem to solve for a range of reasons:
       | privacy, logistics, incentives.
        
       | sebastianvoelkl wrote:
        | I work in a startup that is building software for radiologists
        | that uses AI. From what I've experienced so far, the software
        | is definitely not the problem. Our software is already better at
        | detecting lesions and aneurysms, and we are close on tumors too. Our
       | goal is not to replace the radiologist but rather to decrease the
       | error rate. But there is definitely a difference between training
       | a model at home with perfectly preprocessed data and working with
        | the raw 'real-life' data + monetizing it + making the UX/UI for
       | it etc.
        
       | bjornsing wrote:
       | Or, as they say: "It is difficult to get a man to understand
       | something when his salary depends upon his not understanding it."
        
       | spoonjim wrote:
       | You gotta give it a minute! This isn't like Facebook shipping
       | their latest startup clone where they just slap something
        | together in 10 weeks and call it a day. This will be a multi-
        | decade process.
        
       | jtdev wrote:
       | ML/AI is such an irresistible siren song for so many... the
       | possibilities are seemingly endless. But the "sales and
       | marketing" people are getting over their skis selling AI/ML to
       | the point of smothering the tech. The next AI winter is going to
       | be long and cold...
        
       | [deleted]
        
       | jp57 wrote:
       | I think this article points out the fact that the creation of a
       | technology in the lab, and its effective operational deployment
       | are two different problems, both difficult, requiring different
       | skills and resources.
       | 
       | An imperfect-but-instructive analogy would be between vaccine
       | development and vaccine delivery. Once a vaccine has been
       | developed and shown to be safe and effective, the hard work is
       | just beginning. In the case of COVID, billions of doses must be
       | produced, and then delivered to people, the delivery requires not
       | just shipping of the doses, but matching the doses with an equal
       | number (billions) of syringes, hypodermic needles, cotton swabs,
       | band-aids, alcohol swabs, etc. People have to be recruited to
       | deliver the doses, systems must be created to manage the demand
       | and the queues, etc. The operational problem of delivering the
       | vaccine to the world is arguably harder than its creation and
       | testing.
       | 
       | Likewise, the successful operational rollout of an AI-mediated
       | automated or semi-automated decisioning problem is a complex
       | problem requiring a totally different skillset than that of ML
       | researchers in the lab. Computer systems and human procedures
       | have to be created to manage the automation of the decision;
       | decision results must be tracked and errors fed back to the lab
       | to update models. Radiologists (including new radiologists) will
       | of course be needed to understand the errors and provide correct
       | labels, etc. Trust and mindshare in the medical community has to
       | be built up. These things are not easy.
        
       | microdrum wrote:
       | It can be true both that AI beats radiologists AND the number of
       | radiologist jobs in the U.S. is going up.
        
       | justbjarne wrote:
       | The relevant question is not why radiologists aren't widely using
       | AI software, but how accurate is the AI software relative to
       | their human counterparts. Previous studies on the subject
       | indicate that the accuracy of radiologists and AI software is
       | comparable.
       | 
       | Radiologists are among the highest paid medical specialists and
       | have little incentive to use AI software. It would be against
       | their own interest - their compensation would go down, and their
       | skills would be commoditized. Never mind that if they provided
       | diagnosis feedback to the AI software to further strengthen the
       | ANN models it would accelerate the decline of their profession.
       | 
       | HMOs and governments ultimately set the pay scale for this
       | service. Some hospitals are already outsourcing radiology work to
       | India. It's just a matter of time before AI is used more widely
       | in the field due to cost constraints.
        
       | mchusma wrote:
       | These systems are basically banned from working as actual
       | replacements for radiologists, so it's no surprise they aren't
       | yet. We have repeatedly proved the superiority of expert systems
        | (in the 90s) and AI at select medical tasks. However, there is a
       | legal monopoly (in the US) that requires most medical tasks to be
       | performed by expensive doctors.
       | 
       | If people were able to use these tools directly, we could see
       | dramatically better results because we would be giving people
       | decent healthcare at basically zero cost. Cost is by far the
       | biggest problem in healthcare today. Low cost would change
       | behavior in a large number of medical tasks, and early detection
       | of cancer is the most obvious. If you could get a mediocre
       | readout for free, you would probably do so more often. Cancer in
       | particular is almost entirely an early detection problem.
       | 
       | Using AI to assist radiologists is probably never going to be a
       | huge thing. Just like AI assisted truck driving is never going to
       | be huge (because it doesn't solve the core problem).
        
         | ceejayoz wrote:
         | > We have repeatedly proved the superiority of expert systems
          | (in the 90s) and AI at select medical tasks.
         | 
          | Part of the problem here, though, is a human radiologist may
          | look at an "is the bone broken?" x-ray and go "yeah,
          | obviously, but what's this subtle spot here?" and find an
          | early stage bone cancer or something along those lines.
          | There's a value to that.
         | 
          | The AI might give you the right answer, too, but miss the
          | more subtle issue it isn't equipped to spot.
        
       | boringg wrote:
       | Does anyone know the collective time and energy put into deep
       | learning models versus the social benefit? I recognize that this
       | is near impossible to calculate and the benefits will hopefully
       | be for many years to come.
       | 
       | It does feel like the hype around deep learning has been large
       | though and significant progress has been not as sticky as hoped.
        
       | toolslive wrote:
        | I did a project in this AI domain a decade ago: look at
        | different scans and then decide whether a pixel is cancer
        | tissue or not. The project succeeded, and I'm pretty sure some
        | radiologist is enjoying his 4-hour work week.
        
       | [deleted]
        
       | mikesabbagh wrote:
       | >Many companies/etc keep promising to "revolutionize"
       | 
       | I think the technology is already here, but society does not
       | allow technology to fail at a similar rate as a human. Also the
       | second question, who is to blame when it fails? Doctors have
       | malpractice insurance. A radiologist has to sign every report(and
       | get paid). When a tesla auto-pilot has an accident, it hits the
       | news. This is while humanity is having thousands of accidents a
       | day.
       | 
        | Mammography is the most difficult radiograph to interpret.
        | Can't we start with regular chest x-rays? How about bone
        | fractures and trauma x-rays? Those are easier, and I am sure
        | the cost of such an x-ray would be very low.
       | 
        | So I think the problem is political and legal.
       | 
        | Do you know that 80% of doctor visits are for simple
        | complaints like headache, back pain, or prescription refills?
        | Do you really think AI can't solve this?
       | 
       | It is all about the money baby
        
       | williesleg wrote:
       | Thanks H1B Visa lottery winners.
        
       | ipspam wrote:
       | That guy, Jovan Pulitzer, of election audit fame, claims to have
       | patents on this. Not saying anything.... Just it's a popular
       | field, and seems like lots of people piling in...... Without
       | expected results....
        
       | throwthere wrote:
       | The original article referred to by the blog post is here from
       | last week-- https://qz.com/2016153/ai-promised-to-revolutionize-
       | radiolog... .
       | 
       | The conclusion is that AI will revolutionize radiology. It's just
       | that nobody knows when. And it's not like there's some
       | socioeconomic or whatever barrier preventing AI from being used
       | (as an aside, there are barriers of course)-- it's simply that AI
       | isn't good enough yet.
       | 
       | It's not a surprise to anyone who relies on radiologists and has
       | reviewed the current AI state of the art. Yes, with machine X on
       | patients meeting criteria Y, you can rule out specific disease Z.
       | But the algorithms don't generalize very well. It's like
       | declaring you'll have self-driving cars in 5 years because you
       | can drive one straight on a highway in sunny Arizona that only
       | occasionally causes fatal crashes.
        
       | throwitaway1235 wrote:
       | "only about 11% of radiologists used AI for image interpretation
       | in a clinical practice. Of those not using AI, 72% have no plans
       | to do so"
       | 
        | Highly intelligent people, a group radiologists certainly fall
        | into, are not going to adopt technology clearly aimed at
        | replacing them.
        
       | justbored123 wrote:
       | "only about 11% of radiologists used AI for image interpretation
       | in a clinical practice. Of those not using AI, 72% have no plans
       | to do so while approximately"
       | 
        | I hope the author is joking. Does he really expect the
        | technology to be pushed forward by the very people it is going
        | to replace??? They are your main obstacle after the technical
        | issues; don't use their adoption as a metric.
       | 
        | It is just crazy to say to people, "hey, I'm going to make
        | your profession obsolete and take your job and status in
        | society away by implementing this new tech that is just way
        | better than you - help me do it".
        
       | gowld wrote:
       | Are ignored research papers a source of startup ideas?
       | 
        | Should entrepreneurs build products based on this research, to
        | sell or get acquired by incumbents?
        
       | jofer wrote:
       | There are many parallels with seismic interpretation here. Many
       | companies/etc keep promising to "revolutionize" interpretation
       | and remove the need for the "tedious" work of a
       | geologist/geophysicist. This is very appealing to management for
       | a wide variety of reasons, so it gets a lot of funding.
       | 
       | What folks miss is that an interpreter _isn't_ just drawing lines
       | / picking reflectors. That's less than 1% of the time spent, if
       | you're doing it right.
       | 
       | Instead, the interpreter's role is to incorporate all of the
       | information from _outside_ the image. E.g. "sure we see this
       | here, but it can't be X because we see Y in this other area", or
       | "there must be a fault in this unimaged area because we see a
       | fold 10 km away".
       | 
       | By definition, you're in a data poor environment. The features
       | you're interested in are almost never what's clearly imaged --
       | instead, you're predicting what's in that unimaged, "mushy" area
       | over there through fundamental laws of physics like conservation
       | of mass and understanding of the larger regional context. Those
       | are deeply difficult to incorporate in machine learning in
       | practice.
       | 
       | Put a different way, the role is not to come up with a reasonable
       | realization from an image or detect feature X in an image. It's
       | to outline the entire space of physically valid solutions and,
       | most importantly, reject the non-physically valid solutions.
        
         | TuringNYC wrote:
         | >> seismic interpretation here
         | 
          | Strong disagree here. Let's put aside the math and focus on
         | money.
         | 
          | I don't know much about seismic interpretation, but I know a lot
         | about Radiology+CV/ML. I was CTO+CoFounder for three years full
         | time of a venture-backed Radiology+CV/ML startup.
         | 
         | From what I can see, there is a huge conflict of interest w/r/t
         | Radiology (and presumably any medical field) in the US.
          | Radiologists make a lot of money -- and given their jobs are
          | not tied to high-CoL regions (as coders' jobs are), they make
          | even more on a CoL-adjusted basis. Automating these jobs is the
         | equivalent of killing the golden goose.
         | 
         | Further, Radiologists standards of practice are driven partly
         | by their board (The American Board of Radiology) and the
         | _supply of labor_ is also controlled by them (The American
         | Board of Radiology) by way of limited residency spots to train
         | new radiologists.
         | 
         | So Radiologists (or any medical specialist) can essentially
         | control the supply of labor, and control the standards of best
         | practice, essentially allowing continued high salaries by way
         | of artificial scarcity. _WHY ON EARTH WOULD THEY WANT THEIR
         | WORK AUTOMATED AWAY?_
         | 
         | My experience during my startup was lots of radiologists mildly
         | interested in CV/ML/AI, interested in lots of discussions,
         | interested in paid advisory roles, interested in paid CMO
         | figurehead-positions, but mostly dragging their feet and
         | hindering real progress, presumably because of the threat it
         | posed. Every action item was hindered by a variety of players
         | in the ecosystem.
         | 
         | In fact, most of our R&D and testing was done overseas in a
         | more friendly single-payer system. I don't see how the US's
         | fee-for-service model for Radiology is ever compatible with
         | real progress to drive down costs or drive up volume/value.
         | 
         | Not surprisingly, we made a decision to mostly move on. You can
         | see Enlitic (a competitor) didn't do well either, despite its
         | star-studded executive team. Another competitor (to be
         | unnamed) appears to have shifted from models to just
         | licensing data. Same for IBM/Merge.
         | 
         | Going back to seismic interpretation -- it can't be compared
         | to Radiology from a follow-the-money perspective, because
         | seismic interpretation isn't effectively a cartel.
         | 
         | Happy to speak offline if anyone is curious about specific
         | experiences. DM me.
        
           | unsrsly wrote:
           | Interesting, can you give an example of a radiologist
           | hindering progress? You make an interesting point about
           | radiologists setting practice standards - what alternative do
           | you propose? You may also want to consider that radiologists
           | don't determine practice standards in a vacuum - they have to
           | serve the needs and expectations of their clinical
           | colleagues.
        
           | rossdavidh wrote:
            | So, there are lots of countries with a shortage of
            | radiologists, and some of them could probably use any
            | halfway-effective AI solution if the alternative is no
            | radiologist available at all. Perhaps this sort of thing
            | should be started in a medium-income country rather than
            | the wealthiest? Not the ones that cannot afford the
            | equipment at all, but the ones whose trained radiologists
            | keep leaving for wealthier countries.
        
           | WalterBright wrote:
           | > fee-for-service model for Radiology
           | 
           | It's not exactly a fee for service model that's the problem.
           | It's the monopoly over the supply of labor.
           | 
           | Any business _and union_ that manages to get its competition
           | outlawed is _guaranteed_ to abuse that position.
        
           | cogman10 wrote:
           | I see a lot of parallels to this and airline control towers.
           | 
            | I think most can see that what control towers do is highly
            | automatable (it's a queue), yet the industry has sabotaged
            | automation at every turn, because the jobs it would
            | eliminate are the very ones needed to verify the system
            | works correctly.
           | 
            | A similar thing happens with train operators. We are busy
            | building self-driving cars, yet a self-driving train is
            | almost trivial to implement; we don't do it because the
            | train operators would never get on board with such a
            | system.
        
             | bonoboTP wrote:
              | Self-driving subways do exist in a number of cities. But
              | generally, train operators aren't a huge cost. While a
              | taxi driver serves 1-2 customers at a time, a conductor
              | can easily move around a thousand people in a train. And
              | compared with trucks, freight trains can be ridiculously
              | long, so paying one guy is really nothing in comparison.
        
           | mola wrote:
            | For a short-term-oriented, VC-funded startup, any
            | professional who likes to err on the side of caution is
            | immediately looked upon as a corrupt actor hindering
            | progress for personal gain.
        
             | Fomite wrote:
              | "Move fast and break things" is less compelling when the
              | thing in question is someone's grandmother.
        
             | TuringNYC wrote:
              | You can just look at this from the outside, w/o any
              | startup's opinions: why are some products' and services'
              | costs growing faster than, and out of step with, the
              | rest of the economy? Some example niches come to mind:
              | college textbooks, college tuition, specialty medical
              | costs.
             | 
             | No one on the outside has to opine -- you can just look at
             | the prices for some of these and know there are abnormal
             | market forces at work.
        
           | jollybean wrote:
           | Family member worked on a board of a hospital, they raised
           | charitable donations to buy more powerful software for the
           | radiologist.
           | 
           | The radiologist can now work 10x faster and still bills the
           | hospital the same amount.
           | 
           | The Doctor's Guild is exceedingly powerful.
           | 
            | I really wish Walmart or Amazon would get into providing
            | healthcare services on the long tail -- a lot of the
            | common stuff.
            | 
            | It sounds odd, but both those companies are built around
            | ripping margins out of the value chain and not keeping
            | much for themselves.
           | 
           | Ok, maybe not either of them ... but something like that: The
           | 'Walmart of Healthcare' that revolutionizes cost.
           | 
           | Also - there are enough Medical Practitioners who would work
           | there. Enough of them do care about patient outcomes, cost
           | etc..
        
             | watwut wrote:
             | You would expect the radiologist to be paid less because
             | he/she does more work now?
             | 
              | My salary did not go down when we migrated to more
              | efficient tech.
        
               | bvrstvr wrote:
               | You're misconstruing the parent's point. The radiologist
               | can be paid the same, even more. Their point is that
               | while the cost (at least in time) of the radiologist's
               | work was cut to 1/10, individual patients' bills remained
               | constant.
        
               | jollybean wrote:
               | They get paid per review. They're now doing considerably
               | more reviews thanks to the new software, paid for by
               | donations.
               | 
               | Cost to patients is the same, but processed faster.
               | Doctor making bank.
        
           | JoeAltmaier wrote:
           | I don't see this as a response to the previous post at all.
           | It was about the technical issues associated with a
           | professional data interpreter, outside the simple image being
           | interpreted. This is just cynicism about money and
           | motivation.
           | 
           | Is Radiology affected by the same external factors as
           | seismology? Does one image area depend deeply on surrounding
           | features? Are there external rules that can override what the
           | image seems to present?
        
           | grogenaut wrote:
           | Would you say this is similar to challenges in other fields
           | such as law?
        
             | TuringNYC wrote:
             | Partly -- law controls standards to some extent, but does
             | not control supply necessarily.
        
             | xkjkls wrote:
             | Lawyers don't really control their own supply the way
             | doctors do, which is why there is a great overabundance of
                | people with law degrees in the country. AI has
                | actually been used in a number of legal contexts, like
                | building normalized contracts or paralegal work. At
                | the same time, a lot of the highly paid legal work is
                | pretty hard to automate, because it requires much more
                | understanding of precedent and other nebulous modes of
                | interpretation that AI isn't suited for.
        
               | dragonwriter wrote:
               | > Lawyers don't really control their own supply the way
               | doctors do
               | 
               | Bar associations do control standards for qualifications
               | and acceptable on-ramp paths which directly governs
               | supply (in fact, the oversupply differs in jurisdictions
               | as a direct result of these decisions).
               | 
               | A key difference is that the legal pipeline isn't
               | sensitive to federal funding to govern supply of
               | qualified new lawyers the way the medical pipeline is for
               | doctors, though; there's nothing analogous in law to the
               | reliance on medicare funding of residency slots in
               | medicine.
        
               | TuringNYC wrote:
               | >> A key difference is that the legal pipeline isn't
               | sensitive to federal funding to govern supply of
               | qualified new lawyers the way the medical pipeline is for
               | doctors, though; there's nothing analogous in law to the
               | reliance on medicare funding of residency slots in
               | medicine.
               | 
                | This is a myth. Residency funding from Medicare is an
                | excuse, because the funding is so little. The real
                | bottleneck here is the number of seats opened up by
                | the specialty medical boards. Residents earn very
                | little (under six figures), yet billings for residents
                | are multiples of that. Even after resident stipends,
                | benefits, tooling, and infra, I'm certain medical
                | billings more than cover costs.
        
               | lavishlatern wrote:
               | The medical pipeline doesn't have to be sensitive to
               | federal funding either. There is nothing preventing
               | residencies from being privately funded (besides the fact
               | that most are currently publicly funded).
               | 
               | Medicare funds this out of a broad idea of it being a
               | public good if there are more physicians. Note, there is
               | no obligation that physicians work in public service
               | after residency. This is in contrast to if you go to med
               | school on a military scholarship (in which case, there is
               | an obligation to serve).
               | 
                | In other words, if medicine weren't a cartel, the
                | government wouldn't need to pay doctors to train new
                | doctors.
        
             | jplr8922 wrote:
              | It's probably a challenge for any profession where there
              | is a legal monopoly -- where service X must be performed
              | by individual Y, who also gets to choose the quality of
              | X and the number of Ys in the market.
        
           | tinomaxgalvin wrote:
           | I hear this sort of argument a lot in different fields.
           | Usually it's because the IT guy doesn't really understand the
           | business they are trying to automate or where the true pinch
           | points or time savings are.
        
             | joe_the_user wrote:
              | The thing about health care is that most efforts to
              | automate it have failed. Arguably that's because no one
              | "understands" the field, in the sense that no one can
              | give a codified summary of the way it operates; each
              | professional in a health-care pipeline takes into
              | account twenty different common variabilities in human
              | body/health/behavior/etc.
             | 
              | It's similar to the situation of self-driving cars,
              | where the ability to do the ordinary task is overwhelmed
              | by the existence of many, many corner cases that can't
              | easily be trained for. Except in health care, corner
              | cases are much more common: just seeking health care is
              | already exceptional relative to ordinary life.
        
             | TuringNYC wrote:
             | Could you provide some examples of fields where
             | practitioners control both supply and standard of practice
             | where automation is also shunned, perpetuating high costs?
             | Also, note, the _largest source of bankruptcy in the US is
             | medical costs_ https://www.cnbc.com/2019/02/11/this-is-the-
             | real-reason-most...
             | 
              | "They don't understand the business" is a great excuse
              | for maintaining the status quo. I'm an engineer, a
              | quant, and a computer scientist by training, and I
              | refuse to accept defeat w/o sound reason. I will if I'm
              | given a good reason, but "go away, you don't understand
              | our business" is defeatist. If we all accepted such
              | answers, society would never progress. I'm sure the
              | horse-carriage makers said the same thing when people
              | tried to invent motor vehicles.
        
               | FredPret wrote:
               | I wonder if you can replace a GP with a decision tree.
               | You could update the tree as new research is done.
               | 
               | If you could collect reliable diagnostic data locally,
               | you could serve this globally and for free.
               | 
               | It would also be a treasure trove of data about how we
               | respond to various treatments.
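                | 
                | A minimal sketch of that decision-tree idea (every
                | feature, threshold, and label below is invented for
                | illustration only, not clinical guidance):

```python
# Hypothetical toy "GP decision tree". As the replies point out,
# real diagnosis rests mostly on patient history, which a rule
# cascade over a few vitals ignores entirely.
def triage(systolic_bp, temp_c, throat_red):
    """Return a hypothetical triage flag from a few measurements."""
    if temp_c >= 38.0:          # fever first: most urgent branch
        return "possible infection: refer"
    if systolic_bp >= 140:      # invented hypertension cutoff
        return "elevated blood pressure: recheck"
    if throat_red:              # from the throat image mentioned above
        return "possible pharyngitis: refer"
    return "no flag: routine follow-up"

print(triage(120, 39.0, False))  # possible infection: refer
print(triage(155, 36.8, False))  # elevated blood pressure: recheck
```

Updating the tree "as new research is done" would mean editing these
branches, which is exactly where the hard part (validation) lives.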
        
               | rscho wrote:
               | > I wonder if you can replace a GP with a decision tree.
               | 
               | No, you can't.
               | 
               | > If you could collect reliable diagnostic data
               | 
               | And there's the reason. You can't do that either. There
               | is a reason why GPs go through medical school.
        
               | FredPret wrote:
               | > No, you can't.
               | 
               | Any sound reason, or are you either a) a defeatist, or b)
               | a GP?
               | 
               | >There is a reason why GPs go through medical school
               | 
               | The input data would be basic things like:
               | 
               | - blood pressure
               | 
               | - weight
               | 
               | - images of the ear canals and throat
               | 
               | - blood, urine, saliva samples, perhaps analyzed in a
               | regional centre
               | 
               | You don't need a ton of training to get the above from a
               | patient and into a computer, and to ship the samples.
        
               | rscho wrote:
               | > Any sound reason
               | 
               | The job of a GP is actually probably one of the top
               | hardest to automate, because the GP's main (and often
               | only) job is to extract information. And that _does not_
               | consist in performing plenty of tests, but in speaking to
               | and most importantly listening to the patient.
               | 
               | > You don't need a ton of training to get the above from
               | a patient and into a computer, and to ship the samples.
               | 
               | Great! And you know what good that would do to improve
               | diagnostic accuracy? Zilch. Zero. There's a saying that
               | '90% of diagnoses are done on history'. Now tell me why
               | that would be different for an algorithm given identical
               | information? If there was a simple answer to that, we'd
               | already be running statistical models over patient labs
               | all day long, which we're not.
               | 
               | > are you either a) a defeatist, or b) a GP?
               | 
                | I'm an epidemiologist and also a practicing
                | anesthesiologist, which is why the statistical
                | theories of people who have never set foot in a clinic
                | to see what the job is really about make me want to
                | jump off a bridge.
        
               | [deleted]
        
               | rayiner wrote:
               | So first of all, you're incorrect about medical costs
                | being the number one reason for bankruptcies:
                | https://www.washingtonpost.com/politics/2019/08/28/sanderss-...
               | 
               | I'll give you a concrete example in the legal field. Big
               | firms might have reasons to avoid labor-saving
               | automation, because they bill by the hour. But a large
               | fraction of legal work isn't billed by the hour, it's
               | contingency work (where the firm gets a certain fraction
               | of a recovery) or fixed fee work. If you're getting paid
               | 1/3 of the amount you recover (a typical contingency fee)
               | you have enormous incentives to do as little work to get
               | a good result as you can. But those firms don't use a lot
               | of legal technology either, because it's just not very
               | good and not very useful.
               | 
               | The bulk of legal practice is about dealing with case-
               | specific facts and legal wrinkles. And machine learning
               | tends not to be useful for that, at least in current
               | forms.
        
               | nonfamous wrote:
               | That WP article doesn't support your claim. It's about
               | the number of bankruptcies, not the leading cause.
               | Nonetheless it does cite a survey that found medical
               | bills contributed to 60+% of bankruptcies, and that it
               | doesn't really make sense to talk about a single cause.
        
               | bhupy wrote:
                | It's a stat that requires _a lot_ of
                | contextualization. To your point, you're absolutely
                | correct that the _number_ of bankruptcies is
                | important, because over the last couple of decades,
                | 1) bankruptcies in general have been falling, and
                | 2) medical bankruptcies have also been falling in
                | absolute terms; but because the denominator has
                | fallen dramatically relative to the numerator, the
                | share looks larger than it actually is.
                | 
                | https://www.theatlantic.com/business/archive/2009/06/elizabe...
                | 
                | In other words, medical bankruptcies have _fallen_ in
                | absolute terms, but you wouldn't know that by just
                | looking at the %age of bankruptcies.
        
               | ddingus wrote:
               | Why not simplify the medical bankruptcy discussion?
               | 
               | Fact is Americans have high personal cost and risk
               | exposure relative to nearly all of the rest of the world.
               | 
                | Second, our system treats making money as the
                | priority, again in contrast to much of the world.
               | 
               | Finally, most of the world recognizes the inherent
               | conflict of interest between for profit and sick/hurt
               | people and both regulate that conflict to marginalize it,
               | and make it so people have options that make sense.
               | 
                | My take, having been chewed up by our toxic healthcare
                | system twice now (having a family does matter, lol),
                | is that the temporary dampening of cost and risk
                | escalation the ACA brought us is fading now; issues
                | are exacerbated by the pandemic (demand for care
                | crashing into variable supply) and shifted somewhat as
                | large numbers of people fall into subsidy/Medicaid-
                | type programs due to job loss.
               | 
               | The honeymoon period is long over now, and the drive to
               | "make the number" is going to be front and center and
               | escalating from here.
               | 
               | TL;DR: We are not improving on this front at all. We need
               | to.
               | 
                | I could go on at length about high student debt and
                | its impact on these discussions too.
                | 
                | Radiology's control over labor, preserving income for
                | its members, is totally real and, from their point of
                | view, necessary. They ask the legitimate question in
                | the US: how can I afford to practice?
               | 
               | Most of the world does not put their medical people in
               | positions to ask that question, with some exceptions,
               | those being far more rare and easily discussed than most
               | of the topic is here.
        
               | mumblemumble wrote:
               | So, machine learning does get used quite a bit in the
               | legal industry, at least outside of small practice. But
               | it tends to be much more successful when it's used as a
               | force multiplier for humans rather than a replacement for
               | humans.
               | 
               | For example, the idea of using document classification to
               | reduce review costs has been around for a long time. But
               | it took a long time to get any traction. Some of that was
               | about familiarity, but a lot of it was about the original
               | systems being designed to solve the wrong problem. The
               | first products were designed to treat the job as a fairly
               | straightforward binary classification problem. They
               | generally accomplished that task very well. The problem
               | was you had to have a serious case of techie tunnel
               | vision to ever think that legal document classification
               | was just a straightforward binary classification problem
               | in the first place.
               | 
               | Nowadays there are newer versions of the technology that
               | were designed by people with a more intimate
               | understanding of the full business context of large-scale
               | litigation, and consequently are solving a radically
               | reframed version of the problem. They are seeing much
               | more traction.
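                | 
                | The naive "responsive vs. not" binary framing
                | described above can be sketched in a few lines
                | (documents and labels are invented; real review
                | tools must also handle privilege, document families,
                | shifting issue definitions, etc.):

```python
# Toy bag-of-words Naive Bayes classifier for the binary
# "responsive / not responsive" review framing. Training data is
# made up for illustration.
from collections import Counter
import math

docs = [
    ("merger agreement draft attached", "responsive"),
    ("board discussed the merger terms", "responsive"),
    ("lunch order for friday", "not_responsive"),
    ("parking garage closed monday", "not_responsive"),
]

def train(docs):
    # per-label word counts
    counts = {"responsive": Counter(), "not_responsive": Counter()}
    for text, label in docs:
        counts[label].update(text.split())
    return counts

def classify(counts, text):
    vocab = {w for c in counts.values() for w in c}
    scores = {}
    for label, c in counts.items():
        total = sum(c.values())
        # log-likelihood with add-one smoothing
        scores[label] = sum(
            math.log((c[w] + 1) / (total + len(vocab)))
            for w in text.split()
        )
    return max(scores, key=scores.get)

counts = train(docs)
print(classify(counts, "notes on merger agreement"))  # responsive
```

The point of the comment is that this framing, however well
executed, was solving the wrong problem for large-scale review.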
        
               | jrumbut wrote:
                | The coordination problems in creating a system
                | designed from the beginning to be human-in-the-loop
                | are a challenge.
               | 
                | There are a lot of great ML algorithms, even if you
                | limit yourself to 10-20 year old ones, that aren't
                | leveraged anywhere near how they could be, because
                | very few people know how to build such a system by
                | turning business problems into ML problems and
                | training users to work effectively alongside the
                | algorithm.
               | 
               | CRUD application development projects blow past deadlines
               | and budgets frequently enough. ML projects have even
               | greater risks.
               | 
               | Edit: I hope the people making the successful legal
               | document management system you mentioned write about
               | their experience.
        
               | mumblemumble wrote:
               | FWIW, my experience has been that, if you're trying to
               | build a system that works in tight coordination with
               | humans, you're better off sticking to algorithms that are
               | 40-80 years old. Save some energy for dealing with the
               | part that's actually hard.
        
               | [deleted]
        
               | ghaff wrote:
               | > the largest source of bankruptcy in the US is medical
               | costs
               | 
               | That's not what the article says.
               | 
                | "Two-thirds of people who file for bankruptcy cite
                | _medical issues_ as a key contributor to their
                | financial downfall."
               | 
               | Those issues can absolutely include direct costs, but
               | they also include things like not being able to work,
               | needing a lot of day to day help, and other things that
               | increase costs and reduce income even if the actual
               | medical costs were largely covered.
        
               | [deleted]
        
               | xkjkls wrote:
               | As a question, why haven't any of these techniques made
               | waves outside the US? Other countries don't have the same
               | monopoly/monopsony powers in the medical industries that
               | are prevalent in the US.
        
               | PeterisP wrote:
                | The US is exactly the place where those techniques
                | would make waves, because of what the US is paying for
                | radiology; in countries where radiologists don't have
                | the same monopoly/monopsony powers, it's not nearly as
                | lucrative to replace them.
               | 
                | For example, I'm distantly involved in a project with
                | non-US radiologists on ML support for automating
                | radiology note dictation (which is a much simpler and
                | much "politically cleaner" issue than actual radiology
                | automation), and IMHO they and their organization
                | would be happy to integrate some image-analysis ML
                | tools into their workflow to automate part of their
                | work. However, the current methods really aren't
                | ready, and the local market isn't large enough to
                | justify a startup to make them ready; that will have
                | to wait for further improvements, most likely made by
                | someone trying to get at US radiologists' money.
        
               | hik wrote:
                | There's not really a way to disambiguate the two,
                | though: the fact that there are lots of medical-
                | technology startups and new drugs coming out of the US
                | is _because_ of the costs involved and how much can be
                | harvested by being a little better. This creates new
                | technologies that the US can't really protect against
                | proliferation -- so all of the money _has to be
                | harvested_ from the US market.
               | 
                | This isn't necessarily a bad thing -- I for one happen
                | to think it's _great_ that our expensive medical
                | system is financing all kinds of wonderful new
                | technologies that benefit the world overall. However,
                | the major problem here is that things that would be
                | useful for other places simply don't have the market
                | to support them, so most medical innovation exists in
                | the _context_ of the US medical system and its
                | problems -- some of which are widespread, some of
                | which are not. I do wish there were some other testbed
                | healthcare systems out there for companies to try to
                | disrupt, but I don't think this is (by itself) a call
                | for medical reform.
               | 
               | My preferred medical reform is to "legalize insurance
               | markets" (ie: repeal laws that state that insurance
               | companies operating in state Y cannot sell insurance to
               | people in state X because state Y policies are not
               | legally compatible) and try to break the monopoly that
               | doctors and nurses enjoy....somehow. Telehealth? Maybe?
        
               | watwut wrote:
               | > I for one happen to think it's great that our expensive
               | medical system is financing all kinds of wonderful new
               | technologies that benefit the world overall.
               | 
                | Does that factor in the situation of people unable to
                | pay their medical bills?
        
               | [deleted]
        
               | bradleyjg wrote:
                | If the entire rest of the world isn't a big enough
                | market to be worth developing for, then maybe we don't
                | need ML radiology; we just need medical reform.
        
               | PeterisP wrote:
               | The entire rest of the world isn't a market, it's many
               | separate markets that need to be entered separately by
               | overcoming different moats. Market fragmentation matters,
               | especially in regulated industries like medicine.
               | 
               | But yes, medical reform is definitely something that
               | might be helpful - technological solutions almost always
               | aren't the best way for solving social/political
               | problems.
        
               | [deleted]
        
               | Isinlor wrote:
               | EU seems to have quite a lot of companies offering AI
               | solutions in radiology:
               | 
               | https://grand-challenge.org/aiforradiology/companies/
        
               | Fomite wrote:
               | Or the VA, which is a massive single-payer healthcare
               | system that would _love_ to cut costs.
        
               | tinomaxgalvin wrote:
                | I don't really know of one. I don't think automation
                | is ever shunned as long as it is useful and known to
                | be useful. Everyone likes things that save time.
                | 
                | There is essentially unrestricted demand for
                | healthcare across the world. Doctors will use the time
                | saved to talk to their patients more (or start to, if
                | they don't already), or move into other medical
                | fields, or increase the volume of screening (which may
                | be harmful, but that's another matter). They probably
                | don't want automation either because it won't really
                | save them much time, or because it would but they have
                | been burnt before. For example, early voice
                | recognition was very poor and over-promised; that
                | stopped me using it for ages, even after it became
                | fairly good. It's still not actually better than
                | typing, but it is closer now. Let's all focus on voice
                | recognition that works before moving on to grander
                | plans.
        
               | WalterBright wrote:
               | > examples
               | 
               | The taxi system, until Uber and Lyft kicked their ant
               | hill.
        
           | petra wrote:
           | >> In fact, most of our R&D and testing was done overseas in
           | a more friendly single payer system.
           | 
           | So are CV/ML radiology systems deployed somewhere globally?
            | Where, and how successful are they?
           | 
           | And if not, why ?
        
           | ssivark wrote:
           | To balance that, do you have any comments on the arrogance
           | and incentive problems in deep learning? :-P
        
           | pas wrote:
           | It takes one group of great radiologists who have a bit of
           | altruistic/capitalistic/venture side, doesn't it?
        
             | TuringNYC wrote:
             | Yes. Or the right economic setup for societal gain where we
              | can compete on value. Going from fee-for-service to
              | value-based care will be great. In the meantime, setups like
             | https://www.nighthawkradiology.com/ are also great because
             | they drive efficiency, I just wish they were more
             | prevalent.
        
           | rscho wrote:
           | Well, I am guessing you are not an MD and as such you do not
           | understand what radiology really is as a profession. You
           | certainly have a very advanced technical knowledge about it,
           | even much more than most radiologists. And that's precisely
           | the catch: why are radiologists (mostly) non-technical
           | people? The only possible answer is 'because what's asked of
           | them as professionals is not technical'. As many (all) other
            | specialties, radiology is more art than science. It's the
           | science of interpreting images in context, and you can't
           | separate the two.
           | 
           | So actually, radiology startups all fail on this crucial
           | issue: to do a good job, you'll not only have to automate
           | image interpretation, but really automate that of the whole
           | EHR. And given the amount of poorly encoded information in
           | there, machines fail now and will continue to do so in the
           | foreseeable future.
        
             | TuringNYC wrote:
              | No, I'm not an MD, but my co-founder was.
             | 
              | Globally, hundreds of thousands of radiologists have been
              | trained over the years and have collectively achieved
              | generally consistent practices. Radiology is pattern
              | matching and a set of very complex decision trees. They
              | aren't magic, because we consistently churn out more
              | practitioners who achieve the same consistent outputs
              | given the same inputs.
             | 
             | Anyone trying to improve things is like every other
             | scientist, they aren't trying to figure out the entire
             | decision tree or every single thing, they are trying to
             | chip away on complex problems little by little.
             | 
             | I also strongly disagree with "radiology is more art than
             | science" because if it was, radiologists wouldn't be able
             | to agree on diagnoses.
        
           | teachingassist wrote:
           | > WHY ON EARTH WOULD THEY WANT THEIR WORK AUTOMATED AWAY?
           | 
           | Because any radiologist directly involved in the work of
           | automating it away could capture multiple salaries.
           | 
           | > but mostly dragging their feet and hindering real progress,
           | presumably because of the threat it posed.
           | 
           | It sounds more like they were not offered a stake, or were
           | not sufficiently convinced it would work enough to accept a
           | stake.
        
             | prepend wrote:
             | I agree with you and don't know why people would think
             | radiologists would be against automating their jobs away.
             | 
             | Most radiologists aren't paid by the hour so it's not like
             | the longer it takes to review and diagnose the better.
             | Having automation tools would allow a radiologist to do
             | even more work and make even more money.
             | 
             | Unless someone literally thinks they won't need an
             | authoritative radiologist in the loop any longer. But
             | that's pretty silly since we can't even automate a
             | McDonald's cook out of the picture.
        
               | TuringNYC wrote:
               | >>> I agree with you and don't know why people would
               | think radiologists would be against automating their jobs
               | away...Having automation tools would allow a radiologist
               | to do even more work and make even more money.
               | 
               | I'd love to understand your viewpoint here. What you're
               | describing would be awesome to a small segment of
               | radiologists, but then what happens to the rest of them?
               | 
                |  _Further, why would the rest agree to it?!?!_ This isn't
               | web ad sales or hosting where anyone can come in, do a
               | better job, and win market share and get rich. Rather,
               | here, the limited set of Radiologists would need to agree
               | on standards of practice via the ABR -- why would they do
               | that if it means most of them suffer as a result?
        
           | antipaul wrote:
           | Good points. But I genuinely wonder what role algorithm
           | brittleness plays here.
           | 
           | "Fitting only to the test set" (see Andrew Ng quote in
           | original article) is an acute concern in my circles: digital
           | pathology in cancer research
           | 
           | See "Google's medical AI was super accurate in a lab. Real
           | life was a different story."
           | 
           | https://www.technologyreview.com/2020/04/27/1000658/google-m.
           | ..
        
           | readee456 wrote:
           | I'm pretty sure people said the same things (nothing will
           | ever change, doctors will never advocate for or accept
           | change) when radiology went from films to digital. I'm sure
           | they said the same things when radiology went from having
           | scribes to using voice recognition software (e.g. Nuance) for
           | reports.
           | 
           | There seems to be a misconception that this is some kind of
           | "all or nothing" thing, where AI will "automate away"
           | radiologists. It's like a decade ago when everybody thought
           | we were just about to "automate away" human drivers, except
           | unlike driving, most radiology reads are by definition (i.e.
           | a sick person) exceptional, out-of-baseline scenarios.
           | 
           | I think this is missing some things about radiology
           | economics. There are indeed incentives to automate as much as
           | possible, especially for outsourced radiology practices like
           | Radiology Partners or people getting paid by pharma companies
           | for detailed clinical trial reads. Organizations like these
           | are getting paid a certain amount per read. If they can use
           | software to speed up parts of their work while demonstrating
           | effectiveness, they make more money. Eventually this drives
           | down the price. There would still be a human in the loop to
           | review or sign off on whatever the AI does, and to look for
           | any anomalies that it misses. But there can be less time
           | spent on rote work or routine segmentation, and more on the
           | overall disease picture.
           | 
           | It's true the amount of imaging going on in the US has
           | increased faster than both the population growth and the
            | number of radiologists. At a certain point, the
            | existing radiologists don't have time to read the images at
           | any price. This gives the alleged cartel a few choices:
           | graduate more radiologists, outsource reads, or use software
           | to produce more output per radiologist. In the last case,
           | which a self-interested group would obviously choose, they
           | get paid the same but each individual patient pays less.
        
             | [deleted]
        
           | mcguire wrote:
           | So you're asserting that the reason your, and other,
           | companies didn't do well is not that you couldn't live up to
           | your promises but rather that there is a grand conspiracy to
           | stop progress?
           | 
            | By the way, have you checked out
            | https://timecube.2enp.com/ ?
        
             | TuringNYC wrote:
             | >> grand conspiracy
             | 
              | Umm, I'm asserting that " _medicine is not perfect
              | competition_ and thus prices are not competitive." If you
             | want to think it is a "conspiracy" you can, but Economics
             | offers great explanations for such setups. I think many of
             | us in technology think all industries are driven by merit,
             | cutting edge technology, margins, and competition.
             | 
              | In reality, not all industries are like this. This
              | shouldn't be surprising. Computers go down in price. So do
              | cloud service costs. So does RAM. But medicine stays
              | expensive. "Conspiracy" is a shallow explanation -- it is
              | just economics; the market isn't perfectly competitive.
              | And progress is hindered to maintain scarcity.
        
             | nimithryn wrote:
             | "Grand conspiracy" seems a bit uncharitable IMO. He's just
             | saying that the incentives aren't aligned, which
             | legitimately seems like an issue in this space.
        
           | nanidin wrote:
           | > Further, Radiologists standards of practice are driven
           | partly by their board (The American Board of Radiology) and
           | the supply of labor is also controlled by them (The American
           | Board of Radiology) by way of limited residency spots to
           | train new radiologists.
           | 
           | Perhaps it is time to found the American Board of
           | Computational Radiology (or Medicine)? There seems to be a
           | chilling effect on tech innovation in the medical space in
           | the US. On recent trips to the dentist, it seems like most of
           | the cool new tech is coming out of Israel.
        
           | mandevil wrote:
           | People have been trying to do this with expert systems, flow
           | charts, and every other technology you can imagine, and have
           | for decades. My wife is a pharmacist, and they have software
           | that is supposed to help them out with the bewildering number
           | of medicines that are out there now. This seems like a
           | trivial case, compared to radiology: (here in the US) the FDA
           | publishes guidelines, so just take those and turn them into
           | code, but she finds it "not that much of a help" that mostly
           | gets an override. "Every once in a while I'll get an alert
           | that is helpful, but most of them are not helpful, even a
           | little bit." "Mostly false positives."
           | 
           | And that's for a lot easier case than radiology.
        
             | Fomite wrote:
             | Similarly, in infection control and antimicrobial
             | stewardship, at this point pitching Yet Another Decision
             | Support Tool will get you dirty looks.
        
           | uhhhhhhhhhhhhhh wrote:
            | If you do it right, everyone will now go to Tobago (or
           | wherever) for the AI treatment that Just Works, and the
           | luddites will go extinct (or maybe lobby for a fresh war in
           | that region)
        
           | amusedcyclist wrote:
           | Yeah the article quoted gave very few details on the
           | supposedly inconsistent performance of the model and lots of
           | details on how few radiologists used it. Doctors (and other
           | regulated professions) are a cabal that need to be broken up.
        
             | mikepurvis wrote:
             | > Doctors (and other regulated professions) are a cabal
             | that need to be broken up.
             | 
             | What do you see as the alternative to self-regulation? Some
             | government office staffed with bureaucrats who have no idea
             | about the realities of the actual work being done?
             | 
             | I got an engineering undergrad degree and had no interest
             | in pursuing professional certification, but I certainly
              | understand the importance of it for those practicing in a
              | way that may harm the public's trust, and it made me
             | appreciate the role that other professional bodies play in
             | regulating who gets to represent themselves to the public
             | as a lawyer, doctor, electrician, etc.
        
           | nradov wrote:
           | The US healthcare system is slowly migrating from a fee-for-
           | service model to a value-based model where at least some of
           | the financial risk is shifted from employers and insurers to
           | providers. The managers running those provider organizations
           | thus have a direct incentive to adopt new technology if it
           | actually works, even over radiologist objections. So far most
           | radiology automation software hasn't generated a clear cost
           | savings. That may change as technology improves.
        
             | vajrabum wrote:
              | Things are changing, but as of 2019, by a whisker, most
              | radiologists own their practices, and across all
              | specialties 56.6 percent of physicians in the US are
              | members of small practices. Physicians, especially
              | specialists, tend to work for and with other physicians. In
             | my area, at least in the recent past, all the cardiologists
             | and all the urologists work for the same small practices
             | organized around their specialties. I'd guess that tends to
             | blunt pricing pressure on providers at least locally (see
             | here for some stats https://www.mpo-
             | mag.com/issues/2019-10-01/view_columns/a-loo...).
        
               | nradov wrote:
               | The general trend is that smaller practices are going
               | away. More and more physicians are becoming employees of
               | larger provider organizations. Small practices just
               | aren't very viable any more because they lack the
               | negotiating power to get high reimbursement rates from
               | payers, and they don't have the economies of scale
               | necessary to comply with interoperability mandates.
               | 
               | When new physicians complete their training fewer and
               | fewer go on to start or join smaller practices.
        
             | verdverm wrote:
             | Cost savings doesn't have to be near term. Imagine a doctor
              | misses something on a reading and the patient doesn't get
              | the care they need... lawsuits are expensive. So you can have software
             | which helps doctors do their job better which results in
             | better patient outcomes. That is something hospitals are
             | buying today for their radiology groups.
        
             | danuker wrote:
             | > where at least some of the financial risk is shifted from
             | employers and insurers to providers
             | 
             | Do you have any evidence to back that up?
        
               | FuriouslyAdrift wrote:
                | Quality of care provisions in Medicare/Medicaid/ACA all
                | help to shift costs to the practitioner if care is
                | poor or has bad outcomes.
        
               | telchar wrote:
               | Here's one for you:
               | https://innovation.cms.gov/innovation-models/bundled-
               | payment...
        
               | colinmhayes wrote:
               | The ACA created a capitated payment system for medicare
               | that providers can opt into. I'm not sure what evidence
               | you're looking for other than the definition of
               | capitation is "fee per patient" as opposed to "fee for
               | service." Some states like California also have capitated
                | plans from private companies.
               | https://www.cms.gov/Medicare-Medicaid-
               | Coordination/Medicare-...
        
           | glitchc wrote:
           | CTO of a CV+AI/ML startup developing a radiology solution eh?
           | Let me ask you a couple of quick questions: What was your
           | liability insurance like? How much coverage per diagnosis did
           | you carry?
           | 
           | Let me make it simpler: How much blame was your company
           | willing to absorb if your algorithm made a faulty diagnosis?
        
             | ware_am_i wrote:
             | This is largely where the art of "labeling/claims" comes
             | into play regarding how explicitly worded a "diagnosis" can
             | be. There is a lot of room to play on the spectrum from
             | truly diagnosing a patient with a disease (which requires
             | the most evidence and carries the most liability) all the
             | way down to gently prompting a healthcare provider to look
             | at one record earlier than another one while reviewing
             | their reading queue.
        
             | TuringNYC wrote:
             | Great question! We did our trials at two overseas locations
              | in parallel with doctors. All use cases were diagnostic
             | for immigration purposes (e.g., detecting Tuberculosis and
             | other chest infections at border points of entry). Given
              | the non-medical use -- no liability insurance. No coverage
              | per diagnosis. Also, given everything was run in parallel,
              | double-blind with doctors also doing reads, no blame had to
              | be absorbed. Even once we got out of parallel, we still
              | wouldn't have needed liability coverage.
             | 
             | The importance here was demonstrating efficacy, which we
             | did fantastically well.
             | 
             | Once we prove efficacy for multiple use cases, we can at
              | least remove the "oh you computer scientists don't get it"
             | argument and can have adult conversations about how to
             | progress state of the art rather than continue to bleed
             | patients dry.
             | 
             | I'll admit there are definitely barriers like what you
             | mention. But those barriers are not some impenetrable force
             | once we break down real issues and deal with them
             | separately and start treating the problem as one we can
             | solve as a society.
        
               | readee456 wrote:
               | I can't help but think some of the barriers here involved
               | proving the software in a situation decidedly different
               | than a clinical setting. I would not be surprised if an
               | immigration medical officer developed different views
               | about diseases than a GP or ER doctor. They're not
               | treating the person, they're not in a doctor-patient
               | relationship with the person, they're not really even
               | "diagnosing" the person, they're just deciding whether
               | they're "too sick" to come into the country. Maybe if the
               | person looks messed up in some other way, their chest
               | x-ray gets interpreted a little more strictly.
        
               | dragonwriter wrote:
                | But AI theater being good enough to replace no-stakes
                | (because no one is liable to anyone for errors, in
                | either direction) medical theater is a step -- just not
                | as big a step, or as relevant to any important use case,
                | as what was being sold upthread.
        
               | TuringNYC wrote:
               | >> I can't help but think some of the barriers here
               | involved proving the software in a situation decidedly
               | different than a clinical setting.
               | 
               | Totally agree. But science moves in baby steps and
               | progress builds on progress. We started ML by doing
               | linear regression. Then we moved onto recognizing digits.
               | Then we moved onto recognizing cats. Suddenly, Google
               | Photos can find a friend of mine from 1994 in images it
               | appears to have automatically sucked up. That is amazing
               | progress.
               | 
               | Similarly, our viewpoint as co-founders in the space was
               | to solve a single use-case amazingly well and prove AUC
               | and cost/value metrics. The field wont be moved by me or
               | you, it will be moved by dozens of teams building upon
               | each other.
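                | To make the AUC metric concrete: it's the probability
                | that a model scores a random positive case above a
                | random negative one. A minimal pure-Python sketch
                | (labels and scores are made up for illustration, not
                | data from any real trial):

```python
def auc(y_true, y_score):
    """Area under the ROC curve: the probability that a randomly
    chosen positive is scored above a randomly chosen negative
    (ties count as half a win)."""
    pos = [s for y, s in zip(y_true, y_score) if y == 1]
    neg = [s for y, s in zip(y_true, y_score) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# 1 = disease present; scores are illustrative model outputs
print(auc([0, 0, 1, 1, 0, 1], [0.1, 0.4, 0.35, 0.8, 0.2, 0.9]))  # 0.888...
```

                | An AUC of 0.5 is chance; 1.0 is a perfect ranking of
                | sick above healthy.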
        
               | rscho wrote:
               | > Once we prove efficacy for multiple use cases, we can
                | at least remove the "oh you computer scientists don't get
               | it"
               | 
               | No, you can't. Stating this is a clear proof that you
               | don't understand what you're dealing with. In medical
               | ML/AI, efficacy is not the issue. What you are detecting
               | is not relevant. That's the issue. But I know I won't
               | convince you.
        
               | aeternum wrote:
               | From where does the efficacy come if what you are
               | detecting is irrelevant?
        
               | rscho wrote:
               | They are detecting what they are testing for. But that's
               | in most cases irrelevant regarding what happens to the
               | patient afterwards, because it's lacking major connexions
               | to the clinical situation that will have to be filled up
               | by a human expert.
               | 
               | So it does in fact work. Unfortunately, only in trivial
               | cases.
        
               | aeternum wrote:
               | Maybe, but then the problem isn't an issue with AI/ML,
               | it's that humans just suck at math.
               | 
               | We're terrible at bayesian logic. Especially when it
               | comes to medical tests, and doctors are very guilty of
               | this also, we ignore priors and take what should just be
               | a Bayes factor as the final truth.
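                | The base-rate point can be made concrete with Bayes'
                | theorem. With illustrative numbers (a test with 90%
                | sensitivity and 95% specificity, applied at 1%
                | prevalence):

```python
def posterior_positive(prevalence, sensitivity, specificity):
    """P(disease | positive test), via Bayes' theorem."""
    true_pos = prevalence * sensitivity            # sick and flagged
    false_pos = (1 - prevalence) * (1 - specificity)  # healthy but flagged
    return true_pos / (true_pos + false_pos)

# A seemingly accurate test, applied at low prevalence:
p = posterior_positive(prevalence=0.01, sensitivity=0.90, specificity=0.95)
print(f"{p:.1%}")  # about 15% -- most positives are false positives
```

                | Treating that 95%-specific test as "the final truth"
                | while ignoring the 1% prior is exactly the base-rate
                | fallacy described above.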
        
               | rscho wrote:
               | We're terrible at bayesian logic all right, but still
               | better than machines lacking most of the data picture.
               | That's why the priority is not to push lab model
               | efficiency but to push for policy changes that encourage
               | sensible gathering of data. And that's _far_ more
               | difficult than theorizing about model efficiency vs.
               | humans.
        
           | billjings wrote:
           | It's worse than even that.
           | 
           | The cartel arrangement is as described, but it's increasingly
           | not even a great deal for the radiologists.
           | 
           | The business of radiology is increasingly centralized into
           | teleradiography farms. That means that radiologists are
           | working in shifts, and evaluated according to production
           | metrics, like line workers in a factory.
           | 
           | The cartel arrangement will probably continue, as it is
           | advantageous for people at the top of this food chain, but
           | it's not an arrangement that's going to result in a lot of
           | wealth and job security flowing to individual radiologists.
           | Nor will it result in great outcomes for patients.
        
           | jofer wrote:
           | "...because seismic interpretation isnt effectively a
           | cartel..."
           | 
           | I know some people who would disagree with you on that one!
           | 
           | Seriously, though, you're making an excellent point that I
           | hadn't considered. Healthcare has a lot of "interesting"
            | incentive structures and baked-in constraints that would
           | prevent even a perfect solution from being widely deployed.
           | 
           | It's not the same as geology, for sure, even though there are
            | some parallels in terms of image interpretation.
        
         | mikesabbagh wrote:
         | >Many companies/etc keep promising to "revolutionize"
         | 
         | It is all about the money baby
         | 
         | If u fall and go to the ER, u get an xray to rule out a
         | fracture. Many times the radiologist will read it after you
         | leave the ER, yet he gets paid.
         | 
         | If u think ML cant read a trauma xray, and offer a faster
         | service, you are wrong!! The problem is who gets paid, and who
         | is paying the malpractice insurance
         | 
         | Check out in China, they have MRI machines with ML built in. U
         | get the results before u get dressed!!
        
           | catblast01 wrote:
           | Who do you think I'd rather go after for malpractice? Someone
           | that went to school for many years dedicated to medicine or
           | the idiot stiffs behind a machine that can't even spell the
           | word "you". That is in large part also what it is really
           | about.
           | 
           | Having said that I do ML research on cross-sectional
           | neuroimaging, and basically everything you said is nonsense.
        
         | woeirua wrote:
         | I have but one upvote to give, but as someone who worked as an
          | interpreter and then moved onto the software side, this is the
          | problem that 99% of people don't get.
         | 
         | You can train a DL model to pick every horizon, but you can't
         | train to pick _the_ horizon of interest. Same with faults.
         | Let's not even get started with poorly imaged areas.
        
           | tachyonbeam wrote:
            | IMO a part of the problem here is a misunderstanding on
            | the part of deep learning people. They
           | look at radiology, and they say "these people are just
           | interpreting these pictures, we can train a deep learning
           | model to do that better".
           | 
           | Maybe there's a bit of arrogance too, this idea that deep
           | learning can surpass human performance in every field with
           | enough data. That may be the case, but not if you
           | fundamentally misunderstood the problem that needs to be
           | solved, and the data you need to solve radiology, for
           | instance, isn't all in the image.
           | 
           | Somewhat related: another area where DL seems to fail is
           | anything that requires causal reasoning. The progress in
           | robotics, for instance, hasn't been all that great. People
           | will use DL for perception, but so far, using deep
           | reinforcement learning for control only makes sense for
           | really simple problems such as balancing your robot. When it
           | comes to actually controlling what the robot is going to do
           | next at a high level, people still write rules as programming
           | code.
           | 
           | In terms of radiology and causal reasoning, you could imagine
           | that if you added extra information that allows the model to
           | deduce "this can't be a cancerous tumor because we've
           | performed this other test", you would want your software to
           | make that diagnosis reliably. You can't have it misdiagnose
           | when the tumor is on the right side of the ribcage 30% of the
           | time because there wasn't enough training data where that
           | other test was performed. Strange failure modes like that are
           | unacceptable.
        
             | triska wrote:
             | Expanding on this, particularly regarding causal reasoning
             | and rules, what I find especially puzzling is the desire to
             | apply deep learning even in cases where the rules are
             | explicitly _known_ already, and the actual challenge would
             | have been to reliably automate the application of the
             | known, explicitly available rules.
             | 
             | Such cases include for example the application of tax law:
             | Yes, it is complex and maybe cannot be automated entirely.
             | However, even today, computer programs handle a large
             | percentage of the arising cases automatically in many
             | governments, and these programs often already have
             | automated mechanisms to delegate a certain percentage of
             | (randomly chosen, maybe weighted according to certain
             | criteria) cases to humans for manual assessment and quality
             | checks, also a case of rule-based reasoning. Even fraud
             | detection can likely be better automated by encoding and
             | applying the rules that auditors already use to detect
             | suspicious cases.
             | 
             | The issue today is that all these rules are hard-coded, and
             | the programs need to be rewritten and redeployed every time
             | the laws change.
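              | One way to soften that hard-coding problem is to
              | represent the rules as data rather than code, so updating
              | the "law" means editing a rule table instead of
              | redeploying the program. A toy sketch (field names and
              | thresholds are invented for illustration, not real tax
              | rules):

```python
# Toy data-driven rule engine: rules live in a table that could be
# loaded from a config file and updated without touching the code.
OPS = {">": lambda a, b: a > b, "==": lambda a, b: a == b}

rules = [
    {"name": "flag_large_refund", "field": "refund", "op": ">", "value": 10_000},
    {"name": "flag_zero_income", "field": "income", "op": "==", "value": 0},
]

def evaluate(case, rules):
    """Return the names of all rules that fire for a given case."""
    return [r["name"] for r in rules
            if OPS[r["op"]](case[r["field"]], r["value"])]

print(evaluate({"refund": 15_000, "income": 0}, rules))
# ['flag_large_refund', 'flag_zero_income']
```

              | Randomized delegation to human reviewers, as described
              | above, would be just another rule in the same table.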
        
               | ethbr0 wrote:
               | I wasn't alive in the 70s, but it feels like there's a
               | counter-bias against expert systems borne out of those
               | failures.
               | 
                | "If you're putting in rules, you don't know how to
                | build models."
               | 
               | But that's probably the difference between people having
               | success with "AI" and banging their heads against the
               | wall: do what works for your use case!
        
               | tachyonbeam wrote:
               | There's a perception in the DL field that encoding things
               | into rules is bad, and that symbolic AI as a whole is
               | bad. Probably because of backlash following the failure
               | of symbolic AI. IMO the ideal is somewhere in the middle.
               | There are things you want neural networks for, and there
               | are also things you probably want rules for. The big
               | advantage of a rule-based system is that it's much more
               | predictable and easier to make sense of.
               | 
               | It's going to be very hard to engineer robust automated
               | systems if we have no way to introspect what's going on
               | inside and everything comes down to the neural network's
               | opinion and behavior on a large suite of individual
               | tests.
               | 
               | > The issue today is that all these rules are hard-coded,
               | and the programs need to be rewritten and redeployed
               | every time the laws change.
               | 
               | The programs are probably not being rewritten from
               | scratch. I would argue that the laws are, or should be,
               | as close to unambiguous code as possible. If they can't
               | be effectively translated into code, that signals
               | ambiguity: a potential bug in the law itself.
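As a hedged sketch of the point above (every rule name, field, and threshold here is hypothetical, not drawn from any real tax code): if rules are represented as data rather than hard-coded branches, a change in the law becomes an edit to a table instead of a rewrite-and-redeploy cycle.

```python
# Hypothetical rule table: each rule is (name, predicate, action).
# In a real system this would be loaded from a config file or database
# at runtime, so a law change edits data rather than deployed code.
RULES = [
    ("flag_high_deduction", lambda r: r["deductions"] > 0.5 * r["income"], "manual_review"),
    ("auto_approve_simple", lambda r: r["deductions"] == 0, "auto_approve"),
]

def evaluate(record, rules=RULES):
    """Return the action of the first matching rule, else a default."""
    for name, predicate, action in rules:
        if predicate(record):
            return action
    return "standard_processing"

print(evaluate({"income": 40000, "deductions": 25000}))  # manual_review
print(evaluate({"income": 40000, "deductions": 0}))      # auto_approve
```

A rule engine like this also makes the delegation mechanism the comment mentions easy to express: routing a weighted random sample of cases to humans is just one more rule.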
        
               | MikeDelta wrote:
               | I have once seen an AI tool to determine what needed to
               | be reported.
               | 
               | I found this remarkable, as there were clear (yet
               | complex) rules on what needed to be reported, otherwise
               | even the regulator wouldn't know what it was supposed to
               | check.
        
         | ludamad wrote:
         | I wonder how often these projects truly need someone with
         | on-the-ground experience guiding them, since the textbook
         | tasks, as you say, are easy even for humans
        
         | [deleted]
        
         | derf_ wrote:
         | I don't know anything about seismology, and I am going to put
         | aside the money and focus on the math.
         | 
         |  _> The features you're interested in are almost never what's
         | clearly imaged -- instead, you're predicting what's in that
         | unimaged, "mushy" area over there through fundamental laws of
         | physics like conservation of mass and understanding of the
         | larger regional context. Those are deeply difficult to
         | incorporate in machine learning in practice._
         | 
         | I was part of a university research lab over 15 years ago that
         | was doing exactly this [1], with just regular old statistics
         | (no AI/ML required). By modeling the variability of the stuff
         | that you _could_ see easily, you could produce priors strong
         | enough to eke out the little bit of signal from the mush (which
         | is basically what the actual radiologists do, which we know
         | because they told us). It isn't a turn-key, black-box solution
         | like deep learning pretends to be. It takes a long time, it is
         | highly dependent on getting good data sets, and years of labor
         | goes into a basically bespoke solution for a single radiology
         | problem, but the results agree with a human as closely as
         | humans agree with each other. You also get the added bonus of
         | understanding the relationships you are modeling when you are
         | done.
         | 
         | From university lab to clinically proven diagnosis tool is of
         | course a longer road, and I have not been involved in these
         | projects for a long time, but my point is that the math problem
         | on its own is tractable.
         | 
         | [1] http://midag.cs.unc.edu/
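The "strong prior plus weak signal" idea in the comment above can be sketched with a conjugate Gaussian update (the numbers are invented for illustration; the lab's actual shape models are far richer):

```python
def gaussian_posterior(prior_mean, prior_var, obs_mean, obs_var, n_obs):
    # Conjugate Gaussian update: combine a strong prior (built from the
    # well-imaged structure) with weak, noisy measurements (the "mush").
    prior_prec = 1.0 / prior_var
    obs_prec = n_obs / obs_var
    mean = (prior_prec * prior_mean + obs_prec * obs_mean) / (prior_prec + obs_prec)
    var = 1.0 / (prior_prec + obs_prec)
    return mean, var

# Tight prior at 2.0, one very noisy observation at 5.0:
mean, var = gaussian_posterior(2.0, 0.1, 5.0, 4.0, 1)
print(round(mean, 2))  # -> 2.07, the estimate stays near the prior
```

The point is that a well-calibrated prior dominates a noisy observation, which is exactly why modeling the visible variability pays off in the poorly imaged region.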
        
         | duxup wrote:
         | I recall that stories of IBM Watson's failures focused on
         | how it was sold as just dumping data into the machine and
         | having wonders come out.
         | 
         | Meanwhile actual implementation customers weren't ready and
         | were frustrated with how much data prep was required, how
         | time-consuming it was, and in a lot of ways how each situation
         | was a sort of research project of its own.
         | 
         | It seems like any successful AI system will require the team
         | working with the data to be experts in the actual data, or in
         | this case experts in radiology ... and take a long time to
         | really find out good outcomes / processes, if there are any to
         | be found.
         | 
         | Add the fact that the medical industry is super iterative /
         | science takes a long time to really figure out ... that's a big
         | job.
        
           | visarga wrote:
           | There's no free ride. ML is data-centric: you've got to get
           | up close and personal with the data and its quality. That
           | means 90% of our time is spent on data prep and evaluations.
           | 
           | Getting to know the weak points of your dataset takes a lot
           | of effort and custom tool building. Speaking from experience.
        
         | boleary-gl wrote:
         | This is a really good point and example. I spent 10 years in
         | mammography software, and I saw first hand how many outside
         | factors can impact a physician's decision whether or not to
         | biopsy a given artifact on an image.
         | 
         | Things like family history, patient's history, cycle timing,
         | age, weight, other risk factors all play a role in a smart
         | radiologist making the right decision for that patient. And the
         | pattern recognition on top of that is really hard - it's not
         | just about the pattern you see at a particular spot in an
         | image, it's the whole image in the context of what that looks
         | like. Could ML get better over time with this? Sure...but
         | they've been using CAD in mammography for decades and it still
         | hasn't replaced radiologists at all.
         | 
         | Could a model be made to include those other variables?
         | Sure...but again the complexity of that kind of decision making
         | is something that requires a lot more "intelligence" than any
         | AI or ML system exhibits today and in my mind in the
         | foreseeable future. Just collecting that data in a structured,
         | consistent way is more challenging than people realize.
        
           | wutbrodo wrote:
           | > I spent 10 years in mammography software, and I saw first
           | hand how many outside factors can impact a physician's
           | decision to biopsy or not a given artifact on an image.
           | 
           | This is slightly tangential, but I'm curious about your
           | perspective on a classic example of medical statistical
           | illiteracy. Whenever surveyed, the strong majority of doctors
           | vastly overestimate the odds of a true positive mammogram
           | (half of them by a factor of 10!!), due to flawed Bayesian
           | thinking and the low base rate of breast cancer.[1]
           | 
           | Does your anecdotal experience contradict this data? If not,
           | wouldn't two minutes of stats education (or a system that
           | correctly accounted for base rate) utterly swamp intuition-
           | driven analysis of tiny artifacts? Or is it simply that,
           | through folk wisdom and experience, they implicitly adjust
           | other terms in their mental calculus in order to account for
           | this huge error in estimating one factor?
           | 
           | [1] https://blogs.cornell.edu/info2040/2014/11/12/doctors-
           | dont-k...
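The base-rate effect described above is just Bayes' theorem; a short calculation makes it concrete (the prevalence and test characteristics below are illustrative assumptions, not the survey's exact figures):

```python
def posterior_positive(prevalence, sensitivity, false_positive_rate):
    # Bayes' theorem: P(cancer | positive) from the base rate,
    # P(+ | cancer), and P(+ | no cancer).
    p_pos = sensitivity * prevalence + false_positive_rate * (1 - prevalence)
    return sensitivity * prevalence / p_pos

# With a 1% base rate, 90% sensitivity, and a 9% false positive rate,
# a positive mammogram implies only about a 9% chance of cancer:
p = posterior_positive(0.01, 0.90, 0.09)
print(round(p, 3))  # -> 0.092
```

A doctor who ignores the low base rate and answers "around 90%" is off by the order of magnitude the surveys report.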
        
             | mjburgess wrote:
             | The doctor diagnosing a patient isn't solving a puzzle of
             | the kind posed here.
             | 
             | They are doing, as the previous comment said,
             | interpretation. In practice, much of their thinking is
             | profoundly rational and bayesian.
             | 
             | Human (and animal) thinking isn't primarily cognitive, i.e.,
             | explicit reasoning. It is the application of learned
             | concepts to sensory-motor systems in the right manner.
             | 
             | We don't look to doctors to formulate a crossword puzzle when
             | a patient arrives; we look to them to be overly attentive
             | to yellow skin when the patient's family has a history of
             | alcoholism.
        
               | gmadsen wrote:
               | I'm not convinced this couldn't just be a large personal
               | data set fed into an algorithm.
               | 
               | Doctors barely have any data as it is. I think personal
               | bio testing and monitoring is going to be a huge market
               | and medical paradigm shift.
               | 
               | Would you rather have your heart rate and temp constantly
               | monitored for months, or get it checked once a year by a
               | GP to see if you have hypertension or any negative
               | markers?
        
               | wutbrodo wrote:
               | > In practice, much of their thinking is profoundly
               | rational and bayesian.
               | 
               | Right, this was the third option I mentioned; I'm
               | certainly not leaping all the way to the conclusion that
               | one shouldn't listen to a doctor about the best course of
               | action after mammogram results[1]. If their explicit
               | understanding of mammography's false positive rate is so
               | incredibly flawed, there is presumably an implicit
               | counterbalance in the calculus that's built on experience
               | (both their own and their mentors'/institution's), or an
               | _order-of-magnitude_ error would show up in patient
               | outcomes. I'd guess that this and the other instances of
               | critical thinking failure that plague medical culture
               | have their rough edges sanded over time, through decades
               | of what is effectively guess-and-check iterative
               | evolution, combined with institutional transmission of
               | this folk wisdom.
               | 
               | Though I disagree that I would call this "profoundly
               | rational", as IMO leaving explicit reasoning tools on the
               | table instead of intentionally combining them with
               | intuition/experience/epistemic humility is suboptimal.
               | Iterative evolution is not an efficient process, and
               | adding an attempt to explicitly model the functions
               | you're learning can be a powerful tool. It's very
               | difficult for me to imagine that a doctor who explicitly
               | internalized the basics of Bayesian reasoning wouldn't
               | be at least marginally more effective in terms of
               | patient outcome, etc. Medical history is full of blind
               | alleys built by medical hubris like your comment's
               | "doctors know A Deeper Truth in their soul that defies
               | explicit logical articulation". (Though I should note I
               | don't claim to have a pat answer to this problem: one can
               | theorize about improving a specific doctor's
               | effectiveness, but scaling that to the whole culture is
               | another matter, and can bump into everything from supply
               | problems to downstream cultural impacts with negative
               | consequences)
               | 
               | [1] Though with knowledge of flaws in such basic
               | reasoning skills in one subpart of the total calculus, a
               | patient can't rationally escape updating in the direction
               | of checking their reasoning more thoroughly. Medicine is
               | a collaborative endeavor between patient and doctor, both
               | bring flaws in their reasoning to the table, and stronger
               | evidence of a huge flaw in reasoning should lower
               | confidence in the overall decision (though at a much
               | lower magnitude, for the reasons we both describe here).
               | This is the same logic that doctors use to rationally
               | discount patients' opinions when they perceive them as
               | coming from, e.g., overly emotional reasoning.
        
           | mikesabbagh wrote:
           | Mammography is one of the most difficult to interpret. You
           | need more data like the age and family history to decide on
           | the next step.
           | 
           | Radiology is huge. I am sure ML can help in some of the
           | specialties (it does not need to be all or none). The reason
           | it is not is that the medical system refuses to give in.
        
           | jvanderbot wrote:
           | In my field (space) we have an unspoken mantra that
           | autonomous systems should aid human decision making.
           | 
           | It's just so much easier to build a system that allows a
           | human to focus on the tough calls, than it is to build an
           | end-to-end system that makes all the decisions. Only in the
           | most extreme examples does full autonomy make sense.
           | 
           | If there were one doctor in the world, I'd build an
           | autonomous mammogram machine and have him periodically audit
           | the diagnoses. Otherwise, better tools is the way to go.
           | 
           | I noticed this when visiting the OBGYN for sonograms to check
           | the development of our children. The tools are _really good_.
           | You can easily measure distances and thicknesses, visualize
           | doppler flow of blood, everything is projected from weird
           | polar space (what the instrument measures) to a Cartesian
           | space (what we expect to see), and you can capture images or
           | video in real time.
           | 
           | Sure, the cabal factor is real, as is the curmudgeon doctor,
           | but I think we should be building tools, not doctors. We know
           | how to build doctors.
        
             | divbzero wrote:
             | Building tools to aid humans seems like the best of both
             | worlds. This is already happening in some radiology
             | subspecialties: The autonomous systems can highlight
             | potential areas of interest but it's up to humans to make
             | the final call. For some cases it's a quick and easy call,
             | but for tougher calls the radiologist can bring other
             | factors into consideration.
        
           | nradov wrote:
           | Incorporating more variables into the model wouldn't be
           | sufficient. You would also need to get the input data for
           | those variables into a form that algorithm could consume.
           | Often the raw data simply isn't recorded in the patient's
           | chart, or if it is recorded it's in unstructured text where
           | even sophisticated NLP struggles to extract accurate discrete
           | clinical concept codes.
        
           | mumblemumble wrote:
           | I see the same thing in natural language processing. A lot of
           | important details come from outside the four corners of the
           | document.
           | 
           | Ironically, I often find myself in the unenviable position of
           | being the machine learning person who's trying to convince
           | people that machine learning is probably not a good fit for
           | their problem. Or, worse, taking some more fuzzy-seeming
           | position like, "Yes, we could solve this problem with machine
           | learning, but it would actually cost more money than paying
           | humans to do it by hand."
           | 
           | Part of why I hate hate hate the term "AI" is because you
           | simply can't call something artificial intelligence and then
           | expect people to understand that it's not actually useful
           | for doing anything that requires any kind of intelligence.
        
             | mcguire wrote:
             | There's an old AI joke that all actual problems reduce to
             | the artificial general intelligence problem---everything
             | else, by definition, doesn't require intelligence.
        
               | mumblemumble wrote:
               | There's some truth to that. But I'd also argue that
               | there's a tendency to try to bill every single kind of
               | spinoff technology that the artificial intelligence
               | community has produced as artificial intelligence.
               | 
               | Which is a bit like characterizing mixing up a glass of
               | Tang as a kind of space exploration.
        
             | 2sk21 wrote:
             | You are absolutely correct. In fact, most NLP software
             | ignores the formatting of documents which conveys a lot of
             | information as well. For example, section headings must be
             | treated differently from the text that makes up the body of
             | a section. It's very hard to even determine section
             | headings, and then it's hard to take advantage of them,
             | since the big transformer models simply accept a stream
             | of unspecialized tokens.
        
               | [deleted]
        
           | awillen wrote:
           | But isn't this just what ML should be good at - taking a huge
           | number of data points and finding the patterns? Or are you
           | saying it's not an issue of ML working poorly but rather one
           | of there not being a good enough data set to train it on
           | properly?
        
             | azalemeth wrote:
             | One of the main tasks that doctors do is take patient's
             | vague and non-specific problems, build up a rapport with
             | them, understand what is normal and what is _not_, dealing
             | with the "irrelevant" information present at the time, and
             | focus the results into a finite tree of possibilities.
             | 
             | In principle, this would be a _great_ task for an ML
             | algorithm. It's all conditional probability. But _every_
             | such system has failed to do that well -- because the "gut
             | feeling" the doctor develops is founded on a whole host of
             | prior information that an ML algorithm won't be trained on:
             | what is "normal" for an IV-drug addict, or a patient with
             | psychosis; how significant is the "I'm a bit confused" in
             | the middle-aged man who was cycling and came off his bike
             | and hit his head? Do the burns on the soles of this child's
             | feet sit consistently with the story of someone who ran
             | over a fire-pit barbecue "for fun", or is it a non-
             | accidental injury? It's a _world_ of shades of grey where,
             | if you call it the wrong way, ultimately someone could die.
             | Doctors do Bayesian maths, and they do it with priors
             | coming from both their own personal experience as a member
             | of society, and professional training. That is, in my
             | ignorant opinion, the main distinction between what I do --
             | oft-called  "academic medicine" or "academic radiology" --
             | and clinical medicine. The former looks at populations. The
             | latter looks at individuals.
             | 
             | In other words, I don't think it's even possible to
             | _codify_ what data the ML algorithms should be trained
             | on -- they're culturally specific down to the level of an
             | individual town in some sense; and require looking at huge
             | populations at others.
        
               | tomrod wrote:
               | On codification:
               | 
               | I actually disagree with this, but only slightly.
               | 
               | Imagine if instead of face to face, doctor transactions
               | were via text. The questioning of the doctor can be
               | monitored and patterns in decision trees observed could
               | be codified, and weighed against the healthcare outcome
               | (however defined).
               | 
               | What is missing, however, is the counterfactual
               | reasoning. The "why" matters. The machine cannot recover
               | the doctor's choice of decision tree from all possible
               | combinations; it sees only the paths it observes the
               | doctor perform.
               | 
               | Tail-cases like rare genetic disorders would often be
               | missed.
        
               | medvezhenok wrote:
               | Tail-cases like rare genetic disorders are often missed by
               | doctors too. I have several friends who had Lyme disease
               | with fairly serious complications (in the Northeast) (Not
               | that Lyme disease is that rare - it's actually much more
               | common than is expected). Each of them got misdiagnosed
               | for multiple years by multiple different doctors until
               | finally getting the correct diagnosis/treatment. So every
               | system is fallible.
        
               | [deleted]
        
               | version_five wrote:
               | > In other words, I don't think it's even possible to
               | codify what the data the ML algorithms should be trained
               | on -- they're culturally specific down to the level of an
               | individual town in some sense; and require looking at
               | huge populations at others.
               | 
               | This is insightful. ML is by definition generalizing, and
               | should only be used where it's ok to generalize. There is
               | an implicit assumption in use cases like medical
               | diagnosis that there is a latent representation of the
               | condition that has a much lower dimensionality than the
               | data _and_ that the model is trained in such a way that
               | there are no shortcuts to generalization that miss out on
               | any information that may be important. The second
               | condition is the hardest to meet I believe, because even
               | if a model could take in lots of outside factors, it
               | probably doesn't need to in order to do really well in
               | training and validation, so it doesn't. The result is
               | models that
               | generalize, as you say, to the population instead of the
               | individual, and end up throwing away vital context to the
               | personal case.
               | 
               | I also believe this is an important consideration for
               | many other ML applications. For example those models that
               | predict recidivism rates. I'm sure it's possible to build
               | an accurate one, but almost certainly these models
               | stereotype in the way I mention above, and do not
               | actually take the individual case into account, making
               | them unfair to use on actual people.
        
               | watwut wrote:
               | Personally, my experience with doctors involves neither
               | rapport nor nuanced understanding of my specific
               | situation.
               | 
               | That is really not what they do, are trained to do or
               | have time to do.
        
             | tomrod wrote:
             | The gap is interpretation and application of those
             | patterns. Building expert systems is expensive, but at
             | the low-hanging fruit of surfacing patterns for experts,
             | ML knocks it out of the park.
        
             | jjoonathan wrote:
             | Right, but "the training data is bad" is a very ML centric
             | way of looking at the issue. It pushes all the difficult
             | parts of the problem into the "data prep" sphere of
             | responsibility.
        
               | awillen wrote:
               | How else would you describe the issue?
        
               | jjoonathan wrote:
               | Structural. The problem hasn't even been correctly
               | formulated yet -- and it will take an enormous amount of
               | work to do so.
        
               | srean wrote:
               | Note that there are different ways in which data can be
               | bad: (i) image resolution not good enough, too many
               | artifacts and noise; (ii) it's woefully incomplete -- doctors
               | collect and use information from other channels that
               | aren't even in the image, regular conversations, sizing
               | up the patient, if the doctor knows the patient for a
               | long time then a sense of what is not normal for the
               | patient given his/her history etc., etc.
               | 
               | Some of the issues that have been discussed in the thread
               | can be incorporated in to a Bayesian prior for the
               | patient, but there is still this incompleteness issue to
               | deal with.
        
               | jjoonathan wrote:
               | The first step would be to build an information
               | collection pipeline that is in the same league as the
               | doctors. That alone will be a monumental effort because
               | doctors have shared human experiences to draw from and
               | they are allowed to iteratively collect information.
               | 
               | I'm just complaining that it seems fantastically
               | reductive to call the absence of such a pipeline "bad
               | data" because developing such a pipeline would be a
               | thousand times the effort of implementing an image
               | detection model. Maybe a million times. It will require
               | either NLP like none we have seen before or an expert
               | system with so much buy-in from the experts and investors
               | that it survives the thousand rounds of iterative
               | improvement it needs to address 99% of the requirements.
               | 
               | Comparing issues like low resolution and noise to such a
               | development effort seems like comparing apples to... jet
               | fighters.
        
             | jofer wrote:
             | "There's not enough good data to train it on properly"
             | 
             | Bingo: you're in a _very_ data-poor environment compared to
             | something like predicting a consumer's preferences in
             | videos or identifying and segmenting a bicycle. The
             | external data is also very qualitative and hard to encode
             | into meaningful features.
        
             | ethbr0 wrote:
             | ML _should_ be good at drawing _basic_ conclusions. End
             | users are misunderstanding the boundary between _basic_ and
             | _advanced_.
             | 
             | Or, to put it another way, everyone agrees there's a
             | difference in value and quality of output between an
             | analyst with 1 year & 10 years of experience, right? So why
             | are we treating ML like it should be able to solve both
             | sorts of problems equally easily?
             | 
             | I have faith it will get there. But it's not there yet, in
             | a general purpose way.
        
               | mcguire wrote:
               | Because people like Hinton are outright _saying_ that it
               | already is there.
        
         | audit wrote:
         | I think you are onto something.
         | 
         | The feedback from radiologists I get, about companies like
         | path.ai and similar -- is that they are 'evolutionary' dead-
         | ends (meaning that they need to exist to show that something
         | should not be done that way).
         | 
         | They lack innovativness not just in technology but also in the
         | overall process.
         | 
         | That is, they are missing innovation around overall context in
         | which pathologists or radiologists work. Process includes steps
         | (and steps of steps), information sources, information feedback
         | loops, etc.
         | 
         | Certainly, there is also a view, that the overall imaging
         | process needs to evolve more (sort of like we need smart
         | highways for safe self-driving cars)
        
         | markus_zhang wrote:
         | So it seems that instead of an image recognition algo we need
         | to feed years of univ education into the AI.
        
         | riedel wrote:
         | I guess some animals are also good at seismic interpretation.
         | For radiology we first need to beat pigeons:
         | https://www.mentalfloss.com/article/71455/pigeons-good-radio...
         | (there was a HN post I think on this)
         | 
         | Actually, mammography screening is done, to my knowledge,
         | without any background information that could bias the
         | decision. But here humans are fast anyway, and even pigeons
         | don't promise a relevant cost cut. When complicated decisions
         | need to be made, e.g. on treatment, we will have other
         | problems with AI...
        
         | mark_l_watson wrote:
         | We used AI to analyze seismic data in the DARPA nuclear test
         | monitoring system in the 1980s. I don't think it was
         | considered to be anything but a fully automated system. That
         | said, we had a large budget, and great teams of geophysicists
         | and computer scientists, and 38 data collection stations around
         | the world. In my experience, throwing money and resources at
         | difficult problems usually gets those problems solved.
        
           | jofer wrote:
           | Very different sort of seismic data, FWIW.
           | 
           | You're referring to seismology and deciding whether something
           | is a blast or a standard double-couple earthquake. That's
           | fairly straightforward, as it's mostly a matter of getting
           | enough data from different angles. Lots of data processing
           | and ambiguity, but in the end, you're inverting for a
           | relatively simple mathematical model (the focal mechanism):
           | https://en.wikipedia.org/wiki/Focal_mechanism
           | 
           | I'm referring to reflection seismic, where you're
           | fundamentally interpreting an image after all of the
           | processing to make the image (i.e. basically making a
           | mathematical lens) has already been done.
        
         | shiftpgdn wrote:
         | Surely you've seen the improvement over the last 5-6 years
         | from machine learning in all the interpretation toolsets. At
         | the last place I worked, we had an internal seismic inversion
         | tool that blew all the commercial suites out of the water.
         | I'm currently contracting for an AI/ML service company that
         | has a synthetic well-log tool that can apparently beat the
         | pants off actual well logging tools for a fraction of the
         | cost (though I'm not a geologist or petrophysicist, so I
         | can't personally verify this).
         | 
         | I think the problem is more that the media and advertisers
         | like to paint the picture of a magical AI tool that will
         | instantly solve all your problems and do all the work, rather
         | than a fulcrum that makes doing the actual work significantly
         | easier.
        
           | woeirua wrote:
           | Automatic interpretation has been a thing for decades and the
           | promise of replacing a geoscientist completely is always just
           | over the horizon. Even with DL. The new tools are better yes,
           | but honestly I wouldn't invest in this space. Conventional
           | interpretation is dead in the US. All the geos got laid off.
           | 
           | I'm going to call bullshit. No artificially generated well
           | log is going to _ever_ be better than a physically measured
           | log.
        
           | jofer wrote:
           | Bluntly, no. There hasn't been an improvement. At all.
           | 
           | We've been using machine learning in geology for far longer
           | than it's been called that. Hell, we invented half the damn
           | methods (seriously). Inverse theory is nothing new. Gaussian
           | processes have been standard for 60 years. Markov models for
           | stratigraphic sequences are commonly applied but again, have
           | been for decades.
           | 
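           | (For the curious, the stratigraphic Markov idea is tiny to
           | sketch; the facies names and transition probabilities
           | below are invented purely for illustration.)

```python
import numpy as np

rng = np.random.default_rng(42)

# A first-order Markov model of bed-to-bed facies transitions, the
# kind applied to stratigraphic sequences for decades.
# Probabilities here are hypothetical; each row sums to 1.
facies = ["sand", "silt", "shale"]
P = np.array([[0.1, 0.6, 0.3],
              [0.3, 0.2, 0.5],
              [0.4, 0.4, 0.2]])

def sample_column(n_beds, start=0):
    """Draw a synthetic stratigraphic column of n_beds facies."""
    states = [start]
    for _ in range(n_beds - 1):
        states.append(rng.choice(len(facies), p=P[states[-1]]))
    return [facies[s] for s in states]
```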
            | What hasn't changed at all is interpretation. Seismic
            | inversion is _very_ different from interpretation. Sure, we
           | can run larger inverse problems, so seismic inversion has
           | definitely improved, but that has no relationship at all to
           | interpretation.
           | 
           | Put another way, to do seismic inversion you have to already
           | have both the interpretation _and_ ground truth (i.e. the
           | well and a model of the subsurface). At that point, you're in
           | a data rich environment. It's a very different ball game than
           | trying to actually develop the initial model of the
           | subsurface with limited seismic data (usually 2d) and lots of
           | "messier" regional datasets (e.g. gravity and magnetics).
        
           | Workaccount2 wrote:
            | I am wondering (knowing nothing about this) if there is an
            | issue with the approach to acquiring data that is putting
            | AI in a difficult position. It's akin to trying to train
            | an AI to walk in the footsteps of a geophysicist, rather
            | than making new footsteps for the AI. I guess I would
            | extend this to radiology too, since it seems to be the
            | same issue.
           | 
           | Let me give an example:
           | 
            | People often mention that truck drivers are safe from
            | automation because lots of last-mile work is awkward and
            | non-standard, requiring humans to navigate the bizarre,
            | atypical situations trucks encounter. Training an AI to
            | handle all this is far harder than getting it to drive on
            | a highway.
           | 
           | What is often left out though is the idea that the
            | infrastructure can/will change to accommodate the
            | shortcomings of AI. This could look like warehouses having a
           | "conductor" on staff who commandeers trucks for the tricky
           | last bit of getting on the dock. Or perhaps preset radar and
           | laser path guidance for the tight spots. I'd imagine most
           | large volume shippers would build entire new warehouses just
           | to accommodate automated trucks.
           | 
           | A long time ago people noted that horses offered much more
           | versatility than cars since roads were rocky and muddy. How
            | do you make a car that can traverse the terrain a horse does?
           | You don't, you pave all the roads.
        
         | smaddox wrote:
         | Interesting perspective. What's your take on tools that use
         | AI/ML to accelerate applying an interpretation over a full
         | volume? For example: https://youtu.be/mLgKtmLY3cs
        
           | jofer wrote:
           | Bluntly, they're useless except for a few niche cases.
           | 
            | Anything they're capable of picking up _isn't_ what you're
            | actually concerned about as an interpreter. Sure, they're
            | good at picking reflectors in the shallow portion of the
            | volume. No one cares about picking reflectors. That's not
            | what you're doing as an interpreter.
           | 
           | A good example is the faults in that video. Sure, it did a
           | great job at picking the tiny-but-well-imaged and mostly
            | irrelevant faults. Those are the sort of things you'd almost
            | always ignore because the details don't matter for most
            | applications.
           | 
           | The faults you care about are the ones that those methods
           | more-or-less always fail to recognize. The significant faults
           | are almost never imaged directly. Instead, they're inferred
           | from deformed stratigraphy. It's definitely possible to
           | automatically predict them using basic principles of
           | structural geology, but it's exactly the type of thing that
           | these sort of image-focused "automated interpretation"
           | methods completely miss.
           | 
           | Simply put: These methods miss the point. They produce
           | something that looks good, but isn't relevant to the problems
           | that you're trying to solve. No one cares about the well-
           | imaged portion that these methods do a good job with. They
           | automate the part that took 0 time to begin with.
        
             | deeviant wrote:
             | You seem extremely biased against AI in general, to the
             | point where I very much doubt anybody would benefit from
             | hearing your opinions on it.
        
               | jofer wrote:
               | I work in machine learning these days. I'm not biased
               | against it -- it's literally my profession.
               | 
               | I'm biased against a specific category of applications
               | that are being heavily pushed by people who don't
               | actually understand the problem they're purporting to
               | solve.
               | 
               | Put another way, the automated tools produce verifiably
               | non-physical results nearly 100% of the time. The video
               | there is a great example -- none of those faults could
                | actually exist. They're close, but all require
                | violations of conservation of mass when compared to the
               | horizons also picked by the model. Until "automated
               | interpretation" tools start incorporating basic
               | validation and physical constraints, they're just drawing
               | lines. An interpretation is a _4D_ model. You _have_ to
               | show how it developed through time -- it's part of the
               | definition of "interpretation" and what distinguishes it
               | from picking reflectors.
               | 
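                | (As a toy example of the kind of basic validation
                | meant here, with invented depths: a bed's thickness
                | between two picked horizons should be preserved across
                | a fault, so picks that silently thicken or thin the
                | bed are kinematically suspect.)

```python
# Hypothetical horizon picks (depths in metres) on either side of a fault.
hanging_wall = {"top": 1200.0, "base": 1450.0}
footwall = {"top": 1050.0, "base": 1300.0}

thickness_hw = hanging_wall["base"] - hanging_wall["top"]
thickness_fw = footwall["base"] - footwall["top"]

# Simple mass-preservation proxy: the bed shouldn't change thickness
# just because it crosses the fault (the tolerance here is arbitrary).
consistent = abs(thickness_hw - thickness_fw) < 10.0
```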
               | I have strong opinions because I've spent decades working
               | in this field on both sides. I've been an exploration
               | geologist _and_ I've developed automated interpretation
               | tools. I've also worked outside of the oil industry in
               | the broader tech industry.
               | 
               | I happen to think that structural geology is rather
               | relevant to this problem. The law of conservation of mass
               | still applies. You don't get to ignore it. All of these
                | tools completely ignore it and produce results that are
               | physically impossible.
        
               | jofer wrote:
               | Incidentally, I don't even mean to pick on that video
               | specifically. I actually quite deeply respect the folks
                | at Enthought. It's just that the equivalent
                | functionality has been around and pushed for about 15
                | years now (albeit enabled via different algorithms over
               | time). The deeper problem is that it usually solves the
               | wrong problem.
        
         | magicalhippo wrote:
         | And maybe look for things that are not expected...
         | 
         | My dad went to take a shoulder x-ray in preparation for a small
         | bit of surgery. In the corner of the image the radiologist
         | noticed something that didn't look right. He took more
         | pictures, this time of the lungs, and quickly escalated the
         | case.
         | 
         | My dad had fought cancer, and it turned out the cancer had
         | spread to his lungs. He had gone to regular checks every six
         | months for several years at that point, but the original cancer
         | was in a different part of his body.
         | 
         | For a year prior he'd been short of breath, and they'd given
         | him asthma medication... until he went to get that shoulder
         | x-ray.
        
           | chefkoch wrote:
            | As a cancer patient, that feels like negligence.
        
             | magicalhippo wrote:
             | I agree. Essentially the same scenario has happened twice
             | in my close circle since my dad.
             | 
             | Sadly it seems treatment here is very much focused on the
             | organ, not the patient.
             | 
              | Hence why I tell people I come across who are diagnosed
              | for the first time: learn where your cancer might spread
              | to, and be very vigilant about changes/pain in those
              | areas.
        
       ___________________________________________________________________
       (page generated 2021-06-07 23:01 UTC)