[HN Gopher] A New Artificial Intelligence Tool for Cancer
       ___________________________________________________________________
        
       A New Artificial Intelligence Tool for Cancer
        
       Author : mgh2
       Score  : 89 points
       Date   : 2024-10-20 05:12 UTC (17 hours ago)
        
 (HTM) web link (hms.harvard.edu)
 (TXT) w3m dump (hms.harvard.edu)
        
       | aydyn wrote:
        | Sounds promising, but it still needs to be independently
        | validated. It's always wise to take AI medical research with a
        | grain of salt.
        
         | TechDebtDevin wrote:
          | It's Harvard; I would literally be more excited by the same
          | announcement from the University of Wyoming.
         | 
         | >Sounds promising
         | 
         | More fabrications from one of the biggest grifting institutions
         | on Earth, Harvard.
         | 
          | So sick of their name even having merit. They literally license
          | their name to sell fake textbooks at airports. Why are they
          | even allowed on here?
        
           | Alifatisk wrote:
            | Is Harvard that bad? I assumed they were prestigious.
        
             | LoganDark wrote:
             | That's what they want you to think. I mean this entirely
             | non-sarcastically. I don't know exactly how bad they are or
              | aren't, but they work hard to look prestigious.
        
             | InkCanon wrote:
             | The root of prestige is the Latin word praestigium, which
             | means an illusion or delusion. One of the most poetic
              | pieces of etymology in today's society.
        
             | next_xibalba wrote:
             | There has been a lot of news about academic fraud at
             | Harvard lately (although some cases date back decades).
             | It's pretty bad when the leader of the institution gets
             | busted for it. Harvard's reputation is in free fall.
             | Anytime I see Harvard attached to some big announcement, I
             | just assume the result has been p-hacked, exaggerated, or
             | otherwise manipulated.
        
           | neom wrote:
           | "They literally license their name to sell fake textbooks at
           | airports."
           | 
           | ...huh? Can't find anything about this on google.
        
       | camillomiller wrote:
       | Undoubtedly interesting but still hard to take at face value
       | given Harvard's recent issues and retractions in cancer research.
        
         | mgh2 wrote:
         | https://www.nbcnews.com/science/science-news/cancer-institut...
        
       | pama wrote:
       | The paper is here:
       | https://www.nature.com/articles/s41586-024-07894-z
        
         | fernly wrote:
         | The _abstract_ is there; the full paper costs $29.99 or you
          | need a university access ID.
        
       | chefandy wrote:
        | Good thing Silicon Valley is pumping billions of dollars and
       | burning through unimaginable natural resources in the midst of a
       | climate crisis to make systems that compete with commercial
       | artists by selling cheap knock-offs of their artwork and
       | relieving us of the burden of doing things like writing school
       | papers or heartfelt personal correspondence or making animated
       | avatars for instant messaging. There's money to be made, so why
       | use this amazing new technology to solve humanity's actual
       | problems instead of just shoving a bunch of mediocre who-gives-a-
       | shit features into people's phones?
        
         | tomr75 wrote:
         | cool whine
        
           | chefandy wrote:
           | Cool response
        
         | suby wrote:
         | I'm confused by your comment. This is an article about AI
         | potentially helping with treatment of cancer.
         | 
         | I strongly disagree in any case that we shouldn't be investing
          | in AI. I think there's likely a rising-tide-lifts-all-boats
          | effect that occurs with improving AI in general -- it all
         | feeds into each other. People who work on image generators are
         | building expertise in AI and can potentially discover things
         | which advance the field, or inspire others to work in the
         | field. Humanity getting better at making AI generate images, or
         | compete at video games, or predict protein folding, it all
         | probably contributes to the rate of improvement in AI.
         | 
         | And it's not unreasonable to think that AI will one day solve
         | problems like cancer or climate change, so we should very much
         | care about the rate of AI improvement. As for the power
         | concerns, ironically, thanks to the increased demand we are
         | seeing companies like Microsoft and Oracle make large
          | investments in nuclear power now. It could be the case that
          | this kick-starts a boom in the nuclear industry which
          | eventually brings down costs / eases regulations enough to put
          | us in a better place in the long run.
         | 
          | It's extremely complex; I don't think we can say at this point
         | that investing in AI in the midst of a climate crisis is a
         | mistake.
        
           | chefandy wrote:
           | You misread my comment. I think solving problems like finding
           | genetic patterns in cancer is exactly what we should be
           | investing in. That is not even close to the largest resource
            | sink for AI model training right now. If the resources were
            | going into generally making AI better, great. They're not.
            | They're going into training models for mediocre consumer
            | "gee-whiz, ain't that neat" products.
        
             | evantbyrne wrote:
             | I think people may not be aware of how much medical
             | research is happening right now based around ML. It is a
             | key component of our liquid cancer biopsy. If anything,
             | there might be a bit too much hype-driven development at
             | the moment. I would be cautious of any ML-based diagnostic
             | that isn't leveraging the technology to better understand
             | the biological aspects of cancer itself.
        
               | chefandy wrote:
               | Great. I'd rather add a drop in that bucket than have
               | another model trained to make my phone do something I
               | wish it didn't do.
        
             | borski wrote:
             | That's not true. That's just what you see.
             | 
             | There is _tons_ of research going on using AI for a lot
             | more than memes.
        
               | chefandy wrote:
               | I'm not implying that there isn't money going into AI in
               | medical research, and a bunch of other worthy pursuits,
               | also. However, there's also an extraordinary amount of
                | resources that _could_ actually benefit humanity going
                | into dumb shit that nobody wants. Not a trivial sum for a
                | test, not a large sum that will contribute to
                | entertainment -- it's the equivalent of an airport gift
                | shop trinket. The person receiving the useless bauble
                | would be better off without it, but the giver paid $20 for
               | it so...
        
         | alehlopeh wrote:
         | Any idea when it won't be the midst of a climate crisis? Oh ok.
        
           | chefandy wrote:
           | Surely the nihilist approach will be an effective solution.
        
         | throwaway918299 wrote:
         | I'm more skeptical than most on the current wave of AI tech
         | innovation.
         | 
         | However, believe it or not, humanity can collectively work on
         | different things at the same time. And the people putting emoji
         | generators in phones are probably not the people I would want
          | doing cancer research. And many, many things that we rely on
          | today were not directly created by research in those topics but
          | were born from innovation in other, unrelated areas.
        
           | chefandy wrote:
           | You don't feel that the astonishing amount of resources
           | poured into current consumer level AI products is different?
        
             | borski wrote:
             | No. We poured similarly large amounts of resources into
             | hundreds of companies in the dotcom boom, crypto, and so
             | on.
             | 
             | This is a phase, just like many others, and will pass.
             | 
             | AI and LLMs will stick around and be important. The hype?
             | That will die in favor of something else.
        
               | clcaev wrote:
                | In each round of expansion, more externalities happen; we
                | just fail to tax the externalities to reflect the real-
                | world consequences.
        
               | borski wrote:
               | I'm not sure I understand what you mean. Could you
               | clarify?
        
               | chefandy wrote:
               | "It's what we've always done" is a classic non-argument
               | against doing something. Is there an amount we could
               | spend on something that essentially winds up being
               | useless that you think would be bad? Do you not think
               | there's a trade-off at some level about the sort of
               | things people invest in?
        
               | borski wrote:
               | That's not the argument I made. You were responding to an
               | argument that "humanity can collectively work on
               | different things at the same time," and "many things that
               | we rely on today were not directly created by research in
               | those topics and were born from innovation in other
               | unrelated areas."
               | 
               | Your response was "You don't feel that the astonishing
               | amount of resources poured into current consumer level AI
               | products is different?"
               | 
               | To which I responded that no, I don't feel that the
               | amount of resources poured into current consumer level AI
               | products is different; it is the same as it has always
               | been.
               | 
               | That is not the same as making an argument that that is
               | how it _should_ be.
        
       | bcks wrote:
       | Unrelated, "Harvard cancer institute moves to retract six
       | studies, correct 31 others amid data manipulation claims"
       | https://news.ycombinator.com/item?id=39097031
        
         | mgh2 wrote:
          | Don't forget this:
          | https://www.npr.org/sections/thetwo-way/2016/09/13/493739074...
        
       | sjmcmahon wrote:
       | It's probably worth noting that there's a lot of discussion about
        | challenges reproducing the workflow of this paper, and that, as
        | described, it seems to suffer from data leakage, so much so that
       | you can replace sections of their algorithm with random
       | initialisation and get at least as good results. See, e.g.:
       | 
       | https://pubpeer.com/publications/C8CFF9DB8F11A586CBF9BD53402...
       | 
        | Having been on both sides of the reviewing process, I can say it
        | is incredibly difficult to get good peer review of data-intensive
        | studies in medicine, as few people have the time to really dig
        | into the details of these models.
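        | 
        | (Not the paper's code -- just a rough, hypothetical sketch of
        | that ablation, using random toy data and a scikit-learn
        | classifier as a stand-in for the downstream model:)
        | 
        |     import numpy as np
        |     from sklearn.linear_model import LogisticRegression
        |     from sklearn.model_selection import cross_val_score
        | 
        |     rng = np.random.default_rng(0)
        |     # Hypothetical stand-ins for per-case features and labels.
        |     X = rng.normal(size=(500, 1024))
        |     y = rng.integers(0, 2, size=500)
        | 
        |     def random_features(x, dim=256, seed=0):
        |         # Untrained ("randomly initialised") linear extractor.
        |         w_rng = np.random.default_rng(seed)
        |         w = w_rng.normal(size=(x.shape[1], dim))
        |         return x @ w
        | 
        |     auc_random = cross_val_score(
        |         LogisticRegression(max_iter=1000),
        |         random_features(X), y, cv=5, scoring="roc_auc").mean()
        |     print(f"AUC with random features: {auc_random:.3f}")
        | 
        | If a pretrained extractor's AUC on the real data is no better
        | than this random baseline, the learned features are adding
        | little, and suspicion falls on leakage elsewhere in the pipeline.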
        
         | ericjmorey wrote:
         | Maybe there should be a system in place to fund that sort of
         | thing.
        
         | TeslaCoils wrote:
         | Nature article and reproducibility? Reminds me of
         | https://spectrum.ieee.org/chip-design-controversy
        
       | BaculumMeumEst wrote:
        | Expecting the code to be plagiarized and the results to fail to
        | replicate.
        
       | theptip wrote:
       | > CHIEF achieved nearly 94 percent accuracy in cancer detection
       | and significantly outperformed current AI approaches across 15
       | datasets containing 11 cancer types
       | 
       | I would have thought that performance vs. human level is the most
       | interesting benchmark? Maybe that is covered in the Nature
       | article.
        
       | HexDecOctBin wrote:
       | > Title: A New Artificial Intelligence Tool for Cancer
       | 
       | Curing it or causing it? Because that's the typical AI marketing
       | appeal, no? "It will kill everyone, except our customers."
        
       | FerretFred wrote:
        | UK reader here... As someone who's lived with a dear friend's
        | cancer and the effects of cancer for the past few years, I'd say
        | that early/earlier diagnosis by whatever method is to be welcomed.
        | However, if you then can't treat the disease, surely the early
        | diagnosis will be for nothing? Sadly, with incidences of cancers
        | of all sorts apparently increasing exponentially, worldwide, not
        | having the means to treat them is heartbreaking. Actually, as
        | much money and effort needs to be invested in finding the causes
        | of various cancers as in curing them (or trying to).
        
       ___________________________________________________________________
       (page generated 2024-10-20 23:01 UTC)