[HN Gopher] Prostate cancer includes two different evotypes
       ___________________________________________________________________
        
       Prostate cancer includes two different evotypes
        
       Author : panabee
       Score  : 115 points
       Date   : 2024-03-13 20:17 UTC (2 hours ago)
        
 (HTM) web link (www.ox.ac.uk)
 (TXT) w3m dump (www.ox.ac.uk)
        
       | skywhopper wrote:
        | Kudos for this identification, but at this point, calling the use
        | of neural networks in statistical work "AI" is misleading at
        | best. I know it won't stop, because claiming "AI" gets attention,
        | but it's really depressing. Ultimately it's no different from all
        | the talk about "mechanical brains" in the 50s, and just as
        | tiresome.
        
         | badRNG wrote:
         | I think the average person can safely call "the use of neural
         | networks in statistical work" AI.
        
           | johnmaguire wrote:
            | It's technically correct, but AI has become such an
            | overloaded term that it's impossible to know that it refers
            | to "the use of neural networks" without explicitly saying so.
            | So, you know, maybe just say that?
        
             | adw wrote:
             | This debate is a classic. AI has always been an overloaded
             | term and more of a marketing signifier than anything else.
             | 
             | The rule of thumb is, historically, "something is AI while
             | it doesn't work". Originally, techniques like A* search
             | were regarded as AI; they definitely wouldn't be now.
             | Information retrieval, similarly. "Machine learning", as a
             | brand, was an effort to get statistical techniques (like
             | neural networks, though at the time it was more "linear
             | regression and random forests") out from under the AI
             | stigma; AI was "the thing that doesn't work".
             | 
             | But we're culturally optimistic about AI's prospects again,
             | so all the machine learning work is merrily being rebranded
             | as AI. The wheel will turn again, eventually.
        
               | eichin wrote:
               | ... and once it works, it "earns" a name of its own, at
               | least among people actually doing it. Even in 2024 there
               | are Machine Learning Conferences of note.
        
         | whelp_24 wrote:
          | Should the term AI just not be used until we have a Skynet-
          | level AI?
        
           | Filligree wrote:
           | No, it should only be used about things that don't exist.
        
         | outworlder wrote:
         | > calling the use of neural networks in statistical work "AI"
         | is misleading at best.
         | 
         | Neural Networks are not considered AI anymore?
         | 
          | That just reinforces my thesis that "AI" is an ever-sliding
          | window that means "something we don't yet have". Voice
          | recognition used to be firmly in the "AI" camp and received
          | grants even from the military. Now we have it on wristwatches
          | (admittedly with some computation offloaded) and nobody cares.
          | Expert systems were once very much "AI".
         | 
         | LLMs will suffer the same treatment pretty soon. Just wait.
        
           | depereo wrote:
           | Another entry in the 'marketing and technical terms don't
           | mean the same thing despite using the same words' saga.
        
           | dartos wrote:
            | The current usage of "AI" is a rather new marketing term.
           | 
           | Where would you draw the line? Is prediction via linear
           | regression AI?
           | 
            | Also, language is fuzzy and fluid; get used to it.
        
             | Jensson wrote:
             | It was called machine learning 10 years ago since AI had
             | bad connotations, but today people have forgotten and call
             | it all AI again.
        
           | beeboobaa wrote:
           | AI has always meant Artificial Intelligence. Intelligent and
           | capable of learning, like a person.
           | 
           | LLMs are not AI.
        
             | outworlder wrote:
             | > LLMs are not AI.
             | 
             | Neither are neural networks, by that definition. Or
             | 'machine learning' in general. They all have been called
             | "AI" at different points in time. Even expert systems -
              | which are glorified IF statements - were supposed to
             | replace doctors.
        
               | Jensson wrote:
                | People thought those techniques would ultimately become
                | something intelligent, hence AI, but they fizzled out.
                | That isn't the doubters moving the goalposts; that is the
                | optimists moving them, always thinking that what we have
                | now is the golden ticket to truly intelligent systems.
        
             | readthenotes1 wrote:
             | Some people are incapable of learning. Therefore, LLMs are
             | AI?
             | 
              | As far as I recall, the Turing test was developed long ago
              | to give a practical answer to what was and was not
              | artificial intelligence, because the debate over the
              | definition is much older than we are.
        
               | kristov wrote:
               | I think the Turing test is subjective, because the result
                | depends on who is giving the test and for how long.
        
           | kromem wrote:
            | Pretty soon? I already regularly see people proudly stating
            | that LLMs "aren't really AI" and are just "a Markov chain"
            | (yeah sure, let's ignore the self-attention mechanism of
            | transformers, which violates the Markov property).
           | 
           | For the sake of my sanity I've just started tuning out what
           | anyone says about AI outside of specialist spaces and forums.
            | I welcome educated disagreement with my positions, but I
           | really can't take the antivaxx equivalent in machine learning
           | anymore.
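            | 
            | (For the curious, a toy numpy sketch of that point - a made-up
            | single-head attention with tied weights, not any real model -
            | showing that the last token's output still depends on the
            | first token, which a fixed low-order Markov chain over tokens
            | would not allow:)
            | 
            |   import numpy as np
            | 
            |   def toy_attention(x):
            |       # single head, tied q/k/v weights purely for brevity
            |       d = x.shape[-1]
            |       s = x @ x.T / np.sqrt(d)
            |       mask = np.tril(np.ones_like(s, dtype=bool))  # causal
            |       s = np.where(mask, s, -np.inf)
            |       w = np.exp(s - s.max(axis=-1, keepdims=True))
            |       w /= w.sum(axis=-1, keepdims=True)
            |       return w @ x
            | 
            |   rng = np.random.default_rng(0)
            |   seq = rng.normal(size=(5, 8))  # 5 tokens, 8-dim each
            |   base = toy_attention(seq)[-1]  # last-token output
            |   seq[0] += 1.0                  # perturb only the FIRST token
            |   print(np.allclose(base, toy_attention(seq)[-1]))  # False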
        
             | mort96 wrote:
              | What if we hold off on calling it AI until it shows signs
              | of intelligence?
        
               | cwillu wrote:
               | And what signs of intelligence are we looking for this
               | year?
        
               | jhbadger wrote:
               | Chess was a major topic of AI research for decades
               | because playing a good game of chess was seen as a sign
               | of intelligence. Until computers started playing better
               | than people and we decided it didn't count for some
               | reason. It reminds me of the (real) quote by I.I. Rabi
               | that got used in Nolan's movie when Rabi was frustrated
               | with how the committee was minimizing the accomplishments
               | of Oppenheimer: "We have an A-bomb! What more do you
               | want, mermaids?"
        
               | Jensson wrote:
                | They chased chess because they thought that if they
                | could solve chess then AGI would be close. They were
                | wrong, so they moved the goalpost to something more
                | complicated, thinking that the new thing would lead to
                | AGI. Repeat forever.
                | 
                | > we decided it didn't count for some reason
                | 
                | The optimists did move their goals once they realized
                | that solving chess didn't actually lead anywhere, and
                | then they blamed the pessimists for moving the goalposts,
                | even though the pessimists mostly stayed still throughout
                | these AI hype waves. It is funny that the optimists are
                | constantly wrong and have to move their goal like that,
                | yes, but people tend to point the finger at the wrong
                | people here.
                | 
                | The AI winter came from AI optimists constantly moving
                | the goalposts like that, constantly saying "we are almost
                | there, the goal is just that next thing and then we are
                | basically done!". AI pessimists don't do that; all of
                | that came from the optimists who tried to get more
                | funding.
                | 
                | And we see that exact same thing playing out today, with
                | a lot of AI optimists clamoring for massive amounts of
                | money because they are close to AGI, just like what we
                | have seen in the past. Maybe they are right this time,
                | but this time, just like back then, it is those optimists
                | who are setting and moving the goalposts.
        
           | jijijijij wrote:
            | Maybe it's a good indicator of misuse that the paper doesn't
            | mention 'AI' or 'intelligence' once.
           | 
            | > my thesis that "AI" is an ever-sliding window that means
            | "something we don't yet have"
           | 
           | Or maybe it's the sliding window of "well, turns out this
           | ain't it, there is more to intelligence than we wanted it to
           | be".
           | 
           | If everything is intelligent, nothing is. If you define
           | pattern recognition as intelligence, you'd be challenged to
           | find unintelligent lifeforms, for example. You haven't
            | learned to recognize faces; you are literally born with this
           | ability. And well, life at least has agency. Is evolution
           | itself intelligent? What about water slowly wearing down rock
           | into canyons?
        
         | lja wrote:
         | The taxonomy of AI is the following:
         | 
          | AI
          | -> machine learning
          |    -> supervised
          |       -> neural networks
          |    -> unsupervised
          |    -> reinforcement
          | -> massive if/then statements
          | -> ...
         | 
          | That is to say, NNs fall under AI, but then so does
          | everything else.
        
           | p1esk wrote:
           | Where did you pull this "taxonomy" from?
        
             | peteradio wrote:
             | Jo mommas bhole
        
         | nonameiguess wrote:
         | It implies agency on the part of the software that doesn't
         | exist. It should really just say "researchers ran some math
         | calculations and found X." The fact that the math runs on a
         | computer and involves the use of functions that were found by
         | heuristic-guided search using a function estimator, instead of
         | human scientists finding them from first principles, is surely
          | relevant in some way, but this has been the case for at least a
          | century, since Einstein's field equations required the use of
          | numerical approximations of PDE solutions to compute gravity at
          | a point, and probably longer.
         | 
         | I don't want to say there is no qualitative difference between
         | what the PDE solvers of 1910 could do and what a GPT can do,
          | but until we don't need scientists running the software at all,
          | and it can decide to do this all on its own and know what to do
          | and how to interpret it, it feels misleading to use
         | terminology like "AI" that in the public consciousness has
         | always meant full autonomy. It's going to make people think
         | someone just told a computer "hey, go do science" and it
         | figured this out.
        
         | leesec wrote:
         | Why would it be depressing? Who cares
        
           | johnnyanmac wrote:
            | Taking credit from the people driving the tools and giving
            | it to the tools themselves as PR? Yes, depressing. No one
            | says "Unreal Engine Presents Final Fantasy VII". It's an
            | important tool, but not the creative mind behind the work.
        
           | golemotron wrote:
           | Because regulating "AI" has the potential to encompass all
           | software development. Many programs act as decision support.
           | From the outside there's little difference between an
           | application that uses conventional programming, ML, RNNs, or
           | GPT.
        
         | advael wrote:
         | I mean. "AI" has meant "whatever shiny new computer thing is
         | hot right now" in both common vernacular and every academic
         | field besides AI research basically since the term was
         | coined...
        
         | meindnoch wrote:
         | In PR materials, AI = computers were involved
        
           | hawski wrote:
            | That's the problem with the term now. Whenever a PR
            | statement mentions AI, it does not have much meaning attached
            | to it. Sometimes even simple algorithms are called AI, and
            | even more so if there is a bit of statistics involved. I
            | liked the term machine learning, because now, with
            | AI-everything, I don't really know what anything is about.
        
         | Aloisius wrote:
         | John McCarthy coined the term AI in 1955 for a research project
         | that included NN. He then founded the MIT AI Project (later AI
         | Lab) with one of the researchers who joined the project, Marvin
          | Minsky, who had also created the first NN in 1951.
         | 
         | If NNs aren't AI, what is?
         | 
         | http://jmc.stanford.edu/articles/dartmouth/dartmouth.pdf
        
         | krisoft wrote:
         | > but it's really depressing
         | 
          | Why do you find it depressing?
        
       | whalesalad wrote:
        | I thought this was pretty obvious? "Cancer" is not a thing; it's a
       | billion different things that happen differently in every single
       | patient.
        
         | renewiltord wrote:
         | Thought I'd include the first line of the article:
         | 
         | > _A Cancer Research UK-funded study, published in Cell
         | Genomics, has revealed that prostate cancer, which affects one
         | in eight men in their lifetime, includes two different subtypes
         | termed evotypes._
         | 
         | In some cosmic sense, the number "one billion" and the number
         | "two" are the same, I suppose.
        
           | whalesalad wrote:
           | and in a year we'll have a new report: "foo bar genomics has
           | revealed that prostate cancer includes 3 new evotypes"
           | 
           | it's all just mutations and there is no upper bound on the
           | number of mutations that can exist
        
           | luqtas wrote:
           | it's completely impressive how hackers here are multi-field
           | specialists
        
             | dudul wrote:
             | I'm always amazed to see all these "well actually" comments
             | on every single post, regardless of the topic :)
        
           | eig wrote:
            | While the parent commenter did exaggerate, their underlying
            | idea is correct. You could subclassify cancers all the way
            | down to individual gene mutations, and even then there is
            | heterogeneity within the cancer itself.
           | 
           | Medicine tries to draw boundaries where different therapies
           | help differently or where there is different pathophysiology
           | going on. The article was able to draw one such additional
            | boundary. Whether it is relevant - in terms of phenotype or
            | druggability - is yet to be confirmed.
        
           | purkka wrote:
            | I'd guess what they mean is that there are two clusters with
            | some clear distinguishing properties, and perhaps some
            | resulting implications for treatment.
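            | 
            | Presumably something like the toy sketch below (invented
            | feature names and synthetic data, not the paper's actual
            | method): cluster per-tumour features into two groups, then
            | ask which features separate them.
            | 
            |   import numpy as np
            |   from sklearn.cluster import KMeans
            | 
            |   rng = np.random.default_rng(42)
            |   # invented per-tumour features, two synthetic populations
            |   names = ["cn_loss", "snv_burden", "clonality", "tmb"]
            |   a = rng.normal([0.6, 0.2, 0.3, 2.0], 0.1, size=(50, 4))
            |   b = rng.normal([0.2, 0.6, 0.6, 5.0], 0.1, size=(50, 4))
            |   X = np.vstack([a, b])
            | 
            |   km = KMeans(n_clusters=2, n_init=10, random_state=0)
            |   labels = km.fit_predict(X)
            |   for k in (0, 1):
            |       m = X[labels == k].mean(axis=0).round(2)
            |       print(f"cluster {k}:", dict(zip(names, m)))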
        
         | mr_toad wrote:
         | Yes but this is true of nearly every illness, and it's true of
         | natural phenomena in general, and it's true of most problems in
         | statistical classification.
         | 
         | But we still like to classify things, because it often has
         | predictive power and informs treatment.
        
         | f6v wrote:
         | Ok, it's "obvious". Now find good drug targets. Yeah, you can't
         | before you quantify all the "obvious" things. The actual
         | patient data and derived insights are precious.
        
         | bnjemian wrote:
         | Yes, that's somewhat true, but in practice we have subtypes. As
          | a counterpoint to that assertion, if it were meaningfully a
         | billion different things, then we would need a billion highly
         | precise treatments. Yet, we've managed to do decently with
         | relatively few.
        
       | panabee wrote:
       | paper link: https://www.cell.com/cell-
       | genomics/fulltext/S2666-979X(24)00...
        
       | up2isomorphism wrote:
        | No, at least in this case, humans reveal something, not "AI".
        | Unfortunately, people need to use 'AI' to get some attention (no
        | pun intended).
        
       | aftbit wrote:
       | Not to be rude or anything, but .... no duh? This is why looking
       | for a "cure for cancer" is a bit nonsensical. There are many
       | different ways for cell division to go wrong. Prostate cancer is
       | just a cancer that affects the prostate. There's no reason to
       | assume there would be one pathology for that.
        
         | AlecSchueler wrote:
         | It's one thing to intuit something but it's another to actually
         | show it.
        
         | fridder wrote:
         | There is still a lot we do not know about why some prostate
         | cancers grow slowly and are effectively benign and some are
          | viciously malignant. This split leads some health practitioners
          | to either relax or ignore screening guidelines. Being able to
          | better narrow things down would let us avoid overly aggressive
          | treatment while at the same time being appropriately aggressive
          | for those who have the more malignant variants. (BTW, this is not
         | just theoretical for me)
        
         | jeremyjh wrote:
         | You are responding to the headline and not the content of the
         | study. Yes, science headlines are stupid clickbait.
        
         | f6v wrote:
         | It's not that you're wrong, but you miss the depth of the
          | issue. Yes, we know that people are different and there are many
         | redundant pathways, and every poor bastard probably has his own
         | mutation, etc.
         | 
         | But we need to actually identify the mechanisms, describe them
         | in a lot of detail, and look for very specific biomarkers.
         | That's what "personalized medicine" is going to be.
         | 
         | It's extremely difficult to put a study like this together. So
         | many parts can go wrong. So it's an achievement not just for
          | scientists who published in a good journal, but for the whole
          | of humanity.
        
         | lr4444lr wrote:
         | IANAD, but the early and ongoing successes of immunotherapy
          | across a wide variety of cancers suggest that your
         | characterization of this effort as "nonsensical" is cynically
         | oversimplified, if not wrong.
        
       | dataangel wrote:
        | Isn't it the case that there are a near-infinite number of forms
        | of cancer for any organ? Any combination of mutations that causes
        | unrestricted growth, right? So how can it just be 2 forms?
        
         | f6v wrote:
          | There are redundant pathways, but it's not a million. I guess
         | different mutations can converge to a common phenotype. And in
         | any case, what you present in a study is kind of like a model
         | that you derive from the data. You can probably go deeper, but
         | you'd need more patients/more resources.
        
         | bnjemian wrote:
         | In principle, yes, in practice, no; real-world mutations are
         | (more often than not) non-random and their frequencies can be
         | affected by a variety of factors. For example, the location of
         | the mutated gene or region within the bundled chromatin
         | structure inside the cell nucleus (this structure is highly
          | conserved into what are known as topologically associating
         | domains, or TADs), or the interaction between a region of DNA
         | and cellular machinery that increases the likelihood of some
         | mutation. There are tons of examples.
         | 
         | In practice, we've now molecularly characterized most well-
         | studied cancers and know that they tend to have the same
         | mutations. For example, certain DNMT3A mutations are very
          | common in AML, and the BCR-ABL fusion protein is common in CML
          | (it results from a translocation between chromosomes 9 and 22
          | that produces the mutant 'Philadelphia chromosome'). There is
          | even a wide range of cancers that share similar patterns of
         | mutations and fall under the umbrella of 'RAS-opathies', which
         | all exhibit some kind of mutation in a subset of genes on a
         | specific pathway related to cell differentiation and growth.
         | Examples include certain subtypes of colon cancer, lung cancer,
         | melanoma, among many others.
         | 
         | More generally, when a cancer is subtyped, that subtyping is
         | always done with respect to some quantifiable biological trait
         | or clinical endpoint and - as you've hinted - that subtyping is
         | commonly a statistical assessment. Each cancer is unique and,
         | even within an individual cancer, we have clonal subpopulations
         | - groups of cells with differing mutations, characteristics,
         | and behaviors. That's one of the reasons treating cancer can be
         | so challenging; even if we eliminate one clonal population
         | entirely, another resistant group may take its place. The
         | implication is that cancers that emerge with post-treatment
          | relapse often 1. are more resistant, or completely resistant,
          | to the original therapy, and 2. exhibit different behaviors and
          | resistance, often to the detriment of the patient's outcome.
        
         | molticrystal wrote:
          | I think there is a difference between the infinite in the
          | abstract, such as the example of monkeys typing Shakespeare,
          | and the most vulnerable and most likely areas to cause cancer,
          | which are what need to be focused on. In this specific,
          | particular, and (for most people) very likely case, it seems to
          | settle into 2 subtypes.
        
         | o11c wrote:
         | If you weld 2 sticks of metal together end to end, then bash
         | them on the ground, they're likely to break at the weld. Insert
         | additional physical examples here.
         | 
         | Likewise DNA is likely to break (and fail to be corrected) at
         | particular points. Some of those points cause cancer, but
         | there's only a finite set of those points since DNA is largely
         | identical across all humans. Additionally, even if the failure
         | is slightly to one side of the expected break, it usually shows
         | the same symptom.
        
         | eig wrote:
          | You could theoretically subclassify cancers all the way down to
          | individual gene mutations, and even then there is heterogeneity
          | within a single cancer. That is the idea behind "personalized
          | medicine", though it's being distorted by hype.
         | 
         | However, medicine is practical, and tries to draw boundaries
          | where different therapies help differently or where there is
         | different pathophysiology going on.
         | 
          | The article was able to draw one new boundary. Whether it is
          | relevant - in terms of phenotype or druggability - is yet to be
          | confirmed. If it turns out to be useful either in predicting
          | therapy or outcome, it'll stick and oncologists will learn it.
         | 
          | This process has already been repeated a lot in the
          | hematological cancers, where cancers like "Hodgkin's lymphoma"
          | have been progressively subdivided as we developed new
          | treatments and discovered the individual pathways.
        
       | herodotus wrote:
       | The article itself never uses the term "AI" or "Artificial
       | Intelligence". They do mention the use of a Neural Network as one
       | part of their attempt to help find commonalities in their data
       | set. It is too bad that the Oxford University press person chose
        | to use that term in the title and in the article, which (in my
        | opinion) badly mischaracterizes the work.
        
         | staplers wrote:
         | Probably institutional grant fishing.
        
       | crispycas12 wrote:
        | Ok, very quick notes:
       | 
       | * Prostate cancers are known to have a wide spectrum of outcomes.
       | 
        | * Stage IV (metastatic) disease tends to get genetic testing.
        | These panels tend to cover on the order of a few hundred to a few
        | thousand genes.
       | 
        | * Classically, prostate cancer is driven by androgen receptor
        | upregulation. Disease progression is often due to the disease
        | overcoming treatment with antiandrogens such as enzalutamide.
       | 
        | Correction: enzalutamide was designed to overcome castrate-
        | resistant prostate cancer. Abiraterone would have been more
        | appropriate to bring up here.
       | 
        | * Upon review of NCCN guidelines: there are two main genetic
        | indicators for targeted therapies. Both of these mutations are
        | indicated in germline and somatic contexts: BRCA1/2 for PARP
        | inhibition and dMMR/MSI-H for pembrolizumab.
       | 
        | o Note that there are some mutations in the HRD pathway that are
        | indicated for treatment, but only if they are somatic.
       | 
        | * This study aims to figure out the etiology of the disease in an
        | evolutionary manner. That is, what are the key events that lead to
        | oncogenesis?
       | 
        | edit note: the word that escaped me was epistatic, given that we
        | are looking into the nuts-and-bolts causes and effects of
        | different mutations.
       | 
        | edit note 2: I'm going to be honest, most of what I've read
        | about prostate cancer is in the metastatic setting, and thus it
        | has already become castrate-resistant. Abiraterone is also meant
        | to aid in sensitizing castrate-resistant prostate cancer. Let's
        | just say androgen deprivation therapy for now. In any case, I
        | hope this was instructive in showing how important AR is as a
        | pathway for prostate cancer.
       | 
        | * Quick thought: this could be useful if it matches with
        | molecular screening in earlier-stage disease. If we can reliably
        | map out which chains of events (loss-of-function mutations in
        | tumor suppressors, gain-of-function mutations in oncogenes,
        | chromosome-level mutations) lead to more aggressive disease, we
        | can inform changes in surveillance and earlier/more aggressive
        | treatment.
       | 
        | * Granted, this isn't too out there; tissue cores are taken out
        | to begin with to get an initial snapshot of how aggressive the
        | disease is (it's how you get the Gleason score, after all).
       | 
        | * Regarding switching pathways, that's not too crazy given that
        | neuroendocrine transformations exist in prostate and lung cancer.
        
       | stergios wrote:
       | Gleason scores already designate 5 different types/categories of
        | PCa, and this score strongly influences what type of barbaric
       | treatment one will receive.
       | 
        | I hope their findings help the discovery of humane
        | immunotherapies.
        
         | sroussey wrote:
         | My father was diagnosed with a Gleason score of 10 this year.
         | 
         | When it's this bad, ironically some barbaric options are off
         | the table.
        
         | kmbfjr wrote:
         | I was just diagnosed with 7/7.5 Gleason score prostate cancer
         | (adenocarcinoma).
         | 
         | The barbarians are cutting it out in 4 weeks.
         | 
         | A life of pissing my pants beats dying before retirement.
        
       ___________________________________________________________________
       (page generated 2024-03-13 23:00 UTC)