[HN Gopher] $5 device tests for breast cancer in under 5 seconds...
___________________________________________________________________
$5 device tests for breast cancer in under 5 seconds: study
Author : Brajeshwar
Score : 203 points
Date : 2024-02-16 15:04 UTC (7 hours ago)
(HTM) web link (studyfinds.org)
(TXT) w3m dump (studyfinds.org)
| amelius wrote:
| False positive / false negative rates?
| soco wrote:
| No larger scale tests yet; they only announced what is
| basically their current direction of research.
| andyjohnson0 wrote:
| https://pubs.aip.org/avs/jvb/article/42/2/023202/3262988/Hig...
|
| Discusses sensitivity but not accuracy or rates of false
| positive/negative.
| iwontberude wrote:
| Underlying research about salivary biomarker detection
| https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6566681/
| zacharyvoase wrote:
| > The study is published in the Journal of Vacuum Science &
| Technology B
|
| Dare I ask why?
| rainbowzootsuit wrote:
| Thin films grown in vacuums are how you get graphene layers and
| FETs that appear to be the basis of the technology here.
| Someone with a vacuum deposition system would be the prime
| candidate to develop a custom thin film to target adsorption of
| the molecules that they're trying to sense.
| neuronexmachina wrote:
| This part of the study describes what they implemented:
|
| >Instead of using the transistors as the sensors, which need
| to be disposed of after each use, a system with a reusable
| printed circuit board (PCB) containing a MOSFET and
| disposable test strips were employed. In this approach,
| synchronized double-pulses were applied at the gate and drain
| terminals of the transistor to ensure that the channel charge
| does not accumulate, and there is no need to reset the drain
| and gate paths to mitigate the charge accumulation at the
| gate and drain of the sensing transistor for sequential
| testing. With the double-pulse approach, it only takes a few
| seconds to show the result of the test, due to the rapid
| response of the functionalized test strips and resulting
| electrical signal output. As an example, the LoD has been
| demonstrated to reach 10^-15 g/ml and the sensitivity to
| 78 mV/dec for COVID-19 detection. Similar approaches have been
| used to detect cerebrospinal fluid (CSF), cardiac troponin I,
| and Zika virus [27-30].
|
| > In this work, use of this double-pulse measurement approach
| to detect HER2 and CA15-3 in saliva samples collected from
| healthy volunteers and breast cancer patients was
| investigated. The voltage output responses of the transistor
| correlated to the HER2 and CA15-3 concentrations, detection
| limits, and sensing sensitivity were determined.
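|
| For intuition, a minimal numpy sketch of what such a
| synchronized double-pulse drive could look like (timings and
| amplitudes are illustrative placeholders, apart from the
| ~1.1 ms drain pulse the paper mentions):
|
|     import numpy as np
|
|     fs = 1e6                        # 1 MHz sample rate (assumed)
|     t = np.arange(0, 2e-3, 1 / fs)  # 2 ms window
|     drain_start = 0.2e-3            # placeholder offset
|     drain_width = 1.1e-3            # ~1.1 ms drain pulse (quoted)
|     gate_delay = 0.2e-3             # gate offset inside it (assumed)
|
|     drain = np.where((t >= drain_start) &
|                      (t < drain_start + drain_width), 1.0, 0.0)
|     gate = np.where((t >= drain_start + gate_delay) &
|                     (t < drain_start + drain_width - gate_delay),
|                     1.0, 0.0)
|     # The readout is the drain-voltage waveform while the gate
|     # pulse perturbs the antibody-antigen layer on the strip.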
| moi2388 wrote:
| Because much like vacuums, cancer sucks
| m-htt wrote:
| You are doing the Lord's work. I applaud you.
| andrew_eu wrote:
| The publication is linked in the article [0]. Even if it's only
| for HER2 patients, even if it's only useful as a first-pass test,
| this is still great news.
|
| The experimental design seems very small scale though. 17 cancer
| positive samples (of which only 1 was HER2 positive), 4 control.
| Since the strips are focused on HER2 detection I read this as "in
| 1 out of 1 samples, our test detected HER2 overexpression" but
| maybe I misread it.
|
| [0]
| https://pubs.aip.org/avs/jvb/article/42/2/023202/3262988/Hig...
| e63f67dd-065b wrote:
| Press release: https://publishing.aip.org/publications/latest-
| content/would...
|
| Actual publication:
| https://pubs.aip.org/avs/jvb/article/42/2/023202/3262988/Hig...
|
| Experiment size is literally N=21, with 4 healthy participants, 3
| _in-situ_ breast cancers, and 14 invasive breast cancers.
|
| N=21 might as well be useless in my opinion. You can't draw any
| meaningful conclusions about the statistical power of this test; if
| your priors were 10% for breast cancer, after taking this test,
| your posterior probably remains unchanged.
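|
| For concreteness, here's how a positive result would move a 10%
| prior if the sensitivity and specificity were actually known (a
| minimal sketch; the 90%/90% figures are placeholders of mine,
| not numbers from the paper):
|
|     def posterior(prior, sens, spec):
|         """P(cancer | positive test), by Bayes' rule."""
|         p_pos = sens * prior + (1 - spec) * (1 - prior)
|         return sens * prior / p_pos
|
|     # A 90%-sensitive, 90%-specific test moves a 10% prior to ~50%.
|     print(posterior(0.10, sens=0.9, spec=0.9))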
| causal wrote:
| Yah, the headline is jumping the gun a bit, but hopefully this
| motivates funding to get a bigger study / refine the technique.
| pvaldes wrote:
| > We tried it with 21 people
|
| The new Theranos is here again
| dheera wrote:
| No, not at all. Polar opposites.
|
| Theranos was actual lies. TFA is just being honest about a
| low sample size.
| pvaldes wrote:
| If somebody claims their device "accurately tests", and we
| discover later that the tests haven't really been done yet (in
| any meaningful way), it should be taken as a red flag.
| Especially when it comes in the same package as "running lots
| of tests would be really cheap" (but for some reason they
| didn't try it).
|
| Could it be great, honest, brilliant work? Yes.
|
| But please don't claim that your model is reproducible if you
| never seriously tried to reproduce it first. What if the
| results are just a random effect? What if the test simply says
| positive most of the time? The test's negative response has
| been checked in a control group of 4 people? That's 20 bucks
| well spent.
|
| --At this moment-- it isn't different from a thousand other
| projects that seemed too good to be true, took the money and
| soon vanished. I hope to be wrong.
| dheera wrote:
| > never seriously tried to reproduce it first
|
| Maybe they found something worth testing and just need
| more funding to actually do the tests?
|
| I have no problem with early free speech about "I found
| something interesting" as long as you're honest about the
| sample size and don't misrepresent it.
| jacquesm wrote:
| Without a larger n you can't really tell what the fp rate
| is and that means that the accuracy figure isn't very
| useful. If accuracy holds in light of a false positive rate
| that is much lower than other tests then this may well be
| very useful. But you cannot conclude that at all based on
| the evidence presented.
| sonicanatidae wrote:
| Not quite. Theranos was claiming to be capable of doing what
| was previously impossible. The real issue with Theranos was
| physics. A single drop of blood is simply not a large enough
| sample for them to do all the testing they claimed. As in,
| impossible with today's tech and that of the near future.
|
| These folks are just stating that, "so far, this looks
| promising".
|
| At least, that's my read. YMMV.
| chaxor wrote:
| It is important to point out that what they claimed to do is
| not actually technically impossible. They just didn't do it,
| and lied about it. That was the real problem.
|
| It's very unfortunate, because it is _technically_ possible,
| just very difficult to achieve, even in academia (so it won't
| be done first in industry).
|
| The absolute worst part of Theranos is that they deterred
| others from trying to make headway in the space _at all_.
| sonicanatidae wrote:
| I don't believe they could run a battery of 81+ tests on
| a drop of blood. There simply isn't enough there. Why do
| you think they take tubes of blood for testing,
| currently?
|
| The issue is concentrations - whether analytes are present in
| sufficient amounts to be detected reliably and consistently.
|
| edit: typo.. words are hard.
| fuhrtf wrote:
| That's a bit harsh. It's an exploratory study testing a new
| paradigm.
| oliwarner wrote:
| It's only as harsh as the headline is rose-tinted.
|
| "Exploratory study shows promise" is better. When n=21,
| flipping a coin is about as accurate.
| matthewmacleod wrote:
| This statement betrays a fundamental misunderstanding of
| statistics on your part.
| ethanbond wrote:
| If you're really good at statistics, you can just intuit
| what seems like a sufficient N. This is actually a
| powerful statistical method because you can vary your
| intuition depending on whether you want to believe the
| study was well-powered or not, enabling you to easily
| discard Bad Evidence and include Good Evidence.
|
| Few understand this.
| jncfhnb wrote:
| No, that's utter nonsense. Statistical power is well
| defined and the required N to reliably detect an effect
| is a function of its effect size, which is the thing that
| wannabe statisticians don't understand.
| lazyasciiart wrote:
| > you can vary your intuition depending on whether you
| want to believe the study
|
| I suspect it is deliberate nonsense.
| solardev wrote:
| > I suspect it is deliberate nonsense.
|
| But is that your intuition talking?
| Spivak wrote:
| You don't intuit it, you calculate it. If you claim to be
| able to predict coin flips with a specific accuracy I can
| give you the exact number of trials N you need to be x%
| confident.
|
| If you claim 90% accuracy and I want to be 95% confident
| with the standard alpha of 0.05 you need 38 trials.
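|
| A minimal sketch of that calculation (normal approximation for
| a one-sided test of proportions; with different nulls and power
| targets the answer changes, so don't expect it to reproduce the
| 38 above exactly):
|
|     from scipy.stats import norm
|
|     def n_trials(p0, p1, alpha=0.05, power=0.95):
|         """Trials needed to distinguish claimed accuracy p1
|         from a null accuracy p0."""
|         za, zb = norm.ppf(1 - alpha), norm.ppf(power)
|         num = (za * (p0 * (1 - p0)) ** 0.5
|                + zb * (p1 * (1 - p1)) ** 0.5)
|         return (num / (p1 - p0)) ** 2
|
|     print(n_trials(0.5, 0.9))  # ~11 against a coin-flip null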
| redder23 wrote:
| Totally agree!
| dheera wrote:
| It's still a nonzero sample size, and if anything, says two
| things:
|
| (a) We should do more tests to increase N
|
| (b) If a $5 device tests you positive, maybe you should go get
| checked out for $5000 or whatever the doctors charge you
| (because insurance often only pays AFTER "shit happens" and
| often does not pay to test "whether shit might happen"), that
| you wouldn't have thought of doing otherwise.
| jncfhnb wrote:
| You can absolutely draw conclusions about the statistical power
| of the test. That's what statistics is for.
|
| If this sample data is randomly sampled it looks like it will
| be a fairly high precision test for invasive cancers, with room
| for false negatives. You would have had to have gotten very
| unlucky to see such a difference in distributions even on a
| small sample like this. But sure, let's get more samples.
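|
| A toy permutation test makes the "very unlucky" point concrete.
| The readings below are invented for illustration, not taken
| from the paper:
|
|     import numpy as np
|
|     rng = np.random.default_rng(0)
|     control = np.array([1.0, 1.1, 0.9, 1.05])           # n=4
|     cancer = np.array([2.0, 2.3, 1.8, 2.5, 2.1, 1.9])   # n=6
|     observed = cancer.mean() - control.mean()
|
|     pooled = np.concatenate([control, cancer])
|     hits = 0
|     for _ in range(100_000):
|         rng.shuffle(pooled)
|         hits += pooled[4:].mean() - pooled[:4].mean() >= observed
|     # ~0.005: a gap this large is rare under random labeling,
|     # even with only 10 samples.
|     print(hits / 100_000)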
| hn_throwaway_99 wrote:
| Thank you for this! I get frustrated with the "N of only 21?
| Might as well flip a coin!" responses. Like you say, the
| whole purpose of statistical testing is to give an accurate,
| numeric value that says how likely results like these would
| be under pure chance.
|
| One thing I'd note, though, is that the paper's title is
| "High sensitivity saliva-based biosensor in detection of
| breast cancer biomarkers: HER2 and CA15-3". My understanding
| is that _sensitivity_ was never really the problem with
| breast cancer detection - it's specificity that is the real
| challenge with all types of broadly-deployed medical
| screening tools.
| lofatdairy wrote:
| Sensitivity matters in this case because the medium is
| saliva which contains a fraction of the antigens contained
| in serum. Specificity is challenging when the biomarkers
| are non-specific and you only have 1 testing modality, but
| that doesn't seem to be the case here.
| jncfhnb wrote:
| Well, depends. It's Bayes' law fucking us over with the vast
| majority of people not having breast cancer but really,
| really needing to know if they do.
|
| This is why it's better to have tiers of confidence for
| actual policy decisions and not just confusion matrix
| results frankly.
|
| Like if you run this test and you're in a cohort that is
| 99.999% likely to have cancer, that's useful info. It's not a
| problem, per se, if the test instead comes back with an
| answer of mixed certainty. We just need to be comfortable
| with getting neither a positive nor a negative prediction;
| nor expecting it to cover all cases.
|
| The policy driving thresholds on tests need to consider the
| weights of each possible outcome to determine what to do.
| ryandrake wrote:
| No matter what the value of N is, _someone_ always comes
| out of the woodwork to complain about it. It's a pretty
| common criticism of any study that relies on statistics,
| and one anyone can make in a few sentences.
| wolverine876 wrote:
| People on HN (maybe elsewhere too) devote a lot of
| attention to sample sizes. I don't know the upthread
| commenter at all, but in general I suspect it's because
| that is an easy thing to understand about research and
| statistics, and it's a valid critique they've seen
| professionals use.
|
| A normal human fallacy is to focus on the thing that is easy
| to understand (e.g., easy to quantify), and to overlook the
| difficult issues that are far more important. There is much
| more of much greater importance going on in most of these
| studies - in their statistics and validity - than the sample
| size.
| treflop wrote:
| Elsewhere too
|
| I first took stats in high school and we read a bunch of
| valid small-sample-size studies, so I don't know where
| people are failing to get educated.
|
| There were so many factors going into whether a study was
| good.
| ajb wrote:
| Exactly.
|
| Famous example: the Lady Tasting Tea [1]. N=1 or N=8
| depending on how you look at it. Still significant.
|
| [1] https://en.wikipedia.org/wiki/Lady_tasting_tea
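|
| The arithmetic for that example fits in two lines (8 cups, 4
| milk-first, all 4 identified correctly by the lady):
|
|     from math import comb
|
|     print(1 / comb(8, 4))  # 1/70 ~ 0.014: significant at 5%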
| renewiltord wrote:
| This is why science has gotten harder these days. When
| there were only half a billion people, sampling 500 got you
| 1 in a million coverage. Now there are 8 billion people, so
| it's worth 16 times less. As more people are made, science
| will suffer because the percentage is so much less. In
| fact, this is why I always trust science in small towns
| more. In Colma, for instance, you sample 500 and you have
| covered 33% of the people. Sometimes, it's even better to
| just sample people in one house. 100% coverage.
| hn_throwaway_99 wrote:
| TBH, I can't tell if this is sarcastic or not, because
| there is so much in this comment that betrays a
| fundamental misunderstanding of statistical testing and
| statistical power, which is the point I was trying to
| make.
|
| > Now there are 8 billion people, so it's worth 16 times
| less.
|
| That is not how statistics works, at all.
|
| > In fact, this is why I always trust science in small
| towns more. In Colma, for instance, you sample 500 and
| you have covered 33% of the people.
|
| These are the sentences that made me think this had to be
| satire (presumably you want "science" to apply to places
| _outside_ of Colma...), but in all honesty these days it's
| really hard to tell.
| serial_dev wrote:
| My assumption is that 98% of people have no idea what the
| right sample size is for this particular experiment,
| including myself of course.
|
| To all who criticize and ridicule someone who would like to
| have more samples, why do you think 21 is such a perfect
| number in this case? Wouldn't 15 be enough, if statistics
| and all applies? 10? 1? Would 30 be too much?
| hn_throwaway_99 wrote:
| > To all who criticize and ridicule someone who would
| like to have more samples, why do you think 21 is such a
| perfect number in this case? Wouldn't 15 be enough, if
| statistics and all applies? 10? 1? Would 30 be too much?
|
| This is the point that you are fundamentally
| misunderstanding. Nobody is making the argument that "21
| is such a perfect number in this case". The rest of your
| sentences ("Wouldn't 15 be enough, if statistics and all
| applies? 10? 1? Would 30 be too much?") seem to point to
| a belief that people are pulling numbers based on a
| finger in the wind.
|
| The whole point of (a lot of) statistical testing is that
| it allows you to come up with a _specific number_ that
| determines how likely a result at least this extreme would
| be under pure chance. That is what the p < .05 "standard"
| is about - it's a determination that such a result would
| occur less than 5% of the time under random chance alone
| (though that 5% threshold for "significant" _is_ just
| basically pulled out of thin air, and p-hacking is another
| topic...) That is, I and the comment I replied to aren't
| making the
| argument that 21 "is such a perfect number". We're making
| the argument that even with a small sample size it's
| possible to determine the relative error bars, with
| precision, using statistical methods, not just pulling a
| number based on feels. Yes, larger sample sizes reduce
| the size of those error bars. But often not in ways that
| are "intuitive". You have to do the math.
|
| None of what I wrote above is meant to imply that
| statistical tests can't be misused, or that they often
| require assumptions about the underlying population
| distribution that may not be correct.
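|
| Concretely, that math is a one-liner. Suppose a test called 16
| of 21 samples correctly against a 50/50 null (numbers invented
| for illustration):
|
|     from scipy.stats import binomtest
|
|     res = binomtest(16, n=21, p=0.5, alternative='greater')
|     print(res.pvalue)  # ~0.013: small n, yet unlikely by luck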
| jncfhnb wrote:
| It is a function of the effect size, not the experiment.
| When the effect size is large, you need fewer samples to
| detect it reliably. In this case it looks pretty large.
| Would you want more people before rolling it out? Of
| course. But it's very unlikely to be vaporware provided
| the samples are random.
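|
| To put rough numbers on that, using statsmodels' stock power
| solver (the Cohen's d values are illustrative, not estimated
| from the paper):
|
|     from statsmodels.stats.power import TTestIndPower
|
|     solver = TTestIndPower()
|     for d in (0.2, 0.5, 0.8, 2.0):
|         n = solver.solve_power(effect_size=d, alpha=0.05,
|                                power=0.8)
|         print(f"d={d}: ~{n:.0f} per group")
|     # Small effects need hundreds per group; huge effects only
|     # a handful.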
| kelseyfrog wrote:
| Thank you! Came here to say this, but found that you'd
| already written it.
|
| Effect size is going to be important when talking about
| _clinical_ significance, not just _statistical_
| significance, too.
|
| I also want to point out that we have no stats on
| sensitivity, specificity, or diagnostic odds ratio, which
| are all clinically relevant to physicians deciding when
| to test and how to interpret test results.
|
| The good news is that it's a non-invasive, low-cost test,
| which is a factor in clinical decision-making.
| kenjackson wrote:
| And this result probably helps secure funding to get more
| patients. I've never worked in clinical trials but it seems
| like it would be so tiresome.
| bookofjoe wrote:
| I ran many clinical trials during my career as an academic
| neurosurgical anesthesiologist. "... it seems like it would
| be so tiresome" is accurate; it's also exhausting in terms
| of the huge amount of time and effort required to get a
| study approved by the institutional review board; getting
| informed consent from prospective patients after an
| exhaustive explanation repeated over and over to each
| individual; actually doing the study; organizing the
| results; doing the statistical analysis required; writing
| the paper; waiting months to hear back from the journal's
| reviewers, often receiving a rejection letter; resubmitting
| the paper to another journal, sometimes several more; after
| having it accepted, revising the paper per the reviewers'
| comments before resubmission, sometimes going back and
| forth for months.
|
| You can find my published papers here:
|
| https://scholar.google.com/citations?user=5DdrMc8AAAAJ&hl=e
| n
| wolverine876 wrote:
| So how do you feel about it? :) Seriously, what are your
| feelings about it after all those experiences?
|
| You forgot the end of the story: After all that, it's
| done and published, and then a random person with no
| expertise and who barely read the paper posts on HN: the
| sample size is too small - as if you were in your first
| week of statistics 101 - and therefore the whole thing
| must be invalid! :)
| bookofjoe wrote:
| How do I feel about it? My feelings after all those
| experiences?
|
| I dunno... that was me then, I guess: hard core academic
| with a drive/compulsion to publish good work.
|
| I admit to being amused by comments here about statistics
| by people who wouldn't know a Bonferroni correction from
| a bonfire.
| mbreese wrote:
| With so few controls, it's still not a well-designed
| comparison. Most scientists deal with low sample sizes,
| especially in first trials. However, what is often overlooked
| is the need for sufficient numbers of negative controls. If
| they only had 4 control patients, that's completely
| inadequate to draw any real conclusions. With access to a
| biobank at U of Florida, they should have been able to test
| more samples -- especially with a $5 test.
|
| There are two other issues with this paper [1] that I quickly
| see. Their main figure lacks error bars. It's pretty clear
| to me that the groups would overlap quite a bit. More numbers
| would make this problem clearer (either by narrowing the error
| bars or clearly showing an overlap). The lack of error bars
| across the paper makes me think they didn't do any technical
| replicates, which is also a problem.
|
| I'm also not sure a one-way test is correct here, but I'm
| also not entirely sure how they are measuring the data. In
| this one-way analysis, all you can tell is if one group is
| higher than the other. When the data are so unbalanced, what
| you don't see is what the predictive value of the test is --
| false positive/negative. That's the real issue here. It looks
| like you'd have a really high false negative rate, as the
| cancer samples have a much wider range than the controls.
| This is the worst thing you can have in a test like this.
|
| Finally -- this paper was published in the Journal of Vacuum
| Science & Technology B. There is no way this paper got a
| valid peer review to make these claims. I don't know anything
| about this journal, but I doubt it has much experience with
| cancer testing.
|
| [1] https://pubs.aip.org/avs/jvb/article/42/2/023202/3262988/
| Hig...
| jncfhnb wrote:
| A small control sample is not "few controls". Misleading
| term. "Negative controls" is not a thing either. Just the
| control group is fine.
|
| The error bars are not really needed. Your eyeballs are
| doing just fine. Yes, they overlap. Yes, it would likely
| have a high false negative rate.
|
| A high false negative rate is not a problem for a cheap test.
| This test is not meant to replace higher quality and more
| invasive tests. A high precision, low recall test still has
| significant value. You merely have to accept that a
| negative result tells you very little.
|
| If we imagined for instance, that the false positive rate
| is very low despite the overwhelmingly larger population of
| people without cancer, then this would be of enormous
| value.
| mbreese wrote:
| This is an unbalanced study. As such, it doesn't tell you
| anything about the ability to differentiate between the
| three populations. You can argue about the terms
| "negative control" (which is appropriate here, as there
| is a positive control test in the paper), but there are
| only 4 non-cancer samples tested. That is not enough to
| be able to adequately know the range of measurable values in
| the population of patients w/o cancer.
|
| But, that's not really the point. They aren't trying to
| diagnose cancer vs. healthy.
|
| Error bars here are absolutely necessary. Two reasons:
| First, you want to know the approximate ranges for each
| group in Figures 3 and 5. Not showing them is misleading.
| Secondly -- you actually also want error bars for each
| patient sample. I'd expect there to be at least three
| replicates for each saliva sample to show that the strips
| are able to consistently measure a known value from each
| sample.
|
| I also mis-read part of the paper the first time. For the
| HER2 cases, there aren't 4 negative samples -- there are
| 20. There is only one positive sample. Part of the
| problem is really how they are presenting the data -- it
| is not at all clear what they are testing. But, there is
| only one HER2+ sample in the mix.
|
| One... N=1.
|
| Samples include:
|
| * Non-cancer: 4
|
| * In situ cancer: 3
|
| * Invasive cancer, HER2-: 13
|
| * Invasive cancer, HER2+: 1
|
| What you'd really like to show is that the HER2+ patients
| could be differentiated from HER2- patients. Which does
| look really good, but with only one HER2+ sample, you
| really can't tell much. (And the presence of so much
| signal in the HER2- samples raises some very interesting
| biological/mechanistic questions).
|
| Note: I'm not trying to say that the authors of the study
| are wrong or are trying to deliberately mislead people.
| There is so much here that could have been corrected to
| make this a much stronger paper. To me, this seems like a
| paper where the authors are likely engineers and not that
| well versed in biomedical statistics. The paper is
| published in a physics journal, so the journal itself is
| not a good place to make some of these arguments.
|
| Is the idea of a non-invasive test worthwhile? Yes!
| Absolutely. But they didn't show that it was a good test
| of clinical utility. They showed that it could measure
| differences in protein concentrations from saliva. That's
| not nothing, but that's it. Now, _whether_ that is an
| appropriate way to differentiate patients is a completely
| different question and requires substantially more
| testing (and orders of magnitude more patients).
| jncfhnb wrote:
| > This is an unbalanced study. As such, it doesn't tell
| you anything about the ability to differentiate between
| the three populations.
|
| Complete Nonsense
|
| > That is not enough to be able to adequately know range
| of measurable values in the population of patients w/o
| cancer.
|
| True. But it is a promising initial signal that the
| distribution of non-cancerous folks is probably very
| different from the invasive cancer folks. The effect size
| here is huge. "Range" is less interesting than
| "Distribution"
|
| > Error bars here are absolutely necessary. Two reasons:
| First, you want to know the approximate ranges for each
| group in Figures 3 and 5. Not showing them is misleading.
|
| You don't really need error bars when you're showing all
| of a small number of data points. But sure whatever
|
| > Secondly -- you actually also want error bars for each
| patient sample. I'd expect for there to be at least three
| replicates for each saliva sample to show that the strips
| are able to consistently measure a known value from each
| sample.
|
| Would be nice. Sounds like they did ten measurements per.
|
| > What you'd really like to show is that the HER2+
| patients could be differentiated from HER2- patients.
| Which, does look really good, but with only one HER2+
| sample, you really can't tell much. (And the presence of
| so much signal in the HER2- samples raises some very
| interesting biological/mechanistic questions).
|
| I'm not clear why you're saying HER2+ vs HER2- is the
| important difference here.
| mbreese wrote:
| Look, you obviously have your opinions here. I'm not sure
| just saying "Complete nonsense" at things is really all
| that helpful.
|
| What I said (_"it doesn't tell you anything about the
| ability to differentiate between the three populations"_)
| is quite correct. This study shows that there is a
| difference between the groups of samples tested with
| their HER2 test strip, with a one-way p-value of ~0.002.
|
| I'm not convinced that the samples are representative of
| their populations. The number of non-cancer samples is too
| low.
|
| > _You don't really need error bars when you're showing
| all of a small number of data points._
|
| Error bars are visually helpful ways to show that the
| group values overlap. Which, in this case, they do (I did
| replot this data to confirm).
|
| > _Would be nice. Sounds like they did ten measurements
| per._
|
| This is for one test. They sampled the test strip 10
| times. I mean they should have tested each sample on at
| least 3 different test strips to get a mean value for the
| sample. This is a paper that is trying to say that their
| test strips are accurate, so it would make sense to test
| them multiple times.
|
| > _I'm not clear why you're saying HER2+ vs HER2- is the
| important difference here._
|
| I'm not sure what they are trying to claim in their
| paper... are they trying to say that they can diagnose
| breast cancer (which would require many more
| biomarkers), or are they trying to say that they can
| differentiate between HER2+ and HER2- cancers (which
| would be more appropriate for a HER2 test).
|
| The other biomarker has even more overlap, so not sure
| how helpful that would be.
|
| Really, I think they are also missing an opportunity --
| the bigger use for me would be in longitudinal testing.
| If they could show changes in signal over time for a
| particular patient that corresponded to treatment status
| -- that would be a great use for a cheap non-invasive
| test.
| e63f67dd-065b wrote:
| It's been a while since I took statistics, but isn't the
| whole point of <thing> testing that it needs to have a high
| Bayes factor, and to be confident that said Bayes factor is
| high?
|
| In this case, and my lack of understanding in both bio and
| stats is showing here, we're trying to develop a test for
| breast cancer. An ideal test will have high sensitivity and
| specificity, and a tight confidence interval for both of
| those numbers. This way we can be confident that a
| positive/negative test actually moves our priors meaningfully
| in either direction.
|
| I guess my question/comment is more on the fact that I don't
| see how any of the results shown actually translate, as the
| headline suggests, into a cancer test of high power. The
| priors for any kind of cancer are pretty low from what I can
| find, so we need high-power tests in both the positive and
| negative direction to meaningfully affect health outcomes. I
| can't find any CI numbers in the paper, which may just be me
| not reading it closely enough, but it doesn't help my
| confidence.
| jncfhnb wrote:
| Suppose you had cancer and could roll a die, and if the die
| came up six, it would tell you with high certainty you had
| cancer, and if it came up as anything else, it would tell
| you nothing.
|
| If the test is very cheap, it's probably a great test
| presuming you get to see the die roll itself.
|
| The cost and invasiveness of the test are important.
| an_d_rew wrote:
| The problem here is that, because of underlying and unknown
| correlations, the samples (aka people) are NOT
| independent.
|
| The larger sample sizes are a ... sort-of "proxy" ... to
| overwhelm underlying latent correlations.
|
| The whole thing is actually a subtle sort-of generalization
| of the "Prosecutor's Fallacy".
|
| So skepticism with the small sample sizes is absolutely
| warranted, unless some strong evidence is shown indicating
| mechanism-based independence.
| jncfhnb wrote:
| I feel like you're taking a very roundabout way to describe
| the concept of sample bias. So long as it's a random
| sample, this is accounted for in the statistics. Yes, we
| should still get more data all the same.
| lofatdairy wrote:
| I appreciate the links but I think this actually misses the
| point. The novelty isn't in diagnosis of cancer but in
| sensitivity/cost-efficacy in detection of known biomarkers for
| breast cancer (and associated risks of recurrence, etc). I'm
| not familiar with how commonly ELISA-based HER2 testing is
| performed, but it seems like it has some impact on drug
| decisions[^1].
|
| In terms of applicability, it depends on whether or not ELISA
| is in fact the current standard of care, but it could be useful
| in low-resource settings where you don't have lab personnel
| trained to carry out those assays, and drug choice is also
| restricted by limited availability.
|
| Additionally, there's a point-of-care argument as well. Since
| breast cancer does benefit from early detection, I can see a
| future in which biomarker testing is a more regular thing, and
| high saliva concentrations are flagged. At the very least as
| something worth bringing up at one's next appointment or whatever.
|
| [^1]: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC7033231/
| tetramer wrote:
| The standard of care in the US is immunohistochemistry (IHC),
| with FISH testing in equivocal cases, so it's not really
| ELISA-based.
|
| HER2 testing is done on all breast cancers as it affects
| treatment choices though the majority of breast cancers are
| not HER2 positive. (HER2 is also expressed in some normal
| tissue (notably cardiac) and is seen in other cancers as
| well.)
| neuronexmachina wrote:
| > Table I shows the median and the range of digital readings by
| disease status and overall p-value using the Kruskal-Wallis
| test to examine if there exist statistically significant
| distinctions among two or more groups. The overall p-value is
| significant while the value for HER2 is 0.002, which show the
| probability of false-positive detection. This indicates that
| this sensor technology is an efficient way to detect HER2
| biomarkers in saliva.
|
| > ... In Fig. 5, the test results for detecting CA15-3 of the
| human samples are displayed. The digital reading decreases from
| the healthy group to the invasive breast cancer group,
| indicating an increase in CA15-3 concentration. The median, the
| range by disease status, and overall p-values analyzed with the
| Kruskal-Wallis test for the CA15-3 test are listed in Table I.
| The overall p-value for CA15-3 is 0.005, indicating that this
| device provides an efficient way to detect the salivary
| biomarkers related to breast cancer.
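|
| For anyone unfamiliar: Kruskal-Wallis is the rank-based
| analogue of one-way ANOVA, and is one call in scipy. The group
| readings below are invented, not the paper's data:
|
|     from scipy.stats import kruskal
|
|     healthy = [1.0, 1.2, 0.9, 1.1]
|     in_situ = [1.6, 1.8, 1.5]
|     invasive = [2.4, 2.6, 2.2, 2.8, 2.5]
|     stat, p = kruskal(healthy, in_situ, invasive)
|     # A small p says at least one group's distribution differs;
|     # it says nothing about false-positive/negative rates.
|     print(stat, p)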
| jacquesm wrote:
| You'd need a much larger n to determine the false positive
| rate. Sensitivity by itself isn't very useful.
| shelkie wrote:
| Cool how they're using BNC connectors and ICs from the 1980's /s
| Ginger-Pickles wrote:
| First standout thing I noticed is the "patern generator" array
| of DIPs - admittedly without having dived into the paper, any
| answers from the hive mind on what they're doing?
| joezydeco wrote:
| I was focusing more on the "current to frequency" stage,
| which looks like an empty IC socket. And is the "pulse width
| counting" stage just a bunch of header pins?
|
| I mean yeah, I get it, it's a prototype and a finished
| product will be on a $2 ASIC to drive the correct signals,
| etc. But I'm not up to speed on affinity sensors vs.
| traditional ELISA tech so <internet shrug>.
| lambdaone wrote:
| Nothing jumps out at me as being fishy here. There's what
| appears to be a small device mounted in the middle of that
| empty socket. It's also possible some of the rest of the
| pins on the socket are being used as test points. A circuit
| diagram would be good to have here.
| shelkie wrote:
| Breast cancer is a terrible disease, and I don't mean to be a
| downer but my BS detector is screaming on this one. I'd give
| the device image a pass if it were just a journalist grabbing a
| stock PCB image, but that doesn't seem to be the case here.
| Anyone with even a trivial knowledge of electronics would be
| amused by the callouts. And all for just $5? After Theranos I
| guess I'm a bit sceptical of claims such as this.
| shermantanktop wrote:
| Yeah, this looks like an Apple IIe logic board to me.
| Ginger-Pickles wrote:
| Commodity logic ICs are ubiquitous and totally capable if the
| needs are well-matched; it's perhaps a deliberate prototype
| engineering/development choice.
| SV_BubbleTime wrote:
| Any FPGA could do the work smaller, faster, cheaper. So,
| I'm still skeptical.
| Ginger-Pickles wrote:
| From the paper:
|
| > Synchronous voltage pulses are sent to both the electrode
| of the strip connecting to the gate and drain electrodes of
| the MOSFET. The drain pulse is applied for around 1.1 ms at a
| constant voltage. The gate pulse starts at 40 ms after the
| drain pulse and ends at 40 ms before the end of the drain
| pulse.
|
| > the antigen-antibody complexes undergo stretching and
| contracting, akin to double springs, in response to a pulsed
| gate electric field. This motion across the antibody-antigen
| structure, corresponding to the pulse voltage applied on the
| test strip, induces an alteration in the protein's
| conformation, resulting in a time-dependent electric field
| applied to the MOSFET gate. Consequently, a springlike
| pattern emerges in the drain voltage waveform due to the
| external connection between the sensor strip and the MOSFET's
| gate electrode.
|
| So they shake 'em _just so_, and listen to the response...
|
| ICs are perhaps variable timing & pulse-shaping logic?
| SV_BubbleTime wrote:
| There are a million people that could do this with a single
| FPGA in an afternoon. Why were none of them approached?
| rainbowzootsuit wrote:
| These are vacuum nerds more than electronics nerds. Most
| homebrew electronics designed by physicists for basic research
| are going to have these aesthetic qualities.
| Aachen wrote:
| > employs paper test strips coated with specific antibodies.
| These antibodies interact with cancer biomarkers [from] a drop of
| saliva
|
| > "[...] cost-effective, with the test strip costing just a few
| cents and the reusable circuit board priced at $5," Wan says.
|
| That _is_ cool: the $5 isn't even the cost of the test, it's the
| one-time cost of your lab equipment. Of course, per sibling
| comments, the efficacy has yet to be seen, but even a few percent
| more early detections due to frequent testing would be a win.
| iamthepieman wrote:
| "but even a few percent more early detections due to frequent
| testing would be a win"
|
| This is not the current medical thought on early screenings for
| various cancers. It used to be, and I was confused about it
| until very recently. Indeed, the medical community is still
| wrestling with the issue of screening harms. The consensus is
| shifting toward screening only when there is an existing
| condition, symptom, or family history.
|
| https://www.cancer.gov/news-events/cancer-currents-blog/2022...
| bluGill wrote:
| That depends on how harmful the screening is, how harmful
| treatment is, how harmful the disease is, how well the test
| works, and how common the disease is. (probably more that I'm
| not aware of)
|
| Most cancer treatments are really nasty. Thus false positives
| are really bad: you destroy someone's quality of life. The
| earlier cancer is discovered, the better the chance that we
| can use a less harmful treatment (if only because of a smaller
| dose of the harmful drugs).
|
| The current breast cancer screening is an x-ray - which itself
| causes cancer (about 1 in 3000 cases of breast cancer
| discovered by x-ray wouldn't have occurred in the first place
| without the screening - the screening is still worth doing if
| you are at risk, but don't do it if you are not at risk).
|
| Breast cancer can be deadly, but if caught early it is easy
| to treat (normally).
|
| The medical concern generally isn't whether we should test all
| women for breast cancer, but when to start testing and how
| often to test. If this test is safer than an x-ray and
| sensitive enough, it can be useful. Avoiding current breast
| cancer tests is good.
| mh- wrote:
| _> about 1 in 3000 cases of breast cancer discovered by
| xray wouldn't have got breast cancer in the first place
| without the screening_
|
| I had no idea the risks associated with x-rays for breast
| cancer screenings were that high. Do you have a source (for
| that 1:3000 assertion) I can read?
| bluGill wrote:
| Communication with someone who claims to be in the know.
| It seems reasonable, but I don't have a source and I
| welcome someone who cares more to go more in depth.
| tetramer wrote:
| You're right that benefits of screening vary widely
| depending on type of cancer, type of test, and even a
| patient's own comorbidities. However, there are a lot of
| inaccuracies in this comment.
|
| False positives on a screening test are bad because you
| follow a screening test up with a confirmatory test (a
| biopsy for cancer) - sometimes the procedure for the biopsy
| results in additional complications and even death in rare
| cases (and if it's a false positive, a patient goes through
| all of that for a benign finding).
|
| I want to be very clear that oncologists are not going to
| start cancer treatment on the results of a screening test,
| you need confirmation.
| deweller wrote:
| Would a US company have enough financial incentive to front the
| cost of FDA approval in the US?
|
| I'm guessing it is expensive to jump through all of the hoops.
| Without a patent, why would a company pay for this?
| adam_arthur wrote:
| Usually with medical innovations, the idea is patented and the
| product is sold well above cost of production. e.g. this $5
| device could be sold for $100 each... large profit margin.
|
| If there is no patent, and no in-place infrastructure already
| producing the device, then yes, it's unlikely to see rapid
| scale up by manufacturers.
|
| Though if it's really that cheap to produce, I'm sure it will
| come onto market in some form (whether through charitable
| foundations or otherwise). All assuming that the device is
| actually as efficacious as the study implies (with a small
| sample size).
| zdragnar wrote:
| As non-invasive diagnostic equipment, let us say it is akin
| to a pregnancy test, or a creatinine-in-urine test. That puts
| it in Class 1, and may be exempt from a lot of the hoops a
| medicine would go through.
|
| There are a little over 6,100 hospitals in the US at any given
| time. That means roughly $30,000 in production cost to provide
| every hospital with one, using the cheapest of cheap parts.
| Presumably, you'd want higher end components and things like a
| shell casing to protect from spills, static, etc. Maybe the
| cost is $120k.
|
| You could sell these for $100 each, and maybe make $610,000.
| That's a decent amount of money for a tiny business, but a
| pittance for even a small business that needs to pay salaries,
| lawyers, etc.
|
| In reality, they'd probably go for much more - let's say $10k
| each, plus $100 for each test strip. Still easily in range for
| the budget of most US hospitals.
|
| The real question is, if breast cancer is suspected, is this
| test any better than imaging using equipment the hospital
| already has?
|
| Can it detect all types, or the degree that it is progressing?
|
| I suspect the utility in a clinical setting is not high enough
| to really change clinical practices any.
| aj7 wrote:
| Huh? The electronics are cheap and trivial. All the science is
| in those strips.
| iandanforth wrote:
| Figures 3 and 5 from the paper
| (https://pubs.aip.org/avs/jvb/article/42/2/023202/3262988/Hig...)
| have overlap between all the groups. The same measured output
| could have come from any of the non-cancer, or known-cancer,
| participants. While the _means_ of these groups look nicely
| separable, that overlap means there will be significant false
| positives or false negatives. So I'd label the studyfinds
| headline calling this 'accurate' false.
| ImageXav wrote:
| Not at all. The model is quite accurate. In fact, with the
| distribution of samples that they have, a model that predicts
| all cases as having cancer would also be very accurate. It
| would get 17/21 predictions right. The model lacks _precision_.
| I suspect that even with a fairly high cut-off point the model
| would still produce a bevy of false positive predictions due
| to that. It might still be useful as a screening step if they
| can increase the sensitivity further, but you would still rely
| upon further tests to get a true diagnosis.
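|
| The degenerate "everyone has cancer" classifier makes the
| distinction concrete on this cohort (17 cancer, 4 healthy):
|
|     tp, fp, fn, tn = 17, 4, 0, 0
|     accuracy = (tp + tn) / (tp + fp + fn + tn)  # 17/21 ~ 0.81
|     precision = tp / (tp + fp)  # 17/21 ~ 0.81, just the base rate
|     recall = tp / (tp + fn)     # 1.0, trivially
|     # At real-world prevalence the same strategy's precision
|     # collapses, which is why accuracy alone is misleading.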
| dmoy wrote:
| Sure we can be technically more surgical in our terminology,
| but GP is addressing the usage of 'accuracy' in the news
| headline. In that context, 'accuracy' is kinda a catchall
| term for both accuracy and precision.
| ImageXav wrote:
| It's a bit difficult to say, isn't it? The headline uses the
| term accuracy correctly, but the reader might ascribe to it
| the meaning you are, especially if they are non-technical. As
| did the parent comment.
|
| My goal in pointing out the difference was not to be
| snarky. It was to point out the very real statistical
| consequences. Any model can be accurate on a sufficiently
| biased dataset, but what matters once a screening test hits
| the real world are the precision (positive predictive
| value) and negative predictive value. These are the hurdles
| that the test will have to pass to see widespread adoption.
| jacquesm wrote:
| Exactly. This is the real test and so far we simply do
| not know the answer. It's a nice first step but the
| headline is simply not justified. But whether it's solar
| power, wind power or cancer, 99.99% of the time (possibly
| more nines) it's far less impressive by the time all of the
| data is in. And that's fine, but headline writers
| seem to be stuck in a hype cycle.
| parhamn wrote:
| They're not being surgical in the terminology. Accuracy and
| precision are the two things that matter for a diagnostic
| test. High accuracy, low precision = screener.
| tetramer wrote:
| It also only looks at HER2 and CA15-3 (aka MUC1) expression -
| what about breast cancers that don't express either of these?
|
| I realize this is an early technology and I think it should
| continue to be explored, but I would anticipate that if
| compared head to head with screening mammograms, it would be
| inferior.
|
| For patients with relapsed disease, this kind of technology
| would be neat to non-invasively re-assess biomarker status
| but as a screening tool, I find it lacking (and certainly a
| positive screening will require dedicated imaging and biopsy
| anyway).
| jncfhnb wrote:
| Backwards. The distributions here imply it will be a high
| precision model with not great recall.
| m3kw9 wrote:
| If it's any good it will filter through to a doctor near you;
| otherwise, it all just sounds nice.
| shermantanktop wrote:
| Really? Will the $5 Arduino people woo the doctor near you with
| golf trips? Will they strong-arm HMOs to cover the cost? Will
| they carpetbomb the airwaves telling consumers to "ask their
| doctor?" Will they defeat the army of marketers and salespeople
| with entrenched competing tech?
|
| This device may be total crap, I don't know, but "trust the
| system" isn't a great way to navigate the American medical
| system. Other countries have it better, or at least different.
| s1artibartfast wrote:
| Doctors do gravitate towards effective tests and medicines,
| and insurance plans are interested in cheaper alternatives.
|
| There is a lot that could be improved about the US medical
| system, but nobody has to bribe doctors and insurance to sell
| tongue depressors.
| shermantanktop wrote:
| I would bet someone out there has attempted to come up with
| a high-margin tongue depressor with fancy built-in
| features. But the plain wooden stick is already entrenched
| and obviously effective (for depressing tongues).
|
| This is the opposite. If the state of the art was a fancy
| tongue manipulation machine that cost $30k, used licensed
| consumables billed at $100/patient, and did a bunch of non-
| essential things that doctors found convenient once in a
| while, would someone selling a box of sticks get anywhere?
| loeg wrote:
| Someone has to actually commercialize this and get it approved
| and it will cost far more than $5. As part of that they have to
| demonstrate that it is, ya know, useful (vs existing methods).
| yakito wrote:
| For reference: currently an MRI scan can go from $400-$2,000
| (mostly covered by insurance) https://scan.com/body-parts/breast
|
| An MRI machine costs around $350,000 on average.
| https://www.blockimaging.com
| ImageXav wrote:
| A device such as this would never replace an MRI scan. The
| information provided is for screening purposes, at best.
|
| Also, diagnosis would typically be done using a mammogram.
| The cost of such a scan is lower - around $100 [0].
|
| [0]https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4142190/
| bluGill wrote:
| It doesn't have to replace it in all cases. If it can give a
| good indication that we don't have to do an MRI, that is
| already a good thing.
| ImageXav wrote:
| That's right, that's typically what is meant by screening
| purposes [0], apologies if it wasn't clear.
|
| [0] https://www.nhs.uk/conditions/nhs-screening/
| Dylan16807 wrote:
| The unclear part was more the phrase "would never replace
| an MRI scan". But it's clear now.
| sparklingmango wrote:
| Diagnosis is done via biopsy.
| ImageXav wrote:
| That is correct, my bad. The next screen after a test like
| this would likely be a mammogram, and only after that
| would a biopsy be done if anything suspicious was seen.
| lofatdairy wrote:
| A better reference would be an ELISA test (which is actually
| brought up as the reference in the paper)[^1]. That seems to
| also run about $5 per kit per antigen[^2]. However this device
| seems to only require the test strip to be replaced, whereas
| you can only run ELISA once per strip/reagents. Also note that
| ELISA is harder to run, so there are personnel costs, and
| this device claims higher sensitivity.
|
| [^1]:
| https://pubs.aip.org/avs/jvb/article/42/2/023202/3262988/Hig...
|
| [^2]: https://www.thermofisher.com/elisa/product/ErbB2-HER2-Hum
| an-..., note that I didn't shop around for necessarily the best
| prices.
| aeyes wrote:
| MRI is very rarely used for breast cancer screening.
| Mammography is much cheaper.
| alhirzel wrote:
| The "press release" looks more like a final report for a
| microcontroller applications class than an actual press release.
| SV_BubbleTime wrote:
| Diagram aside.
|
| The thirty through-hole parts made me think the same thing.
| moi2388 wrote:
| If I look at figures 3 and 5 I immediately see quite a bit of
| overlap between the readings of the different groups...
| masto wrote:
| Seems like this was originally built for SARS-CoV-2. Here's the
| paper from 2021: https://pubs.aip.org/avs/jvb/article-
| abstract/39/3/033202/59...
|
| and then in 2022 for detecting oral cancer:
| https://pubs.aip.org/avs/jvb/article/41/1/013201/2866658/Hig...
| asdefghyk wrote:
| The PCB picture above the words "...The printed circuit board
| used in the saliva-based biosensor,..." makes me think the
| article is a scam. As an electronics engineer, the word labels
| seem not especially relevant to the invention. Who makes
| boards with lots of DIP chips?
| SV_BubbleTime wrote:
| Yea, I was really curious about those.
|
| I mean... even if we were talking some special hardware, that's
| still PLC territory.
|
| My reading was this isn't $5, but could be made to be $5.
|
| This is clearly a prototype.
| amelius wrote:
| Given that only one such device is needed per (say) 100-500
| people, I think the device can probably cost a lot more and still
| be as affordable and effective.
| biomcgary wrote:
| Using the performance in the paper, the false positive rate
| means the current test is clinically useless (due to harm from
| screening and follow-up). Refining the test requires improving
| the selection of biomarkers and antibodies, not the hardware
| (which looks great).
| SV_BubbleTime wrote:
| No offense, but as someone that makes hardware devices, that
| hardware does not look great. It looks the very opposite of
| great.
|
| Through-hole DIP array? This is a prototype and it makes me
| skeptical of the whole thing. There is nothing a single $5
| FPGA couldn't do. So why not start there? I suspect the people
| who made this don't know electronics or programming well - and
| also didn't find someone who does.
|
| This was put together the long way, and it's strange to me.
| DoreenMichele wrote:
| _The biosensor, a collaborative development by the University of
| Florida and National Yang Ming Chiao Tung University in Taiwan,
| employs paper test strips coated with specific antibodies. These
| antibodies interact with cancer biomarkers targeted in the test.
| Upon placing a drop of saliva on the strip, electrical pulses are
| sent to contact points on the biosensor device. This process
| leads to the binding of biomarkers to antibodies, resulting in a
| measurable change in the output signal. This change is then
| converted into digital data, indicating the biomarker's
| presence._
|
| It tests for biomarkers in the saliva. Possibly not outright
| crazy charlatan territory.
|
| Could certainly use a larger sample size though, especially given
| that one of its bragging points is "fast and cheap!"
| adamredwoods wrote:
| >> HER2 and/or CA15-3 in serum are essential biomarkers used in
| breast cancer diagnosis.
|
| This is WRONG, it is used as an indicator, NOT a diagnosis. In
| fact, I have personal experience that CA15-3 is not always an
| indicator of anything. You first measure a baseline, and use it
| for reference.
|
| Also there's no HER2 expression in TNBC.
|
| Can this be used for possible early detection? Maybe, but it will
| not be an exact science.
| vpribish wrote:
| clickbait misleading headline from a garbage-tier SEO-mill
| website.
|
| flag this and ban the website
| redder23 wrote:
| $0 device tests for breast cancer in under 5 seconds: it's
| called a hand.
| F51user wrote:
| Largely BS.
|
| 1) These are not clinical biomarkers used for detection of
| cancer now, and in fact, they are known NOT to be clinical
| biomarkers useful for detecting cancer.
|
| 2) This is a publication focusing on the device/method, not the
| clinical application. It is published in a journal of vacuum
| science, not a cancer biology or medical journal.
|
| 3) There are several inaccurate things in the paper, one being
| that they state that current technology requires 1-2 weeks to
| measure either biomarker. Wrong: clinical tests exist today
| that can perform those immunoassays in minutes on large
| automated analyzers.
|
| 4) This isn't fraud, it's just a typically overhyped report of
| a novel device/measurement strategy (and it's not that novel)
| that targets a biomarker that has a role in cancer, and then
| some mass media picks it up and says that they have "the test
| for cancer". This happens all the time.
|
| 5) This should maybe be considered a proof of concept for the
| electronics of their detection strategy, since the immunoassay
| component is known (immunoassays for both biomarkers are not
| only published, but commercialized) and the clinical use of the
| biomarkers is not at all diagnostic for cancer or useful for
| screening.
___________________________________________________________________
(page generated 2024-02-16 23:00 UTC)