[HN Gopher] Whole-body magnetic resonance imaging at 0.05 Tesla
       ___________________________________________________________________
        
       Whole-body magnetic resonance imaging at 0.05 Tesla
        
       Author : Jimega36
       Score  : 114 points
       Date   : 2024-05-12 15:20 UTC (7 hours ago)
        
 (HTM) web link (www.science.org)
 (TXT) w3m dump (www.science.org)
        
       | Jimega36 wrote:
       | "The lower-power machine was much cheaper to manufacture and
       | operate, more comfortable and less noisy for patients, and the
       | final images after computational processing were as clear and
       | detailed as those obtained by the high-power devices currently
       | used in the clinical setting."
        
         | tkzed49 wrote:
         | 300-1800W power draw seems impressive! It looks like standard
         | machines are using something on the order of 25kW while
         | scanning, which certainly sounds prohibitive for less developed
         | infrastructure.
        
           | ChrisMarshallNY wrote:
           | Also, they need to keep vats of liquid helium around.
           | 
           | Difficult stuff to store. I knew they needed cold gas, but
           | liquid helium is crazy.
        
             | jahnu wrote:
             | There was some buzz years ago about using liquid nitrogen
             | instead but I don't know if it made it into widespread
             | production
             | 
             | https://www.wired.com/story/mri-magnet-cooling/
        
               | nullc wrote:
               | Sounds like that's more about using a cryocooler to
               | minimize the helium used-- but presumably that requires
               | keeping the coils in a particularly hard vacuum to
               | adequately insulate them.
               | 
               | There is some research towards operating at liquid
               | hydrogen temperatures -- but hydrogen has its own
               | logistical challenges.
        
         | BobbyTables2 wrote:
         | A cardboard box containing preprinted scan results would also
         | be even cheaper and faster.
         | 
         | But some people actually like to have something that works.
        
       | parpfish wrote:
       | > Each protocol was designed to have a scan time of 8 minutes or
       | less with an image resolution of approximately 2x2x8 mm3
       | 
       | very cool, but is it clinically useful if one edge of your voxel
       | is 8mm?
        
         | hesdeadjim wrote:
         | Easier to scale up than down once you have a starting point
         | like this.
        
         | nullc wrote:
         | 8mm slice thickness isn't particularly at odds with what is
         | commonly done on commercial machines, though usually there is a
         | second transverse scan (which can't be readily fused due to
         | patient movement).
         | 
         | But even if it were, plenty of interesting structures are many
         | centimeters in size, a thousand fold decrease in costs from
         | eliminating cryogenic / high power magnets could be very
         | useful.
        
           | parpfish wrote:
           | the structures are many centimeters, but I assume that the
           | sort of anomalies you'd be looking for in a clinical scan
           | aren't going to be that large.
           | 
           | if you had a fracture/tumor/damage-of-some-type that's small
           | enough to fit between those slices and you didn't get the
           | slices lined up _just right_ the scan would miss it, no?
        
       | xyst wrote:
       | "Tesla" is a unit of measure for magnetic strength; and not the
       | car manufacturer.
        
         | throwup238 wrote:
         | I figured they were using 1/20th of Nikola Tesla's cadaver.
         | That's the only logical interpretation of that headline.
        
           | xyst wrote:
            | lmao, same bro. I was reading the paper and only then
            | discovered it's about the viability of a low-powered MRI
            | machine for diagnostic imaging.
           | 
           | Particularly useful in poorer countries.
        
           | hinkley wrote:
           | The device is actually designed to scan 20 people at the same
           | time. It's cheap because you get a bulk discount on your
           | scans.
        
       | brnt wrote:
       | Medical imaging devices and medical devices in general are a
       | racket. There are only a few companies and they are legal and
       | lobbying departments first and foremost. This isn't the first
       | time radical and radically cheaper prototypes have been proposed,
        | but the unsolved bit is actually convincing anyone to buy.
       | 
        | A colleague had a device and a veteran advised him to 10x the
       | price.
        
         | londons_explore wrote:
          | > the unsolved bit is actually convincing anyone to buy.
         | 
         | Surely a lot of small hospitals would jump at the chance at a
         | small cheap MRI? I don't understand how the incumbents have
         | much legal leverage here...
        
           | brnt wrote:
           | It's about insurance, certification of personnel, often these
            | technicians are a cartel in and of themselves.
           | 
           | Everybody loves the idea of cheaper stuff, but nobody is
           | going to take a chance. Medicine is extremely conservative.
            | Overly so, in my opinion.
        
       | alwa wrote:
       | I can't access the full paper, but from the abstract, is it
       | accurate that they're using ML techniques to synthesize higher-
       | quality and higher-resolution imagery, and _that's_ the basis for
       | their claim that it's comparable to the output of a conventional
       | MRI scan?
       | 
       | Do clinicians really prefer that the computer make normative
       | guesses to "clean up" the scan, versus working with the imagery
       | reflecting the actual measurements and applying their own
       | clinical judgment?
        
         | deepsun wrote:
         | My understanding as well. That... will bias towards training
          | data, and will miss more anomalies. And anomalies are the point
          | of scanning.
        
         | eig wrote:
         | I can say that most radiologists would not want a computer
         | trying to fix poor scan data. If the underlying data is bad,
          | they would recommend an orthogonal imaging modality. "I
         | don't know" is a possible response radiologists can give.
         | Trying to add training data to "clean up" an image would bias
         | the read towards "normal".
        
           | falcor84 wrote:
           | To nitpick, wouldn't it by definition bias the read toward
           | normal? I suppose the problem is more that you don't want to
            | bias it toward normal when it actually isn't.
        
           | bone_slide wrote:
           | Spot on. When I can't interpret a study due to artifact, I
           | say that in my report.
           | 
           | Let's say there's a CTA chest that is limited because the
            | patient breathed while the scan was being acquired; I need to
           | let the ordering clinician know that the study is not
           | diagnostic, and recommend an alternative.
           | 
           | If AI eliminates the artifact by filling in expected but not
           | actually acquired data, I am screwed and the patient is
           | screwed.
        
         | j7ake wrote:
         | They already use computers to make guesses to clean up the
         | scan.
         | 
         | A core part of processing MRI is the compressed sensing
        | algorithm.
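For readers unfamiliar with it, compressed sensing fits undersampled k-space data under a sparsity prior. Below is a minimal toy sketch (1D, sparsity in the pixel domain itself, invented sizes) of the idea via iterative soft-thresholding, not any scanner vendor's actual algorithm; real recons enforce sparsity in a wavelet or total-variation domain.

```python
import numpy as np

def ista_recon(y, mask, lam=0.01, n_iter=200):
    """Toy 1D compressed-sensing recon via ISTA.

    Solves argmin_x ||mask * FFT(x) - y||^2 + lam * ||x||_1: fit the
    sampled k-space data while preferring a sparse image.
    """
    x = np.fft.ifft(y).real                      # zero-filled starting estimate
    for _ in range(n_iter):
        resid = mask * np.fft.fft(x) - y         # data-consistency error
        x = x - np.fft.ifft(mask * resid).real   # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)  # L1 prox (shrink)
    return x

# Usage: a sparse "image" (three spikes), sampling only ~40% of k-space.
rng = np.random.default_rng(0)
truth = np.zeros(128)
truth[[10, 50, 90]] = [1.0, 0.7, 0.5]
mask = rng.random(128) < 0.4
y = mask * np.fft.fft(truth)
recon = ista_recon(y, mask)
```

Despite discarding most of k-space, the spikes are recovered because the image is sparse and the random sampling makes aliasing look like noise that the thresholding suppresses.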
        
       | habosa wrote:
       | This is remarkable. 1800W is like a fancy blender, amazing to be
       | able to do a useful MRI at that power.
       | 
       | For anyone who is unaware, a standard MRI machine is about 1.5T
       | (so 30x the magnetic strength) and uses 25kW+. For special
        | purposes you may see machines up to 7T; you can imagine how much
       | power they need and how sensitive the equipment is.
       | 
       | Lowering the barriers to access to MRIs would have a massive
       | impact on effective diagnosis for many conditions.
        
         | _vaporwave_ wrote:
         | This reminded me of the recent request for startups proposal by
         | Surbhi Sarna "A way to end cancer". The proposal states that we
         | already have a way (MRI) to diagnose cancer at very early
         | stages where treatment is feasible but cost and scaling need to
         | be tackled to make it widely accessible.
         | 
         | Something like this low power MRI could be a key part of
         | enabling a transformation of cancer treatment.
        
       | imjonse wrote:
       | The key is cheaper device combined with deep learning.
        
         | toomuchtodo wrote:
         | Could you use the deep learning to improve the device to reduce
         | the need for deep learning to fill in the gaps from traditional
         | devices? Essentially teaching an algorithm to build a better,
          | simpler imaging device in a PID loop?
        
       | arkades wrote:
       | I have a hard time picturing the radiologist whose reputation and
       | malpractice rely on catching small anomalies being comfortable
       | using a machine predicated on inferring the image contents.
        
       | blegr wrote:
       | Does this make MRIs safe for some people who wouldn't qualify due
       | to metal implants? Or at least reduce the risk of accidents?
        
         | rasmus1610 wrote:
         | Probably yes. But most medical implants today are MRI-scannable
         | anyway. Even patients with pacemakers can be scanned today with
         | proper preparation.
        
       | RivieraKid wrote:
        | Well, in theory you can use a neural net to generate realistic
        | MRI images at 0 Tesla.
        
         | Toutouxc wrote:
         | I love how succinct this argument is, and yet it contains
         | everything.
        
       | cornholio wrote:
       | > We conducted imaging on healthy volunteers, capturing brain,
       | spine, abdomen, lung, musculoskeletal, and cardiac images. Deep
       | learning signal prediction effectively eliminated EMI signals,
       | enabling clear imaging without shielding.
       | 
        | So essentially, the neural net was trained on what a healthy MRI
       | looks like and would, when exposed to abnormal structures,
       | correct them away as EMI noise leading to wrong diagnostics?
       | 
        | I don't want to be too dismissive of this approach; deep learning
        | probably has a strong role to play in improving medical imaging.
       | But this paper is far, far from sufficient to prove it. At a
       | minimum, it would require mixed healthy / abnormal patients with
       | particularities that don't exist in the training set, and each
       | diagnostic reconfirmed later on a high resolution machine. You
       | need to actually prove the algorithm does not distort the data,
       | because an MRI that hallucinates a healthy patient is much more
       | dangerous than no MRI at all.
        
         | rossant wrote:
         | Seems like a huge and obvious red flag to me indeed. I can't
         | imagine how the authors managed to not even mention the issue
         | in the abstract. If the model is trained on healthy scans,
         | well, yes, it will spit out healthy scans. The whole point of
         | clinical radiology is to get enough precision to detect
         | (potentially subtle) anomalies.
        
       | eig wrote:
       | A few months ago there were articles going around about how
       | Samsung galaxy phones were upscaling images of the Moon using AI
       | [0]. Essentially, the model was artificially adding landmarks and
       | details based on its training set when the real image quality was
       | too poor to make out details.
       | 
       | Needless to say, AI upscaling as described in this article would
       | be a nightmare for radiologists. 90% of radiology is confirming
       | the _absence_ of disease when image quality is high, and _asking
       | for complementary studies_ when image quality is low. With AI
       | enhanced images that look  "normal", how can the radiologist ever
       | say "I can confirm there is no brain bleed" when the computer
       | might be incorrectly adding "normal" details when compensating
       | for poor image quality?
       | 
       | [0] - https://news.ycombinator.com/item?id=35136167
        
         | atoav wrote:
         | This is one aspect about machine learning models I keep
         | discussing with non-technical passengers of the AI-hype-train:
            | They are (in their current form) unsuitable for applications
         | where correctness is absolutely critical.
        
           | teaearlgraycold wrote:
           | I don't know enough to make absolute statements here, but
           | deep learning models can beat out human experts at discerning
           | between signal and noise. Using that to guess at data and
           | then hand it off to humans gives you the worst of both
           | worlds. Two error probabilities multiplied together. But to
           | simply render a verdict on whether a condition exists I'd
           | trust a proven algorithm.
        
             | coffeebeqn wrote:
             | There are a lot of models that are simply good at that
             | without hallucinating nonsense. LLMs are a specific thing
             | with their own tradeoffs and goals. If you have a ML model
             | that says how much does this microscope photo look like an
              | anomaly in this person's blood on a scale from 0-100, it can
             | certainly do better than a human.
        
         | BobbyTables2 wrote:
         | The Samsung phone wasn't a technological advancement, it was
         | sheer fraud.
         | 
         | A camera is supposed to take pictures of what it sees.
         | 
         | Imagine going to a restaurant, ordering French onion soup, and
         | getting a bowl of brown food coloring in water.
        
           | sieste wrote:
           | > Imagine going to a restaurant, ordering French onion soup,
           | and getting a bowl of brown food coloring in water.
           | 
           | Welcome to England!
        
             | GaylordTuring wrote:
             | I know this isn't Reddit, but haha. Take my upvote!
        
             | seanmcdirmid wrote:
              | I still remember the "zoom and enhance" joke they played on
              | Red Dwarf. Parody has become reality.
        
               | willis936 wrote:
               | Immortalized in Super Troopers (2001).
               | 
               | https://youtu.be/KiqkclCJsZs
        
           | zarmin wrote:
           | It's kinda like the classic Ebay scam where you buy a picture
           | of the item instead of the item.
        
             | falcor84 wrote:
             | Yes, or the increasingly common Amazon one, where you get
             | an AI-generated summary of the book, instead of the actual
             | book.
        
           | pavlov wrote:
           | _> "A camera is supposed to take pictures of what it sees."_
           | 
           | Feels like that's just a matter of expectations.
           | 
           | A phone used to be a device for voice communications. It's
           | right there in the Greek etymology, "phone" for sound. But
           | 95% of what people do today on devices called phones is
           | something else than voice.
           | 
           | Similarly, if people start using cameras more to produce
           | images of things they want rather than what exists in front
           | of the lens, then that's eventually what a camera will mean.
           | Snapchat thinks of themselves as a camera company, but the
           | images captured within their apps are increasingly
           | synthesized.
           | 
           | (The etymology of "camera" already points to a journey of
           | transformation. A photographic camera isn't a literal room,
           | as the camera obscura once was.)
        
             | rzzzt wrote:
             | Taking this thought to its logical conclusion:
             | https://bjoernkarmann.dk/project/paragraphica
        
             | kyriakos wrote:
              | Small correction: "phone" means voice, not sound :)
        
             | thsksbd wrote:
             | Some of us want a record of what was, not a hallucination
             | of what might have or could have been.
             | 
             | Courts, for example. Forensic science was revolutionized by
             | widespread adoption of photography leading to a reduction
             | of the importance given to witnesses. Who also hallucinate
             | what might have happened.
        
           | eig wrote:
           | An MRI machine is a fancy 3D camera. Is this "3D Deep-DSP
           | Model" so different from the processing Samsung did on their
           | phones?
        
             | peddling-brink wrote:
             | Samsung would replace a white circle with an image of the
             | moon. Even calling it AI was a stretch.
        
           | vhcr wrote:
           | Where do you draw the line? RAW, HDR, photo stitching, blur
           | removal?
        
             | ed312 wrote:
              | This is an excellent point, and I don't know where to
             | exactly draw the line ("I know it when I see it"). I
             | personally use "auto" (probably heuristic, maybe soon-ish
             | AI-powered) features to adjust levels, color balance etc.
             | Using AI to add things that are _not at all present_ in the
             | original crossed the line into digital art vs photography
             | for me.
        
               | Toutouxc wrote:
               | I draw the line where the original pixel values are still
               | part of the input. As long as you're manipulating
               | something that the camera captured, it's still
               | photography, even if the math isn't the same for all
               | pixels, or is AI powered.
               | 
               | But IMO it's a point worth bringing up, most people have
               | no idea how digital photography works and how difficult
               | it is to measure, quantify and interpret the analog
               | signal that comes from a camera sensor to even resemble
               | an image.
        
               | sneak wrote:
               | There was the small complication of the fact that the
               | moon texture that Samsung got caught putting onto moon-
               | shaped objects in photos is, of course, the same side of
               | the same moon.
        
             | Dylan16807 wrote:
             | None of those are adding data, assuming normal definitions
             | of 'blur removal' and not the AI kind. So with those the
             | line is very easy to draw.
        
           | bawolff wrote:
           | > A camera is supposed to take pictures of what it sees.
           | 
           | If people wanted cameras to actually take what it sees, then
           | we wouldn't have autofocus, photoshop or instagram filters.
           | 
           | The goal of a cell phone camera is to capture what you are
           | experiencing, not to literally record what light strikes the
           | cmos chip.
        
             | UberFly wrote:
             | A camera takes a picture of what it sees. What comes next
             | is a different thing all together.
        
               | enriquto wrote:
               | > A camera takes a picture of what it sees.
               | 
               |  _All_ images taken with digital cameras have been
               | filtered by a pipeline of advanced algorithms. Nobody
               | ever looks at  "what the camera sees". What kind of
               | savage would look at an image before demosaicing the
               | Bayer pattern? (Except from the people who work in
               | demosaicing, of course.)
        
             | wtallis wrote:
             | > If people wanted cameras to actually take what it sees,
             | then we wouldn't have autofocus,
             | 
             | Bad example. Autofocus makes changes to the light that goes
             | into the camera, not just the data that comes out.
             | 
             | > photoshop or instagram filters
             | 
             | Bad examples. Those both give the user a before-and-after
             | comparison so the user can decide what kind of alterations
             | are reasonable or desirable.
        
         | nullc wrote:
         | The state of the art MRI stuff uses "compressed sensing" --
         | essentially image completion in some domain or another.
          | Presumably carefully designed not to hallucinate details -- or
          | so one would hope.
         | 
         | There isn't necessarily a particularly neutral choice here: the
          | MRI scan isn't in the pixel domain, so artifacts are going to
          | be 'weird' looking -- e.g. edges that move during the scan ringing
         | across the whole image.
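The "edges ringing across the whole image" effect described here is ordinary Fourier truncation (Gibbs ringing), and it is easy to reproduce. A toy sketch with made-up sizes: keep only the central k-space lines of a sharp-edged object and the edge oscillates across the whole field of view.

```python
import numpy as np

n = 256
img = np.zeros(n)
img[96:160] = 1.0                        # a sharp-edged "object"

k = np.fft.fftshift(np.fft.fft(img))     # full k-space
lo = np.zeros(n, complex)
c = n // 2
lo[c - 16 : c + 16] = k[c - 16 : c + 16]  # keep only 32 central lines
recon = np.fft.ifft(np.fft.ifftshift(lo)).real

overshoot = recon.max() - img.max()      # Gibbs overshoot near the edges
```

The overshoot is around 9% of the edge height regardless of how many lines are kept, and the ripple extends well away from the edge, which is why these artifacts do not look like local pixel noise.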
        
           | CooCooCaCha wrote:
           | Compressed sensing is far more mathematically rigorous.
        
             | nullc wrote:
             | I don't think we know what's in the black box here. It
             | could be an equivalent relatively unopinionated regularizer
             | ("the pixel domain will be locally smooth, to the extent it
             | has edges they're spatially contiguous") or it could be
             | "just look up the most similar image from a library and
             | present that instead" or anywhere in between. :)
        
               | CooCooCaCha wrote:
               | They specifically said they use deep learning which
               | implies a sizeable neural network.
        
       | m3kw9 wrote:
        | It may miss some findings because there could be special cases
        | the model wasn't trained on, where it would predict a different
        | result/error. Maybe it's acceptable in places where you may not
       | even get a chance to be diagnosed
        
       | BobbyTables2 wrote:
       | Enhance!
        
       | ryankrage77 wrote:
       | I think this could be useful as a starting point for diagnostics
       | - a cheaper, lower-power device massively lowers the barrier to
        | entry to getting _an_ MRI scan, even if it's not fully reliable.
       | If it does find something, that's evidence a higher-quality scan
       | is worth the resources. In short, use the worse device to take a
       | quick look, if it finds anything, then take a closer look. If it
       | doesn't find anything, carry on with the normal procedure.
        
         | mnau wrote:
          | Is the cost of machines really the barrier? I can get an MRI
          | for $400-$500 as a self-payer (Eastern Europe, i.e. if I just
          | wanted one, not because a doctor ordered it).
         | 
          | I read a paper a few years ago about utilization rates,
          | machine/service costs, how many machines per citizen/hospital...
          | They were running day and night. A cursory glance at other
          | countries also reveals sensible prices.
         | 
          | Unless it gets to the point of an ultrasound machine (i.e. a
          | machine in the consulting room a doctor can use in 10 minutes),
          | I don't think it will decrease prices much.
        
       | Aurornis wrote:
       | The idea sounds great, but the examples they provide aren't
       | encouraging for the usefulness of the technique:
       | 
       | > The brain images showed various brain tissues whereas the spine
       | images revealed intervertebral disks, spinal cord, and
       | cerebrospinal fluid. Abdominal images displayed major structures
       | like the liver, kidneys, and spleen. Lung images showed pulmonary
       | vessels and parenchyma. Knee images identified knee structures
       | such as cartilage and meniscus. Cardiac cine images depicted the
       | left ventricle contraction and neck angiography revealed carotid
       | arteries.
       | 
       | Maybe there's more to it that I'm missing, but this sounds like
       | the main accomplishment is being able to identify that different
       | tissues are present. Actually getting diagnostic information out
        | of imaging requires more detail, and I'm not sure how much this
       | could provide.
        
       | sitkack wrote:
       | The application of a system like this could be as augmentation to
        | imagers like CT and ultrasound. Because of its super-resolution
        | techniques and lower raw resolution (2x2x8 mm), it might not be
       | used for early cancer detection. But it looks _really_ useful in
       | a trauma center or for guiding surgery, etc. These same
        | techniques could also be applied to CT scans; I could see a
        | multi-sensor scanner that did both CT and NMRI using super low
        | power, potentially even battery powered.
       | 
       | Regardless, this is super neat.
       | 
       | > We developed a highly simplified whole-body ultra-low-field
       | (ULF) MRI scanner that operates on a standard wall power outlet
       | without RF or magnetic shielding cages. This scanner uses a
       | compact 0.05 Tesla permanent magnet and incorporates active
       | sensing and deep learning to address electromagnetic interference
       | (EMI) signals. We deployed EMI sensing coils positioned around
       | the scanner and implemented a deep learning method to directly
       | predict EMI-free nuclear magnetic resonance signals from acquired
       | data. To enhance image quality and reduce scan time, we also
       | developed a data-driven deep learning image formation method,
       | which integrates image reconstruction and three-dimensional (3D)
       | multiscale super-resolution and leverages the homogeneous human
       | anatomy and image contrasts available in large-scale, high-field,
       | high-resolution MRI data.
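The paper predicts EMI-free signals with deep learning from external sensing coils; the classical linear analogue of the same idea (reference coils see only interference, so fit the coupling and regress it out) can be sketched as follows. All signals and coupling numbers below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
t = np.arange(n)

nmr = 0.5 * np.sin(2 * np.pi * 0.05 * t) * np.exp(-t / 800)  # toy NMR signal
emi_sources = rng.standard_normal((3, n))                    # 3 EMI sources

coupling_ref = rng.standard_normal((4, 3))   # 4 reference coils x 3 sources
coupling_main = np.array([0.8, -0.3, 0.5])   # main coil's EMI coupling

ref = coupling_ref @ emi_sources             # reference coils: EMI only
main = nmr + coupling_main @ emi_sources     # main coil: signal + EMI

# Least-squares fit: predict the main-coil EMI from the reference coils,
# then subtract the prediction.
w, *_ = np.linalg.lstsq(ref.T, main, rcond=None)
cleaned = main - ref.T @ w
```

With a deep network in place of the least-squares fit, the predictor can capture nonlinear or time-varying coupling, which is roughly the role the paper assigns to it.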
        
       | rasmus1610 wrote:
        | I'm a radiologist and very sceptical about low-field MRI + ML
       | actually replacing normal high-field MRI for standard diagnostic
       | purposes.
       | 
        | But in an emergency setting or especially for MRI-guided
       | interventions these low-field MRIs can really play a significant
       | role. Combining these low-field MRIs with rapid imaging
       | techniques makes me really excited about what interventional
        | techniques might become possible.
        
         | sitkack wrote:
         | There is an opinion piece in the same issue that agrees with
         | you.
         | 
         | https://www.science.org/doi/10.1126/science.adp0670
         | 
         | > This machine costs a fraction of current clinical scanners,
         | is safer, and needs no costly infrastructure to run (2).
         | Although low-field machines are not capable of yielding images
         | that are as detailed as those from high-field clinical
         | machines, the relatively low manufacturing and operational
         | costs offer a potential revolution in MRI technology as a
         | point-of-care screening tool.
         | 
          | I don't think this machine is being billed as a replacement
          | for high-field machines.
        
           | xattt wrote:
            | > I don't think this machine is being billed as a replacement
            | > for high-field machines.
           | 
           | Countries where health regulation is less developed are
           | likely to see misrepresentation where this form of MRI will
           | be equated to full-field MRI by snake oil salesmen.
        
         | bagels wrote:
         | What is it about lower fields that means you cannot get a good
         | image? Interference? Tissue movement in longer exposures? Why
         | can't the device just integrate over a longer period of time?
        
       | modeless wrote:
       | Wow, this seems like it could be a DIY project! I know people are
       | complaining about the AI stuff but look at the images _before_ AI
       | enhancement. They look pretty awesome already!
        
       | w10-1 wrote:
        | With a voxel size of 2x2x8 mm^3, this would do what X-rays/CTs do
        | now, and a bit more (but likely not replace high-field MRIs? I'm
        | not understanding how they rival high-field accuracy in silico,
        | but that's how the paper's written).
       | 
       | In the acute setting, faster and more ergonomic imaging could be
        | big. E.g., in a purpose-built brain device, if first responders
       | had a machine that tells hemorrhagic vs ischemic stroke, it would
       | be easier to get within the tPA time window. If it included the
       | neck, you could assess brain and spine trauma before transport
       | (and plan immobilization accordingly).
        
       | bilsbie wrote:
       | How big of a deal is this? Isn't it basically a 10/10? Seems like
        | it could open up MRIs to everyone.
        
       | elektropionir wrote:
       | It is just weird that papers like this can be published. "Deep
       | learning signal prediction effectively eliminated EMI signals,
       | enabling clear imaging without shielding." - this means that they
        | have found a way to remove random noise which, if true, should
        | be the truly revolutionary claim in this paper. If the "EMI" is
        | not random you can just filter it, so you don't need what they
        | are doing; and since it isn't random, whatever they are doing
        | can "predict" the noise -- they even use the word in that
        | sentence. They are
       | claiming that they can replace physical filtering of noise before
       | it corrupts the signal (shielding) with software "removal" of
       | noise after it has already corrupted the signal. This is simply
       | not possible without loss of information (i.e. resolution). The
       | images that they get from standard Fourier Transform
       | reconstruction are still pretty noisy so on top they "enhance"
       | the reconstruction by running it through a neural net. At that
       | point they don't need the signal - just tell the network what you
       | want to see. The fact that there are no validation scans using
       | known phantoms is telling.
        
         | op00to wrote:
         | It would suck if lesions or tumors look like noise.
        
           | fnordpiglet wrote:
           | Except there are other uses for an MRI and something that
            | doesn't require superconductors would be pretty awesome and
           | deployable to places that lack the infra to support a complex
           | machine depending on near absolute zero temperatures and the
           | associated complexities.
        
         | MrLeap wrote:
         | Remember the early atomic age when people were doing wild shit
         | like adding radium to your toothpaste so you can brush your
         | teeth in the dark?
         | 
         | This is that, but again, with AI.
        
         | azalemeth wrote:
         | I'm a professional MR physicist. I genuinely think the
         | profession is hugely up the hype curve with "AI" and to a far
          | lesser extent low field. It's also worth saying that the
          | rigorous, "proper" journal in the field is Magnetic Resonance
          | in Medicine, run by the International Society for Magnetic
          | Resonance in Medicine -- and papers in Nature or Science
          | nowadays tend to be at the extreme gimmicky end of the
          | spectrum.
         | 
          | A) Many MR reconstructions work by having a "physics model",
          | typically in the form of a linear operator, acting upon the
          | acquired data. The "OG" recon, an FT, is literally just a
          | Fourier matrix acting on the data. Then people realised that
          | it's possible to i) encode lots of artefacts, and ii)
          | undersample k-space while using the spatial information from
          | different physical RF coils, and shunt both these things into
         | the framework of linear operators. This makes it possible to
          | reconstruct it -- and Tikhonov regularisation became popular
          | -- so you minimise an equation like argmin_y ||yhat - X_1 X_2
          | X_3 ... X_n y||^2 + lambda ||Laplace(y)||^2, which genuinely
          | does a fantastic job at the expense, usually, of non-normal
          | noise in the image. "AI" can outperform these
         | algorithms a little, usually by having a strong prior on what
         | the image is. I think it's helpful to consider this as some
         | sort of upper bound on what there is to find. But as a warning,
         | I've seen images of sneezes turned into knees with torn
         | anterior cruciate ligaments, a matrix of zeros turned into
         | basically the mean heart of a dataset, and a fuck ton of people
         | talking bollocks empowered by AI. This isn't starting on
          | diagnosis -- just image recon. The major driver is reducing
          | scan time (= cost) or required SNR (= sqrt(scan time)),
          | and/or, rarely, measuring new things that take too long. This
          | almost falls into the second category.
         | 
          | The main conference in the field has just happened and,
          | ironically, the closing plenary was about the risks of AI.
         | 
         | B) Low field itself has a few genuinely good advantages. The T2
         | is longer, the risks to the patient with implants are lower,
         | and the machines may be cheaper to make. I'm not sold on that
         | last one at all. I personally think that the bloody cost of the
         | scanner isn't the few km of superconducting wires in it -- it's
         | the tens of thousands of phd-educated hours of labour that went
         | into making the thing and their large infrastructure
         | requirements, to say nothing of the requirements of the people
         | who look at the pictures. There are about 100-250k scanners in
         | the world and they mostly last about a decade in an institution
         | before being recycled -- either as niobium titanium or as a
         | scanner on a different continent (typically). Low field may
         | help with siting and electricity, but comes at the cost of
         | concomitant field gradients, reduced chemical shift dispersion,
         | a whole set of different (complicated) artefacts, and the same
         | load of companies profiteering from them.
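(The regularised recon azalemeth describes can be sketched end to end in toy form: undersampled Fourier encoding as the linear operator, an identity Tikhonov term, and the closed-form normal-equations solution. This is a 1-D illustration under those assumptions, not anything from the paper; all names are hypothetical:)

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64

# "OG" recon building block: an n x n unitary Fourier encoding matrix.
F = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n) / np.sqrt(n)

# Undersample k-space: keep half the rows (random mask), as parallel
# imaging / regularised recons do.
mask = rng.permutation(n)[: n // 2]
A = F[mask]

# Ground-truth image (a boxcar) and noisy undersampled data.
y_true = np.zeros(n)
y_true[20:40] = 1.0
noise = rng.standard_normal(n // 2) + 1j * rng.standard_normal(n // 2)
b = A @ y_true + 0.01 * noise

# Tikhonov-regularised least squares:
#   argmin_y ||A y - b||^2 + lam * ||y||^2
# closed form via normal equations: y = (A^H A + lam I)^{-1} A^H b
lam = 0.05
y_hat = np.linalg.solve(A.conj().T @ A + lam * np.eye(n), A.conj().T @ b)

err = np.linalg.norm(y_hat.real - y_true) / np.linalg.norm(y_true)
print(f"relative error: {err:.3f}")
```

(With only an identity regulariser and 2x undersampling the result is roughly a zero-filled recon; stronger priors -- or the "AI" ones under discussion -- are what buy further improvement, at the risks described above.)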
        
           | fnordpiglet wrote:
            | Would it be easier to deploy devices like this to developing
            | countries without the infrastructure to support liquid helium
            | distribution? I imagine a device that's much simpler WRT
            | exotic cooling and material-distribution requirements is a
            | plus.
           | Couple that with the scarcity and non-renewable nature of
           | helium, maybe using devices like this at scale for gross MRI
           | imagery makes sense?
           | 
            | The AI used here, as I read it, is a generative approach
            | trying specifically to compensate for EMI artifacts rather
            | than a physics model, and it likely wouldn't be doing macro
            | changes like sneezes to knees, no?
        
           | bone_slide wrote:
           | As one of the people that look at the images, this is the
           | best comment in the thread.
           | 
           | Lots of AI nonsense permeating radiology right now, which
           | seems to be fairly effective click bait and an easy way to
           | generate hype and headlines.
        
       | tiahura wrote:
        | Wouldn't AI ultrasound be more useful?
        
       | rhindi wrote:
       | There are some non-ML based approaches for ultra low field MRI
       | that are starting to work: https://drive.google.com/file/d/1m7K1W
        | --UOUecDPlm7KqFYzfkoew... . You can still add AI on top, of
        | course, but at least you get a better signal-to-noise ratio to
       | start with!
        
       | cashsterling wrote:
        | I can't read the full article, but low-field MRI is potentially
        | a big deal IMO because a 0.05 T magnet coil can be air- or
        | water-cooled, whereas higher-field magnets (like 1.5 T and 3 T
        | MRI magnets) must use superconducting wire and thus be cooled to
        | sub-60 K (even sub-10 K) temperatures using helium refrigeration
       | cycles. I worked for a time at a company that made MRI
       | calibration standards (among many other things).
       | 
        | A helium refrigeration cycle means:
       | 
       | - elaborate and expensive cryogenic engineering in the MRI
       | overall design.
       | 
       | - lots of power for the helium refrigeration cycle.
       | 
        | - a reliable pure-helium supply chain, which is not available in
        | many parts of the world, including areas of Europe, North
        | America, etc.
        
       | bone_slide wrote:
       | As a practicing radiologist, I think this is great. We can have
       | AI enabled MRI scanners hallucinating images, read by AI
       | interpreting systems hallucinating reports!
        
       ___________________________________________________________________
       (page generated 2024-05-12 23:00 UTC)