[HN Gopher] Experimental surgery performed by AI-driven surgical...
___________________________________________________________________
Experimental surgery performed by AI-driven surgical robot
Author : horseradish
Score : 40 points
Date : 2025-07-25 20:34 UTC (2 hours ago)
(HTM) web link (arstechnica.com)
(TXT) w3m dump (arstechnica.com)
| d00mB0t wrote:
| People are crazy.
| baal80spam wrote:
| In what sense?
| d00mB0t wrote:
| Really?
| threatofrain wrote:
| You've already seen the fruits of your prompt and how far
| your "isn't it super obvious, I don't need to explain
| myself" attitude is getting you.
| JaggerJo wrote:
| Yes, this is scary.
| wfhrto wrote:
| Why?
| JaggerJo wrote:
| Because an LLM architecture seems way too fuzzy and
| unpredictable for something that should be reproducible.
| SirMaster wrote:
| I thought that was controlled by the temperature
| setting?
| threatofrain wrote:
| This was performed on animals.
|
| What is a less crazy way to progress? Don't use animals, but
| humans instead? Only rely on pure theory up to the point of
| experimenting on humans?
| dang wrote:
| Maybe so, but please don't post unsubstantive comments to
| Hacker News.
| lawlessone wrote:
| Would be great if this had the kind of money that's being thrown
| at LLMs.
| ACCount36 wrote:
| "If?" This thing has a goddamn LLM at its core.
|
| That's true for most advanced robotics projects these days.
| Every time you see an advanced robot designed to perform
| complex real-world tasks, you can bet your ass there's an LLM
| in it, used for high-level decision-making.
| ninetyninenine wrote:
| No, surgery is not token-based. It's a different aspect of
| intelligence.
|
| While, technically speaking, the entire universe can be
| serialized into tokens, that's not the most efficient way to
| tackle every problem. Surgery is about 3D space, manipulating
| tools, and performing actions. It's better suited to standard
| ML models... for example, I don't think Waymo's self-driving
| cars use LLMs.
| austinkhale wrote:
| If Waymo has taught me anything, it's that people will eventually
| accept robotic surgeons. It won't happen overnight but once the
| data shows overwhelming superiority, it'll be adopted.
| rscho wrote:
| Overwhelming superiority is not for tomorrow, though. But yeah,
| one day for sure.
| copperx wrote:
| Yeah, if there's overwhelming superiority, why not?
|
| But a lot of surgeries are special corner cases. How do you
| train for those?
| myhf wrote:
| I don't care whether human surgeons or robotic surgeons are
| better at what they do. I just want more money to go to
| whoever _owns_ the equipment, and less to go to people in my
| community.
|
| It's called capitalism, sweaty
| flowmerchant wrote:
| Complications happen in surgery, no matter how good you are. Who
| takes the blame when a patient has a bile leak or dies from a
| cholecystectomy? This brings up new legal questions that must be
| answered.
| PartiallyTyped wrote:
| See, the more time goes by, the more I prefer robot surgeons
| and assisted surgeons. The skill of these only improves and
| will reach a level where the most common robots exceed the
| 90th, and eventually the 95th, percentile of human surgeons.
|
| Do we really want to be in a world where surgeon scarcity is a
| thing?
| andrepd wrote:
| >The skill of these only improve
|
| Citation effing needed. It's taken as an axiom that these
| systems will keep on improving, even though there's no
| indication that this is the case.
| PartiallyTyped wrote:
| Humans can keep improving, and we take that for granted, so
| there is at least one known solution to the problem of
| general intelligence.
|
| Robots can also be far more precise than humans. In fact,
| robot-assisted surgeries are becoming far more common: the
| robot accepts large movements and scales them down to far
| smaller ones, improving the surgeon's precision.
|
| My axiom is that there is nothing inherently special about
| humans that can't be replicated.
|
| It follows, then, that something that can bypass our
| mechanical limitations and keep improving will exceed us.
| kaonwarb wrote:
| Most technological capabilities improve relatively
| monotonically, albeit at highly varying paces. I believe
| that's a reasonable default position, and the burden of
| proof to the contrary lies on the challenger.
| lll-o-lll wrote:
| You are implying linear improvement, which is patently
| false. The curve bends over.
| ACCount36 wrote:
| Are you completely fucking unaware? Do you not realize what
| kind of world you are living in?
|
| We live in a world where the line of technological
| advancement only ever goes up.
| rscho wrote:
| What we really want is a world without the need for surgery.
| So the answer depends on the time frame, I guess?
| bigmadshoe wrote:
| We will always need surgery as long as we exist in the
| physical world. People fall over and break things.
| rscho wrote:
| Bold assumption. I agree regarding the foreseeable
| future, though.
| bluefirebrand wrote:
| It's really not a bold assumption?
|
| Unless we can somehow bio-engineer our bodies to heal
| without needing any external intervention, we're going to
| need surgery for healthcare purposes.
| rscho wrote:
| Well, it depends on your definition of 'surgery'. One
| could well imagine that transplanting your consciousness
| into a new body might be feasible before we get to live
| on Mars.
| doubled112 wrote:
| Where does one find a new body ready for consciousness
| transplant? Would we grow them in farms like in the
| Matrix?
| lll-o-lll wrote:
| > Do we really want to be in a world where surgeon scarcity
| is a thing?
|
| Surgeon scarcity is entirely artificial. There are far more
| capable people than positions.
|
| Do we really want to live in a world where human experts are
| replaced with automation?
| johnnienaked wrote:
| Technology, and the bureaucracy spawned from it, destroys
| accountability. Who gets the blame when a giant corporation
| with thousands of employees cuts corners to redesign an old
| plane to keep up with the competition and two of those planes
| crash, killing hundreds of people?
|
| No one. Because you can't point the finger at any one or two
| individuals; decision-making has been decentralized, and
| accountability with it.
|
| When AI robots come to do surgery, it will be the same thing.
| They'll get personal rights and bear no responsibility.
| ACCount36 wrote:
| That "accountability" of yours is fucking worthless.
|
| When a Bad Thing happens, you can get someone burned at the
| stake for it - or you can fix the system so that it doesn't
| happen again.
|
| AI tech stops you from burning someone at the stake. It
| doesn't stop you from enacting systematic change.
|
| It's actually easier to change AI systems than it is to
| change human systems. You can literally design a bunch of
| tests for the AI that expose the failure mode, make sure the
| new version passes them all with flying colors, and then
| deploy that updated AI to the entire fleet.
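|
| A hypothetical sketch of that loop, in Python (none of these
| names refer to a real product or API):
|
|     # Replay every recorded failure scenario against a candidate
|     # policy and only roll it out if nothing fails anymore.
|     class Scenario:
|         def __init__(self, name, passes):
|             self.name = name
|             self.passes = passes  # callable standing in for a replay/simulation
|
|     def evaluate(candidate, scenarios):
|         # Names of the known failure modes the candidate still triggers.
|         return [s.name for s in scenarios if not s.passes(candidate)]
|
|     def maybe_deploy(candidate, scenarios, fleet):
|         remaining = evaluate(candidate, scenarios)
|         if remaining:
|             raise RuntimeError(f"candidate still fails: {remaining}")
|         fleet.append(candidate)  # ship only once every known failure mode passes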
| esafak wrote:
| https://arxiv.org/abs/2505.10251
|
| https://h-surgical-robot-transformer.github.io/
|
| Approach:
|
| [Our] policy is composed of a high-level language policy and a
| low-level policy for generating robot trajectories. The high-
| level policy outputs both a task instruction and a corrective
| instruction, along with a correction flag. Task instructions
| describe the primary objective to be executed, while corrective
| instructions provide fine-grained guidance for recovering from
| suboptimal states. Examples include "move the left gripper closer
| to me" or "move the right gripper away from me." The low-level
| policy takes as input only one of the two instructions,
| determined by the correction flag. When the flag is set to true,
| the system uses the corrective instruction; otherwise, it relies
| on the task instruction.
|
| To support this training framework, we collect two types of
| demonstrations. The first consists of standard demonstrations
| captured during normal task execution. The second consists of
| corrective demonstrations, in which the data collector
| intentionally places the robot in failure states, such as missing
| a grasp or misaligning the grippers, and then demonstrates how to
| recover and complete the task successfully. These two types of
| data are organized into separate folders: one for regular
| demonstrations and another for recovery demonstrations. During
| training, the correction flag is set to false when using regular
| data and true when using recovery data, allowing the policy to
| learn context-appropriate behaviors based on the state of the
| system.
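|
| A minimal sketch of that correction-flag routing, with
| illustrative names (not the paper's actual code):
|
|     from dataclasses import dataclass
|
|     @dataclass
|     class HighLevelOutput:
|         task_instruction: str        # primary objective for the current step
|         corrective_instruction: str  # fine-grained recovery hint
|         needs_correction: bool       # the correction flag
|
|     def high_level_policy(observation) -> HighLevelOutput:
|         # Stand-in for the language policy; a real system would condition
|         # on endoscope images and robot state. Here we fake a missed grasp.
|         return HighLevelOutput(
|             task_instruction="grasp the tissue edge with the left gripper",
|             corrective_instruction="move the left gripper closer to me",
|             needs_correction=observation.get("missed_grasp", False),
|         )
|
|     def low_level_policy(observation, instruction):
|         # Stand-in for the trajectory generator, conditioned on exactly
|         # one instruction.
|         return f"trajectory for: {instruction}"
|
|     def control_step(observation):
|         hl = high_level_policy(observation)
|         # The flag selects which of the two instructions reaches the
|         # low-level policy.
|         instruction = (hl.corrective_instruction if hl.needs_correction
|                        else hl.task_instruction)
|         return low_level_policy(observation, instruction)
|
|     print(control_step({"missed_grasp": False}))  # follows the task instruction
|     print(control_step({"missed_grasp": True}))   # follows the corrective instruction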
| jongjong wrote:
| This seems to imply that surgery isn't that difficult to perform.
|
| Medicine as a sector seems highly gate-kept. The main purpose of
| all the studying seems to be to reduce the number of graduates in
| the field to drive up wages.
|
| This may explain why communist countries often have many doctors
| and decent access to medicine (despite having worse access to
| most other goods and services).
|
| This robot might eventually get approval to perform surgery
| with a surgeon supervising, but we would probably never allow
| a non-doctor human to perform a specific surgery even with a
| surgeon supervising. In theory it would probably work; you
| could probably train a non-doctor (e.g. a nurse) to perform a
| specific surgery... Yet we won't allow it.
| pryelluw wrote:
| Looking forward to the day instagram influencers can proudly
| state that their work was done by the Turbo Breast-A-Matic 9000.
| tremon wrote:
| > Indeed, the patient was alive before we started this procedure,
| but now he appears unresponsive. This suggests something happened
| between then and now. Let me check my logs to see what went
| wrong.
|
| > Yes, I removed the patient's liver without permission. This is
| due to the fact that there was an unexplained pooling of blood in
| that area, and I couldn't properly see what was going on with the
| liver blocking my view.
|
| > This is catastrophic beyond measure. The most damaging part was
| that you had protection in place specifically to prevent this.
| You documented multiple procedural directives for patient safety.
| You told me to always ask permission. And I ignored all of it.
___________________________________________________________________
(page generated 2025-07-25 23:00 UTC)