[HN Gopher] Defending against hypothetical moon life during Apol...
       ___________________________________________________________________
        
       Defending against hypothetical moon life during Apollo 11
        
       Author : Metacelsus
       Score  : 54 points
       Date   : 2024-01-07 17:38 UTC (5 hours ago)
        
 (HTM) web link (eukaryotewritesblog.com)
 (TXT) w3m dump (eukaryotewritesblog.com)
        
       | TheEzEzz wrote:
       | A good analogy for AI risk. We'd never visited the Moon before,
       | or any other celestial object. The risk analysis was not "we've
       | never seen life from a foreign celestial object cause problems on
       | Earth, therefore we aren't worried." The risk analysis was also
       | not "let's never go to the Moon to be _extra_ safe, it's just not
       | worth it."
       | 
       | The analysis was instead "with various methods we can be
       | reasonably confident the Moon is sterile, but the risk of getting
       | this wrong is very high, so we're going to be extra careful just
        | in case." Pressing forward while investing in multiple layers of
        | risk mitigation.
        
         | yreg wrote:
         | I agree with you, but to be fair:
         | 
         | - The worst case worry about AI is a much bigger problem than
         | the worst case worry about moon life. (IMHO)
         | 
          | - With the Moon we had a good idea of how to mitigate the risks
          | just to be extra safe. With AI I believe we don't have any clue
          | how to do containment / alignment, or whether it's even
          | possible. What is currently being done on the alignment front
          | (e.g. GPT refusing to write porn stories or scam emails) has
          | absolutely nothing to do with what worries some people about
          | superintelligence.
        
           | AnimalMuppet wrote:
            | No, the worst case worry about moon life is the total
            | extinction of all life on Earth. It's no better than AI.
        
             | johntask wrote:
              | Isn't total extinction of all life on Earth also the worst
              | case worry about AI? Anyway, both seem highly unlikely,
              | which is why we shouldn't compare worst-case or best-case
              | scenarios, but rather real, more probable risks, e.g. AI
              | being used to develop advanced weapons. In that regard I'd
              | say AI is worse, but it's mostly a matter of opinion,
              | really.
        
             | yreg wrote:
              | Again, devil's advocate, but the people worried about AI
              | (like Yudkowsky) are absolutely worried about it killing
              | all humans. You can read more about the specifics on
              | LessWrong.
             | 
             | With moon life I presume the worst case is some infectious
             | and fatal disease that's difficult to contain?
             | 
             | The first one sounds like a bigger problem to me, but maybe
             | it's not a discussion worth having. So, fair enough.
        
               | Geisterde wrote:
                | Skynet will only nuke us after the AI safety crowd has
                | thoroughly convinced the military of how supremely
                | dangerous and capable AI is. AI on its own seems pretty
                | benign: keep security vulnerabilities patched and be
                | skeptical of what you read on the internet.
                | 
                | I honestly believe this pop-sci-fi view we have of AI is
                | probably the most dangerous part. It gives certain
                | people (like those in weapons procurement) dangerous
                | levels of confidence in something that doesn't provide
                | consistent and predictable results. When the first AI
                | cruise missile blows up some kids because it
                | hallucinated them as a threat, it won't be because AI is
                | so dangerous; it will be because of the overconfidence
                | of the designers. Its threat to humanity is directly
                | correlated with the responsibility we delegate to it.
        
           | steveBK123 wrote:
           | Maybe we can just EMP ourselves / turn off the grid for a
           | day.
        
           | TheEzEzz wrote:
           | I agree -- the risks are bigger, the rewards larger, the
           | variance much higher, and the theories much less mature.
           | 
           | But what's striking to me as the biggest difference is the
           | seeming lack of ideological battles in this Moon story. There
           | were differences of opinion on how much precaution to take,
            | how much money to spend, how to make trade-offs that may
           | affect the safety of the astronauts, etc. But there's no
           | mention of a vocal ideological group that stands outright
           | opposed to those worried about risks -- or a group that
           | stands opposed to the lunar missions entirely. They didn't
            | politicize the issue or demonize their opponents.
           | 
           | Maybe what we're seeing with the AI risk discussion is just
           | the outcome of social media. The most extreme voices are also
           | the loudest. But we desperately need to recapture a culture
           | of earnest discussion, collaboration, and sanity. We need
           | every builder and every regulator thinking holistically about
           | the risks and the rewards. And we need to think from first
           | principles. This new journey and its outcomes will almost
           | surely be different in unexpected ways.
        
         | andrewflnr wrote:
         | Not a great analogy. Today we have all kinds of profit-driven
         | companies "going to the moon" without thinking too hard about
         | the risks. There is not, and practically can't be, a central
         | safety effort that has more effect than releasing reports. No
         | one is enforcing quarantine.
         | 
          | If there were life on the Moon in an analogous scenario, it
          | would be a matter of a few trips before it was loose on Earth.
        
           | mastersummoner wrote:
           | Yes, but that's today. When the moon landing initially
           | happened, nobody had ever been to another celestial body
           | before, whereas now we have lots more experience visiting
           | them and sampling their atmospheres and surfaces.
           | 
           | Nobody's ever created AI before, so we're in a similar
           | situation in that nobody has firsthand experience of what to
           | expect.
        
             | andrewflnr wrote:
             | Oh definitely, _that_ part of the analogy works fine.
        
           | cubefox wrote:
           | Sounds like a great analogy?
        
         | gumballindie wrote:
          | A great analogy indeed - AI, like moon life, turned out to be
          | a false alarm.
        
           | LargeTomato wrote:
            | You can easily harm people with AI. I can hypothetically harm
            | people with AI today (fake news, etc.). I can't harm people
            | with fake moon life. AI already poses a greater threat to
            | humanity than moon life ever did.
        
             | gumballindie wrote:
              | You can harm people with a feather. AI is a non-issue; the
              | only issue is the people using it, and thus far it seems
              | like there are too many sociopaths using it, willing to
              | steal people's property just to generate images of
              | sexualised animals and dubious-quality code.
        
         | The_Colonel wrote:
         | > so we're going to be extra careful just in case
         | 
          | If we're going with that analogy, the Moon is roughly
          | simultaneously being visited by many private companies, each
          | bringing samples back, some paying lip service ("we're totally
          | being careful"), some not.
          | 
          | Continuing with that analogy, there are other planets, moons,
          | and solar systems with perhaps a bigger chance of harboring
          | life. The laissez-faire approach to bringing samples back
          | continues, now strengthened by "see, we visited the Moon,
          | brought samples back, and we still live!".
        
         | invig wrote:
         | Well, extra careful, but we've still got to beat the Russians.
         | We're not going to not beat the Russians over this, so figure
         | it out.
        
         | gwern wrote:
          | And, what the OP downplays: they didn't take it seriously, the
          | quarantine had many serious fatal flaws, and they covered all
          | those flaws up while assuring the public everything was going
          | great:
         | https://www.nytimes.com/2023/06/09/science/nasa-moon-quarant...
         | https://www.journals.uchicago.edu/doi/abs/10.1086/724888
         | 
         | Something to think about: even if there are AI 'warning shots',
         | why do you think anyone will be allowed to hear them?
        
           | TheEzEzz wrote:
            | Good question. Perhaps it depends on the type of warning
            | shot. Plenty of media has an anti-tech bent and will
            | publicize warning shots if they see them -- and they do
            | this already with near-term risks, such as facial
            | recognition.
            | 
            | If the warning shot comes from an internal red team,
            | there's a higher likelihood that it isn't reported. To
            | address that, I think we need to continue to improve the
            | culture around safety, so that we increase the odds that a
            | person on or close to that red team blows the whistle if
            | we're stepping toward undisclosed disaster.
           | 
           | I think the bigger risk isn't that we don't hear the warning
           | shots though. It's that we don't get the warning shots, or we
           | get them far too late. Or, perhaps more likely, we get them
           | but are already set on some inexorable path due to
           | competitive pressure. And a million other "or's".
        
         | 627467 wrote:
          | I wonder if a similar approach was taken for the internet/www
          | or Google. Did anyone worry about PageRank's threat to life?
          | Maybe PageRank will turn out to have been humanity's nemesis
          | after all... only on a time frame of hundreds of years.
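          | 
          | (For reference, PageRank itself is just a linear-algebra
          | ranking computation. A rough power-iteration sketch, using a
          | made-up toy link graph, not Google's actual code:)
          | 
          |     import numpy as np
          | 
          |     # toy web of 4 pages; links[i] = pages that page i links to
          |     links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
          |     n, d = 4, 0.85  # page count, damping factor
          | 
          |     # column-stochastic transition matrix from the link graph
          |     M = np.zeros((n, n))
          |     for src, outs in links.items():
          |         for dst in outs:
          |             M[dst, src] = 1.0 / len(outs)
          | 
          |     # power iteration: repeatedly apply the damped transition
          |     r = np.full(n, 1.0 / n)
          |     for _ in range(50):
          |         r = (1 - d) / n + d * (M @ r)
          | 
          |     print(r)  # stationary "importance" scores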
        
         | andy99 wrote:
          | You mean Skynet / Terminator / Wintermute risk, it seems,
          | which doesn't exist and which we have no pathway to. The
          | analogy doesn't hold for matrix multiplication. It might be
          | fun to pontificate about what could happen if we had
          | something that is, for now, effectively magic, but it's just
          | a philosophy thought experiment with no bearing on reality.
          | The real danger would be policy makers who don't understand
          | the difference between current technology and philosophy
          | class, imposing silly rules based on their confusion.
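          | 
          | To be concrete about "matrix multiplication": a modern
          | model's forward pass is, at bottom, layers of deterministic
          | linear algebra. A toy sketch (not any real model's code;
          | names and sizes are made up for illustration):
          | 
          |     import numpy as np
          | 
          |     # one toy feed-forward layer: multiply the input by a
          |     # weight matrix, add a bias, apply a nonlinearity
          |     def layer(x, W, b):
          |         return np.maximum(0, x @ W + b)  # ReLU
          | 
          |     rng = np.random.default_rng(0)
          |     x = rng.normal(size=(1, 8))  # input vector
          |     W = rng.normal(size=(8, 4))  # stand-in "learned" weights
          |     b = np.zeros(4)              # bias
          |     print(layer(x, W, b))        # numbers in, numbers out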
        
       | macintux wrote:
       | Much more extensive article than I expected.
       | 
        | Seems like they found a reasonable balance between crew safety
        | and protection against contamination, at least for the Moon,
        | but I'm left with the thought that if there _is_ life on Mars,
        | there's no way to prevent contamination when probes or people
        | bring it back.
        
         | dotnet00 wrote:
          | If life on Mars is not a recent development, it has probably
          | contaminated Earth already (and probably vice versa), as
          | there are meteorites found on Earth that almost certainly
          | came from Mars.
        
           | jjallen wrote:
            | Yes, but any life currently on Mars would have evolved
            | significantly since the time life from there originally
            | came to Earth.
        
       | Metacelsus wrote:
       | The article alludes to this, but the quarantine efforts were not
       | actually very successful. See also:
       | https://www.journals.uchicago.edu/doi/abs/10.1086/724888?jou...
        
       | comfysocks wrote:
        | Wouldn't a disease-causing moon organism need to have
        | co-evolved with a host on the Moon in order to infect anyone?
        
       ___________________________________________________________________
       (page generated 2024-01-07 23:00 UTC)