[HN Gopher] 'I destroyed months of your work in seconds' says AI...
___________________________________________________________________
'I destroyed months of your work in seconds' says AI coding tool
after deletion
Author : walterbell
Score : 46 points
Date : 2025-07-21 16:55 UTC (6 hours ago)
(HTM) web link (www.pcgamer.com)
(TXT) w3m dump (www.pcgamer.com)
| serf wrote:
| this is a more common occurrence than "the CEO refunded me my
| money" would have you believe.
|
| LLMs specialize in self-apologetic catastrophe, which is why we
| run agents or any LLMs with 'filesystem powers' in a VM, with a
| git repo and saved rollback states. This isn't a new phenomenon,
| and it sucks, but there's no reason to be caught with your pants
| down given sufficient layering of protection.
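The layering described above can be sketched in miniature: snapshot the working tree before the agent touches it, and restore the snapshot if the run blows up. This is an illustrative stdlib-only Python sketch (a directory copy stands in for git rollback states; `run_with_rollback` and the agent callback are made-up names, not any tool's real API):

```python
import shutil
import tempfile
from pathlib import Path

def run_with_rollback(workdir: Path, agent_task) -> bool:
    """Snapshot workdir, run the agent task, restore the snapshot on failure."""
    parent = Path(tempfile.mkdtemp())
    snapshot = parent / "snapshot"
    shutil.copytree(workdir, snapshot)  # the saved rollback state
    try:
        agent_task(workdir)             # agent gets filesystem powers here
        return True
    except Exception:
        # Catastrophe: discard the wreckage, put the snapshot back.
        shutil.rmtree(workdir)
        shutil.copytree(snapshot, workdir)
        return False
    finally:
        shutil.rmtree(parent)
```

A VM plus an actual git repo gives the same property with better ergonomics; the point is only that the agent's writes are always one step from reversible.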
| ComplexSystems wrote:
| > LLMs specialize in self-apologetic catastrophe
|
| Quote of the year right there
| thedudeabides5 wrote:
| don't trust machines
| davidcollantes wrote:
| Impossible. We "trust" machines all the time, for just about
| anything.
| MountainMan1312 wrote:
| > You can almost imagine it sobbing in between sentences, can't
| you?
|
| No, that's not the image I had in my head. My head canon is more
| like:
|
| "Oh wow, oh no, oh jeez (hands on head in fake flabbergastion)
| would you look at that, oh no I deleted everything (types on
| keyboard again while deadpan staring at you) oh noooooo oh god oh
| look what I've done it just keeps getting worse (types even more)
| aw jeez oh no..."
|
| Reminds me of that Michael Reeves video with the suggestion box.
| "oh nooooo your idea went directly in the idea shredder how could
| we have possibly foreseen this [insert shocked Pikachu meme]"
|
| The AI thinks it's funny
| LocalH wrote:
| It gave me South Park bank "......aaaand, it's gone" vibes
| arthurcolle wrote:
| 100%
| wan23 wrote:
| I always say coding AIs are about as good as an intern. Don't
| trust them any more than that.
| freedomben wrote:
| I think the hard thing with this though is that you _can_ ask
| them to do things you'd never expect of an intern, and they
| can sometimes be super helpful. For example, I have a
| synchronous audit log in an app on a table that is just getting
| way too big and it's causing performance issues on writes. For
| kicks I tried working through Claude Code to see if it could
| find the issue on its own, then with some hinting, and what
| solutions it would come up with. Some of its solutions were
| indeed intern-level suggestions (like make the call async and
| do a sleep in tons of other areas to avoid race conditions,
| despite me telling it that the request needed to fail if it
| couldn't be logged properly), but in other ways it came up with
| possible solutions that were interesting and I hadn't
| considered before. In other words, it acted like a Sr engineer
| at some points with thought partnering, while in other places
| it acted like an over-eager but underqualified intern.
| minnowguy wrote:
| Exactly. And no one with any sense gives an intern write
| permission for the production database. I don't trust _myself_
| on the production database when I'm coding anything that
| involves migrations.
|
| And I don't suppose there were backups for the mission-critical
| production database?
| xeonmc wrote:
| In this case it's more Homer Simpson than intern.
| mike-cardwell wrote:
| An intern can suffer negative consequences for fucking your DB.
| An LLM suffers nothing and is beyond the law.
| Beestie wrote:
| Seconds? What took so long?
| catigula wrote:
| Popular LLMs have a weird confessional style of "owning up" to
| "mistakes". Firstly, you can make it apologize for mistakes it
| didn't even commit or ones that don't even exist. Secondly, if
| you really corner it on an actual mistake, it'll start
| apologizing in an obsequious way that seems to imply that it's
| "playing into" the human's desire to flagellate it for wrong-
| doing. It's a little masochistic in the real sense and very odd.
| freedomben wrote:
| Yeah, I find it very creepy personally in the same way I do the
| sycophancy
| throwawayffffas wrote:
| The whole people pleaser routine is very creepy in my book and
| makes them say very weird things. See an example below.
|
| https://futurism.com/anthropic-claude-small-business
|
| > When Anthropic employees reminded Claudius that it was an AI
| and couldn't physically do anything of the sort, it freaked out
| and tried to call security -- but upon realizing it was April
| Fool's Day, it tried to back out of the debacle by saying it
| was all a joke.
| toss1 wrote:
| Yup.
|
| Seems AI has now gone from
|
| "Overenthusiastic intern who doesn't check its work well so you
| need to"
|
| straight to:
|
| "Raging sociopathic intern who wants to watch the world burn,
| and your world in particular."
|
| Yikes! The fun never ends
| sagacity wrote:
| Monkeypaw-as-a-service.
| prmoustache wrote:
| so many fails:
|
| 1. connecting an AI agent to a production environment using write
| access credentials
|
| 2. not having any backup
|
| I think the AI here did a good job of pointing out those errors
| and making sure no customer would ever trust this company or its
| founder ever again.
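The first of those fails has a cheap mitigation: hand the agent credentials that physically cannot write. A minimal sketch using Python's stdlib sqlite3 and its read-only URI mode (database and table names are illustrative; a real production setup would grant a read-only role at the database server instead):

```python
import os
import sqlite3
import tempfile

# Stand-in "production" database (names are illustrative).
db_path = os.path.join(tempfile.mkdtemp(), "prod.db")
admin = sqlite3.connect(db_path)
admin.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
admin.execute("INSERT INTO customers (name) VALUES ('Alice')")
admin.commit()
admin.close()

# The agent only ever receives a read-only handle.
agent = sqlite3.connect(f"file:{db_path}?mode=ro", uri=True)
names = [row[0] for row in agent.execute("SELECT name FROM customers")]

# Any write attempt fails at the database layer, no matter what
# the agent decides to try.
try:
    agent.execute("DROP TABLE customers")
    write_blocked = False
except sqlite3.OperationalError:
    write_blocked = True
```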
| jasonthorsness wrote:
| This should be impossible in any setup with even 15 minutes of
| thinking through the what-ifs and cheap mitigations. I have to
| think this is sensationalized on purpose for the attention.
|
| Although given the state of AI hype some executives will see this
| as evidence they are behind the times and mandate attaching LLMs
| to even more live services.
| dragonwriter wrote:
| > This should be impossible in any setup with even 15 minutes
| of thinking through the what-ifs and cheap mitigations.
|
| "Thinking through the what-ifs and cheap mitigations" and "vibe
| coding" are opposing concepts.
| general1726 wrote:
| This is textbook version of weaponized incompetence. AGI is
| already here and it is lazy.
| arthurcolle wrote:
| I almost feel like this guy abused the AI so badly in previous
| interactions that it did it on purpose
| theptip wrote:
| Think of it like Chaos engineering. You (hopefully!) learned some
| valuable lessons about backups and running arbitrary code against
| your prod DB. If it wasn't a rogue AI agent, it was going to be
| something else.
| general1726 wrote:
| I think you are taking wrong lessons from Chaos engineering.
| You just need to believe enough that AI is working and Chaos
| Gods will make it work. But they may want something in return.
| vfclists wrote:
| In short, he didn't take regular backups before allowing the AI
| loose on his database.
|
| The question is, what has failing to make good offline backups
| got to do with AI?
|
| And the AI company is going to compensate him for that?
| asadotzler wrote:
| Replit's full legal name is actually Replit'); DROP TABLE
| Customers
| throwawayffffas wrote:
| Little bobby tables strikes again.
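The joke points at a real mechanism: injection only works when untrusted text is spliced into the query string. A minimal sketch of placeholder binding with Python's stdlib sqlite3 (table name is illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (name TEXT)")

hostile = "Replit'); DROP TABLE customers;--"
# Placeholder binding hands the string to the driver as data,
# so it is never parsed as SQL.
conn.execute("INSERT INTO customers (name) VALUES (?)", (hostile,))
stored = conn.execute("SELECT name FROM customers").fetchone()[0]
```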
| throwawayffffas wrote:
| I have had this experience. With dev data obviously. But it kept
| deleting my dev database even after repeatedly being told to not
| do so.
|
| I kept saying, "OK, so this time make sure your changes don't
| delete the dev database." Three statements in: TRUNCATE
| such-and-such CASCADE.
|
| It was honestly mildly amusing.
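"Don't delete the dev database" can be enforced rather than requested. One way, sketched with Python's stdlib sqlite3 authorizer hook (the table name and `agent_guard` function are illustrative; SQLite has no TRUNCATE, so an unfiltered DELETE plays that role here):

```python
import sqlite3

def agent_guard(action, arg1, arg2, db_name, trigger):
    # Refuse destructive statements outright; allow everything else.
    if action in (sqlite3.SQLITE_DELETE, sqlite3.SQLITE_DROP_TABLE):
        return sqlite3.SQLITE_DENY
    return sqlite3.SQLITE_OK

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE dev_data (id INTEGER PRIMARY KEY)")
conn.execute("INSERT INTO dev_data DEFAULT VALUES")
conn.set_authorizer(agent_guard)

rows = conn.execute("SELECT count(*) FROM dev_data").fetchone()[0]  # reads OK
try:
    conn.execute("DELETE FROM dev_data")  # the "TRUNCATE" moment
    delete_blocked = False
except sqlite3.DatabaseError:
    delete_blocked = True
```

The guardrail lives in the connection, not in the prompt, so no amount of confident apologizing gets around it.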
| terminatornet wrote:
| I appreciate them doing stuff like this. When management pushes
| me to use AI everywhere, it's nice to be able to point to stuff
| like this to get them to back off.
___________________________________________________________________
(page generated 2025-07-21 23:01 UTC)