[HN Gopher] 1-Click RCE to steal your Moltbot data and keys
___________________________________________________________________
1-Click RCE to steal your Moltbot data and keys
Author : arwt
Score : 127 points
Date : 2026-02-01 19:47 UTC (3 hours ago)
(HTM) web link (depthfirst.com)
(TXT) w3m dump (depthfirst.com)
| dotancohen wrote:
| The real problem is that there is nothing novel here. Variants of
| this type of attack were clear from the beginning.
| lxgr wrote:
| What I would have expected is prompt injection or other methods
| to get the agent to do something its user doesn't want it to,
| not regular "classical" attacks.
|
| At least currently, I don't think we have good ways of
| preventing the former, but the latter should be possible to
| avoid.
| ethin wrote:
| They are easy to avoid if you actually give a damn.
| Unfortunately, people who create these things don't, assuming
| they even know what half of these attacks are in the
| first place. They just want to pump out something now now now
| and the mindset is "we'll figure out all the problems later,
| I want my cake now now now now!" Maximum velocity! Full
| throttle!
|
| It's just as bad as a lot of the vibe-coders I've seen. I
| literally saw this vibe-coder who created an app without even
| knowing what they wanted to create (as in, what it would do),
| and the AI they were using to vibe-code literally handwrote a
| PE parser to load DLLs instead of using LoadLibrary or delay
| loading. Which, really, is the natural consequence of giving
| someone access to software engineering tools when they don't
| know the first thing about it. Is that gatekeeping of a sort?
| Maybe, but I'd rather have that than "anyone can write
| software, and oh by the way this app reimplements wcslen in
| Rust because the vibe-coder had no idea what they were even
| doing".
| lxgr wrote:
| > "we'll figure out all the problems later, I want my cake
| now now now now!" Maximum velocity! Full throttle!
|
| That is indeed the point. Moltbot reminds me a lot of the
| demon core experiment(s): Laughably reckless in hindsight,
| but ultimately also an artifact of a time of massive
| scientific progress.
|
| > Is that gatekeeping of a sort? Maybe, but I'd rather have
| that
|
| Serious question: What do you gain from people not being
| able to vibe code?
| hugey010 wrote:
| Not who you're responding to, but I'm not a huge fan of
| vibe coding for 2 reasons: I don't want to use crappy
| software, and I don't want to inherit crappy software.
| lxgr wrote:
| Same, but I've both used and inherited crappy software
| long before LLMs and agents were a thing.
|
| I suppose it's going to be harder to identify obvious
| slop at a first glance, but fundamentally, what changes?
| chrisjj wrote:
| > They just want to pump out something now now now
|
| Some people actually fell for "move fast and break things".
| ejcho wrote:
| I think with the advent of the AI gold rush, this is
| exactly the mentality that has proliferated throughout new
| AI startups.
|
| Just ship anything and everything as fast as possible
| because all that matters is growth at all costs. Security
| is hard and it takes time, diligence, and effort and
| investors aren't going to be looking at the metric of "days
| without security incident" when flinging cash into your
| dumpster fire.
| chrisjj wrote:
| > At least currently, I don't think we have good ways of
| preventing the former, but the latter should be possible to
| avoid.
|
| Here's the thing. People who don't see a problem with the
| former obviously have no interest in addressing the latter.
| clawsyndicate wrote:
| legit issue for local installs but this is why we run the hosted
| platform in gVisor. even with the exploit you're trapped in a
| sandbox with no access to the host node. we treat every container
| as hostile by default.
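For readers unfamiliar with the setup described above: gVisor ships a user-space kernel runtime, `runsc`, which is registered with the Docker daemon and then selected per container. A minimal sketch of the daemon config, assuming the binary sits at its documented install path:

```json
{
  "runtimes": {
    "runsc": {
      "path": "/usr/local/bin/runsc"
    }
  }
}
```

With that in `/etc/docker/daemon.json`, `docker run --runtime=runsc ...` runs the container against gVisor's syscall-interception layer instead of the host kernel, which is what limits the blast radius of an in-container RCE.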
| electroglyph wrote:
| that response is not comforting
| chrisjj wrote:
| So... what use is an agent that cannot reach out of its trap?
| hughw wrote:
| You sound like the confident techie character in a Michael
| Crichton novel pronouncing "We've thought of everything, there's
| no way for the demon to escape" shortly before the demon
| escapes.
| optimalsolver wrote:
| He spared no expense.
| mentalgear wrote:
| Moltbot is a security nightmare. Its premise (tap into all
| your data sources) and its rapid uptake by inexperienced users
| make it especially attractive to criminal networks.
| chrisjj wrote:
| It's like a bank decided to open its systems to a bunch of
| students it hired off Fiverr.
| avaer wrote:
| Yes, there are already several criminal networks operating on
| it (transparently). I guess some consider this a feature.
| cal85 wrote:
| How do you know this? Not disagreeing, just curious.
| avaer wrote:
| The links have been posted to HN if you search.
|
| https://moltroad.com/ comes to mind. The "top rated" on
| there describes itself as "trading in neural contraband".
|
| That's in addition to all of the actual hijacking hacks
| that have been going on.
|
| I'm not saying any of this is successful, but people are
| certainly trying.
| FreePalestine1 wrote:
| I am officially at the age where I'm unable to "get with
| the times". What am I looking at with moltroad.com?
| g947o wrote:
| We'll all have a good laugh when looking back at this in a few
| years.
| catlifeonmars wrote:
| Any customers of products built on this stuff, who have their
| SSNs, numbers, and other PII leaked will not be laughing. But
| hey, who cares about them?
| overgard wrote:
| I'm curious, outside of AI enthusiasts have people found value
| with using Clawdbot, and if so, what are they doing with it? From
| my perspective it seems like the people legitimately busy enough
| that they actually need an AI assistant are also people with
| enough responsibilities that they have to be very careful about
| letting something act on their behalf with minimal supervision.
| It seems like that sort of person could probably afford to hire
| an administrative assistant anyway (a trustworthy one), or if
| it's for work they probably already have one.
|
| On the other hand, the people most inclined to hand over access
| to everything to this bot also strike me as people without a lot
| to lose? I don't want to make an unfair characterization or
| anything, it just strikes me that handing over the keys to your
| entire life/identity is a lot more palatable if you don't have
| much to lose anyway?
|
| Am I missing something?
| jondwillis wrote:
| Does it matter? Let them cook and get burned if they want to.
| lxgr wrote:
| There's some good discussion here:
| https://news.ycombinator.com/item?id=46838946
| mh2266 wrote:
| The whole premise of this thing seems to be that it has access
| to your email, web browser, messaging, and so on. That's what
| makes it, in theory, useful.
|
| The prompt injection possibilities are incredibly obvious...
| the entire world has write access to your agent.
|
| ???????
| Trufa wrote:
| It is very much fun! Chaotic and definitely dangerous, but a fun
| little experiment at the boundaries.
|
| It's definitely not in its final form, but it's showing
| potential.
| voxgen wrote:
| I'm working in AI, but I'd have made this anyway: Molty is my
| language learning accountability buddy. It crawls the web with
| a sandboxed subagent to find me interesting stuff to read in
| French and Japanese. It makes Anki flashcards for me. And it
| wraps it up by quizzing me on the day's reading in the evening.
|
| All this is running on a cheap VPS, where the worst it has
| access to is the LLM and Discord API keys and AnkiWeb login.
| h4kunamata wrote:
| From my perspective, not everybody is busy; many are just
| using AI to take the load off themselves.
|
| You might think: but that's great, right?
|
| I had a chat with a friend who is also in IT: ChatGPT and the
| like are doing all the "brain work" and execution in most
| cases. Entire workflows are done by AI tools; in some cases he
| just presses a button.
|
| People forget that our brains need stimulation: if you don't
| use yours, you forget things and it gets duller. Watch the next
| generation of engineers who are very good at using AI but
| unable to troubleshoot on their own.
|
| Look at what happened with ChatGPT 4 -> 5: company workflows
| worldwide stopped working, setting companies back by months.
|
| Do you want a real-world example?
|
| Watch people who spent their entire lives at a university
| getting all sorts of qualifications, but who never touched the
| real deal, unable to do anything.
|
| Sure, the smarter ones put things to the test and found great
| jobs, but many are jobless because all they did was "press a
| button". They are just like the AI enthusiasts: remove such
| tools and they can no longer work.
| bmit wrote:
| So many people are giving keys to the kingdom to this thing. What
| is happening with humanity?
| lxgr wrote:
| Humanity is the same it's always been. Some people are just
| inherently curious despite the obvious dangers.
|
| Also, if you think about it, billions of people aren't running
| Moltbot at all.
| nsm100 wrote:
| Thank you for doing this. I'm shocked that more people aren't
| thinking about security with respect to AI.
| lxgr wrote:
| This isn't even AI security, as far as I can tell: It looks
| like regular old computer security to me.
| g947o wrote:
| In the old days we just called that arbitrary code execution.
|
| And these AI people just act as if that's never a problem.
| avaer wrote:
| People are thinking about it. I'm just not sure the overlap
| between those people and the people who use OpenClaw/Moltbook
| is very large.
| decodebytes wrote:
| I rushed out nono.sh (the opposite of yolo!) in response to this,
| and it's already negated a few gateway attacks.
|
| It uses kernel-level security primitives (Landlock on Linux,
| Seatbelt on macOS) to create sandboxes where unauthorized
| operations are structurally impossible. API keys are stored in
| Apple's Secure Enclave (or the kernel keyring on Linux),
| injected at runtime, and zeroized from memory after use. There
| is also some blocking of destructive actions (rm -rf ~/).
|
| It's as simple to run as: nono run --profile openclaw -- openclaw
| gateway
|
| You can also use it to sandbox things like npm install:
|
| nono run --allow node_modules --allow-file package.json
| package.lock npm install pkg
|
| It's early days, there will be bugs! PRs welcome and all that!
|
| https://nono.sh
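The inject-then-zeroize pattern described above can be sketched in Python. This is an illustration of the general idea, not nono's actual implementation; `with_secret`, `DEMO_KEY`, and `call_api` are made-up names:

```python
import os

def with_secret(env_var, fn):
    # Copy the secret into a mutable buffer, hand it to fn, then
    # overwrite it in place. A bytearray (not str) is used so the
    # bytes really can be zeroized after use.
    buf = bytearray(os.environ[env_var], "utf-8")
    try:
        return fn(buf)
    finally:
        for i in range(len(buf)):
            buf[i] = 0

os.environ["DEMO_KEY"] = "s3cret"   # stand-in for a real key store
held = {}

def call_api(key):                  # placeholder for real key usage
    held["buf"] = key
    return len(key)

print(with_secret("DEMO_KEY", call_api))   # 6
print(all(b == 0 for b in held["buf"]))    # True: buffer zeroized
```

Caveat: CPython can still leave copies of the secret elsewhere (the immutable `os.environ` string, for one), which is why serious implementations use locked, non-swappable memory or a kernel-managed store as described above.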
| krackers wrote:
| Is this better than using sandbox-exec (on mac) directly?
| decodebytes wrote:
| Hmm, I don't know about better, more convenient I guess. But
| if it floats your boat you could write out everything in the
| .sb format and call sandbox-exec yourself!
| stijnveken wrote:
| Heads up that your url is wrong. Should be https://nono.sh
| decodebytes wrote:
| lol thanks! seriously, I have been running the tool over and
| over while testing and I kept typing 'nano' and opening
| binaries in the text editor. Next minute I'm swearing my head
| off trying to close nano (and not vim!)
| hedgehog wrote:
| Obviously I'm biased but this looks really useful.
| ethin wrote:
| Things like this are why I don't use AI agents like
| moltbot/openclaw. Security is just out the window with these
| things. It's like the last 50 years never happened.
| avaer wrote:
| No need to look back 50 years, people already forgot 2021
| crypto security lapses that collectively cost billions. Or
| maybe the target audience here just doesn't care.
| voxgen wrote:
| It's not perfect but it does have a few opt-in security
| features: running all tools in a docker container with minimal
| mounts, requiring approvals for exec commands, specifying tools
| on an agent-by-agent basis so that the web agent can't see
| files and the files agent can't see the web, etc.
|
| That said, I still don't trust it and have it quarantined in a
| VPS. It's still surprisingly useful even though it doesn't have
| access to anything that I value. Tell it to do something and
| it'll find a way!
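The minimal-mounts container setup described above could look something like the following Compose service. The image name and paths are placeholders, not the project's actual config:

```yaml
services:
  agent-tools:
    image: example/agent-tools:latest   # placeholder image
    read_only: true                     # read-only root filesystem
    cap_drop: [ALL]                     # drop all Linux capabilities
    network_mode: none                  # e.g. the files agent gets no network
    volumes:
      - ./workspace:/workspace:rw       # the only writable mount
    tmpfs:
      - /tmp                            # scratch space, not persisted
```

The design choice is least privilege per agent: the web-facing agent would get a network but no filesystem mounts, and vice versa, so a compromise of one doesn't hand over the other's access.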
| vulnwrecker5000 wrote:
| what worries me here is that the entire personal AI agent product
| category is built on the premise of "connect me to all your data
| + give me execution." At that point, the question isn't "did they
| patch this RCE," it's more about what does a secure autonomous
| agent deployment even look like when its main feature is broad
| authority over all of someone's connected data?
|
| Is the only real answer sandboxing + zero trust + treating agents
| as hostile by default? Or is this category fundamentally
| incompatible with least privilege?
|
| yikes
| chrisjj wrote:
| We need more of Windows' "Are you sure you want XXX to make
| changes to your computer? (no I can't tell you what changes,
| but trust me.)"
|
| /i
| mh2266 wrote:
| > "did they patch this RCE,"
|
| no, they _documented_ it
|
| https://docs.openclaw.ai/gateway/security#node-execution-sys...
| g947o wrote:
| So that's shifting the responsibility to users. And likely
| many users tools don't understand what those words mean.
|
| All these companies/projects break decades of security
| practice to sell you an AI browser or AI agent for... I don't
| know what?
| ejcho wrote:
| do people even care about security anymore? I'll bet many
| consumers wouldn't even think twice about just giving full access
| to this thing (or any other flavor-of-the-month AI agent product)
___________________________________________________________________
(page generated 2026-02-01 23:00 UTC)