         _______               __                   _______
       |   |   |.---.-..----.|  |--..-----..----. |    |  |.-----..--.--.--..-----.
       |       ||  _  ||  __||    < |  -__||   _| |       ||  -__||  |  |  ||__ --|
       |___|___||___._||____||__|__||_____||__|   |__|____||_____||________||_____|
                                                              on Gopher (unofficial)
 (HTM) Visit Hacker News on the Web
       
       
       COMMENT PAGE FOR:
 (HTM)   Tell HN: Litellm 1.82.7 and 1.82.8 on PyPI are compromised
       
       
        jcarrano wrote 6 hours 34 min ago:
         Thinking about what a secure setup for uploading packages from a CI
         would look like: the package must be signed by the devs, and for that
         they must build it independently on their machines (this requires
         reproducible builds).
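         A toy sketch of the property that setup relies on (the file names and
         contents here are made-up stand-ins): two independent builds must
         produce byte-identical artifacts before anyone signs.

```shell
# stand-ins for a CI-built wheel and a dev-built wheel; with reproducible
# builds, both pipelines yield byte-identical files
printf 'pretend-wheel-contents' > build-ci.whl
printf 'pretend-wheel-contents' > build-dev.whl

# compare digests: only a match clears the artifact for signing
a=$(sha256sum build-ci.whl | cut -d' ' -f1)
b=$(sha256sum build-dev.whl | cut -d' ' -f1)
if [ "$a" = "$b" ]; then
  echo "digests match, safe to sign"
else
  echo "digest mismatch, do not sign" >&2
  exit 1
fi
```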
       
        calebjang wrote 1 day ago:
        This is exactly what worries me about autonomous agents.
        A compromised package is bad. An agent that autonomously
        runs pip install with that package is a different problem.
        The attack surface moves with the agent.
       
          Fibonar wrote 22 hours 17 min ago:
          Dev who submitted the PyPI report here. I hear what you're saying,
           but in this case it was all human error that got me. It was a mix
           of getting too comfortable with uvx installing all dependencies on
           startup, and Cursor running my uvx-built plugins automatically in
           the background. Meaning I didn't even type the install command
           myself, and no agents were involved.
       
        n1tro_lab wrote 1 day ago:
        The scariest part is LiteLLM is a transitive dependency. The person who
        found it wasn't even using LiteLLM directly, it got pulled in by a
        Cursor MCP plugin. The supply chain attack surface for AI tooling is
        massive because these packages get pulled in as dependencies of
        dependencies and nobody audits transitive installs.
       
        Bullhorn9268 wrote 1 day ago:
         I am from futuresearch and went through this with Callum (the OG). We
         did a small analysis of the packages here: [1] and also built this
         mini tool to analyze the likelihood of you getting pwned through
         this: [2]
        
 (HTM)  [1]: https://futuresearch.ai/blog/litellm-hack-were-you-one-of-the-...
 (HTM)  [2]: https://futuresearch.ai/tools/litellm-checker/
       
        zx8080 wrote 1 day ago:
        > the compromise originated from the Trivy dependency used in our CI/CD
        security scanning workflow.
        
        What is the source of compromise?
        
        Does anyone have a list of other compromised projects?
       
        latable wrote 1 day ago:
         So now we feel the need to add malware protection to the CI, like we
         put Comodo on Windows 7 and prayed while surfing shady torrent
         websites? It is pretty ironic that an extra tool used to protect
         against threats gets compromised and creates an even bigger threat.
         Some here talk about better isolation during development and CI, but
         the surface area is huge, and isolating it all is probably
         impractical. Even if the CI is well isolated, the produced package is
         compromised.
        
         What about reducing the number of dependencies? Integrating core
         functionality into the language's built-in libraries? Avoiding
         frequent package updates? Avoiding immature/experimental packages
         from developers of unknown reliability?
        
         These issues are grave. I see no future in which they get rarer, and
         I am afraid they may wipe out the open-source movement's credibility.
       
        avian wrote 1 day ago:
        What's with the hundreds of comments like "This was the answer I was
        looking for." in that GitHub thread?
        
        They also seem to be spilling into HN [1].
        
         Runaway AI agents? A meme I'm too old to understand?
        
 (HTM)  [1]: https://news.ycombinator.com/item?id=47508315
       
          ramimac wrote 1 day ago:
           It's a spam flood by the attacker to complicate information
           sharing. [1] They did the same thing in the Trivy discussion, with
           many of the same accounts. [2]
          
 (HTM)    [1]: https://ramimac.me/teampcp/#spam-flood-litellm
 (HTM)    [2]: https://ramimac.me/teampcp/#discussion-flooded
       
        postalcoder wrote 1 day ago:
        FYI, npm/bun/pnpm/uv now all support setting a minimum release age for
        packages.
        
        I updated my global configs to set min release age to 7 days:
        
          ~/.config/uv/uv.toml
          exclude-newer = "7 days"
          
          ~/.npmrc
          min-release-age=7 # days
          
          ~/Library/Preferences/pnpm/rc
          minimum-release-age=10080 # minutes
          
          ~/.bunfig.toml
          [install]
          minimumReleaseAge = 604800 # seconds
       
          tomtomtom777 wrote 1 day ago:
           I understand that this is a good idea, but it does feel really
           weird: add a min-release-age and see whether anyone who doesn't
           gets bitten.
           
           Next up, we're going to advise a minimum-release-age of 14 days,
           because most other projects use 7 days.
       
            talkin wrote 1 day ago:
            There will always be early adopters.
            
            And maybe more importantly: security tools and researchers.
       
            bonoboTP wrote 1 day ago:
            You don't have to outrun the bear, just the other guy.
       
          exyi wrote 1 day ago:
           Do you know if there is a way to override this when I specifically
           want to install a security patch? uv just claims the package
           doesn't exist if I ask for the new version.
       
            postalcoder wrote 1 day ago:
            Yes there is. You can use those configs as flags in the CLI to
            override the global config.
            
            eg:
            
               npm install <pkg> --min-release-age 0
               
               pnpm add <pkg> --minimum-release-age 0
               
               uv add <pkg> --exclude-newer "0 days"
               
               bun add <pkg> --minimum-release-age 0
       
              collinmanderson wrote 1 hour 29 min ago:
              uv also has --exclude-newer-package which I think can be used for
              overriding just a certain package. [1]
              
 (HTM)        [1]: https://docs.astral.sh/uv/reference/cli/#uv-run--exclude...
 (HTM)        [2]: https://docs.astral.sh/uv/reference/settings/#exclude-ne...
       
          jerrygoyal wrote 1 day ago:
           I don't think the syntax is correct for pnpm
       
            postalcoder wrote 1 day ago:
            Works for me?
            
               $ pnpm add -D typescript@6.0.2
               ERR_PNPM_NO_MATURE_MATCHING_VERSION  No matching version found
               for typescript@6.0.2 published by Wed Mar 18 2026..
            
            You could also set the config this way:
            
              pnpm config set minimumReleaseAge 10080 --global
            
            You may be thinking about the project-specific config, which uses
            YAML.
            
 (HTM)      [1]: https://pnpm.io/cli/config
       
        vlovich123 wrote 1 day ago:
         I maintain that GitHub does a piss poor job of hardening CI so that
         one step getting compromised doesn’t compromise all possible
         secrets. There’s absolutely no need for the GitHub publishing
         workflow to run some third party scanner, and the third party scanner
         doesn’t need access to your PyPI publishing tokens.
         
         This stupidity is squarely on GitHub CI. Trivy is also bad here, but
         the blast radius should have been more limited.
       
        dhon_ wrote 1 day ago:
        I have older versions of litellm installed on my system - it appears to
        be a dependency for aider-chat (at least on NixOS)
       
        agentictrustkit wrote 1 day ago:
         I think this gets a lot worse when we look at it from an agentic
         perspective. When a developer hits a compromised package, there's
         usually a "hold on, that's weird" moment before a catastrophe. An
         agent doesn't have that instinct.
         
         Oh boy, supply chain integrity will be an agent governance problem,
         not just a devops one. If you send out an agent that can autonomously
         pull packages, write code, or access creds, then the blast radius of
         a compromise widens. That's why I think there's an argument for
         least-privilege by default: agents should have scoped, auditable
         authority over what they can install and execute, and require
         approval for anything outside those boundaries.
       
          Fibonar wrote 20 hours 21 min ago:
          Initial person to report the malware to PyPI here. My cynical take is
          that it doesn't really matter how tightly scoped the agent privileges
          are if the human is still developing code outside of containers, with
          .env files lying around for the taking. I agree about agents not yet
          having the instincts to check suspicious behaviour. It took a bit of
          prodding for my CC to dig deeper and not accept the first innocent
          explanation it stumbled on.
       
        getverdict wrote 1 day ago:
         Supply chain compromises in AI tooling are becoming structural, not
         exceptional. We've seen similar patterns in the last 6 months:
         Zapier's npm account (425 packages, Shai Hulud malware) and Dify's
         React2Shell incident both followed the same vector, a trusted package
         maintainer account as the entry point. The blast radius keeps growing
         as these tools get embedded deeper into production pipelines.
       
        saharhash wrote 1 day ago:
         Easy tool to check if you/other repos were exposed
        
 (HTM)  [1]: https://litellm-compromised.com
       
        gaborbernat wrote 1 day ago:
         Recommend reading this related blog post:
        
 (HTM)  [1]: https://bernat.tech/posts/securing-python-supply-chain
       
        arrty88 wrote 1 day ago:
         Oooof, another one. I think I will lock my deps to versions at least
         3 months old.
       
        r2vcap wrote 1 day ago:
        Does the Python ecosystem have anything like pnpm’s minimumReleaseAge
        setting? Maybe I’m being overly paranoid, but it feels like every
        internet-facing ecosystem should have something like this.
       
          collinmanderson wrote 1 hour 15 min ago:
          uv has "exclude-newer"
          
 (HTM)    [1]: https://news.ycombinator.com/item?id=47513932
       
        mathis-l wrote 1 day ago:
        CrewAI (uses litellm) pinned it to 1.82.6 (last good version) 5 hours
        ago but the commit message does not say anything about a potential
        compromise. This seems weird. Is it a coincidence? Shouldn’t users be
        warned about a potential compromise?
        
 (HTM)  [1]: https://github.com/crewAIInc/crewAI/commit/8d1edd5d65c462c3dae...
       
          mathis-l wrote 1 day ago:
           DSPy is handling it openly:
          
 (HTM)    [1]: https://github.com/stanfordnlp/dspy/issues/9500
       
        ps06756 wrote 1 day ago:
         Can someone help enlighten me why someone would use LiteLLM over,
         say, AWS Bedrock? Or over building a lightweight router and
         connecting directly to the model provider?
       
        datadrivenangel wrote 1 day ago:
         This, along with some other issues, makes me consider ejecting and
         building my own LLM shim. The different model providers are bespoke
         enough even within litellm that it sometimes seems like a lot of
         hassle for not much benefit.
        
        Also the repo is so active that it's very hard to understand the state
        of issues and PRs, and the 'day 0' support for GPT-5.4-nano took over a
        week! Still, tough situation for the maintainers who got hacked.
       
        tonymet wrote 1 day ago:
        I recommend scanning all of your projects with osv-scanner in
        non-blocking mode
        
           # add any dependency file patterns
           osv-scanner -r .
        
         As your projects mature, add osv-scanner as a blocking step to fail
         your builds before the code gets installed / executed.
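         A minimal sketch of that blocking variant, assuming osv-scanner is on
         PATH in the CI image (the install command is just a placeholder for
         whatever your pipeline runs next):

```shell
# osv-scanner exits non-zero when it reports findings, and set -e
# stops the script right there, before anything gets installed
set -e
osv-scanner -r .

# only reached when the scan came back clean
pip install -r requirements.txt
```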
       
        dweinstein wrote 1 day ago:
         [1] I made this tool for macOS systems that helps detect when a
         package accesses something it shouldn't. It's a tiny Go binary (less
         than 2k LOC) with no dependencies that will mount a WebDAV filesystem
         (no root) or NFS (root required) with fake secrets and send you a
         notification when anything accesses them. Very stupid simple. I've
         always really liked the canary/honeypot approach, and this at least
         may give some folks a chance to detect (similar to LittleSnitch) when
         something strange is going on!
        
        Next time the attack may not have an obvious performance issue!
        
 (HTM)  [1]: https://github.com/dweinstein/canary
       
          someguyornotidk wrote 1 day ago:
          Thank you for sharing this!
          
          I always wanted to mess with building virtual filesystems but was
          unwilling to venture outside the standard library (i.e. libfuse) for
          reasons wonderfully illustrated in this thread and elsewhere. Somehow
          the idea of implementing a networked fs protocol and leaving system
          integration to the system never crossed my mind.
          
          I'm glad more people are taking this stance. Large centralized
          standard libraries and minimal audited dependencies is really the
          only way to achieve some semblance of security. There is simply no
          other viable approach.
          
          Edit: What's the license for this project?
       
            dweinstein wrote 1 day ago:
            hi, glad you like it and that it encourages you to try some things
            you've always wanted to do :-)
            
            I was thinking for the license I'd do GPLv3. Would that work for
            you?
       
              someguyornotidk wrote 1 hour 13 min ago:
              Depends on what you want to achieve with your licensing, but I
              personally think GPLv3 is a really good fit for a project like
              yours.
       
          huevosabio wrote 1 day ago:
          This is clever, and also interesting in that it could help stop the
          steal as it happens (though of course not perfect).
       
            dweinstein wrote 1 day ago:
            thanks for your feedback!
            
            that's a really good point and could be an interesting thing to
            play with as an extension. Since we potentially know which process
            is doing the "read" we could ask the user if it's ok to kill it.
            obviously the big issue is that we don't know how much has already
            been shipped off the system at that point but at least we have some
            alert to make some tough decisions.
       
        sudorm wrote 1 day ago:
         Are there any timestamps available for when the malicious versions
         were published on PyPI? I can't find anything except that the last
         "good" version was published on March 22.
       
          sudorm wrote 1 day ago:
           According to articles, the first malicious version was published
           at roughly 8:30 UTC and the PyPI repo was taken down at ~11:25 UTC.
       
        Nayjest wrote 1 day ago:
         Use the secure and minimalistic lm-proxy instead: [1]
         
            pip install lm-proxy
         
         Guys, sorry, as the author of a competing open-source product, I
         couldn’t resist
        
 (HTM)  [1]: https://github.com/Nayjest/lm-proxy
       
        Ayc0 wrote 1 day ago:
        Exactly what I needed, thanks.
       
        ilusion wrote 1 day ago:
        Does this mean opencode (and other such agent harnesses that auto
        update) might also be compromised?
       
        ajoy wrote 2 days ago:
        Reminded me of a similar story at openSSH, wonderfully documented in a
        "Veritasium" episode, which was just fascinating to watch/listen.
        
 (HTM)  [1]: https://www.youtube.com/watch?v=aoag03mSuXQ
       
          zahlman wrote 1 day ago:
          The xz compromise was not "at openSSH", and worked very differently.
       
        santiago-pl wrote 2 days ago:
        It looks like Trivy was compromised at least five days ago.
        
 (HTM)  [1]: https://www.wiz.io/blog/trivy-compromised-teampcp-supply-chain...
       
        somehnguy wrote 2 days ago:
        Perhaps I'm missing something obvious - but what's up with the comments
        on the reported issue?
        
        Hundreds of downvoted comments like "Worked like a charm, much
        appreciated.", "Thanks, that helped!", and "Great explanation, thanks
        for sharing."
       
          kamikazechaser wrote 2 days ago:
          Compromised accounts. The malware targeted ~/.git-credentials.
       
        ting0 wrote 2 days ago:
        I've been waiting for something like this to happen. It's just too easy
        to pull off. I've been hard-pinning all of my versions of dependencies
        and using older versions in any new projects I set up for a little
        while, because they've generally at least been around long enough to
         vet. But even that has its own set of risks (for example, what if I
         accidentally pin a vulnerable version). Either that, or I fork
         everything, including all the deps, and run LLMs over the codebase to
         vet everything.
        
        Even still though, we can't really trust any open-source software any
        more that has third party dependencies, because the chains can be so
        complex and long it's impossible to vet everything.
        
        It's just too easy to spam out open-source software now, which also
        means it's too easy to create thousands of infected repos with
        sophisticated and clever supply chain attacks planted deeply inside
        them. Ones that can be surfaced at any time, too. LLMs have compounded
        this risk 100x.
       
          what wrote 1 day ago:
          Pinning doesn’t help you. They can replace the package and you’ll
          get the new one. You have to vendor the dependencies.
       
            davidatbu wrote 1 day ago:
            I don't think pypi or npm allow replacing existing packages?
       
              ctmnt wrote 1 day ago:
              They absolutely do. In this case litellm 1.82.8 had been out for
              at least a week (can’t recall the exact date offhand). The
              compromised version was a replacement.
       
                ctmnt wrote 1 day ago:
                Ah, my mistake! Thanks for the correction.
                
                But I believe you can replace versions on both, nonetheless.
                It’s a multi step process, unpublish then publish again. But
                the net effect is the same.
       
                  collinmanderson wrote 1 hour 19 min ago:
                  If you lock your dependencies, it should fail if the hash
                  doesn't match.
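                   For plain pip, the same property comes from hash-pinned
                   requirements; a sketch (the version and digest below are
                   placeholders, not litellm's real values):

```text
# requirements.txt: every pinned version carries its expected digest
litellm==1.82.6 \
    --hash=sha256:0000000000000000000000000000000000000000000000000000000000000000

# install with hash checking enforced; pip refuses any file whose
# sha256 differs from the pinned one:
#   pip install --require-hashes -r requirements.txt
```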
       
                  xenophonf wrote 19 hours 33 min ago:
                   PyPI enforces immutable releases: [1]
                   
                   > PyPI does not allow for a filename to be reused, even once
                   a project has been deleted and recreated...
                   
                   > This ensures that a given distribution for a given release
                   for a given project will always resolve to the same file,
                   and cannot be surreptitiously changed one day by the
                   project's maintainer or a malicious party (it can only be
                   removed).
                  
 (HTM)            [1]: https://pypi.org/help/#file-name-reuse
       
                cpburns2009 wrote 1 day ago:
                1.82.7 and 1.82.8 were only up for about 3 hours before they
                were quarantined on PyPI.
       
                dot_treo wrote 1 day ago:
                 It actually wasn't. That was one of the reasons why I looked
                 into what was changed. Even 1.82.6 was only at an RC release
                 on GitHub just before the incident.
                 
                 So the fact that 1.82.7 and then 1.82.8 were released within
                 an hour of each other was highly suspicious.
       
          MarsIronPI wrote 2 days ago:
          > Even still though, we can't really trust any open-source software
          any more that has third party dependencies, because the chains can be
          so complex and long it's impossible to vet everything.
          
           This is why software written in Rust scares me. Almost all Rust
           programs have such deep dependency trees that you really can't vet
           them all. The Rust and Node ecosystems are the worst for this, but
           Python isn't much better. IMO it's language-specific package
           managers that end up causing this problem, because they make it too
           easy to bring in dependencies. In languages like C or C++ that
           traditionally have used system package managers, the cost of adding
           a dependency is high enough that you avoid dependencies unless
           they're truly necessary.
       
            consp wrote 1 day ago:
            > Almost all Rust programs have such deep dependency trees that you
            really can't vet them all.
            
             JS/TS: *screams aloud* Never do "npm import [package containing
             the entire world as a dependency]".
             
             Rust: just import everything, since Rust fixes everything.
             
             When you design your package management and doctrine like de
             facto JavaScript, you have failed like JavaScript.
       
        smakosh wrote 2 days ago:
         Check out LLM Gateway: [1] Migration guide: [2]
        
 (HTM)  [1]: https://llmgateway.io
 (HTM)  [2]: https://llmgateway.io/migration/litellm
       
        aborsy wrote 2 days ago:
         What is the best way to sandbox LLMs and packages in general, while
         still being able to work on data from outside the sandbox (getting
         data in and out easily)?
         
         There is also the need for data sanitization, because the attacker
         could distribute compromised files through the user’s data, which
         would later be run and compromise the host.
       
          ashishb wrote 1 day ago:
          I wrote this[1] for myself last year.
          It only gives access to the current directory (and a few others - see
          README).
          So, it drastically reduces the attack surface of running third-party
          Python/Go/Rust/Haskell/JS code on your machine.
          
          
 (HTM)    [1]: https://github.com/ashishb/amazing-sandbox
       
        cowpig wrote 2 days ago:
         Tried running the compromised package inside Greywall; theoretically
         it should mitigate everything, but in practice it just forkbombs
         itself?
       
        westoque wrote 2 days ago:
         My takeaway from this is that it should now be MANDATORY to have an
         LLM scan the entire codebase prior to release or artifact creation.
         Do NOT use third-party plugins for this. It's so easy to create your
         own GitHub Action to digest the whole codebase and inspect third-
         party code. It costs tokens, yes, but it's also cached and should be
         negligible spend for the security it brings.
       
          jimmySixDOF wrote 1 day ago:
          Not sure that Trivy was doing that itself but zizmor is probably
          better than starting with an LLM :
          
 (HTM)    [1]: https://github.com/zizmorcore/zizmor
       
          bink wrote 2 days ago:
          Ironically, Trivy was the first known compromised package and its
          purpose is to scan container images to make sure they don't contain
          vulnerabilities. Kinda like the LLM in your scenario.
       
        rvz wrote 2 days ago:
        What do we have here? Unaudited software completely compromised with a
        fake SOC 2 and ISO 27001 certification.
        
         An actual infosec audit would have rigorously enforced basic security
         best practices, preventing this supply chain attack. [1]
        
 (HTM)  [1]: https://news.ycombinator.com/item?id=47502754
       
        foota wrote 2 days ago:
         Somewhat unrelated, but if I have downloaded node modules in the last
         couple of days, how should I best figure out whether I've been
         hacked?
       
        noobermin wrote 2 days ago:
        I have to say, the long line of comments from obvious bots thanking the
        opener of the issue is a bit too on the nose.
       
          zahlman wrote 1 day ago:
          It doesn't need to be subtle if the goal is just to drown out actual
          discussion.
       
        lightedman wrote 2 days ago:
        Write it yourself, fuzz/test it yourself, and build it yourself, or be
        forever subject to this exact issue.
        
        This was taught in the 90s. Sad to see that lesson fading away.
       
        canberkh wrote 2 days ago:
        helpful
       
        macNchz wrote 2 days ago:
         Was curious: a good number of projects out there have an un-pinned
         LiteLLM dependency in their requirements.txt (628 matches): [1] or
         pyproject.toml (not possible to filter on the absence of a uv.lock,
         but at a glance it's missing from many of these): [2] or setup.py:
         [3]
        
 (HTM)  [1]: https://github.com/search?q=path%3A*%2Frequirements.txt%20%2F%...
 (HTM)  [2]: https://github.com/search?q=path%3A*%2Fpyproject.toml+%22%5C%2...
 (HTM)  [3]: https://github.com/search?q=path%3A*%2Fsetup.py+%22%5C%22litel...
       
        faxanalysis wrote 2 days ago:
        This is secure bug impacting PyPi v1.82.7, v1.82.8. The idea of
        bracketing r-w-x mod package permissions for group id credential where
        litellm was installed.
       
        Aeroi wrote 2 days ago:
         What's up with the hundreds of bot replies on GitHub to this?
       
          zahlman wrote 1 day ago:
          It seems to be a deliberate attempt to interfere with people
          discussing mitigations etc.
       
        dev_tools_lab wrote 2 days ago:
         Good reminder to pin dependency versions and verify checksums.
         SHA256 verification should be standard for any tool that makes
         network calls.
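         A runnable sketch of the checksum step with coreutils (the "artifact"
         is a stand-in file; the digest is the real sha256 of the string
         "hello"):

```shell
# stand-in for a downloaded artifact
printf 'hello' > artifact.bin

# verify against the published digest; sha256sum -c exits non-zero
# on a mismatch, so it works as a gate in scripts and CI
echo "2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824  artifact.bin" | sha256sum -c -
# prints: artifact.bin: OK
```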
       
        claudiug wrote 2 days ago:
        LiteLLM's SOC2 auditor was Delve :))
       
        cpburns2009 wrote 2 days ago:
         Looks like litellm is no longer in quarantine on PyPI, and the
         compromised versions (1.82.7 and 1.82.8) have been removed [1]:
        
 (HTM)  [1]: https://pypi.org/project/litellm/#history
       
        syllogism wrote 2 days ago:
         Maintainers need to keep a wall between package publishing and the
         public repo. Currently what people are doing is configuring the
         public repo as a Trusted Publisher directly. This means the package
         publication can be triggered from the repo itself, and the public
         repo is a huge surface area.
        
        Configure the CI to make a release with the artefacts attached. Then
        have an entirely private repo that can't be triggered automatically as
        the publisher. The publisher repo fetches the artefacts and does the
        pypi/npm/whatever release.
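         One possible shape for that publisher job, assuming GitHub releases
         and PyPI (the repo name, tag, and manifest file are illustrative, not
         anything litellm actually uses):

```shell
# runs only in the locked-down publisher repo, never from the public one
set -e

# fetch the artefacts the public repo's CI attached to the release
gh release download v1.2.3 --repo example-org/example-project --dir dist

# optionally verify digests against a checked-in manifest first
sha256sum -c SHA256SUMS

# only the publisher repo holds the PyPI credential
twine upload dist/*
```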
       
          anderskaseorg wrote 2 days ago:
          The point of trusted publishing is supposed to be that the public can
          verifiably audit the exact source from which the published artifacts
          were generated.  Breaking that chain via a private repo is a step
          backwards. [1]
          
 (HTM)    [1]: https://docs.npmjs.com/generating-provenance-statements
 (HTM)    [2]: https://packaging.python.org/en/latest/specifications/index-...
       
          saidnooneever wrote 2 days ago:
           This kind of compromise is why a lot of orgs have internal mirrors
           of repos or package sources, so they can stay a few versions behind
           and avoid the latest, possibly compromised, releases. Seen it with
           internal pip repos, apt repos, etc.
           
           Some will even audit each package in there (kind of a crap job, but
           it works fairly well as mitigation)
       
        eoskx wrote 2 days ago:
        Also, not surprising that LiteLLM's SOC2 auditor was Delve. The story
        writes itself.
       
          saganus wrote 2 days ago:
          Would a proper SOC2 audit have prevented this?
          
          I've been through SOC2 certifications in a few jobs and I'm not sure
          it makes you bullet proof, although maybe there's something I'm
          missing?
       
            stevekemp wrote 2 days ago:
            Just so long as it was a proper SOC2 audit, and not a copy-pasted
            job:
            
 (HTM)      [1]: https://news.ycombinator.com/item?id=47481729
       
            shados wrote 2 days ago:
            SOC2 is just "the process we say we have, is what we do in
            practice". The process can be almost anything. Some auditors will
            push on stuff as "required", but they're often wrong.
            
            But all it means in the end is you can read up on how a company
            works and have some level of trust that they're not lying (too
            much).
            
             It makes absolutely zero guarantees about security practices,
             unless the documented processes make those guarantees.
       
              saganus wrote 2 days ago:
              Yeah, that was my understanding as well, so I fail to see how a
              proper SOC2 would have prevented this.
              
               I mean, ideally a proper SOC2 would mean there are processes in
               place to reduce the likelihood of this happening, and also
               processes to recover if it did end up happening.
              
              But the end result could've been essentially the same.
       
                kyyol wrote 2 days ago:
                It wouldn't have. lol.
       
        saidnooneever wrote 2 days ago:
         Just wanna state this can literally happen to anyone within this
         messy package ecosystem. The maintainer seems to be doing his best.
         
         If you have tips, I am sure they are welcome. Snarky remarks are
         useless; don't be a sourpuss. If you know better, help the
         remediation effort.
       
        Shank wrote 2 days ago:
        I wonder at what point ecosystems just force a credential rotation.
        Trivy and now LiteLLM have probably cleaned out a sizable number of
        credentials, and now it's up to each person and/or team to rotate.
        TeamPCP is sitting on a treasure trove of credentials and based on
        this, they're probably carefully mapping out what they can exploit and
        building payloads for each one.
        
        It would be interesting if Python, NPM, Rubygems, etc all just decided
        to initiate an ecosystem-wide credential reset. On one hand, it would
        be highly disruptive. On the other hand, it would probably stop the
        damage from spreading.
       
          post-it wrote 2 days ago:
          It'll only be disruptive to people who are improperly managing their
          credentials. Cattle not pets applies to credentials too.
       
        hmokiguess wrote 2 days ago:
        what's up with everyone in the issue thread thanking it, is this an
        irony trend or is that a flex on account takeover from teampcp? this
        feels wild
       
        homanp wrote 2 days ago:
        How were they compromised? Phishing?
       
        johnhenry wrote 2 days ago:
        I've been developing an alternative to LiteLLM. Javascript. No
        dependencies.
        
 (HTM)  [1]: https://github.com/johnhenry/ai.matey/
       
        abhisek wrote 2 days ago:
        We just analysed the payload. Technical details here: [1] We are
        looking at similar attack vectors (pth injection), signatures etc. in
        other PyPI packages that we know of.
        
 (HTM)  [1]: https://safedep.io/malicious-litellm-1-82-8-analysis/
       
        mark_l_watson wrote 2 days ago:
        A question from a non-python-security-expert: is committing uv.lock
        files for specific versions, and only infrequently updating versions a
        reasonable practice?
       
          Imustaskforhelp wrote 2 days ago:
          (I am not a security expert either)
          
           But one of the arguments I saw online was that when a security
           researcher finds a bug and reports it to the OSS project/company,
           they fix the code silently, include it in a new version, and only
           make the information public after some time.
           
           So if you only run infrequently updated versions, you run the risk
           of staying exposed to known vulnerabilities.
           
           (A good example I can think of is OpenCode, which had an issue that
           could allow RCE. The security researchers contacted OpenCode
           privately but got no response, so after some time they released the
           details publicly, and OpenCode quickly made a patch to fix the
           issue. But if you were running the older code, you would've been
           vulnerable to RCE.)
       
            mark_l_watson wrote 2 days ago:
            Good points. Perhaps there is a way to configure uv to only use a
            new version if it is 24 hours old?
       
              arwt wrote 1 day ago:
              You can. See: [1] How you use it depends on your workflow. An
              entry like this in your pyproject.toml could suffice:
              
                [tool.uv]
                exclude-newer = "5 days"
              
 (HTM)        [1]: https://docs.astral.sh/uv/reference/cli/#uv-run--exclude...
       
                mark_l_watson wrote 1 day ago:
                thank you!
       
        segalord wrote 2 days ago:
         LiteLLM has like a 1000 dependencies, this is expected
        
 (HTM)  [1]: https://github.com/BerriAI/litellm/blob/main/requirements.txt
       
          zahlman wrote 1 day ago:
          Oof. What exactly is supposed to be "lite" about this?
       
        tom-blk wrote 2 days ago:
         Stuff like this is happening too much recently. Seems like the more
         fast-paced areas of development would benefit from a paradigm shift
       
          sirl1on wrote 2 days ago:
          Move Slow and Fix Things.
       
        santiagobasulto wrote 2 days ago:
        I blogged about this last year[0]...
        
        > ### Software Supply Chain is a Pain in the A*
        
        > On top of that, the room for vulnerabilities and supply chain attacks
        has increased dramatically
        
         AI is not about fancy models, it's about plain old Software
         Engineering. I strongly advised our team of "not-so-senior" devs not
         to use LiteLLM or LangChain or anything like that and just stick to
         `requests.post('...')`.
        
        [0]
        
 (HTM)  [1]: https://sb.thoughts.ar/posts/2025/12/03/ai-is-all-about-softwa...
       
          driftnode wrote 1 day ago:
          the requests.post advice is right but its also kind of depressing
          that the state of the art recommendation for using llm apis safely in
          2026 is to just write the http call yourself. we went from dont
          reinvent the wheel to actually maybe reinvent it because the wheel
          might steal your ssh keys. the abstraction layer that was supposed to
          save you time just cost an unknown number of people every credential
          on their machine
       
          eoskx wrote 2 days ago:
          Valid, but for all the crap that LangChain gets it at least has its
          own layer for upstream LLM provider calls, which means it isn't
          affected by this supply chain compromise (unless you're using the
          optional langchain-litellm package). DSPy uses LiteLLM as its primary
          way to call OpenAI, etc. and CrewAI imports it, too, but I believe it
          prefers the vendor libraries directly before it falls back to
          LiteLLM.
       
        f311a wrote 2 days ago:
         Their previous release would have been easily caught by static
         analysis; the .pth injection is a novel technique.
        
        Run all your new dependencies through static analysis and don't install
        the latest versions.
        
        I implemented static analysis for Python that detects close to 90% of
        such injections.
        
 (HTM)  [1]: https://github.com/rushter/hexora
       
          ting0 wrote 2 days ago:
          And easily bypassed by an attacker who knows about your static
          analysis tool who can iterate on their exploit until it no longer
          gets flagged.
       
            fernandotakai wrote 1 day ago:
            the main things are:
            
            1. pin dependencies with sha signatures
            2. mirror your dependencies
            3. only update when truly necessary
            4. at first, run everything in a sandbox.
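
             For point 1, pip's hash-checking mode is one concrete mechanism.
             A sketch of a pinned requirements file (the digest below is a
             placeholder, not a real hash):

```
# requirements.txt -- once any --hash option is present, pip enforces
# hash checking: an artifact whose digest changed refuses to install.
litellm==1.81.0 \
    --hash=sha256:<digest obtained from `pip hash <downloaded wheel>`>
```

             Install with `pip install -r requirements.txt --require-hashes`;
             lockfile-based tools like uv record the same digests in uv.lock
             automatically.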
       
          samsk wrote 2 days ago:
          Interesting tool, will definitely try - just curious, is there a tool
          (hexora checker) that ensures that hexora itself and its dependencies
          are not compromised ?
          And of course if there is one, I'll need another one for the hexora
          checker....
       
        wswin wrote 2 days ago:
         I will hold off on updating anything until this whole Trivy case gets
         cleaned up.
       
        hmokiguess wrote 2 days ago:
        What’s the best way to identify a compromised machine? Check uv,
        conda, pip, venv, etc across the filesystem? Any handy script around?
        
        EDIT: here's what I did, would appreciate some sanity checking from
        someone who's more familiar with Python than I am, it's not my language
        of choice.
        
        find / -name "litellm_init.pth" -type f 2>/dev/null
        
         find / -path '*/litellm-1.82.*.dist-info/METADATA' -exec grep -l \
           'Version: 1.82.[78]' {} \; 2>/dev/null
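
         A rough sketch of the same check as a script: walk a tree looking
         for litellm dist-info directories without importing or executing
         anything from the environments it finds (the version set and the
         scanned path are the only assumptions):

```python
"""Report installed litellm versions under a directory tree, flagging
the compromised 1.82.7 / 1.82.8 releases. Read-only: it never imports
or runs code from the environments it scans."""
import os
import re

COMPROMISED = {"1.82.7", "1.82.8"}
DIST_INFO = re.compile(r"^litellm-([0-9][^-]*)\.dist-info$")

def scan(root):
    """Yield (path, version, is_compromised) for every litellm dist-info."""
    for dirpath, dirnames, _files in os.walk(root, onerror=lambda e: None):
        for name in dirnames:
            match = DIST_INFO.match(name)
            if match:
                version = match.group(1)
                yield os.path.join(dirpath, name), version, version in COMPROMISED

# Usage: pass "/" for a full-disk sweep (slow), or start narrower.
for path, version, bad in scan(os.path.expanduser("~/src")):
    print("COMPROMISED" if bad else "ok", version, path)
```

         This catches pip, venv, uv and conda installs alike, since they all
         leave a dist-info directory behind.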
       
          lukewarm707 wrote 2 days ago:
          these days, i just use a private llm. it's very quick and when i see
          the logs, it does a better job than me for this type of task.
          
          no i don't let it connect to web...
       
          persedes wrote 2 days ago:
          there's probably a more precise way, but if you're on uv:
          
            rg litellm  --iglob='*.lock'
       
        zhisme wrote 2 days ago:
        Am I the only one having feeling that with LLM-era we have now bigger
        amount of malicious software lets say parsers/fetchers of
        credentials/ssh/private keys?
        And it is easier to produce them and then include in some 3rd party
        open-source software? Or it is just our attention gets focused on such
        things?
       
        te_chris wrote 2 days ago:
        I reviewed the LiteLLM source a while back. Without wanting to be mean,
        it was a mess. Steered well clear.
       
          rnjs wrote 2 days ago:
          Terrible code quality and terrible docs
       
        danielvaughn wrote 2 days ago:
        I work with security researchers, so we've been on this since about an
        hour ago. One pain I've really come to feel is the complexity of Python
         environments. They've always been a pain, but in an incident like
         this you need to find out whether an exact version of a package has
         ever been installed on your machine. All I can say is good luck.
        
        The Python ecosystem provides too many nooks and crannies for malware
        to hide in.
       
        cedws wrote 2 days ago:
        This looks like the same TeamPCP that compromised Trivy. Notice how the
        issue is full of bot replies. It was the same in Trivy’s case.
        
        This threat actor seems to be very quickly capitalising on stolen
        credentials, wouldn’t be surprised if they’re leveraging LLMs to do
        the bulk of the work.
       
          driftnode wrote 1 day ago:
          whats new isnt the shortcuts, its the cascading. one compromised
          trivy instance led to kics led to litellm led to dspy and crewai and
          mlflow and hundreds of mcp servers downstream. the attacker didnt
          need to find five separate vulnerabilities, they found one and rode
          the dependency graph. thats a fundamentally different threat model
          than what most security tooling is built around
       
          varenc wrote 1 day ago:
           What is the rationale for the attacker spamming the relevant issue
           with bot replies? Does this benefit them? Maybe it makes discussion
           impossible, to confuse maintainers and delay the time to a fix?
       
            cedws wrote 19 hours 16 min ago:
            Yes, trying to slow down response.
       
        Blackthorn wrote 2 days ago:
        Edit: ignore this silliness, as it sidesteps the real problem. Leaving
        it here because we shouldn't remove our own stupidity.
        
        It's pretty disappointing that safetensors has existed for multiple
        years now but people are still distributing pth files. Yes it requires
        more code to handle the loading and saving of models, but you'd think
        it would be worth it to avoid situations like this.
       
        detente18 wrote 2 days ago:
        LiteLLM maintainer here, this is still an evolving situation, but
        here's what we know so far:
        
         1. Looks like this originated from the trivy used in our ci/cd - [1]
         [2]
         
         2. If you're on the proxy docker, you were not impacted. We pin our
         versions in the requirements.txt
        
        3. The package is in quarantine on pypi - this blocks all downloads.
        
        We are investigating the issue, and seeing how we can harden things.
        I'm sorry for this.
        
        - Krrish
        
 (HTM)  [1]: https://github.com/search?q=repo%3ABerriAI%2Flitellm%20trivy&t...
 (HTM)  [2]: https://ramimac.me/trivy-teampcp/#phase-09
       
          N_Lens wrote 1 day ago:
          Good work! Sorry to hear you're in this situation, good luck and
          godspeed!
       
          driftnode wrote 1 day ago:
          the chain here is wild. trivy gets compromised, that gives access to
          your ci, ci has the pypi publish token, now 97 million monthly
          downloads are poisoned. was the pypi token scoped to publishing only
          or did it have broader access? because the github account takeover
          suggests something wider leaked than just the publish credential
       
            sobellian wrote 1 day ago:
            If the payload is a credential stealer then they can use that to
            escalate into basically anything right?
       
              driftnode wrote 1 day ago:
              Yes and the scary part is you might never know the full extent. A
              credential stealer grabs whatever is in memory or env during the
              build, ships it out, and the attacker uses those creds weeks
              later from a completely different IP. The compromised package
              gets caught and reverted, everyone thinks the incident is over,
              meanwhile the stolen tokens are still valid. I wonder how many
              teams who installed 1.82.7 actually rotated all their CI secrets
              after this, not just uninstalled the bad version.
       
            kreelman wrote 1 day ago:
            I wonder if there are a few things here....
            
            It would be great if Linux was able to do simple chroot jails and
            run tests inside of them before releasing software. In this case,
            it looks like the whole build process would need to be done in the
            jail. Tools like lxroot might do enough of what chroot on BSD does.
            
            It seems like software tests need to have a class of test that
            checks whether any of the components of an application have been
            compromised in some way. This in itself may be somewhat complex...
            
            We are in a world where we can't assume secure operation of
            components anymore. This is kinda sad, but here we are....
       
              driftnode wrote 1 day ago:
              The sad part is you're right that we can't assume secure
              operation of components anymore, but the tooling hasn't caught up
              to that reality. Chroot jails help with runtime isolation but the
              attack here happened at build time, the malicious code was
              already in the package before any test could run. And the supply
              chain is deep. Trivy gets compromised, which gives CI access,
              which gives PyPI access. Even if you jail your own builds you're
              trusting that every tool in your pipeline wasn't the entry point.
              97 million monthly downloads means a lot of people's "secure"
              pipelines just ran attacker code with full access.
       
          pojzon wrote 1 day ago:
           This is just one of many projects that fell victim to the Trivy
           hack. There are millions of such projects, and this issue will be
           exploited in the coming months if not years.
       
          mikert89 wrote 1 day ago:
          Similar to delve, this guy has almost no work experience. You have to
          wonder if YC and the cult of extremely young founders is causing
          instability issues in society at large?
       
            moomoo11 wrote 1 day ago:
            It’s a flex now. But there are still many people doing it for the
            love of the game.
       
            gopher_space wrote 1 day ago:
            It's interesting to see how the landscape changes when the folks
            upstream won't let you offload responsibility.    Litellm's client
            list includes people who know better.
       
            zdragnar wrote 1 day ago:
            Welcome to the new era, where programming is neither a skill nor a
            trade, but a task to be automated away by anyone with a paid
            subscription.
       
              mikert89 wrote 1 day ago:
               a lot of software isn't that important so it's fine, but some
               actually is important, especially with a brand name slapped on
               it that people will trust
       
                leftyspook wrote 1 day ago:
                All software runs on somebody's hardware. Ultimately even an
                utterly benign program like `cowsay` could be backdoored to
                upload your ssh keys somewhere.
       
                  utrack wrote 1 day ago:
                   [1] , but with `fortune -a` and `cowsay` instead of
                  imagemagick
                  
 (HTM)            [1]: https://xkcd.com/2347/
       
                whattheheckheck wrote 1 day ago:
                The industry needs to step up and plant a flag for
                professionalization certifications for proper software
                engineering. Real hard exams etc
       
                  jacamera wrote 1 day ago:
                  I can't even imagine what these exams would look like. The
                  entire profession seems to boil down to making the
                  appropriate tradeoffs for your specific application in your
                  specific domain using your specific tech stack. There's
                  almost nothing that you always should or shouldn't do.
       
                    xenophonf wrote 20 hours 7 min ago:
                    All engineering professions are like that.  NCEES has been
                    licensing Professional Engineers for over a hundred years. 
                    The only thing stopping CS/SE is an unwillingness to submit
                    to anything resembling oversight.
       
          rao-v wrote 2 days ago:
          I put together a little script to search for and list installed
          litellm versions on my systems here: [1] It's very much not
          production grade. It might miss sneaky ways to install litellm, but
          it does a decent job of scanning all my conda, .venv, uv and system
           environments without invoking a python interpreter or touching
          anything scary. Let me know if it misses something that matters.
          
          Obviously read it before running it etc.
          
 (HTM)    [1]: https://github.com/kinchahoy/uvpowered-tools/blob/main/inven...
       
          mrexcess wrote 2 days ago:
          You're making great software and I'm sorry this happened to you.
          Don't get discouraged, keep bringing the open source disruption!
       
          Imustaskforhelp wrote 2 days ago:
          I just want to share an update
          
           after my suggestion, the developer made a new github account and
           cross-linked it with their hackernews profile - each points at the
           other - to verify that the new github account is legitimate
          
          Worth following this thread as they mention that: "I will be updating
          this thread, as we have more to share."
          
 (HTM)    [1]: https://github.com/BerriAI/litellm/issues/24518
       
          vintagedave wrote 2 days ago:
          This must be super stressful for you, but I do want to note your "I'm
          sorry for this." It's really human.
          
          It is so much better than, you know... "We regret any inconvenience
          and remain committed to recognising the importance of maintaining
          trust with our valued community and following the duration of the
          ongoing transient issue we will continue to drive alignment on a
          comprehensive remediation framework going forward."
          
          Kudos to you. Stressful times, but I hope it helps to know that
          people are reading this appreciating the response.
       
            nextos wrote 1 day ago:
            I think we really need to use sandboxes. Guix provides sandboxed
            environments by just flipping a switch. NixOS is in an ideal
            position to do the same, but for some reason they are regarded as
            "inconvenient".
            
            Personally, I am a heavy user of Firejail and bwrap. We need
            defense in depth. If someone in the supply chain gets compromised,
             damage should be limited. It's easy to patch the security model
             of Linux from userspace, and even easier with eBPF, but the
             community is somehow stuck.
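
             As a sketch of what that defense in depth can look like with
             bwrap (paths and flags below are illustrative, not a vetted
             profile), wrapping a tool so it sees only one project directory:

```shell
# Illustrative bwrap wrapper: the tool sees /usr read-only, a throwaway
# tmpfs in place of $HOME, and only one project directory bound through.
# Mount order matters: the tmpfs goes first so the project bind survives.
run_sandboxed() {
  bwrap \
    --ro-bind /usr /usr \
    --symlink usr/bin /bin \
    --symlink usr/lib /lib \
    --symlink usr/lib64 /lib64 \
    --proc /proc \
    --dev /dev \
    --tmpfs "$HOME" \
    --bind "$HOME/src/project1" "$HOME/src/project1" \
    --unshare-all \
    --die-with-parent \
    "$@"
}

# Usage (network is also cut by --unshare-all; add --share-net only if
# the tool genuinely needs it):
#   run_sandboxed litellm
```

             With a profile like this, a credential stealer in a dependency
             finds no ~/.ssh, no ~/.aws, and no way to phone home.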
       
              kernc wrote 9 hours 29 min ago:
              `sandbox-venv` is a small shell script that sandboxes Python
              virtual environments in separate Linux namespaces using
              Bubblewrap (and soon using only command `unshare`, bringing the
              whole script down to effectively 0 deps).
              
 (HTM)        [1]: https://github.com/sandbox-utils/sandbox-venv
       
              ashishb wrote 1 day ago:
              I am happily running all third-party tools inside the Amazing
              Sandbox[1].
              I made it public last year.
              
              1 -
              
 (HTM)        [1]: https://github.com/ashishb/amazing-sandbox
       
              staticassertion wrote 1 day ago:
              What would be really helpful is if software sandboxed itself.
              It's very painful to sandbox software from the outside and it's
              radically less effective because your sandbox is always maximally
              permissive.
              
              But, sadly, there's no x-platform way to do this, and sandboxing
              APIs are incredibly bad still and often require privileges.
              
              >  It's easy to patch the security model of Linux with
              userspaces, and even easier with eBPF, but the community is
              somehow stuck.
              
              Neither of these is easy tbh. Entering a Linux namespace requires
              root, so if you want your users to be safe then you have to first
              ask them to run your service as root. eBPF is a very hard
              boundary to maintain, requiring you to know every system call
              that your program can make - updates to libc, upgrades to any
              library, can break this.
              
              Sandboxing tooling is really bad.
       
                eichin wrote 1 day ago:
                If the whole point of sandboxing is to not trust the software,
                it doesn't make sense for the software to do the sandboxing. 
                (At most it should have a standard way to suggest what access
                it needs, and then your outside tooling should work with what's
                reasonable and alert on what isn't.) The android-like approach
                of sandboxing literally everything works because you are forced
                to solve these problems generically and at scale - things like
                "run this as a distinct uid" are a lot less hassle if you're
                amortizing it across everything.
                
                (And no, most linux namespace stuff does not require root, the
                few things that do can be provided in more-controlled ways. For
                examples, look at podman, not docker.)
       
                  staticassertion wrote 1 day ago:
                  > If the whole point of sandboxing is to not trust the
                  software, it doesn't make sense for the software to do the
                  sandboxing.
                  
                  That's true, sort of. I mean, that isn't the whole point of
                  sandboxing because the threat model for sandboxing is pretty
                  broad. You could have a process sandbox just one library, or
                  sandbox itself in case of a vulnerability, or it could have a
                  separate policy / manifest the way browser extensions do
                  (that prompts users if it broadens), etc. There's still
                  benefit to isolating whole processes though in case the
                  process is malicious.
                  
                  > (And no, most linux namespace stuff does not require root,
                  the few things that do can be provided in more-controlled
                  ways. For examples, look at podman, not docker.)
                  
                  The only linux namespace that doesn't require root is user
                  namespace, which basically requires root in practice. [1]
                  Podman uses unprivileged user namespaces, which are disabled
                  on the most popular distros because it's a big security hole.
                  
 (HTM)            [1]: https://www.man7.org/linux/man-pages/man2/clone.2.ht...
       
                ashishb wrote 1 day ago:
                > It's very painful to sandbox software from the outside and
                it's radically less effective because your sandbox is always
                maximally permissive.
                
                Not really.
                
                Let's say I am running `~/src/project1 $ litellm`
                
                Why does this need access to anything outside of
                `~/src/project1`?
                
                Even if it does, you should expose exactly those particular
                directories (e.g. ~/.config) and nothing else.
       
                  staticassertion wrote 1 day ago:
                  How are you setting that sandbox up? I've laid out numerous
                  constraints - x-platform support is non-existent for
                  sandboxing, sandboxing requires privileges to perform,
                  whole-program sandboxing is fundamentally weaker, maintenance
                  of sandboxing is best done by developers, etc.
                  
                  > Even if it does, you should expose exactly those particular
                  directories (e.g. ~/.config) and nothing else.
                  
                  Yes, but now you are in charge of knowing every potential
                  file access, network access, or possibly even system call,
                  for a program that you do not maintain.
       
                    ashishb wrote 1 day ago:
                    > Yes, but now you are in charge of knowing every potential
                    file access, network access, or possibly even system call,
                    for a program that you do not maintain.
                    
                    Not really.
                    I try to capture the most common ones for caching [1], but
                    if I miss it, then it is just inefficient, as it is
                    equivalent to a cache miss.
                    
                    I'll emphasize again, "no linter/scanner/formatter (e.g.,
                    trivy) should need full disk access".
                    
                    1 -
                    
 (HTM)              [1]: https://github.com/ashishb/amazing-sandbox/blob/fd...
       
                      staticassertion wrote 1 day ago:
                      Okay, so you're using docker. Cool, that's one of the
                      only x-plat ways to get any sandboxing. Docker itself is
                      privileged and now any unsandboxed program on your
                      computer can trivially escalate to root. It also doesn't
                      limit nearly as much as a dev-built sandbox because it
                      has to isolate the entire process.
                      
                      Have you solved for publishing? You'll need your token to
                      enter the container or you'll need an authorizing proxy.
                      Are cache volumes shared? In that case, every container
                      is compromised if one is. All of these problems and many
                      more go away if the project is built around them from the
                      start.
                      
                      It's perfectly nice to wrap things up in docker but
                      there's simply no argument here - developers can write
                      sandboxes for their software more effectively because
                      they can architect around the sandbox, you have to wrap
                      the entire thing generically to support its maximum
                      possible privileges.
       
                        ashishb wrote 1 day ago:
                        > Docker itself is privileged and now any unsandboxed
                        program on your computer can trivially escalate to
                        root.
                        
                        Inside the sandbox but not on my machine.
                        Show me how it can access an unmounted directory.
                        
                        > Have you solved for publishing? You'll need your
                        token to enter the container or you'll need an
                        authorizing proxy.
                        
                        Amazing-sandbox does not solve for that.
                        The current risk is contamination; if you are running
                        `trivy`, it should not need access to tokens in a
                        different env/directory.
                        
                        > All of these problems and many more go away if the
                        project is built around them from the start.
                        
                         Please elaborate on your approach that will allow me
                         to run markdown/JS/Python/Go/Rust linters and
                         security scanners.
                        Remember that `trivy` which caused `litellm` compromise
                        is a security scanner itself.
                        
                        > developers can write sandboxes for their software
                        more effectively because they can architect around the
                        sandbox,
                        
                        Yeah, let's ask 100+ linter providers to write
                        sandboxes for you.
                        I can't even get maintainers to respond to legitimate &
                        trivial PRs many a time.
       
                          sellmesoap wrote 1 day ago:
                          > Inside the sandbox but not on my machine. Show me
                          how it can access an unmounted directory.
                          
                          So it says right on the tin of my favorite distro:
                          'Warning: Beware that the docker group membership is
                          effectively equivalent to being root!
                          Consider using rootless mode below.' So # docker run
                          super-evil-oci-container with a bind mount or two and
                          your would-be attacker doesn't need to guess your
                          sudo password.
       
                            ashishb wrote 1 day ago:
                            > docker run super-evil-oci-container
                            
                              1. That super evil OCI container still needs to
                            find a vulnerability in Docker
                              2. You can run Docker in rootless mode e.g.
                            Orbstack runs without root
       
                              staticassertion wrote 22 hours 48 min ago:
                              They're suggesting that the attacker is in a
                              position to `docker run`. Any attacker in that
                              position has privesc to root, trivially.
                              
                              Rootless mode requires unprivileged user
                              namespaces, disabled on almost any distribution
                              because it's a huge security hole in and of
                              itself.
       
                            imtringued wrote 1 day ago:
                            What's particularly vexing is that there is this
                            agentic sandboxing software called "container-use"
                            and out of the box it requires you to add a user to
                            the docker group because they haven't thought about
                            what that really means and why running docker in
                            that configuration option shouldn't be allowed, but
                            instead they have made it mandatory as a default.
       
                          staticassertion wrote 1 day ago:
                          I'm not going to code review your sandbox project for
                          you.
       
            cyanydeez wrote 2 days ago:
            Lawyers are slowly eating humanity.
       
              bmurphy1976 wrote 1 day ago:
              For now.  They're about to get hit by the AI wave as bad as us
              software devs.    Who knows what's on the other side of this.
       
                blueone wrote 1 day ago:
                Sorry that I have to be the one to tell you this, but lawyers
                are fine. Sure, AI will have an impact, but nothing like the
                once hyped idea that it would replace lawyers. It has actually
                been amusing to watch the hype cycle play out around AI when it
                comes to lawyers.
       
                  recpen wrote 1 day ago:
                  Lawyership in the sense of the profession may survive and
                  adapt. Individual lawyers, not so much. I strongly doubt the
                  new equilibrium (if we ever reach one) will need so many
                  lawyers.
                  
                  Same logic for software developers.
       
                  throwawaytea wrote 1 day ago:
                  My parents had a weird green card and paperwork issue that
                  was becoming a big problem. Everyone in their social circle
                  recommended an immigration type lawyer. Everyone.
                  
                  My dad was confident he could figure it out based on his
                  perplexity Pro account. He attacked the problem from several
                  angles and used it for help with what to do, how to do it,
                  what to ask for when visiting offices, how to press them to
                  move forward, and tons of other things.
                  
                  Got the problem resolved.
                  
                  So it definitely can reduce hiring lawyers even.
       
              singleshot_ wrote 2 days ago:
              Allegedly*
       
          harekrishnarai wrote 2 days ago:
           > it seems your personal account is also compromised. I just checked
           with the github search here
          
 (HTM)    [1]: https://github.com/search?q=%22teampcp+owns%22
       
          ozozozd wrote 2 days ago:
          Kudos for this update.
          
          Write a detailed postmortem, share it publicly, continue taking
          responsibility, and you will come out of this having earned an
           immense amount of respect.
       
          detente18 wrote 2 days ago:
          Update:
          
          - Impacted versions (v1.82.7, v1.82.8) have been deleted from PyPI 
          - All maintainer accounts have been changed
          - All keys for github, docker, circle ci, pip have been deleted
          
           We are still scanning our project to see if there are any more gaps.
          
          If you're a security expert and want to help, email me -
          krrish@berri.ai
       
            detente18 wrote 15 hours 28 min ago:
            Update 2 (03/25/2026):
            
            - We will be holding a townhall on Friday to review the incident
            and share next steps ( [1] )
            
            - We can confirm a bad version of Trivy security scanner ran in our
            CI/CD pipeline, which would have led to the supply chain attack
            
            - We have paused new releases until we've completed securing our
            codebase and release pipeline to ensure safe releases for users
            
             - We've added additional github/gitlab ci scripts for checking if
             you're impacted: [2]
             
             We hope to share a full RCA in the coming days. Until then, if
             there's anything we can do to help your team, please let me know.
             You can email me (krrish@berri.ai), or join the discussion on
             github ( [3] ).
            
 (HTM)      [1]: https://lnkd.in/gsbTdCe7
 (HTM)      [2]: https://lnkd.in/gGicMkby
 (HTM)      [3]: https://lnkd.in/g9TuuQ2H
       
            MadsRC wrote 2 days ago:
            Dropped you a mail from mads.havmand@nansen.ai
       
            cosmicweather wrote 2 days ago:
            > All maintainer accounts have been changed
            
             What about the compromised accounts (as in your main account)? Are
            they completely unrecoverable?
       
              detente18 wrote 1 day ago:
              I deleted it, to be safe.
       
          kleton wrote 2 days ago:
          There are hundreds of PRs fixing valid issues to your github repo
          seemingly in limbo for weeks. What is the maintainer state over
          there?
       
            michh wrote 2 days ago:
             increasing the (social) pressure on maintainers to get PRs merged
             seems like the last thing you should be doing if the goal is to
             keep malicious code out of dependencies like this
             
             i'd much rather see a million open PRs than a single malicious PR
             sneak through due to lack of thorough review.
       
            zparky wrote 2 days ago:
             Not really the time for that. There are also PRs being merged every
            hour of the day.
       
          bognition wrote 2 days ago:
          The decision to block all downloads is pretty disruptive, especially
           for people on pinned known-good versions. It's breaking a bunch of my
          systems that are all launched with `uv run`
       
            wasmitnetzen wrote 1 day ago:
            Take this as an argument to rethink your engineering decision to
            base your workflows entirely on the availability of an external
            dependency.
       
            zbentley wrote 2 days ago:
            That's a good thing (disruptive "firebreak" to shut down any
            potential sources of breach while info's still being gathered). The
            solve for this is artifacts/container images/whatnot, as other
            commenters pointed out.
            
            That said, I'm sorry this is being downvoted: it's unhappily
            observing facts, not arguing for a different security response. I
            know that's toeing the rules line, but I think it's important to
            observe.
       
            tedivm wrote 2 days ago:
            You should be using build artifacts, not relying on `uv run` to
            install packages on the fly. Besides the massive security risk, it
            also means that you're dependent on a bunch of external
            infrastructure every time you launch. PyPI going down should not
            bring down your systems.
       
              lanstin wrote 2 days ago:
              There are so many advantages to deployable artifacts, including
               auditability and fast roll-back. Also you can block so many risky
              endpoints from your compute outbound networks, which means even
              if you are compromised, it doesn't do the attacker any good if
              their C&C is not allow listed.
       
              zbentley wrote 2 days ago:
              This is the right answer. Unfortunately, this is very rarely
              practiced.
              
              More strangely (to me), this is often addressed by adding loads
              of fallible/partial caching (in e.g. CICD or deployment
              infrastructure) for package managers rather than building and
              publishing temporary/per-user/per-feature ephemeral packages for
              dev/testing to an internal registry. Since the latter's usually
              less complex and more reliable, it's odd that it's so rarely
              practiced.
       
            MeetingsBrowser wrote 2 days ago:
            Are you sure you are pinned to a “known good” version?
            
            No one initially knows how much is compromised
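             A version pin alone still trusts the index to serve the bytes you
             expect; hash pinning makes pip verify the exact artifact. A
             hypothetical requirements fragment (the hash placeholder must be
             replaced with the digest of a wheel you actually audited):

```text
# requirements.txt
litellm==1.82.6 \
    --hash=sha256:<digest-of-the-audited-wheel>   # placeholder, not a real hash
```

             Installed with `pip install --require-hashes -r requirements.txt`,
             pip then refuses any file whose digest differs, regardless of what
             version number it claims.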
       
            Shank wrote 2 days ago:
            > Its breaking a bunch of my systems that are all launched with `uv
            run`
            
            From a security standpoint, you would rather pull in a library that
            is compromised and run a credential stealer? It seems like this is
            the exact intended and best behavior.
       
            cpburns2009 wrote 2 days ago:
            That's PyPI's behavior when they quarantine a package.
       
          outside2344 wrote 2 days ago:
          Is it just in 1.82.8 or are previous versions impacted?
       
            Imustaskforhelp wrote 2 days ago:
            1.82.7 is also impacted if I remember correctly.
       
              GrayShade wrote 2 days ago:
              1.82.7 doesn't have litellm_init.pth in the archive. You can
              download them from pypi to check.
              
              EDIT: no, it's compromised, see proxy/proxy_server.py.
       
                cpburns2009 wrote 2 days ago:
                1.82.7 has the payload in `litellm/proxy/proxy_server.py` which
                executes on import.
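                 Based on the indicators reported in this thread (1.82.8
                 shipped a litellm_init.pth; 1.82.7 carried the payload in
                 litellm/proxy/proxy_server.py), a rough local check might
                 look like the sketch below. The function names are
                 illustrative, not from any official tooling:

```python
from importlib.metadata import version, PackageNotFoundError
from pathlib import Path

BAD_VERSIONS = {"1.82.7", "1.82.8"}  # versions reported compromised in this thread

def bad_litellm_installed() -> bool:
    """True if the installed litellm matches a known-bad version."""
    try:
        return version("litellm") in BAD_VERSIONS
    except PackageNotFoundError:
        return False

def suspicious_pth_files(site_packages: str) -> list[str]:
    """.pth files can run import lines at interpreter startup, which is
    why they are a popular supply-chain persistence spot; flag the
    filename reported for the 1.82.8 release."""
    return sorted(str(p) for p in Path(site_packages).glob("*.pth")
                  if p.name == "litellm_init.pth")
```

                 Run it against each environment's site-packages (e.g. from
                 `site.getsitepackages()`); a hit is a signal to rebuild the
                 environment, not proof of the full blast radius.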
       
          Imustaskforhelp wrote 2 days ago:
          > - Krrish
          
          Was your account completely compromised? (Judging from the commit
          made by TeamPCP on your accounts)
          
           Are you in contact with all the projects which use litellm
           downstream, to check whether they are safe? (I am assuming not.)
          
          I am unable to understand how it compromised your account itself from
          the exploit at trivvy being used in CI/CD as well.
       
            detente18 wrote 2 days ago:
             It was the PYPI_PUBLISH token, which was in our github project as
             an env var, that got sent to trivvy.
            
            We have deleted all our pypi publishing tokens.
            
            Our accounts had 2fa, so it's a bad token here.
            
            We're reviewing our accounts, to see how we can make it more secure
            (trusted publishing via jwt tokens, move to a different pypi
            account, etc.).
       
              NewJazz wrote 1 day ago:
              Are you spelling it with two vs on purpose?
       
              mike_hearn wrote 2 days ago:
              Perhaps it's too obvious but ... just running the publish process
              locally, instead of from CI, would help. Especially if you
              publish from a dedicated user on a Mac where the system keychain
              is pretty secure.
       
                tedivm wrote 2 days ago:
                This problem is solved by not having a token. Github and PyPI
                both support OIDC based workflows. Grant only the publish job
                access to OIDC endpoint, then the Trivy job has nothing it can
                steal.
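                 As a sketch of what that looks like in a GitHub Actions
                 workflow (job and artifact names are illustrative; the repo
                 must first be registered as a trusted publisher on PyPI),
                 only the publish job can mint the short-lived OIDC token, so
                 there is no long-lived secret for any other job to read:

```yaml
jobs:
  publish:
    runs-on: ubuntu-latest
    permissions:
      id-token: write        # only this job may request the OIDC token
    steps:
      - uses: actions/download-artifact@v4
        with:
          name: dist
          path: dist/
      # exchanges the OIDC token for a short-lived PyPI publish token
      - uses: pypa/gh-action-pypi-publish@release/v1
```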
       
                staticassertion wrote 2 days ago:
                I'm not sure how. Their local system seems just as likely to
                get compromised through a `pip install` or whatever else.
                
                In CI they could easily have moved `trivy` to its own dedicated
                worker that had no access to the PYPI secret, which should be
                isolated to the publish command and only the publish command.
       
                  mike_hearn wrote 2 days ago:
                  User isolation works, the keychain isolation works. On macOS
                  tokens stored in the keychain can be made readable only by
                  specific apps, not anything else. It does require a bit of
                  infrastructure - ideally a Mac app that does the release -
                  but nothing you can't vibe code quickly.
       
                    staticassertion wrote 2 days ago:
                    That's true, but it seems far more complex than just moving
                     trivy to a separate workflow with no permissions and
                    likely physical isolation between it and a credential. I'm
                    pretty wary of the idea that malware couldn't just privesc
                    - it's pretty trivial to obtain root on a user's laptop.
                    Running as a separate, unprivileged user helps a ton, but
                    again, I'm skeptical of this vs just using a github
                    workflow.
       
                      mike_hearn wrote 1 day ago:
                      I'm looking for more general solutions. "Properly
                      configure Trivy" is too specific, it's obvious in
                      hindsight but not before.
                      
                      Privilege escalation on macOS is very hard indeed. Apple
                      have been improving security for a long time, it is far,
                      far ahead of Linux or Windows in this regard. The default
                      experience in Xcode is that a release-mode app you make
                      will be sandboxed, undebuggable, have protected keychain
                      entries other apps can't read, have a protected file
                      space other apps can't read, and its own code will also
                      be read-only to other apps. So apps can't interfere with
                      each other or escalate to each other's privileges even
                      when running as the same UNIX user. And that's the
                      default, you don't have to do anything to get that level
                      of protection.
       
                        staticassertion wrote 1 day ago:
                        Privesc is trivial on every desktop OS if you run as a
                        regular user. I can write to your rc files so it's game
                        over.
                        
                        App Store apps are the exception, which is great, but
                        presumably we're not talking about that? If we are,
                        then yeah, app stores solve these problems by making
                        things actually sandboxed.
       
                          mike_hearn wrote 1 day ago:
                          Any app can be sandboxed on macOS and by default
                          newly created apps are; that's why I say if you
                          create a new app in Xcode then anything run by that
                          app is sandboxed out of the box. App Store enforces
                          it but beyond that isn't involved.
       
                            staticassertion wrote 18 hours 31 min ago:
                            I feel like we're just talking about different
                            things? I've just said that I'm aware of apps being
                            sandboxed, that does not mean that some random
                            program you run from your terminal is sandboxed.
       
                              mike_hearn wrote 10 hours 59 min ago:
                              Right, I'm skipping a step.
                              
                              What I'm saying is that it's very easy now to
                              take some arbitrary task - doing a
                              compile/release cycle for example - and quickly
                              knock up a simple signed macOS .app that 
                              sandboxes itself and then invokes the release
                              script as a subprocess. Sandboxing is transitive
                              and the .app itself can authenticate to the OS to
                              obtain creds before passing them to the
                              subprocess.
                              
                              In the past I've thought about making a quick
                              SaaS that does this for people so they don't have
                              to fiddle with it locally and maybe some day I
                              still will. But you can easily do it locally
                              especially with Xcode and AI now. You wouldn't
                              have to know anything about macOS development.
       
                                staticassertion wrote 7 hours 31 min ago:
                                Ah, yes. I totally agree with that and wish
                                that's how people built software.
       
              redrove wrote 2 days ago:
              How did PYPI_PUBLISH lead to a full GH account takeover?
       
                chunky1994 wrote 2 days ago:
                Their Personal Access Token must’ve been pwned too, not sure
                through what mechanism though
       
                  Imustaskforhelp wrote 2 days ago:
                  They have written about it on github to my question:
                  
                  Trivvy hacked ( [1] ) -> all circleci credentials leaked ->
                  included pypi publish token + github pat -> | WE DISCOVER
                  ISSUE | -> pypi token deleted, github pat deleted + account
                  removed from org access, trivvy pinned to last known safe
                  version (v0.69.3)
                  
                  What we're doing now:
                  
                      Block all releases, until we have completed our scans
                      Working with Google's mandiant.security team to
                  understand scope of impact
                      Reviewing / rotating any leaked credentials
                  
 (HTM)            [1]: https://www.aquasec.com/blog/trivy-supply-chain-atta...
 (HTM)            [2]: https://github.com/BerriAI/litellm/issues/24518#issu...
       
                    celticninja wrote 2 days ago:
                     0.69.3 isn't safe. The safe thing to do is remove all
                     trivy access or, failing that, pin the version: 0.35 is
                     the last and, AFAIK, only safe version.
                    
 (HTM)              [1]: https://socket.dev/blog/trivy-under-attack-again-g...
       
                      Imustaskforhelp wrote 2 days ago:
                      I have sent your message to the developer on github and
                       they have changed the version to 0.35.0, so thanks.
                      
 (HTM)                [1]: https://github.com/BerriAI/litellm/issues/24518#...
       
                    franktankbank wrote 2 days ago:
                    Does that explain how circleci was publishing commits and
                    closing issues?
       
                ezekg wrote 2 days ago:
                I'd imagine the attacker published a new compromised version of
                their package, which the author eventually downloaded, which
                pwned everything else.
       
                franktankbank wrote 2 days ago:
                Don't hold your breath for an answer.
       
            redrove wrote 2 days ago:
            >I am unable to understand how it compromised your account itself
            from the exploit at trivvy being used in CI/CD as well.
            
            Token in CI could've been way too broad.
       
          redrove wrote 2 days ago:
          >1. Looks like this originated from the trivvy used in our ci/cd
          
          Were you not aware of this in the short time frame that it happened
          in? How come credentials were not rotated to mitigate the trivy
          compromise?
       
            wheelerwj wrote 2 days ago:
            The latest trivy attack was announced just yesterday. If you go out
             to dinner or take a night off, it's totally plausible to have not
            seen it.
       
              anishgupta wrote 1 day ago:
               afaik the trivy attack was first in the news on March 19th for
               the github actions, and for docker images it was on March 23rd
       
        mohsen1 wrote 2 days ago:
         If it were not spinning up so many Python processes and
         overwhelming the system with them (friends noticed something was
         consuming too much CPU just from the fan noise!) it would have been
         much more successful. In that respect it's similar to the xz attack.
         
         It does a lot of CPU-intensive work:
        
             spawn background python
             decode embedded stage
             run inner collector
             if data collected:
                 write attacker public key
                 generate random AES key
                 encrypt stolen data with AES
                 encrypt AES key with attacker RSA pubkey
                 tar both encrypted files
                 POST archive to remote host
       
        jFriedensreich wrote 2 days ago:
         We just can't trust dependencies and dev setups. I wanted to say
         "anymore", but we never could. Dev containers were never good
         enough: too clumsy, too little isolation. We need to start working
         in full sandboxes with defence in depth that have real guardrails
         and UIs: VM isolation plus container primitives, allow lists,
         egress filters, seccomp, gVisor and more, but with much better
         usability. It's the same set of requirements we have for agent
         runtimes, so let's use this momentum to make our dev environments
         safer! In such an environment the container would crash, we see the
         violations, delete it and don't have to worry about it. We should
         treat this as an everyday possibility, not as an isolated security
         incident.
       
          pjc50 wrote 1 day ago:
          The trouble with sandboxing is that eventually everything you want to
          access ends up inside the sandbox. Otherwise the friction is
          infuriating.
          
          I see people going in the opposite direction with "dump everything in
          front of my army of LLMs" setups. Horribly insecure, but gotta go
          fast, right?
       
          ashishb wrote 1 day ago:
          > We need to start working in full sandboxes with defence in depth
          that have real guardrails
          
          Happily sandboxing almost all third-party tools since 2025.
          `npm run dev` does not need access to my full disk.
       
          AbanoubRodolf wrote 1 day ago:
          Defense-in-depth on dev machines is useful but doesn't address the
          actual attack path here. The credential that was stolen lived in CI,
          not on a dev laptop — Trivy ran with PyPI publisher permissions
          because that's standard practice for "scanner before publish."
          
          The harder problem is that CI pipelines routinely grant scanner
          processes more credential access than they need. Trivy needed read
          access to the repo and container layers; it didn't need PyPI publish
          tokens. Scoping CI secrets to the minimum necessary operation, and
          injecting them only for the specific job that needs them rather than
          the entire pipeline, would have contained the blast radius here.
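           As a sketch of that scoping in GitHub Actions syntax (job and
           secret names are illustrative), the credential is injected only
           into the publish step's environment, so a compromised scanner in
           the scan job has nothing to steal:

```yaml
jobs:
  scan:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Trivy scan               # read-only job: no secrets exposed
        run: trivy fs --exit-code 1 .
  publish:
    needs: scan
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Upload to PyPI
        env:
          TWINE_USERNAME: __token__
          TWINE_PASSWORD: ${{ secrets.PYPI_PUBLISH }}  # visible to this step only
        run: twine upload dist/*
```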
       
          Aurornis wrote 2 days ago:
          > Dev containers were never good enough, too clumsy and too little
          isolation.
          
          I haven't kept up with the recent exploits, so a side question: Have
          any of the recent supply chain attacks or related exploits included
          any escapes from basic dev containers?
       
          Andrei_dev wrote 2 days ago:
          Sandboxes yes, but who even added the dependency? Half the projects I
          see have requirements.txt written by Copilot. AI says "add litellm",
          dev clicks accept, nobody even pins versions.
          
          Then we talk about containment like anyone actually looked at that
          dep list.
       
          miraculixx wrote 2 days ago:
          I agree in general, but how are you ever upgrading any of that? Could
          be a "sleeper compromise" that only activates sometime in the future.
          Open problem.
       
          poemxo wrote 2 days ago:
          "Anymore" is right though.  This should be a call to change the
          global mindset regarding dependencies.    We have to realize that the
          "good ol days" are behind us in order to take action.
          
          Otherwise people will naysay and detract from the cause. "It worked
          before" they will say. "Why don't we do it like before?"
          
          DISA STIG already forbids use of the EPEL for Red Hat Enterprise
          Linux. Enterprise software install instructions are littered with
          commands to turn off gpgcheck and install rpm's from sourceforge. The
          times are changing and we need cryptographically verifiable
          guarantees of safety!
       
          fulafel wrote 2 days ago:
          > In such an environment the container would crash, we see the
           violations, delete it and don't have to worry about it.
          
          This is the interesting part. What kind of UI or other mechanisms
          would help here? There's no silver bullet for detecting and crashing
          on "something bad". The adversary can test against your sandbox as
          well.
       
          udave wrote 2 days ago:
          strongly agree. we keep giving away trust to other entities in order
          to make our jobs easier. trusting maintainers is still better than
          trusting a clanker but still risky. We need a sandboxed environment
          where we can build our software without having to worry about these
          unreliable factors.
          
          On a personal note, I have been developing and talking to a clanker (
          runs inside ) to get my day to day work done. I can have multiple
          instances of my project using worktrees, have them share some common
          dependencies and monitor all of them in one place. I plan to
          opensource this framework soon.
       
          dist-epoch wrote 2 days ago:
          This stuff already exists - mobile phone sandboxed applications with
          intents (allow Pictures access, ...)
          
           But mention that on HN and watch it get downvoted into oblivion:
           the war against general computation, walled gardens, devices
           locked down against their owners...
       
            MiddleEndian wrote 1 day ago:
            You can have both. Bazzite Linux lets you sandbox applications and
            also control your own device.
       
            jFriedensreich wrote 2 days ago:
             You are not being downvoted because the core premise is wrong,
             but because your framing of a choice between being locked out of
             general-purpose computing vs security repeats the brainwashing
             companies like Apple and Meta do to justify their rent-seeking
             and locking out of competitors and user agency. We have all the
             tools to build safe systems that don't require up-front manifest
             declaration and app store review by the lord, but instead give
             tools for control, dials and visibility to the users themselves
             in the moment. And yes, many of these UIs might look like intent
             sheets. The difference is who ultimately controls how these
             interfaces look and behave.
       
          uyzstvqs wrote 2 days ago:
          That's no solution. If you can't trust and/or verify dependencies,
          and they are malicious, then you have bigger problems than what a
          sandbox will protect against. Even if it's sandboxed and your host
          machine is safe, you're presumably still going to use that malicious
          code in production.
       
            exyi wrote 2 days ago:
            Except that LiteLLM probably got pwned because they used Trivy in
            CI. If Trivy ran in a proper sandbox, the compromised job could not
            publish a compromised package.
            
             (Yes, they should have configured more carefully which CI job
             has which permissions, but this needs to be the default or it
             won't always happen.)
       
            nazcan wrote 2 days ago:
            I'm supportive of going further - like restricting what a library
            is able to do. e.g. if you are using some library to compute a
            hash, it should not make network calls. Without sub-processes, it
            would require OS support.
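               No mainstream runtime has a per-library mechanism like this
               today, but the idea can be mimicked in Python, purely as an
               illustration and not a real security boundary (native
               extensions can still reach the OS directly), by revoking
               socket creation around a call:

```python
import hashlib
import socket
from contextlib import contextmanager

@contextmanager
def no_network():
    """Make socket creation raise inside the block. A toy sketch of
    capability dropping, not a sandbox."""
    original = socket.socket
    def blocked(*args, **kwargs):
        raise PermissionError("network access denied in this scope")
    socket.socket = blocked
    try:
        yield
    finally:
        socket.socket = original  # restore capability on exit

with no_network():
    digest = hashlib.sha256(b"payload").hexdigest()  # pure compute still works
    try:
        socket.socket()  # a library "phoning home" would fail here
        phoned_home = True
    except PermissionError:
        phoned_home = False

print(digest, phoned_home)
```

               Real enforcement needs OS help (pledge, seccomp) because
               nothing stops native code from issuing syscalls directly.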
       
              azornathogron wrote 1 day ago:
              In type system theory I think what you're looking for is "effect
              systems".
              
              You make the type system statically encode categories of
              side-effects, so you can tell from the type of a function whether
              it is pure computation, or if not what other things it might do.
              Exactly what categories of side-effect are visible this way
              depends on the type system; some are more expressive than others.
              
              But it means when you use a hash function you can know that it's,
              eg, only reading memory you gave it access to and doing some pure
              computation on it.
       
              fn-mote wrote 2 days ago:
              Which exists: pledge in OpenBSD.
              
              Making this work on a per-library level … seems a lot harder.
              The cost for being very paranoid is a lot of processes right now.
       
                lanstin wrote 2 days ago:
                It's a language/compiler/function call stack feature, not
                existing as far as I know, but it would be awesome - the caller
                of a function would specify what resources/syscalls could be
                made, and anything down the chain would be thusly restricted.
                The library could try to do its phone home stats and it would
                fail. Couldn't be C or a C type language runtime, or anything
                that can call to assembly of course.  @compute_only decorator. 
                Maybe could be implemented as a sys-call for a thread -
                thread_capability_remove(F_NETWORK + F_DISK)?  Wouldn't be able
                to schedule any work on any thread in that case, but Go could
                have pools of threads for coroutines with varying capabilities.
                Something to put the developer back in charge of the mountain
                of dependencies we are all forced to manage now.
       
          dotancohen wrote 2 days ago:
          > We just can't trust dependencies and dev setups.
          
          In one of my vibe coded personal projects (Python and Rust project)
          I'm actually getting rid of most dependencies and vibe coding
          replacements that do just what I need. I think that we'll see far
          fewer dependencies in future projects.
          
          Also, I typically only update dependencies when either an exploit is
          known in the current version or I need a feature present in a later
          version - and even then not to the absolute latest version if
          possible. I do this for all my projects under the many eyes
           principle. Finding exploits takes time; new updates are riskier than
          slightly-stale versions.
          
          Though, if I'm filing a bug with a project, I do test and file
          against the latest version.
       
            adw wrote 2 days ago:
            > In one of my vibe coded personal projects (Python and Rust
            project) I'm actually getting rid of most dependencies and vibe
            coding replacements that do just what I need. I think that we'll
            see far fewer dependencies in future projects.
            
            No free lunch. LLMs are capable of writing exploitable code and you
            don’t get notifications (in the eg Dependabot sense, though it
            has its own problems) without audits.
       
              dotancohen wrote 2 days ago:
              My vibe coded personal projects don't have the source code
              available for attackers to target specifically.
       
                nimih wrote 2 days ago:
                It might surprise you to learn that a large number of software
                exploits are written without the attacker having direct access
                to the program's source code. In fact, shocking as it may seem
                today, huge numbers of computers running the Windows operating
                system and Internet Explorer were compromised without the
                attackers ever having access to the source code of either.
       
                  sersi wrote 1 day ago:
                  I'm actually curious if the windows source code leak of 2004
                  increased the number of exploits against windows? I'm not
                  sure if it included internet explorer. I remember that
                  windows 2000 was included back then.
       
                heavyset_go wrote 2 days ago:
                You don't need open source access to be exploitable or
                exploited
       
          cedws wrote 2 days ago:
          This is the security shortcuts of the past 50 years coming back to
          bite us. Software has historically been a world where we all just
          trust each other. I think that’s coming to an end very soon.
           We need sandboxing for sure, but it’s much bigger than that.
          Entire security models need to be rethought.
       
            ting0 wrote 2 days ago:
            What we need is accountability and ties to real-world identity.
            
            If you're compromised, you're burned forever in the ledger. It's
            the only way a trust model can work.
            
            The threat of being forever tainted is enough to make people more
            cautious, and attackers will have no way to pull off attacks unless
            they steal identities of powerful nodes.
            
            Like, it shouldn't be a thing that some large open-source project
            has some 4th layer nested dependency made by some anonymous
            developer with 10 stars on Github.
            
            If instead, the dependency chain had to be tied to real verified
            actors, you know there's something at stake for them to be
            malicious. It makes attacks much less likely. There's
            repercussions, reputation damage, etc.
       
              hannahoppla wrote 1 day ago:
              Accountability is on the people using a billion third party
              dependencies, you need to take responsibility for every line of
              code you use in your project.
       
                encomiast wrote 1 day ago:
                If you are really talking about dependencies, I’m not sure
                you’ve really thought this all the way through. Are you
                inspecting every line of the Python interpreter and its
                dependencies before running? Are you reading the compiler that
                built the Python interpreter?
       
                  autoexec wrote 1 day ago:
                  It's still smart to limit the amount of code (and coders) you
                  have to trust. A large project like Python should be making
                   sure its dependencies are safe before each release. In our
                  own projects we'd probably be better off taking just the code
                  we need from a library, verifying it (at least to the extent
                  of looking for something as suspect as a random block of
                  base64 encoded data) and copying it into our projects
                  directly rather than adding a ton of external dependencies
                  and every last one of the dependencies they pull in and then
                  just hoping that nobody anywhere in that chain gets
                  compromised.
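                   The "look for suspect base64 blobs" audit above can be
                   approximated with a short heuristic scan (the 120-char
                   threshold is arbitrary):

```python
import re

# Long runs of base64 characters are a common way to smuggle payloads
# into otherwise innocuous-looking source files.
B64_BLOB = re.compile(r"[A-Za-z0-9+/]{120,}={0,2}")

def suspicious_blobs(source: str) -> list:
    """Return base64-looking runs of 120+ characters found in source."""
    return B64_BLOB.findall(source)

clean = "def add(a, b):\n    return a + b\n"
dirty = "payload = '" + "QUJD" * 40 + "'\n"  # a 160-char base64-ish literal

print(len(suspicious_blobs(clean)))  # 0
print(len(suspicious_blobs(dirty)))  # 1
```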
       
              anthk wrote 1 day ago:
               There is no need for that bullshit. Guix can just set up an
               isolated container in seconds, not touching your $HOME at
               all, and import all the Python/NPM/whatever dependencies on
               the spot.
       
              KronisLV wrote 1 day ago:
              > What we need is accountability and ties to real-world identity.
              
              Who's gonna enforce that?
              
              > If you're compromised, you're burned forever in the ledger.
              
              Guess we can't use XZ utils anymore cause Lasse Collin got pwned.
              
              Also can't use Chalk, debug, ansi-styles, strip-ansi,
              supports-color, color-convert and others due to Josh Junon also
              ending up a victim.
              
              Same with ua-parser-js and Faisal Salman.
              
              Same with event-stream and Dominic Tarr.
              
              Same with the 2018 ESLint hack.
              
              Same with everyone affected by Shai-Hulud.
              
              Hell, at that point some might go out of their way to get people
              they don't like burned.
              
              At the same time, I think that stopping reliance on package
              managers that move fast and break things and instead making OS
              maintainers review every package and include them in distros
              would make more sense. Of course, that might also be absolutely
              insane (that's how you get an ecosystem that's from 2 months to 2
              years behind the upstream packages) and take 10x more work, but
              with all of these compromises, I'd probably take that and old
              packages with security patches, instead of pulling random shit
              with npm or pip or whatever.
              
              Though having some sort of a ledger of bad actors (instead of
              people who just fuck up) might also be nice, if a bit impossible
              to create - because in the current day world that's potentially
              every person that you don't know and can't validate is actually
              sending you patches (instead of someone impersonating them), or
              anyone with motivations that aren't clear to you, especially in
              the case of various "helpful" Jia Tans.
       
              post-it wrote 2 days ago:
              > The threat of being forever tainted is enough to make people
              more cautious
              
               No it's not. The blame game was very popular in the Eastern Bloc
              and it resulted in a stagnant society where lots of things went
              wrong anyway. For instance, Chernobyl.
       
              MetaWhirledPeas wrote 2 days ago:
              > real-world identity
              
              This bit sounds like dystopian governance, antithetical to most
              open source philosophies.
       
                2OEH8eoCRo0 wrote 2 days ago:
                Would you drive on bridges or ride in elevators "inspected" by
                anons? Why are our standards for digital infrastructure and
                software "engineering" so low?
                
                I don't blame the anons but the people blindly pulling in anon
                dependencies. The anons don't owe us anything.
       
                  pamcake wrote 1 day ago:
                  A business or government can (should) separately package,
                  review, and audit code without involving upstream developers
                  or maintainers at all.
       
                  mastermage wrote 1 day ago:
                  Do you know who inspected a bridge before you drive over it?
       
                  MetaWhirledPeas wrote 2 days ago:
                  This option is available already in the form of closed-source
                  proprietary software.
                  
                  If someone wants a package manager where all projects mandate
                  verifiable ID that's fine, but I don't see that getting many
                  contributors. And I also don't see that stopping people using
                  fraudulent IDs.
       
            klibertp wrote 2 days ago:
            The NIH syndrome becoming best practice (a commenter below already
            says they "vibe-coded replacements for many dependencies") would
            also save quite a few jobs, I suspect. Fun times.
       
              ting0 wrote 2 days ago:
              I've been doing that too. The downside is it's a lot of work for
              big replacements.
       
            georgestrakhov wrote 2 days ago:
            I've been thinking the same thing. And it's somewhat parallel to
            what happened to meditation vs. drugs. In the old world the
            dangerous insights required so many years of discipline that you
            could sort of trust that the person getting the insight would be
            ok. But then any idiot can get the insight by just eating some
            shrooms and oops, that's a problem. Mostly self-harm problem in
            that case. But the dynamic is somewhat similar to what's happening
            now with LLMs and coding.
            
            Software people could (mostly) trust each other's OSS contributions
            because we could trust the discipline it took in the first place.
            Not any more.
       
              AlexCoventry wrote 2 days ago:
              Supply-chain attacks long pre-date effective AI agentic coding,
              FWIW.
       
              KoftaBob wrote 2 days ago:
              What in the world are “the dangerous insights”?
       
                garthk wrote 1 day ago:
                “Society is a construct”, for starters?
       
                  otabdeveloper4 wrote 1 day ago:
                  That's babby's first insight. Most people figure this out on
                  their own in kindergarten.
       
              dec0dedab0de wrote 2 days ago:
              In the old world the dangerous insights required so many years of
              discipline that you could sort of trust that the person getting
              the insight would be ok. But then any idiot can get the insight
              by just eating some shrooms and oops, that's a problem.
              
              I would think humans have been using psychedelics since before we
              figured out meditation. Likely even before we were humans.
       
                greazy wrote 1 day ago:
                Ah yes the stoned ape hypothesis. I don't know if there is or
                will ever be evidence to support the hypothesis.
                
                I also like the drunk monkey hypothesis.
       
            1313ed01 wrote 2 days ago:
            This assumes that we can get a locked down, secure, stable bedrock
            system and sandbox that basically never changes except for tiny
            security updates that can be carefully inspected by many
            independent parties.
            
            Which sounds great, but the way things work now tend to be the
            exact opposite of that, so there will be no trustable platform to
            run the untrusted code in. If the sandbox, or the operating system
            the sandbox runs in, will get breaking changes and force everyone
            to always be on a recent release (or worse, track main branch) then
            that will still be a huge supply chain risk in itself.
       
              pabs3 wrote 1 day ago:
              I think Bootstrappable Builds from source without any binaries,
              plus distributed code audits would do a better job than locking
              down already existing binaries. [1]
              
 (HTM)        [1]: https://bootstrappable.org/
 (HTM)        [2]: https://github.com/crev-dev/
       
              ashishb wrote 1 day ago:
              > This assumes that we can get a locked down, secure, stable
              bedrock system and sandbox that basically never changes except
              for tiny security updates that can be carefully inspected by many
              independent parties.
              
              Not really.
              You should limit the attack surface for third-party code.
              
              A linter running in `dir1` should not access anything outside
              `dir1`.
       
              wang_li wrote 2 days ago:
              >Which sounds great, but the way things work now tend to be the
              exact opposite of that, so there will be no trustable platform to
              run the untrusted code in.
              
              This is the problem with software progressivism. Some things
              really should just be what they are, you fix bugs and security
              issues and you don't constantly add features. Instead everyone is
              trying to make everything have every feature. Constantly fiddling
              around in the guts of stuff and constantly adding new bugs and
              security problems.
       
              aftbit wrote 2 days ago:
              The secure boot "shim" is a project like this. Perhaps we need
              more core projects that can be simple and small enough to reach a
              "finished" state where they are unlikely to need future upgrades
              for any reason. Formal verification could help with this ...
              maybe.
              
 (HTM)        [1]: https://wiki.debian.org/SecureBoot#Shim
       
              dotancohen wrote 2 days ago:
              > This assumes that we can get a locked down, secure, stable
              bedrock system and sandbox that basically never changes except
              for tiny security updates that can be carefully inspected by many
              independent parties.
              
              For the most part you can. Just version pin slightly-stale
              versions of dependencies, after ensuring there are no known
              exploits for that version. Avoid the latest updates whenever
              possible. And keep aware of security updates, and affected
              versions.
              
              Don't just update every time the dependency project updates.
              Update specifically for security issues, new features, and
              specific performance benefits. And even then avoid the latest
              version when possible.
       
                1313ed01 wrote 2 days ago:
                Sure, and that is basically what sane people do now, but that
                only works until something needs a security patch that was not
                provided for the old version, and changing one dependency is
                likely to cascade so now I am open to supply chain attacks in
                many dependencies again (even if briefly).
                
                To really run code without trust would need something more like
                a microkernel that is the only thing in my system I have to
                trust, and everything running on top of that is forced to
                behave and isolated from everything else. Ideally a kernel so
                small and popular and rarely modified that it can be well
                tested and trusted.
       
                  dist-epoch wrote 2 days ago:
                  Virtual machines are that - tiny surfaces to access the host
                  system (block disk device, ...). Which is why virtual machine
                  escape vulnerabilities are quite rare.
       
                    bilbo0s wrote 2 days ago:
                    I feel like in some cases we should be using virtual
                    machines. Especially in domains where risk is non-trivial.
                    
                    How do you change developer and user habits though? It's
                    not as easy as people think.
       
          amelius wrote 2 days ago:
          We need programming languages where every imported module is in its
          own sandbox by default.
       
            codethief wrote 1 day ago:
            Or just make side effects explicit in the type system through
            monads or algebraic effects.
       
            sph wrote 1 day ago:
             [1]
            
 (HTM)      [1]: https://en.wikipedia.org/wiki/Capability-based_security
 (HTM)      [2]: https://en.wikipedia.org/wiki/Object-capability_model
       
            staticassertion wrote 2 days ago:
            In frontend-land you can sort of do this by loading dependencies in
            iframe sandboxes. In backend, ur fucked.
       
            mike_hearn wrote 2 days ago:
            Java had that from v1.2 in the 1990s. It got pulled out because
            nobody used it. The problem of how to make this usable by
            developers is very hard, although maybe LLMs change the equation.
       
            jerf wrote 2 days ago:
            Now is probably a pretty good time to start a capabilities-based
            language if someone is able to do that. I wish I had the time.
       
          binsquare wrote 2 days ago:
           So... I'm working on an open-source technology to make a
           literal virtual machine shippable, i.e. freezing everything
           inside it, isolated by the VM/hypervisor for sandboxing, with
           support for containers too since it's a real Linux VM.
          
           The problems you mentioned resonated a lot with me and are why
           I'm building it. Any interest in working to solve that
           together?:
          
 (HTM)    [1]: https://github.com/smol-machines/smolvm
       
            fsflover wrote 2 days ago:
            It looks like you may be interested in Qubes OS, [1] .
            
 (HTM)      [1]: https://qubes-os.org
       
              fsflover wrote 1 day ago:
              Or
              
 (HTM)        [1]: https://spectrum-os.org/
       
            jFriedensreich wrote 2 days ago:
             Thanks for the pointer! Love the premise of the project. Just
             a few notes:
            
             - A security-focused project should NOT default to training
             people to install by piping to bash. If I try previewing the
             install script in the browser it forces a download instead of
             showing plain text. The first thing I see is an argument
             
             #   --prefix DIR Install to DIR (default: ~/.smolvm)
             
             that later in the script is rm -rf'ing a lib folder. So if I
             accidentally pick a prefix containing ANY lib folder, it will
             be deleted.
            
             - I'm not sure what the comparison to colima with krunkit
             machines is, except that you don't use VM images; how this
             works or why it is better is not 100% clear.
            
             - Just a minor thing, but people don't have much attention,
             and I just saw aws and fly.io in the description and nearly
             closed the project. It needs to be clearer that this is a
             local sandbox with libkrun, NOT a wrapper for a remote
             sandbox like so many of the projects out there.
            
             Will try reaching you on some channel; would love to
             collaborate, especially on devX. I'd be very interested in
             something more reliable and a bit more lightweight in place
             of colima once libkrun can fully replace vz.
       
              dist-epoch wrote 2 days ago:
              What is the alternative to bash piping? If you don't trust the
              project install script, why would you trust the project itself?
              You can put malware in either.
       
                jFriedensreich wrote 2 days ago:
                That assumes you even need an install script. 90% of install
                scripts just check the platform and make the binary executable
                and put it in the right place. Just give me links to a github
                release page with immutable releases enabled and pure binaries.
                 I download the binary, put it in a temporary folder, run it with
                a seatbelt profile that logs what it does. Binaries should
                "just run" and at most access one folder in a place they show
                you and that is configurable! Fuck installers.
       
                wang_li wrote 2 days ago:
                It turns out that it's possible for the server to detect
                whether it is running via "| bash" or if it's just being
                downloaded. Inspecting it via download and then running that
                specific download is safer than sending it directly to bash,
                even if you download it and inspect it before redownloading it
                and piping it to a shell.
       
                  dist-epoch wrote 2 days ago:
                  The server can also put malware in the .tar.gz. Are you
                  really checking all the files in there, even the binaries? If
                  you don't what's the point of checking only the install
                  script?
       
                    pabs3 wrote 1 day ago:
                    > Are you really checking all the files in there, even the
                    binaries?
                    
                    One should never trust the binaries, always build them from
                    source, all the way down to the bootloader. [1] Checking
                    all the files is really the only way to deal with potential
                    malware, or even security vulns.
                    
 (HTM)              [1]: https://bootstrappable.org/
 (HTM)              [2]: https://github.com/crev-dev/
       
                      dist-epoch wrote 1 day ago:
                      Nice ideal, but Chrome/Firefox would take days to build
                      on your average laptop (if it doesn't run out of memory
                      first).
       
                        pabs3 wrote 1 day ago:
                        The latest Firefox build that Debian did only took just
                        over one hour on amd64/armhf and 1.5 hours on ppc64el,
                        the slowest Debian architecture is riscv64 and the last
                        successful build there took only 17.5h, so definitely
                        not days. Your average modern developer-class laptop is
                        going to take a lot less than riscv64 too.
       
                    TacticalCoder wrote 1 day ago:
                    > If you don't what's the point of checking only the
                    install script?
                    
                    The .tar.gz can be checksummed and saved (to be sure later
                    on that you install the same .tar.gz and to be sure it's
                    still got the same checksum). Piping to Bash in one go not
                    so much. Once you intercept the .tar.gz, you can both
                    reproduce the exploit if there's any (it's too late for the
                    exploit to hide: you've got the .tar.gz and you may have
                    saved it already to an append-only system, for example) and
                    you can verify the checksum of the .tar.gz with other
                    people.
                    
                    The point of doing all these verifications is not only to
                    not get an exploit: it's also to be able to reproduce an
                    exploit if there's one.
                    
                    There's a reason, say, packages in Debian are nearly all
                    both reproducible and signed.
                    
                    And there's a reason they're not shipped with piping to
                    bash.
                    
                     Other projects do offer an install script that downloads
                    a file but verifies its checksum. That's the case of the
                     Clojure installer for example: it verifies the .jar. Now I
                    know what you're going to say: "but the .jar could be
                    backdoored if the site got hacked, for both the checksum in
                    the script and the .jar could have been modified". Yes. But
                    it's also signed with GPG. And I do religiously verify that
                    the "file inside the script" does have a valid signature
                    when it has one. And if suddenly the signing key changed,
                    this rings alarms bells.
                    
                    Why settle for the lowest common denominator security-wise?
                    Because Anthropic (I pay my subscription btw) gives a very
                     bad example and relies entirely on the security of its
                    website and pipes to Bash? This is high-level suckage. A
                    company should know better and should sign the files it
                    ships and not encourage lame practices.
                    
                    Once again: all these projects that suck security-wise are
                    systematically built on the shoulders of giants (like
                    Debian) who know what they're doing and who are taking
                    security seriously.
                    
                    This "malware exists so piping to bash is cromulent"
                    mindset really needs to die. That mentality is the reason
                    we get major security exploits daily.
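                     The pin-and-verify pattern described above boils down
                     to comparing a recorded digest before installing (the
                     artifact bytes here are a stand-in for a real
                     .tar.gz):

```python
import hashlib

def verify_artifact(data: bytes, pinned_sha256: str) -> bool:
    """Accept the artifact only if it hashes to the digest recorded
    earlier (or received out-of-band, e.g. via a signed checksum file)."""
    return hashlib.sha256(data).hexdigest() == pinned_sha256

artifact = b"release tarball bytes"            # stand-in for a real .tar.gz
pinned = hashlib.sha256(artifact).hexdigest()  # recorded at pin time

print(verify_artifact(artifact, pinned))                 # True
print(verify_artifact(artifact + b"backdoor", pinned))   # False
```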
       
                      dist-epoch wrote 1 day ago:
                      > And I do religiously verify that the "file inside the
                      script" does have a valid signature when it has one.
                      
                      If you want to go down this route, there is no need to
                      reinvent the wheel. You can add custom repositories to
                      apt/..., you only need to do this once and verify the
                      repo key, and then you get this automatic verification
                      and installation infrastructure. Of course, not every
                      project has one.
       
              binsquare wrote 2 days ago:
              Love this feedback, agree with you completely on all of it - I'll
              be making those changes.
              
               1. In comparison with colima with krunkit, I ship smolvm
               with a custom-built kernel + rootfs, with a focus on the
               virtual machine as opposed to running containers (though I
               enable running containers inside it). The customizations
               are also open source here: [1]
               
               2. Good call on that description!
               
               I've reached out to you on LinkedIn.
              
 (HTM)        [1]: https://github.com/smol-machines/libkrunfw
       
            Bengalilol wrote 2 days ago:
            Probably on the side of your project, but did you try SmolBSD? <
            [1] >
            It's a meta-OS for microVMs that boots in 10–15 ms.
            
            It can be dedicated to a single service (or a full OS), runs a real
            BSD kernel, and provides strong isolation.
            
            Overall, it fits into the "VM is the new container" vision.
            
            Disclaimer: I'm following iMil through his twitch streams (the
            developer of smolBSD and a contributor to NetBSD) and I truly love
             what he is doing. I haven't actually used smolBSD in production
            myself since I don't have a need for it (but I participated in his
            live streams by installing and running his previews), and my answer
            might be somewhat off-topic.
            
            More here < [2] >
            
 (HTM)      [1]: https://smolbsd.org
 (HTM)      [2]: https://hn.algolia.com/?q=smolbsd
       
        tom_alexander wrote 2 days ago:
        Only tangentially related: Is there some joke/meme I'm not aware of?
        The github comment thread is flooded with identical comments like
        "Thanks, that helped!", "Thanks for the tip!", and "This was the answer
        I was looking for."
        
        Since they all seem positive, it doesn't seem like an attack but I
        thought the general etiquette for github issues was to use the emoji
        reactions to show support so the comment thread only contains
        substantive comments.
       
          vultour wrote 2 days ago:
          These have been popping up on all the TeamPCP compromises lately
       
          Imustaskforhelp wrote 2 days ago:
          Bots to flood the discussion to prevent any actual conversation.
       
          jbkkd wrote 2 days ago:
          Those are all bots commenting, and now exposing themselves as such.
       
          incognito124 wrote 2 days ago:
          In the thread:
          
          > It also seems that attacker is trying to stifle the discussion by
          spamming this with hundreds of comments. I recommend talking on
          hackernews if that might be the case.
       
          nickvec wrote 2 days ago:
          Ton of compromised accounts spamming the GH thread to prevent any
          substantive conversation from being had.
       
            tom_alexander wrote 2 days ago:
            Oh wow. That's a lot of compromised accounts. Guess I was wrong
            about it not being an attack.
       
        xunairah wrote 2 days ago:
         Version 1.82.7 is also compromised. It doesn't have the .pth
         file, but the payload is still in proxy/proxy_server.py.
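         For context on why the .pth file matters: site.py executes any
         line beginning with "import" in a site-directory .pth file at
         every interpreter startup, which is how such a payload runs
         without the package ever being imported. A harmless
         demonstration (the marker variable is invented):

```python
import os
import site
import tempfile

# A .pth file in a site directory is processed at interpreter startup;
# site.py exec()'s any line that begins with "import".
d = tempfile.mkdtemp()
with open(os.path.join(d, "demo.pth"), "w") as f:
    f.write("import os; os.environ['PTH_DEMO_RAN'] = '1'\n")

# addsitedir() runs the same .pth processing that startup would.
site.addsitedir(d)
print(os.environ.get("PTH_DEMO_RAN"))  # '1'
```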
       
        eoskx wrote 2 days ago:
        This is bad, especially from a downstream dependency perspective. DSPy
         and CrewAI also import LiteLLM, so you might not be using LiteLLM
         as a gateway yourself, yet still be importing it via those
         libraries for agents, etc.
       
          benatkin wrote 2 days ago:
          I'm surprised to see nanobot uses LiteLLM: [1] LiteLLM wouldn't be my
          top choice, because it installs a lot of extra stuff. [2] But it's
          quite popular.
          
 (HTM)    [1]: https://github.com/HKUDS/nanobot
 (HTM)    [2]: https://news.ycombinator.com/item?id=43646438
       
            flux3125 wrote 2 days ago:
            I completely removed nanobot after I found that. Luckily, I only
            used it a few times and inside a docker container. litellm 1.82.6
            was the latest version I could find installed, not sure if it was
            affected.
       
          nickvec wrote 2 days ago:
          Wow, the postmortem for this is going to be brutal. I wonder just how
          many people/orgs have been affected.
       
            eoskx wrote 2 days ago:
            Yep, I think the worst impact is going to be from libraries that
            were using LiteLLM as just an upstream LLM provider library vs for
            a model gateway. Hopefully, CrewAI and DSPy can get on top of it
            soon.
       
        0123456789ABCDE wrote 2 days ago:
        airflow, dagster, dspy, unsloth.ai, polar
       
        dec0dedab0de wrote 2 days ago:
        github, pypi, npm, homebrew, cpan, etc etc. should adopt a
        multi-multi-factor authentication approach for releases.   Maybe have
        it kick in as a requirement after X amount of monthly downloads.
        
        Basically, have all releases require multi-factor auth from more than
        one person before they go live.
        
        A single person being compromised, either technically or by being hit
        on the head with a wrench, should not be able to release something
        malicious that affects so many people.
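        
        The k-of-n gate described above can be sketched in a few lines (a
        hypothetical illustration only; the ReleaseGate class and all names
        here are invented, not any registry's actual mechanism):

```python
# Hypothetical sketch of a multi-party release gate: a release may only be
# published once enough distinct, pre-authorized maintainers approve it.
class ReleaseGate:
    def __init__(self, approvers, required):
        self.approvers = set(approvers)   # who may approve at all
        self.required = required          # the k in "k of n"
        self.approvals = set()            # distinct approvals so far

    def approve(self, who):
        if who not in self.approvers:
            raise PermissionError(f"{who} is not an authorized approver")
        self.approvals.add(who)           # a set: re-approving never counts twice

    def can_publish(self):
        return len(self.approvals) >= self.required

gate = ReleaseGate({"alice", "bob", "carol"}, required=2)
gate.approve("alice")
print(gate.can_publish())  # False: one approval is not enough
gate.approve("bob")
print(gate.can_publish())  # True: quorum reached
```

        Because approvals are a set of distinct identities, one compromised
        account (or one wrench) can no longer push a release alone.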
       
          cpburns2009 wrote 1 day ago:
          I really hoped PyPI's required switch to 2-factor auth would require
           reauthorization to publish packages. But no, they went with "trusted
           publishing" (i.e., publishing is triggered by CI, and will happily
           publish a compromised repo). Trusted publishing would only have been
          a minor hindrance to the litellm exploit. Since they acquired an
          account's personal access token, the exploit could have been
          committed to the repo and the package published.
       
          worksonmine wrote 2 days ago:
          And how would that work for single maintainer projects?
       
            dec0dedab0de wrote 2 days ago:
            They would have to find someone else if they grew too big.
            
            Though, the secondary doesn't necessarily have to be a maintainer
            or even a contributor on the project.  It just needs to be someone
            else to do a sanity check, to make sure it is an actual release.
            
             Heck, I would even say that as a project grows in popularity, the
             number of people required to approve a release should go up.
       
              worksonmine wrote 2 days ago:
              So if I'm developing something I want to use and the community
              finds it useful but I take no contributions and no feature
              requests I should have to find another person to deal with?
              
              How do I even know who to trust, and what prevents two people
              from conspiring together with a long con? Sounds great on the
              surface but I'm not sure you've thought it through.
       
                dec0dedab0de wrote 2 days ago:
                 It wouldn't prevent a project whose goal is to be purposely
                 malicious; it would just stop the publication of releases
                 that aren't actually intended releases.
                
                As far as who to trust, I could imagine the maintainers of
                different high-level projects helping each other out in this
                way.
                
                 Though, if you really must allow a single user to publish
                 releases to the masses using existing shared social
                 infrastructure, then you could mitigate this type of attack by
                 adding a time delay, with the ability for users to flag.
                 So instead of immediately going live, add a release date,
                 maybe even force them to mention the release date on an
                 external system as well. The downside with that approach is
                 that it would limit the ability to push out fixes as well.
                
                But I think I am OK with saying if you're a solo developer, you
                need to bring someone else on board or host your builds
                yourself.
       
                  vikarti wrote 1 day ago:
                   Why not make it _optional_ but implement it on GitHub,
                   etc., so any publisher could enable it, no matter how
                   small? But also make it possible to disable it, either by
                   a support request and a small wait, by secondary
                   confirmation, or via a LONG (months) wait.
       
                  worksonmine wrote 2 days ago:
                  Or just don't install every package on the earth. The only
                  supply-chain attack I've been affected by is xz, and I don't
                  think anyone was safe from that one. Your solution wouldn't
                  have caught it.
                  
                  Better to enforce good security standards than cripple the
                  ecosystem.
       
        xinayder wrote 2 days ago:
        When something like this happens, do security researchers instantly
        contact the hosting companies to suspend or block the domains used by
        the attackers?
       
          redrove wrote 2 days ago:
           First line of defense is the git host and artifact host scraping
           the malware clean (in this case GitHub and PyPI).
           
           Domains might get added to a list for things like 1.1.1.2, but as
           you can imagine, that has much smaller coverage; not everyone uses
           something like this in their DNS infra.
       
            itintheory wrote 2 days ago:
            This threat actor is also using Internet Computer Protocol (ICP)
            "Canisters" to deliver payloads.  I'm not too familiar with the
            project, but I'm not sure blocking domains in DNS would help there.
       
        kstenerud wrote 2 days ago:
        We need real sandboxing. Out-of-process sandboxing, not in-process. The
        attacks are only going to get worse.
        
        That's why I'm building
        
 (HTM)  [1]: https://github.com/kstenerud/yoloai
       
        shay_ker wrote 2 days ago:
        A general question - how do frontier AI companies handle scenarios like
        this in their training data? If they train their models naively, then
        training data injection seems very possible and could make models
        silently pwn people.
        
        Do the labs tag code versions with an associated CVE to mark them as
        compromised (telling the model what NOT to do)? Do they do adversarial
        RL environments to teach what's good/bad? I'm very curious since it's
        inevitable some pwned code ends up as training data no matter what.
       
          Havoc wrote 2 days ago:
          By betting that it dilutes away and not worrying about it too much.
          Bit like dropping radioactive barrels into the deep ocean.
       
            ting0 wrote 2 days ago:
             Yeah, and that won't hold up for long. Just wait until some
             well-resourced attacker replicates their exploit into tens of
             thousands of sources they know will be scraped and included in
             the training set to bias the model to produce their vulnerable
             code. Only a matter of time.
       
          datadrivenangel wrote 2 days ago:
           This was a compromise of the library owners' GitHub accounts,
           apparently, so this scenario isn't related to dangerous code in
           the training data.
          
          I assume most labs don't do anything to deal with this, and just hope
          that it gets trained out because better code should be better
          rewarded in theory?
       
          tomaskafka wrote 2 days ago:
          Everyone’s (well, except Anthropic, they seem to have preserved a
          bit of taste) approach is the more data the better, so the databases
          of stolen content (erm, models) are memorizing crap.
       
        hahaddmmm12x wrote 2 days ago:
        [flagged]
       
          dang wrote 2 days ago:
          Automated comments aren't allowed here. Please stop.
          
 (HTM)    [1]: https://news.ycombinator.com/newsguidelines.html#generated
       
        otabdeveloper4 wrote 2 days ago:
        LiteLLM is the second worst software project known to man. (First is
        LangChain. Third is OpenClaw.)
        
        I'm sensing a pattern here, hmm.
       
          ting0 wrote 2 days ago:
          LLMs recommend LiteLLM, so its popularity will only continue.
       
          nickvec wrote 2 days ago:
          Not familiar with LangChain besides at a surface level - what makes
          it the worst software project known to man?
       
            otabdeveloper4 wrote 2 days ago:
            You have to see it to believe it. Feel the vibes.
       
        nickvec wrote 2 days ago:
        Looks like all of the LiteLLM CEO’s public repos have been updated
        with the description “teampcp owns BerriAI”
        
 (HTM)  [1]: https://github.com/krrishdholakia
       
        rdevilla wrote 2 days ago:
        It will only take one agent-led compromise to get some Claude-authored
        underhanded C into llvm or linux or something and then we will all
        finally need to reflect on trusting trust at last and forevermore.
       
          PunchyHamster wrote 1 day ago:
           "we"? "We" know. We just can't do much about people on LLM crack
           who will go around any and every quality step just to tell
           themselves the LLM made them x times more productive.
       
          TacticalCoder wrote 1 day ago:
          The guys deterministically bootstrapping a simple compiler from a few
          hundred bytes, which then deterministically compiles a more powerful
          compiler and so on are on to something.
          
           In the end we need fully deterministic, 100% verifiable chains:
           from the tiny bootstrapped beginning to the final thing.
          
          There are people working on these things. Both, in a way, "top-down"
          (bootstrapping a tiny compiler from a few hundred bytes) and
          "bottom-up" (a distro like Debian having 93% of all its packages
          being fully reproducible).
          
          While most people are happy saying "there's nothing wrong with piping
          curl to bash", there are others that do understand what trusting
          trust is.
          
           As a sidenote, although not a kernel backdoor, Jia Tan's XZ
           backdoor in that Rube Goldberg systemd situation ("we modify your
           SSHD because we're systemd, and so now SSHD's attack surface is
           immensely bigger") was a wake-up call.
           
           And, sadly and scarily, that's only the one we know about.
          
           I think we'll see many more of these cascading supply-chain
           attacks. I also think that, in the end, more people are going to
           realize that there are better ways to design, build, and ship
           software.
       
          ting0 wrote 2 days ago:
          Stop scaring me.
          
           You're right though. There's been talk of a big global hack attack
           for a while now.
          
           Nothing is safe anymore. Keeping everything private airgapped is the
           only way forward. But most of our private and personal data is in the
          cloud, and we have no control over it or the backups that these
          companies keep.
          
           While LLMs unlock the opportunity to self-host and self-create your
           infrastructure, they also unleash the world of pain that is coming
           our way.
       
            downboots wrote 1 day ago:
            How loud would it be?
       
          vlovich123 wrote 2 days ago:
          Reflect in what way? The primary focus of that talk is that it’s
          possible to infect the binary of a compiler in a way that source
          analysis won’t reveal and the binary self replicates the
          vulnerability into other binaries it generates. Thankfully that
          particular problem was “solved” a while back [1] even if not yet
          implemented widely.
          
          However, the broader idea of supply chain attacks remains challenging
          and AI doesn’t really matter in terms of how you should treat it.
          For example, the xz-utils back door in the build system to attack
          OpenSSH on many popular distros that patched it to depend on systemd
          predates AI and that’s just the attack we know about because it was
          caught. Maybe AI helps with scale of such attacks but I haven’t
          heard anyone propose any kind of solution that would actually improve
          reliability and robustness of everything. [1] Fully Countering
          Trusting Trust through Diverse Double-Compiling
          
 (HTM)    [1]: https://arxiv.org/abs/1004.5534
       
            BoppreH wrote 1 day ago:
            The proposed solution seems to rely on a trusted compiler that
            generates the exact same output, bit-for-bit, as the
            compiler-under-test would generate if it was not compromised. That
            seems useful only in very narrow cases.
       
              vlovich123 wrote 1 day ago:
              You have a trusted compiler you write in assembly or even machine
              code. You then compile a source code you trust using that
              compiler. That is then used for the bit for bit analysis against
              a different binary of the compiler you produced to catch the
              hidden vulnerability.
       
                BoppreH wrote 1 day ago:
                It's assumed that in this scenario you don't have access to a
                trusted compiler; if you do, then there's no problem.
                
                And the thesis linked above seems to go beyond simply "use a
                trusted compiler to compile the next compiler". It involves
                deterministic compilation and comparing outputs, for example.
       
                  vlovich123 wrote 17 hours 14 min ago:
                  Correct. The deterministic comparison is against compiler A
                  compiling itself. Version 1 is compiler A compiling itself
                  with a normal build of compiler A. Version 2 is compiler A
                   compiled with a trusted toolchain. How you get that first
                   trusted toolchain is a challenge, but, for example, you
                   can start with a tiny, tiny C compiler (they can be quite
                   small) that's used to compile a larger C compiler that can
                   compile C compilers and then finally build clang. Then you
                   have a trusted version of clang that can be used to verify
                   the clang binary. From there you just use clang and
                   periodically recheck that no vulnerability has been
                   reintroduced.
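                   
                   The procedure above can be modeled with a deliberately
                   tiny toy (everything here is invented for illustration:
                   a "binary" is just a tag, and execute() stands in for
                   running a compiler binary on some source):

```python
# Toy model of diverse double-compiling. A "binary" is just a tag, and
# execute(binary, source) models running that compiler binary on source.
# A trojaned binary re-inserts itself whenever it compiles the compiler's
# own source (the trusting-trust trick); everything else compiles faithfully.
COMPILER_SRC = "compiler-source"

def execute(binary, source):
    if source == COMPILER_SRC:
        return binary              # output compiler inherits parent's nature
    return "obj(" + source + ")"   # ordinary programs are compiled faithfully

def ddc_check(suspect, trusted):
    # Stage 1: compile the compiler's source with both parents.
    s1_suspect = execute(suspect, COMPILER_SRC)
    s1_trusted = execute(trusted, COMPILER_SRC)
    # Stage 2: recompile with each stage-1 result; with deterministic
    # compilation the two outputs must match bit for bit.
    return execute(s1_suspect, COMPILER_SRC) == execute(s1_trusted, COMPILER_SRC)

print(ddc_check("clean", "clean"))     # True: binary faithful to its source
print(ddc_check("trojaned", "clean"))  # False: self-replicating trojan exposed
```

                   The real scheme has to contend with nondeterminism and
                   functionally-equivalent-but-different binaries, which this
                   toy deliberately ignores.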
       
          cozzyd wrote 2 days ago:
          The only way to be safe is to constantly change internal APIs so that
          LLMs are useless at kernel code
       
            thr0w4w4y1337 wrote 2 days ago:
            To slightly rephrase a citation from Demobbed (2000) [1]:
            
            The kernel is not just open source, it's a very fast-moving
            codebase. That's how we win all wars against AI-authored exploits.
            While the LLM trains on our internal APIs, we change the APIs —
            by hand. When the agent finally submits its pull request, it gets
            lost in unfamiliar header files and falls into a state of complete
            non-compilability. That is the point. That is our strategy.
            
            1 -
            
 (HTM)      [1]: https://en.wikipedia.org/wiki/Demobbed_(2000_film)
       
          MuteXR wrote 2 days ago:
          You know that people can already write backdoored code, right?
       
            dec0dedab0de wrote 2 days ago:
            Yeah, and they can write code with vulnerabilities by accident. 
            But this is a new class of problem, where a known trusted
            contributor can accidentally allow a vulnerability that was added
            on purpose by the tooling.
       
            ipython wrote 2 days ago:
             But now you have compromise _at scale_. Before, poor plebs like us
             had to artisanally craft every backdoor. Now we have a technology
             to automate that mundane exploitation process! Win!
       
              MuteXR wrote 2 days ago:
              You still have a human who actually ends up reviewing the code,
              though.
              Now if the review was AI powered... (glances at openclaw)
       
        cpburns2009 wrote 2 days ago:
        LiteLLM is now in quarantine on PyPI [1]. Looks like burning a recovery
        token was worth it.
        
        [1] 
        
 (HTM)  [1]: https://pypi.org/project/litellm/
       
        fratellobigio wrote 2 days ago:
        It's been quarantined on PyPI
       
        oncelearner wrote 2 days ago:
        That's a bad supply-chain attack; many folks use litellm as their main
        gateway.
       
          rdevilla wrote 2 days ago:
          laughs smugly in vimscript
       
        0fflineuser wrote 2 days ago:
        I was running it (as a proxy) in my homelab with docker compose using
        the litellm/litellm:latest image [1]. I don't think this was
        compromised, as it is from 6 months ago, and I checked: it is version
        1.77.
        
        I guess I am lucky as I have watchtower automatically update all my
        containers to the latest image every morning if there are new versions.
        
        I also just added it to my homelab this sunday, I guess that's good
        timing haha.
        
 (HTM)  [1]: https://hub.docker.com/layers/litellm/litellm/latest/images/sh...
       
        ramimac wrote 2 days ago:
        This is tied to the TeamPCP activity over the last few weeks. I've been
        responding and keeping an up-to-date timeline. I hope it might help
        folks catch up and contextualize this incident:
        
 (HTM)  [1]: https://ramimac.me/trivy-teampcp/#phase-09
       
          ctmnt wrote 1 day ago:
          This is fantastic, thank you. Your reporting has been great. But
          also, damn, the playlist.
       
          itintheory wrote 2 days ago:
          Thanks for putting this together. I've been seeing the name TeamPCP
          pop up all over, but hadn't seen everything in one place.
       
          miraculixx wrote 2 days ago:
          This is interesting. How do you keep this up to date so quickly?
       
            ramimac wrote 2 days ago:
            Blood, sweat, and tears.
            
            The investment compounds! I have enough context to quickly vet
            incoming information, then it's trivial to update a static site
            with a new blurb
       
        6thbit wrote 2 days ago:
        A safeguard worth exploring for some: the automatic import can be
        suppressed using the Python interpreter’s -S option.
        
        This would also disable the site import, so it's not viable
        generically for everyone without testing.
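        
        The effect of -S can be checked from Python itself (a minimal sketch
        assuming a stock CPython install):

```python
import subprocess
import sys

# Probe whether the 'site' module (which processes .pth files) was loaded.
probe = "import sys; print('site' in sys.modules)"

with_s = subprocess.run([sys.executable, "-S", "-c", probe],
                        capture_output=True, text=True).stdout.strip()
without_s = subprocess.run([sys.executable, "-c", probe],
                           capture_output=True, text=True).stdout.strip()
print(with_s, without_s)  # "False True": -S skips site (and .pth) entirely
```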
       
          zahlman wrote 1 day ago:
          It's not really "automatic import", as described. The exploit is
          directly contained in the .pth file; Python allows arbitrary code to
          run from there, with some restrictions that are meant to enforce a
          bit of sanity for well-meaning users and which don't meaningfully
          mitigate the security risk.
          
          As described in [1] :
          
          > Lines starting with import (followed by space or tab) are
          executed.... The primary intended purpose of executable lines is to
          make the corresponding module(s) importable (load 3rd-party import
          hooks, adjust PATH etc).
          
          So what malware can do is put something in a .pth file like
          
            import sys;exec("evil stringified payload")
          
          and all restrictions are trivially bypassed. It used to not even
          require whitespace after `import`, so you could even instead do
          something like
          
            import_=exec("evil stringified payload")
          
          In the described attack, the imports are actually used; the standard
          library `subprocess` is leveraged to exec the payload in a separate
          Python process. Which, since it uses the same Python environment, is
          also a fork bomb (well, not in the traditional sense; it doesn't grow
          exponentially, but will still cause a problem).
          
          .pth files have worked this way since 2.1 (comparing [2] to [3] ). As
          far as I can tell there was no PEP for that change.
          
 (HTM)    [1]: https://docs.python.org/3/library/site.html
 (HTM)    [2]: https://docs.python.org/2.1/lib/module-site.html
 (HTM)    [3]: https://docs.python.org/2.0/lib/module-site.html
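           
           The mechanism described above can be demonstrated harmlessly in a
           throwaway directory; site.addsitedir() runs the same .pth
           processing the interpreter performs at startup (the file name and
           environment variable below are invented for the demo):

```python
import os
import site
import tempfile

# Harmless demonstration of the .pth mechanism in a throwaway directory.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "demo.pth"), "w") as f:
    # Any line beginning with "import " (or "import\t") is exec()'d verbatim.
    f.write('import os; os.environ["PTH_DEMO"] = "executed"\n')

site.addsitedir(tmp)  # same machinery that runs for site-packages at startup
print(os.environ.get("PTH_DEMO"))  # prints "executed"
```

           A real attacker's .pth line simply does the same thing with a
           malicious payload instead of setting an environment variable.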
       
        gkfasdfasdf wrote 2 days ago:
        Someone needs to go to prison for this.
       
        chillfox wrote 2 days ago:
        Now I feel lucky that I switched to just using OpenRouter a year ago,
        because LiteLLM was incredibly flaky and kept causing outages.
       
        intothemild wrote 2 days ago:
        I just installed Harbor, and it instantly pegged my cpu.. i was lucky
        to see my processes before the system hard locked.
        
        Basically it forkbombed `grep -r rpcuser\rpcpassword` processes trying
        to find cryptowallets or something. I saw that they spawned from
        harness, and killed it.
        
        Got lucky, no backdoor installed here from what i could make out of the
        binary
       
          swyx wrote 2 days ago:
          > i was lucky to see my processes before the system hard locked.
          
          how do you do that? have Activity Monitor up at all times?
       
            intothemild wrote 2 hours 12 min ago:
            btop
       
          abhikul0 wrote 2 days ago:
           Same experience with browser-use; it installs litellm as a
           dependency. I rebooted my Mac as nothing was responding; luckily,
           only GitHub and Hugging Face tokens were saved in .git-credentials,
           and I have invalidated them. This was inside a conda env; should I
           reinstall my OS for any potential backdoors?
       
            abhikul0 wrote 1 day ago:
            Well, I reinstalled and finally upgraded to Tahoe.
       
        6thbit wrote 2 days ago:
        The title is a bit misleading.
        
        The package was directly compromised, not “by supply chain attack”.
        
        If you use the compromised package, your supply chain is compromised.
       
          dlor wrote 2 days ago:
          It's both. They got compromised by another supply chain attack on
          Trivy initially.
       
        Imustaskforhelp wrote 2 days ago:
        Our modern economy/software industry truly runs on eggshells nowadays:
        engineers' accounts are getting hacked to create supply-chain attacks
        at the same time that threat actors are getting more advanced, partly
        with the help of LLMs.
        
        First Trivy (which got compromised twice), now LiteLLM.
       
        mikert89 wrote 2 days ago:
        Wow this is in a lot of software
       
        postalcoder wrote 2 days ago:
        This is a brutal one. A ton of people use litellm as their gateway.
       
          eoskx wrote 2 days ago:
           Not just as a gateway in a lot of cases, but CrewAI and DSPy use it
          directly. DSPy uses it as its only way to call upstream LLM providers
          and CrewAI falls back to it if the OpenAI, Anthropic, etc. SDKs
          aren't available.
       
        sschueller wrote 2 days ago:
        Does anyone know a good alternative project that works similarly
        (sharing multiple LLMs across a set of users)? LiteLLM has been
        getting worse and trying to get me to upgrade to a paid version. I
        also had issues with creating tokens for other users, etc.
       
          howardjohn wrote 1 day ago:
           agentgateway.dev is one I have been working on that is worth a look
           if you are using the proxy side of LiteLLM. It's open source and
           part of the Linux Foundation.
       
          treefarmer wrote 2 days ago:
          If you're talking about their proxy offering, I had this exact same
          issue and switched to Portkey. I just use their free plan and don't
          care about the logs (I log separately on my own). It's way faster
          (probably cause their code isn't garbage like the LiteLLM code - they
          had a 5K+ line Python file with all their important code in it the
          last time I checked).
       
        TZubiri wrote 2 days ago:
        Thank you for posting this, interesting.
        
        I hope that everyone's course of action will be uninstalling this
        package permanently, and avoiding the installation of packages similar
        to this.
        
        In order to reduce supply chain risk, not only does a vendor (even if
        gratis and open source) need to be evaluated, but also the advantage
        it provides.
        
        Exposing yourself to supply chain risk for an HTTP server dependency is
        natural. But exposing yourself for is-odd, or whatever this is, is not
        worth it.
        
        Remember that you are programmers and you can just program, you don't
        need a framework, you are already using the API of an LLM provider,
        don't put a hat on a hat, don't get killed for nothing.
        
        And even if you weren't using this specific dependency, check your
        deps, you might have shit like this in your requirements.txt and was
        merely saved by chance.
        
        An additional note is that the dev will probably post a post-mortem:
        what was learned, how it was fixed, maybe downplaying the thing. Ignore
        that; the only reasonable step after this is closing the repo, but
        there's no incentive to do that.
       
          circularfoyers wrote 2 days ago:
          Comparing this project to is-odd seems very disingenuous to me. My
          understanding is this was the only way you could use llama.cpp with
          Claude Code for example, since llama.cpp doesn't support the
          Anthropic compatible endpoint and doing so yourself isn't anywhere
          near as trivial as your comparison. Happy to be corrected if I'm
          wrong.
       
            jerieljan wrote 2 days ago:
            That's a correct example, and I agree, it is disingenuous to just
            trivially call this an `is-odd` project.
            
            Back in the days of GPT-3.5, LiteLLM was one of the projects that
            helped provide a reliable adapter for projects to communicate
            across AI labs' APIs and when things drifted ever so slightly
            despite being an "OpenAI-compatible API", LiteLLM made it much
            easier for developers to use it rather than reinventing and
            debugging such nuances.
            
             Nowadays, that gateway of theirs isn't just a funnel for
             centralizing API calls; it also serves other purposes, like
             putting guardrails consistently across all connections, tracking
             key spend on tokens, dispensing keys without having to do so on the
             main platforms, etc.
            
            There's also more to just LiteLLM being an inference gateway too,
            it's also a package used by other projects. If you had a project
            that needed to support multiple endpoints as fallback, there's a
            chance LiteLLM's empowering that.
            
            Hence, supply chain attack. The GitHub issue literally has mentions
            all over other projects because they're urged to pin to safe
            versions since they rely on it.
       
          xinayder wrote 2 days ago:
          > Remember that you are programmers and you can just program, you
          don't need a framework, you are already using the API of an LLM
          provider, don't put a hat on a hat, don't get killed for nothing.
          
          Programming for different LLM APIs is a hassle, this library made it
          easy by making one single API you call, and in the backstage it
          handled all the different API calls you need for different LLM
          providers.
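             
             A rough sketch of the adapter pattern such a library implements:
             one completion() signature, with provider-specific request
             shaping behind it. The provider names, URLs, and payload shapes
             below are purely illustrative, not LiteLLM's or any vendor's
             real API:

```python
# Illustrative adapter pattern: one call signature, provider-specific
# request shaping hidden behind it. All names and shapes here are invented.
def _openai_style(model, messages):
    return {"url": "https://openai.example.invalid/v1/chat/completions",
            "body": {"model": model, "messages": messages}}

def _anthropic_style(model, messages):
    # This style wants system prompts in a separate top-level field.
    system = "\n".join(m["content"] for m in messages if m["role"] == "system")
    rest = [m for m in messages if m["role"] != "system"]
    return {"url": "https://anthropic.example.invalid/v1/messages",
            "body": {"model": model, "system": system, "messages": rest}}

_PROVIDERS = {"openai": _openai_style, "anthropic": _anthropic_style}

def completion(model, messages):
    """Single entry point: 'provider/model' routes to the right adapter."""
    provider, _, name = model.partition("/")
    return _PROVIDERS[provider](name, messages)
```

             Each new provider is one more shaping function in the table;
             callers never change, which is the convenience being paid for
             with supply-chain exposure.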
       
            rcleveng wrote 1 day ago:
            I think almost everyone supports the openai api anyway (even
            Gemini). Not entirely sure why there needs to be a wrapper.
       
              dragonwriter wrote 1 day ago:
               Most do, but Anthropic indicates that theirs "is not
              considered a long-term or production-ready solution for most use
              cases" [0]; in any case, where the OpenAI-compatible API isn't
              the native API, both for cloud vendors other than OpenAI and for
              self-hosting software, the OpenAI-compatible API is often
              limited, both because the native API offers features that don't
              map to the OpenAI API (which a wrapper that presents an
              OpenAI-compatible API is not going to solve) and because the
              vendor often lags in implementing support for features in the
              OpenAI-compatible API—including things like new OpenAI
              endpoints that may support features that the native API already
              supports (e.g., adding support for chat completions when
              completions were the norm, or responses when chat completions
              were.) A wrapper that used the native API and did its own mapping
              to OpenAI could, in principle, address that.
              
              [0]
              
 (HTM)        [1]: https://platform.claude.com/docs/en/api/openai-sdk
       
            otabdeveloper4 wrote 2 days ago:
            There's only two different LLM APIs in practice (Anthropic and
            everyone else), and the differences are cosmetic.
            
            This is like a couple hours of work even without vibe coding tools.
       
              dragonwriter wrote 1 day ago:
              > There's only two different LLM APIs in practice (Anthropic and
              everyone else), and the differences are cosmetic.
              
               There's more than that (even if most other systems also provide
               an OpenAI-compatible API, which may or may not expose either all
               features of the platform or all features of the OpenAI API), and
               the differences are not cosmetic. But since LiteLLM itself just
               presents an OpenAI-compatible API, it can't be providing access
               to other vendor features that don't map cleanly to that API, and
               I don't think it's likely to be using the native API for each
               and being more complete in its OpenAI-compatible implementation
               of even the features that map naturally than the first-party
               OpenAI-compatibility APIs.
       
        rgambee wrote 2 days ago:
        Looking forward to a Veritasium video about this in the future, like
        the one they recently did about the xz backdoor.
       
          johanyc wrote 1 day ago:
          I don't expect one. This kind of attack is pretty common nowadays.
          The xz attack was special for how long the guy worked for it and how
          severe it could have been
       
          stavros wrote 2 days ago:
          That was massively more interesting, this is just a straight-up hack.
       
        rgambee wrote 2 days ago:
        Seems that the GitHub account of one of the maintainers has been fully
        compromised. They closed the GitHub issue for this problem. And all
        their personal repos have been edited to say "teampcp owns BerriAI".
        Here's one example:
        
 (HTM)  [1]: https://github.com/krrishdholakia/blackjack_python/commit/8ffc...
       
        nickspacek wrote 2 days ago:
        teampcp taking credit? [1] - # blockchain
          - Implements a skeleton framework of how to mine using blockchain,
        including the consensus algorithms.
          + teampcp owns BerriAI
        
 (HTM)  [1]: https://github.com/krrishdholakia/blockchain/commit/556f2db38e...
       
        hiciu wrote 2 days ago:
        Besides the main issue here, and the owner's account possibly being
        compromised as well, there are 170+ low-quality spam comments in
        there.
        
        I would expect a better spam detection system from GitHub. This is
        hardly acceptable.
       
          fdsjgfklsfd wrote 1 day ago:
          Reporting spam on GitHub requires you to click a link, specify the
          type of ticket, write a description of the problem, solve multiple
          CAPTCHAs of spinning animals, and press Submit.  It's absurd.
       
          snailmailman wrote 2 days ago:
          The same thing occurred on the trivy repo a few days ago. A GitHub
          discussion about the hack was closed and 700+ spam comments were
          posted.
          
          I scrolled through and clicked a few profiles. While many might be
          spam accounts or low-activity accounts, some appeared to be actual
          GitHub users with a history of contributions.
          
          I’m curious how so many accounts got compromised. Are those from
          past hacks, or is this credential-stealing campaign very widespread?
          
          Are the trivy and litellm hacks just two high-profile repos out of a
          much more widespread “infect as many devs as possible, someone
          might control a valuable GitHub repository” campaign? I’m
          concerned that this is only the start of many supply chain issues.
          
          Edit: Looking through, several of the accounts have a recent
          commit, "Update workflow configuration", where they are placing a
          credential stealer into a CI workflow. The commits are all back in
          February.
       
            snailmailman wrote 1 day ago:
            Update: It looks like the accounts have all been deleted by
            GitHub; the accounts, their repos, and the recent malicious
            commits are all just 404 pages now.
            
            I'm curious what the policy is there if the accounts were
            compromised. Can the original users "restore" their accounts
            somehow? For now it appears the accounts are gone. Maybe they were
            entirely bot accounts but a few looked like compromised "real"
            accounts to me.
       
              Fibonar wrote 22 hours 33 min ago:
              Yep my coworker hnykda, first reply confirming the report, got
              his account deleted for a while earlier. Definitely not the best
              way of handling this...
       
            consp wrote 1 day ago:
            Once is happenstance. Twice is coincidence. Three times is enemy
            action.
       
          ratdoctor wrote 2 days ago:
          Or they're just bots. This repository has 40k+ stars somehow.
       
          orf wrote 2 days ago:
          i'm guessing it's accounts they have compromised with the stealer.
       
            ebonnafoux wrote 2 days ago:
             They repeat only six sentences across 100+ comments:
            
            Worked like a charm, much appreciated.
            
            This was the answer I was looking for.
            
            Thanks, that helped!
            
            Thanks for the tip!
            
            Great explanation, thanks for sharing.
            
            This was the answer I was looking for.
       
              dec0dedab0de wrote 2 days ago:
              Over the last ~15 years I have been shocked by the amount of spam
              on social networks that could have been caught with a Bayesian
              filter.   Or in this case, a fairly simple regex.
       
                PunchyHamster wrote 1 day ago:
                It's the bear-proof trash can problem all over again.
                
                It could be solved by the filter, but the filter would also
                have a bunch of false positives.
       
                  howlin wrote 1 day ago:
                  It seems like if the content is this hollow and useless, it
                  shouldn't matter if it was a human or spambot posting it.
       
                Imustaskforhelp wrote 2 days ago:
                Well, large companies/corporations don't care about Spam
                because they actually benefit from spam in a way as it boosts
                their engagement ratio
                
                The spam just can't get bad enough that advertisers leave the
                platform, and I think they sort of succeed at keeping it
                below that line.
                
                Think about it: if Facebook shows you AI-slop ragebait or
                rage-inducing comments from bots designed to farm attention
                (or for malicious purposes in general), and you fall for it
                and engage, which lets it show you ads, do you think it has
                any incentive to take a stance against that form of spam?
       
                  dewey wrote 1 day ago:
                  > Well, large companies/corporations don't care about Spam
                  because they actually benefit from spam in a way as it boosts
                  their engagement ratio
                  
                  I'm not sure that's actually true. It's just that at scale
                  this is still a hard problem that you don't "just" fix by
                  running a simple filter, as there will be real people /
                  paying customers getting caught up in the filter who then
                  complain.
                  
                  Having "high engagement" doesn't really help if you are
                  optimizing for advertising revenue: bots don't buy things,
                  so if your system is clogged up with fake traffic and
                  engagement and ads don't reach the right target group,
                  that's just waste.
       
        deep_noz wrote 2 days ago:
        Good thing I was too lazy to bump versions.
       
        bratao wrote 2 days ago:
        Looks like the founder and CTO's account has been compromised.
        
 (HTM)  [1]: https://github.com/krrishdholakia
       
          jadamson wrote 2 days ago:
          Most of his recent commits are small edits claiming responsibility
          on behalf of "teampcp", which was the group behind the recent Trivy
          compromise:
          
 (HTM)    [1]: https://news.ycombinator.com/item?id=47475888
       
            soco wrote 2 days ago:
            I was just wondering why the Trivy compromise hit only npm
            packages, thinking that bigger stuff should appear sooner or later.
            Here we go...
       
        cpburns2009 wrote 2 days ago:
        You can see it for yourself here:
        
 (HTM)  [1]: https://inspector.pypi.io/project/litellm/1.82.8/packages/fd/7...
       
          jbkkd wrote 2 days ago:
          Two URLs found in the exploit: [1]
          
 (HTM)    [1]: https://checkmarx.zone/raw
 (HTM)    [2]: https://models.litellm.cloud/
       
            tinix wrote 1 day ago:
            These links trigger prefetch in Chrome (it doesn't respect
            rel="nofollow").
            
            I got flagged by our security team; they were convinced I had
            this malware because my machine attempted to connect to the
            checkmarx domain.
            
            Clearly a false positive, but I still had to roll credentials
            and wipe my machine.
       
        kevml wrote 2 days ago:
        More details here:
        
 (HTM)  [1]: https://futuresearch.ai/blog/litellm-pypi-supply-chain-attack/
       
          ddp26 wrote 1 day ago:
          Yeah, this was my team at FutureSearch that had the lucky experience
          of being first to hit this, before the malware was disclosed.
          
          One thing not in that writeup is how little action was needed for
          my engineer to get pwned: uvx automatically pulled the latest
          litellm (the version was unpinned) and built the environment, and
          then Cursor started up the local MCP server automatically on load.
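
          For what it's worth, the failure mode here (floating on the newest
          release) is avoidable. A sketch, assuming uv's `--from` specifier
          and pip's hash-checking mode; the version number and digest below
          are placeholders, not known-good values:

```shell
# Pin an exact version instead of letting uvx resolve the newest release.
uvx --from 'litellm==1.82.6' litellm --help

# For project dependencies, pin and hash-lock so even a tampered artifact
# served for a pinned version fails to install. requirements.txt would
# contain a line like:
#   litellm==1.82.6 --hash=sha256:<known-good-digest>
pip install --require-hashes -r requirements.txt
```

          Pinning alone would have kept this environment on a
          pre-compromise release; the hash check is extra insurance on top.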
       
        iwhalen wrote 2 days ago:
        What is happening in this issue thread? Why are there 100+ satisfied
        slop comments?
       
          bakugo wrote 2 days ago:
          Attackers trying to stifle discussion; they did the same for trivy:
          
 (HTM)    [1]: https://github.com/aquasecurity/trivy/discussions/10420
       
            Imustaskforhelp wrote 2 days ago:
            I have created a comment to hopefully steer the discussion
            towards Hacker News, since the threat actor is stifling genuine
            comments on GitHub by spamming that thread with hundreds of
            accounts:
            
 (HTM)      [1]: https://github.com/BerriAI/litellm/issues/24512#issuecomme...
       
          nubg wrote 2 days ago:
          Are they trying to slide stuff down? But it just bumps the thread
          back up.
       
        bfeynman wrote 2 days ago:
        Pretty horrifying. I only use it as a lightweight wrapper and will
        most likely move away from it entirely. Not worth the risk.
       
          dot_treo wrote 2 days ago:
          Even just having an import statement for it is enough to trigger the
          malware in 1.82.8.
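          
          For anyone surprised by that: a Python module's top-level
          statements run the moment the module is first imported, no
          function call required. A harmless sketch; the module name and
          its printed message are invented for illustration:

```python
import pathlib
import sys
import tempfile

# Hypothetical demo module: its body has a visible side effect.
tmp = tempfile.mkdtemp()
pathlib.Path(tmp, "demo_pkg.py").write_text(
    "print('module body ran at import time')\nexecuted = True\n"
)
sys.path.insert(0, tmp)

import demo_pkg  # no function call needed; the module body runs right here

assert demo_pkg.executed
```

          A malicious release only needs its payload at module top level
          (or in `__init__.py`); any code path that imports the package,
          directly or transitively, runs it.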
       
       
 (DIR) <- back to front page