Post #B045eGuTXsuppinsAa by keepassxc@fosstodon.org
 (DIR) Post #B00EPURR4iRePcPHyy by Ember@blobfox.coffee
       0 likes, 1 repeat
       
       if you use keepassxc, do not update past the current latest release (2.7.10), as future updates will include LLM-generated code, which is an utterly horrible idea for an application that manages people's passwords
       
       the maintainers approve of this and don't see the horrible security implications in allowing it
       
 (DIR) Post #B02LfWqwxsO8WIr7xY by HN414@chaos.social
       0 likes, 0 repeats
       
       @Ember They made fun of AI slop not even a year ago. Aged like fine milk.
       https://fosstodon.org/@keepassxc/113742260380752442
       @keepassxc
       
 (DIR) Post #B02LfbLwEbYKTZNvoe by keepassxc@fosstodon.org
       0 likes, 0 repeats
       
       @HN414 @Ember Sure did! A lot happens in a year.
       
 (DIR) Post #B03IAB2ptN1HCXjZlg by joeo10@mastodon.sdf.org
       0 likes, 0 repeats
       
       @keepassxc The ultimate heel turn then, a betrayal of your users and a lot of goodwill in the process. You're a piece of shit.
       @HN414 @Ember
       
 (DIR) Post #B041gj8x94f6A0UT7A by undead@masto.hackers.town
       0 likes, 0 repeats
       
       @Ember I'm going to have to assume they are trying this out. But this will absolutely backfire on them with the first major vulnerability.
       
       "If the majority of a code submission is made using Generative AI (e.g., agent-based or vibe coding) then we will document that in the pull request. All code submissions go through a rigorous review process regardless of the development workflow or submitter."
       
 (DIR) Post #B041gk1tqmc8uQCLlA by kkarhan@infosec.space
       0 likes, 0 repeats
       
       @undead @Ember which is not how to handle hallucinated bullshit, as @bagder can tell them...
       
 (DIR) Post #B041gkcldi6YklRnE0 by bagder@mastodon.social
       0 likes, 1 repeat
       
       @kkarhan @undead @Ember I do however expect every software project to receive and accept AI-assisted code over time. We can't avoid it and there's no stopping it. But: with review, adherence to code style, and proper testing, the risk shouldn't be worse than accepting human-only changes.
       
 (DIR) Post #B043HRbbPhyYR6xlRI by anselmschueler@ieji.de
       0 likes, 0 repeats
       
       @bagder @kkarhan @undead @Ember I don't think this is a good strategy. We shouldn't assume people will try to circumvent any ban. I grant that a direct, confrontational ban might not be the ideal strategy, but it will still remove some AI slop that an open policy wouldn't. Not everyone who wants to contribute with AI is someone who cynically wants code in the project for whatever reason, no matter how. And AI is, and will almost certainly continue to be, a huge risk. The risk isn't just in the technical quality of the work, but in who's responsible and who's getting access to the code. We shouldn't assume code review will catch problematic things, and AI access isn't the same as access by low- (or high-) skill contributors.
       
       We also should push back on AI contributions just because AI is bad. Here, I mean the current "system" of commercial generative AI. It is directly bad and causes harm, and it risks much more harm. Much of that is due to general problems in society, but a lot of it is specific to current AI trends and companies.
       
 (DIR) Post #B043HSY5uElPMWKTbs by keepassxc@fosstodon.org
       0 likes, 0 repeats
       
       @anselmschueler You should be very concerned about the quality of any submission. Code review isn’t perfect, but it’s only part of our quality assurance, and it catches the majority of issues. This is true regardless of who or what submitted the code. The quality of AI code has a strong correlation with the coding skills of the submitter. Saying we should avoid AI because the ecosystem is harmful is a very different argument, and of no consequence for the quality.
       @bagder @kkarhan@infosec.space @undead @Ember
       
 (DIR) Post #B044MT9HCgjwclfpHk by dzwiedziu@mastodon.social
       0 likes, 0 repeats
       
       @keepassxc > Saying we should avoid AI because the ecosystem is harmful is a very different argument and of no consequence for the quality.
       
       It is still not an argument off the table.
       
       The ecosystem is harmful because it causes harm to the environment, infrastructure, and society (resource abuse (power, water), pollution, job insecurity, etc.).
       
       That's why you shouldn't separate those two. Not as cause and effect, but as costs.
       @anselmschueler @bagder @undead @Ember
       
 (DIR) Post #B045eGuTXsuppinsAa by keepassxc@fosstodon.org
       0 likes, 0 repeats
       
       @dzwiedziu These are separate issues. Most of the negative comments are complaining about trust issues and vulnerabilities introduced by LLMs, which we address by proper quality assurance. It’s a problem we can handle. You cannot say our code base will become insecure because AI training harms the planet. It’s false logic.
       @anselmschueler @bagder @undead @Ember
       
 (DIR) Post #B046IzA0RV3UenTunQ by dzwiedziu@mastodon.social
       0 likes, 0 repeats
       
       @keepassxc > You cannot say our code base will become insecure because AI training harms the planet. It’s false logic.
       
       I am explicitly not saying that. In the part that I wrote, it's not a cause-effect relationship, but a cost-of-use one.
       
       I am not "most of the negative comments".
       
       You're making a strawman fallacy by stating something I didn't say.
       
       I did not expect you to go so low. And that's why I cannot trust you any more.
       @anselmschueler @bagder @undead @Ember
       
 (DIR) Post #B046fQXVKRMLwUAp2e by keepassxc@fosstodon.org
       0 likes, 0 repeats
       
       @dzwiedziu They’re still separate issues. You’re making an environmental case. That’s fine; I’m not arguing with you on that. But you cannot merge the security aspect into it; the two have absolutely nothing to do with each other.
       @anselmschueler @bagder @undead @Ember
       
 (DIR) Post #B047DXT24rTjH5LcCu by dzwiedziu@mastodon.social
       0 likes, 0 repeats
       
       @keepassxc Again, I am not merging the security aspect into that.
       
       Do not put words in my mouth.
       
       So for now I will assume that you know about the costs but are choosing to ignore them.
       
       I wish not to hear from you again until you change your mind and stop strawmanning.
       @anselmschueler @bagder @undead @Ember
       
 (DIR) Post #B047RRG3xZI04TuQfQ by keepassxc@fosstodon.org
       0 likes, 0 repeats
       
       @dzwiedziu You said not to separate the two. I have no other way to interpret that.
       @anselmschueler @bagder @undead @Ember
       
 (DIR) Post #B04VYG2OTPayCNIevo by Sadness@aleph.land
       0 likes, 0 repeats
       
       @keepassxc When he says they cannot be separated, he means they are both critical issues with LLM use. Satisfying concerns about code quality to your own standards should not be sufficient to start using it; you cannot separate out and ignore environmental concerns (environment, infrastructure, and society) - they are just as essential.
       
       (Environmental cost falls 90% on queries, not training. Using LLMs as an end user is not harmless - MIT: https://www.technologyreview.com/2025/05/20/1116327/ai-energy-usage-climate-footprint-big-tech/)
       @dzwiedziu @anselmschueler @bagder @undead @Ember
       
 (DIR) Post #B04WxtshZYd8xuYeFk by keepassxc@fosstodon.org
       0 likes, 0 repeats
       
       @Sadness @dzwiedziu @anselmschueler @bagder @undead @Ember They are still orthogonal dimensions in the contingency table.
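       
       "Orthogonal dimensions in the contingency table" is a statistics metaphor: cross-tabulate two categorical attributes of a contribution - roughly, how it was written and whether it is secure - and independence means one axis tells you nothing about the other. A minimal sketch of that idea, using scipy's chi-square independence test on an entirely invented 2x2 table; the labels and counts are illustrative assumptions, not data from this thread:
       
           from scipy.stats import chi2_contingency
           
           # Hypothetical 2x2 contingency table; all counts are invented
           # for illustration.
           # Rows:    contribution was AI-assisted (yes / no)
           # Columns: contribution passed security review (passed / failed)
           table = [
               [40, 10],  # AI-assisted: 40 passed, 10 failed -> 80% pass rate
               [80, 20],  # human-only:  80 passed, 20 failed -> 80% pass rate
           ]
           
           # Identical row proportions mean the two dimensions are
           # independent: the chi-square statistic is 0 and p = 1.
           chi2, p, dof, expected = chi2_contingency(table)
           print(f"chi2={chi2:.3f}, p={p:.3f}")
       
       When the observed counts match the expectations computed from the table's margins, knowing one dimension tells you nothing about the other - that is the sense in which the two concerns are being called orthogonal here.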