[HN Gopher] Curl: We still have not seen a valid security report...
       ___________________________________________________________________
        
       Curl: We still have not seen a valid security report done with AI
       help
        
       Author : indigodaddy
       Score  : 325 points
       Date   : 2025-05-06 17:07 UTC (5 hours ago)
        
 (HTM) web link (www.linkedin.com)
 (TXT) w3m dump (www.linkedin.com)
        
       | jacksnipe wrote:
       | Something that really frustrates me about interacting with (some)
       | people who use AI a lot is that they will often tell me things
       | that start "I asked ChatGPT and it said..." stop it!!! If the
       | chatbot taught you something and you understood it, explain it to
       | me. If you didn't understand or didn't trust it, then keep it to
       | yourself!
        
         | x3n0ph3n3 wrote:
         | Thanks for this. It's a great response I intend to use going
         | forward.
        
         | esafak wrote:
         | I had to deal with someone who tried to check in hallucinated
         | code with the defense "I checked it with chatGPT!"
         | 
         | If you're just parroting what you read, what is it that you do
         | here?!
        
           | giantg2 wrote:
           | Manage people?
        
             | tough wrote:
              | then what the fuck are they doing committing code? leave
             | that to the coders
        
               | giantg2 wrote:
                | That sounds good, but might not be how it works in
               | Chapter Lead models.
        
           | qmr wrote:
           | I hope you dealt with them by firing them.
        
             | esafak wrote:
             | Yes, unfortunately. This was the last straw, not the first.
        
         | hashmush wrote:
         | As much as I'm also annoyed by that phrase, is it really any
         | different from:
         | 
         | - I had to Google it...
         | 
         | - According to a StackOverflow answer...
         | 
         | - Person X told me about this nice trick...
         | 
         | - etc.
         | 
         | Stating your sources should surely not be a bad thing, no?
        
           | nraynaud wrote:
           | the first 2 bullet points give you an array of
           | answers/comments helping you cross check (also I'm a freak,
           | and even on SO, I generally click on the posted documentation
           | links).
        
           | spiffyk wrote:
           | Well, it is not, but the three "sources" you mention are not
           | worth much either, much like ChatGPT.
        
             | gruez wrote:
             | >but the three "sources" you mention are not worth much
             | either, much like ChatGPT.
             | 
             | I don't think I've ever seen anyone lambasted for citing
              | stackoverflow as a source. At best, they get chastised for
              | not reading the comments, but nowhere near as much pushback
              | as for LLMs.
        
               | comex wrote:
               | From what I've seen, Stack Overflow answers are much more
               | reliable than LLMs.
               | 
               | Also, using Stack Overflow correctly requires more
               | critical thinking. You have to determine whether any
               | given question-and-answer is actually relevant to your
               | problem, rather than just pasting in your code and seeing
               | what the LLM says. Requiring more work is not inherently
               | a good thing, but it does mean that if you're citing
               | Stack Overflow, you probably have a somewhat better
               | understanding of whatever you're citing it for than if
               | you cited an LLM.
        
               | spiffyk wrote:
               | I have personally always been kind of against using
               | StackOverflow as a sole source for things. It is _very_
                | often a good pointer, but it's _always_ a good idea to
               | cross-check with primary sources. Otherwise you get all
               | sorts of interesting surprises, like that Razer Synapse +
               | Docker for Windows debacle. Not to mention that you are
               | technically not allowed to just copy-paste stuff from SO.
        
               | mynameisvlad wrote:
                | I mean, if all they did was regurgitate a SO post
               | wholesale without checking the correctness or
               | applicability, and the answer was in fact not correct or
               | applicable, they would probably get equally lambasted.
               | 
               | If anything, SO having verified answers helps its
                | credibility _slightly_ compared to LLMs, which are all
               | known to regularly hallucinate (see: literally this
               | post).
        
             | bloppe wrote:
             | SO at least has reputation scores and people vote on
             | answers. An answer with 5000 upvotes, written by someone
             | with high karma, is probably legit.
        
             | dpoloncsak wrote:
             | ...isn't that exactly why someone states that?
             | 
             | "Hey, I didn't study this, I found it on Google. Take it
             | with a grain of caution, as it came from the internet" has
             | been shortened to "I googled it and...", which is now
             | evolving to "Hey, I asked chatGPT, and...."
        
           | hx8 wrote:
           | It depends on if they are just repeating things without
            | understanding, or if they have understanding. My issue with
            | people that say "I asked gpt" is that they often do not have
            | any understanding themselves.
           | 
           | Copy and pasting from ChatGPT has the same consequences as
           | copying and pasting from StackOverflow, which is to say
            | you're now on the hook for supporting code in production
            | that you don't understand.
        
             | tough wrote:
              | We cannot blame the tools for how they are used by those
              | wielding them.
              | 
              | I can use ChatGPT to teach me and understand a topic, or I
              | can use it to give me an answer and not double check and
              | just copy paste.
              | 
              | It just shows how much you care about the topic at hand,
              | no?
        
               | multjoy wrote:
               | How do you know that ChatGPT is teaching you about the
               | topic? It doesn't know what is right or what is wrong.
        
               | tough wrote:
                | It can consult any source about any topic. ChatGPT is as
                | good at teaching as the pupil's ability to ask the right
                | questions, if you ask me.
        
               | multjoy wrote:
               | It may well consult any source about the topic, or it may
               | simply make something up.
               | 
               | If you don't know anything about the subject area, how do
               | you know if you are asking the right questions?
        
               | ryandrake wrote:
               | LLM fans never seem very comfortable answering the
               | question "How do you know it's correct?"
        
               | mystraline wrote:
               | I'm a moderate fan of LLMs.
               | 
               | I will ask for all claims to be backed with cited
               | evidence. And then, I check those.
               | 
                | In other cases, like code generation, I ask for a test
                | harness to be written, and then I test with it.
                | 
                | For some foreign-language translation (High German to
                | English), I ask for a sentence-to-sentence comparison in
                | the syntax of a diff.
        
               | the_snooze wrote:
               | I like to ask AI systems sports trivia. It's something
               | low-stakes, easy-to-check, and for which there's a ton of
               | good clean data out there.
               | 
               | It sucks at sports trivia. It will confidently return
               | information that is straight up wrong [1]. This should be
               | a walk in the park for an LLM, but it fails spectacularly
               | at it. How is this useful for learning at all?
               | 
               | [1] https://news.ycombinator.com/item?id=43669364
        
               | giantrobot wrote:
               | But just because it's wrong about sports trivia doesn't
               | mean it's wrong about anything else! /s [0]
               | 
                | [0] https://en.m.wikipedia.org/wiki/Gell-Mann_amnesia_effect
        
               | theamk wrote:
               | If you used ChatGPT to teach you the topic, you'd write
               | your own words.
               | 
               | Starting the answer with "I asked ChatGPT and it said..."
               | almost 100% means the poster did not double-check.
               | 
               | (This is the same with other systems: If you say,
               | "According to Google...", then you are admitting you
               | don't know much about this topic. This can occasionally
               | be useful, but most of the time it's just annoying...)
        
               | misnome wrote:
               | We can absolutely blame the people selling and marketing
               | those tools.
        
               | tough wrote:
               | Yeah, marketing always seemed to me like a misnomer or
               | doublespeak for legal lies.
               | 
               | All marketing departments are trying to manipulate you to
               | buy their thing, it should be illegal.
               | 
               | But just testing out this new stuff and seeing what's
               | useful for you (or not) is usually the way
        
               | jacksnipe wrote:
               | I see nobody here blaming tools and not people!
        
               | layer8 wrote:
               | This subthread was about blaming people, not the tool.
        
               | tough wrote:
               | my bad I had just woke up!
        
           | stonemetal12 wrote:
            | In general those point to the person's understanding being
            | shallow. So far, when someone says "GPT said..." it is a new
            | low in understanding: there is no article they googled to
            | read further, no second stackOverflow answer with a different
            | take on it. It is the end of the conversation.
        
           | mentalpiracy wrote:
           | It is not about stating a source, the bad thing is treating
           | chatGPT as an authoritative source like it is a subject
           | matter expert.
        
             | silversmith wrote:
             | But is "I asked chatgpt" assigning any authority to it? I
             | use precisely that sentence as a shorthand for "I didn't
             | know, looked it up in the most convenient way, and it
             | sounded plausible enough to pass on".
        
               | jacksnipe wrote:
               | In my own experience, the vast majority of people using
               | this phrase ARE using it as a source of authority. People
               | will ask me about things I am an actual expert in, and
               | then when they don't like my response, hit me with the
               | ol' "well, I asked chatGPT and it said..."
        
               | jstanley wrote:
               | I think you are misunderstanding them. I also frequently
               | cite ChatGPT, as a way to accurately convey my source,
               | not as a way to claim it as authoritative.
        
               | billyoneal wrote:
               | I think you are in the minority of people who use that
               | phrase.
        
               | jacksnipe wrote:
               | I have interrogated it in those cases. I was not
               | misunderstanding.
        
               | mirrorlake wrote:
               | It's a social-media-level of fact checking, that is to
               | say, you feel something is right but have no clue if it
               | actually is. If you had a better source for a fact, you'd
               | quote that source rather than the LLM.
               | 
               | Just do the research, and you don't have to qualify it.
               | "GPT said that Don Knuth said..." Just verify that Don
               | said it, and report the real fact! And if something turns
               | out to be too difficult to fact check, that's still
               | valuable information.
        
           | rhizome wrote:
           | All three of those should be followed by "...and I checked it
           | to see if it was a sufficient solution to X..." or words to
           | that effect.
        
           | billyoneal wrote:
           | The complaint isn't about stating the source. The complaint
           | is about asking for advice, then ignoring that advice. If one
            | asks how to do something, gets a reply, then replies to that
            | reply with 'but Google says', that's just as rude.
        
           | kimixa wrote:
           | It's a "source" that cannot be reproduced or actually
           | referenced in any way.
           | 
           | And all the other examples will have a chain of "upstream"
           | references, data and discussion.
           | 
           | I suppose you can use those same phrases to reference things
           | without that, random "summaries" without references or
           | research, "expert opinion" from someone without any
           | experience in that sector, opinion pieces from similarly
           | reputation-less people etc. but I'd say they're equally
           | worthless as references as "According to GPT...", and should
           | be treated similarly.
        
         | yoyohello13 wrote:
         | Seriously. Being able to look up stuff using AI is not unique.
         | I can do that too.
         | 
         | This is kind of the same with any AI gen art. Like I can go
         | generate a bunch of cool images with AI too, why should I give
         | a shit about your random Midjourney output.
        
           | h4ck_th3_pl4n3t wrote:
           | How can you be so harsh on all the new kids with Senior
           | Prompt Engineer in their job titles?
           | 
           | They have to prove to someone that they're worth their money.
           | /s
        
           | alwa wrote:
           | I mean... I have a fancy phone camera in my pocket too, but
           | there are photographers who, with the same model of fancy
           | phone camera, do things that awe and move me.
           | 
           | It took a solid hundred years to legitimate photography as an
           | artistic medium, right? To the extent that the controversy
           | still isn't entirely dead?
           | 
           | Any cool images I ask AI for are going to involve a lot less
           | patience and refinement than some of these things the kids
           | are using AI to turn out...
           | 
           | For that matter, I've watched friends try to ask for factual
           | information from LLMs and found myself screaming inwardly at
           | how vague and counterproductive their style of questioning
           | was. They can't figure out why I get results I find useful
           | while they get back a wall of hedging and waffling.
        
           | kristopolous wrote:
           | Comfyui workflows, fine-tuning models, keeping up with the
           | latest arxiv papers, patching academic code to work with
           | generative stacks, this stuff is grueling.
           | 
           | Here's an example https://files.meiobit.com/wp-
           | content/uploads/2024/11/22l0nqm...
           | 
           | Being dismissive of AI art is like those people who dismiss
           | electronic music because there's a drum machine.
           | 
           | Doing things well still requires an immense amount of skill
           | and exhaustive amount of effort. It's wildly complicated
        
             | codr7 wrote:
             | Makes even less sense when you put it like that, why not
             | invest that effort into your own skills instead?
        
               | kristopolous wrote:
                | It _is_ somebody's own skill.
               | 
               | Photographers are not painters.
               | 
               | People who do modular synths aren't guitarists.
               | 
               | Technical DJing is quite different from tapping on a
               | Spotify app on a smartphone.
               | 
               | Just because you've exclusively exposed yourself to crude
               | implementations doesn't mean sophisticated ones don't
               | exist.
        
               | delfinom wrote:
               | But you just missed the point.
               | 
               | People aren't trying to push photographs into painted
                | works displays.
               | 
               | People who do modular synths aren't typically trying to
               | sell their music as country/rock/guitar based music.
               | 
               | A 3D modeler of a statue isn't pretending to be a
                | sculptor.
               | 
               | People pushing AI art are trying to slide it right into
               | "human art" displays. Because they are talentless
               | otherwise.
        
         | evandrofisico wrote:
         | It is supremely annoying when i ask in a group if someone has
         | experience with a tool or system and some idiot copies my
          | question into some LLM and pastes the answer. I can use the LLM
          | just like anyone; if I'm asking for EXPERIENCE it is because I
         | want the opinion of a human who actually had to deal with stuff
         | like corner cases.
        
           | jsheard wrote:
           | _If it 's not worth writing, it's not worth reading._
        
             | pixl97 wrote:
             | I mean, there is a lot of hand written crap to, so even
             | that isn't a good rule.
        
               | mcny wrote:
               | It is a necessary but not sufficient condition, perhaps?
        
               | colecut wrote:
               | That rule does not imply the inverse
        
               | pixl97 wrote:
               | I mean we have automated systems that 'write' things like
               | tornado warnings. Would you rather we have someone hand
               | write that out?
               | 
                | It seems the initial rule is rather worthless.
        
               | colecut wrote:
               | 1. I think the warnings are generally "written" by
               | humans. Maybe some variables filled in during the
               | automation.
               | 
               | 2. So a rule with occasional exceptions is worthless, ok
        
               | layer8 wrote:
               | That sounds like
               | https://en.wikipedia.org/wiki/Denying_the_antecedent.
        
               | leptons wrote:
               | >I mean, there is a lot of hand written crap to
               | 
               | You know how I know the difference between something an
               | AI wrote and something a human wrote? The AI knows the
               | difference between "to" and "too".
               | 
               | I guess you proved your point.
        
               | meindnoch wrote:
               | Both statements can be true at the same time, even though
               | they seem to point in different directions. Here's how:
               | 
               | 1. *"If it's not worth writing, it's not worth reading"*
               | is a normative or idealistic statement -- it sets a
               | standard or value judgment about the quality of writing
               | and reading. It suggests that only writing with value,
               | purpose, or quality should be produced or consumed.
               | 
               | 2. *"There is a lot of handwritten crap"* is a
               | descriptive statement -- it observes the reality that
               | much of what is written (specifically by hand, in this
               | case) is low in quality, poorly thought-out, or not
               | meaningful.
               | 
               | So, putting them together:
               | 
                | * The first expresses *how things _ought_ to be*.
                | * The second expresses *how things _actually_ are*.
               | 
               | In other words, the existence of a lot of poor-quality
               | handwritten material does not invalidate the ideal that
               | writing should be worth doing if it's to be read. It just
               | highlights a gap between ideal and reality -- a common
               | tension in creative or intellectual work.
               | 
               | Would you like to explore how this tension plays out in
               | publishing or education?
        
               | palata wrote:
               | > If it's not worth writing, it's not worth reading.
               | 
                | It does _NOT_ mean, _AT ALL_, that if it is worth
               | writing, it is worth reading.
               | 
               | Logic 101?
        
             | floren wrote:
             | Reminds me of something I wrote back in 2023: "If you wrote
             | it with an LLM, it wasn't worth writing"
             | https://jfloren.net/b/2023/11/1/0
        
             | ToValueFunfetti wrote:
             | There's a lot of documentation out there that I've found
             | was left unwritten but that I would have loved to read
        
           | ModernMech wrote:
           | It's the 2025 version of lmgtfy.
        
             | layer8 wrote:
             | Nah, that's different. Lmgtfy has nothing to do with
             | experience, other than experience in googling. Lmgtfy
             | applies to stuff that can expediently be googled.
        
               | ModernMech wrote:
               | In my experience, usually what people had done was take
               | your question on a forum, go to lmgtfy, paste the exact
               | words in and then link back to it. As if to say "See how
               | easy that was? Why are you asking us when you could have
               | just done that?"
               | 
                | Yes, it is true there could have been a skill issue. But it
               | could also be true that the person just wanted input from
               | people rather than Google. So that's why I drew the
               | connection.
        
               | layer8 wrote:
               | I largely agree with your description, and I think that's
               | different from the above case of explicitly asking for
               | experience and then someone posing the question to an
               | LLM. Also, when googling, you typically (used to) get
               | information written down by people, from a much larger
               | pool and better curated via page ranking, than whoever
               | you are asking. So it's not like you were getting better
               | quality by not googling, typically.
        
               | ModernMech wrote:
               | That's why I said it's the 2025 version of that, given
               | the new technology. I'm not saying it's the same thing. I
               | guess I'm not being clear, sorry.
        
               | layer8 wrote:
               | It's not clear to me in what way it is a version of that,
               | other than the response being different from what the
               | asker wanted. The point of lmgtfy is to show that the
                | asker could legitimately and reasonably easily have found
                | the answer by himself. You can argue that it is sometimes
                | done in cases where googling actually wouldn't
               | provide the desired information, but that is far from the
               | common case. This present version is substantially
               | different from that. It is invariably true that an LLM
               | response won't give you the awareness and judgement of
               | someone with experience in a certain topic.
        
               | ModernMech wrote:
               | Okay I see the confusion. We are coming from different
               | perspectives.
               | 
               | There are three main reasons I can think of for asking
               | the Internet a question in 2010:
               | 
               | 1. You don't know how to ask Google / you are too lazy.
               | 
               | 2. You don't trust Google.
               | 
               | 3. You already tried Google and it doesn't have the
               | answer or it's wrong.
               | 
               | Maybe there are more I can't think of. But let's say you
               | have one of those three reasons, so you post a question
               | to an Internet forum in the year 2010. Someone replies
               | back with lmgtfy. There are three typical responses
                | depending on which of those reasons you had for
               | posting:
               | 
               | 1. "Thanks"
               | 
               | 2. "Thanks, but I don't trust those sources, so I
               | reiterate my question."
               | 
               | 3. "Thanks, but I tried that and the answer is wrong, so
               | I reiterate my question."
               | 
               | Now it's the year 2025 and you post a question to an
               | Internet forum because you either don't know how to ask
               | ChatGPT, don't trust ChatGPT, or already tried it and
               | it's giving nonsense. Someone replies back with an answer
               | from ChatGPT. There are three typical responses depending
               | on your reason for posting to the forum.
               | 
               | 1. "Thanks"
               | 
               | 2. "Thanks, but I don't trust those sources, so I
               | reiterate my question."
               | 
               | 3. "Thanks, but I tried that and the answer is wrong, so
               | I reiterate my question."
               | 
               | So the reason I drew the parallel was because of the
               | similarity of experiences between 2010 and now for
               | someone who doesn't trust this new technology.
        
               | XorNot wrote:
               | In my experience what happened was the top hit for the
               | question was a topical forum, with a lmgtfy link as a
               | response to the exact question I'm googling.
        
             | jacksnipe wrote:
             | That's _exactly_ how I feel
        
             | soulofmischief wrote:
             | The whole point of paying a domain expert is so that you
             | don't have to google shit all day.
        
         | cogman10 wrote:
         | I recently had this happen from a senior engineer. What's
         | really frustrating is I TOLD them the issues and how to fix it.
         | Instead of listening to what I told them, they plugged it into
         | GPT and responded with "Oh, interesting this is what GPT says"
          | (Which, spoiler, was similar to what I'd said, but lacking).
         | 
         | Meaning, instead of listening to a real-life expert in the
         | company telling them how to handle the problem they ignored my
         | advice and instead dumped the garbage from GPT.
         | 
          | I really fear that a number of engineers are going to use GPT to
         | avoid thinking. They view it as a shortcut to problem solve and
         | it isn't.
        
           | delusional wrote:
           | Those people weren't engineers to start with.
        
             | layer8 wrote:
             | Software engineers rarely are.
             | 
             | I'm saying this tongue in cheek, but there's some truth to
             | it.
        
               | throwanem wrote:
               | There is much truth. Railway engineers 'rarely were' too,
                | once upon a time, and, in my view, for essentially the
                | same reasons.
        
           | colechristensen wrote:
           | If I had a dollar for every time I told someone how to fix
           | something and they did something else...
           | 
           | Let's just say not listening to someone and then complaining
           | that doing something else didn't work isn't exactly _new_.
        
           | colechristensen wrote:
           | >They view it as a shortcut to problem solve and it isn't
           | 
           | Oh but it is, used wisely.
           | 
            | One: it's a replacement for googling a problem, and much
            | faster. Instead of spending half an hour or half a day
            | digging through bug reports, forum posts, and stack overflow
            | for the solution to a problem, LLMs are a lot faster,
            | occasionally correct, and very often at least rather close.
           | 
           | Two: it's a replacement for learning how to do something I
           | don't want to learn how to do. Case Study: I have to create a
           | decent-enough looking static error page for a website. I
           | could do an awful job with my existing knowledge, I could
           | spend half a day relearning and tweaking CSS, elements, etc.
           | etc. or I could ask an LLM to do it and then tweak the
           | results. Five minutes for "good enough" and it really is.
           | 
           | LLMs are not a replacement for real understanding, for
           | digging into a codebase to really get to the core of a
           | problem, or for becoming an expert in something, but in many
            | cases _I do not want to_, and moreover it is a poor use of
           | my time. Plenty of things are not my core competence or
           | anywhere near the goals I'm trying to achieve. I just need a
           | quick solution for a topic I'm not interested in.
        
             | ijidak wrote:
             | This exactly!
             | 
             | There are so many things that a human worker or coder has
             | to do in a day and a lot of those things are non-core.
             | 
             | If someone is trying to be an expert on every minor task
             | that comes across their desk, they were never doing it
             | right.
             | 
             | An error page is a great example.
             | 
             | There is functionality that sets a company apart and then
             | there are things that look the same across all products.
             | 
             | Error pages are not core IP.
             | 
             | At almost any company, I don't want my $200,000-300,000 a
             | year developer mastering the HTML and CSS of an error page.
        
           | throwanem wrote:
           | You should ask yourself why this organization wants
           | engineering advice from a chatbot more than from you.
           | 
           | I doubt the reason has to do with your qualities as an
           | engineer, which must be basically sound. Otherwise why bother
            | to launder the product of your judgment, as you described
            | someone doing here?
        
           | silversmith wrote:
           | I often do this - ask a LLM for an answer when I already have
           | it from an expert. I do it to evaluate the ability of the
           | LLM. Usually not in the presence of said expert tho.
        
           | jsight wrote:
           | I wonder if this is an indication that they didn't really
           | understand what you said to begin with.
        
           | tharant wrote:
           | Is it possible that what happened was an impedance mismatch
           | between you and the engineer such that they couldn't grok
           | what you told them but ChatGPT was able to describe it in a
           | manner they could understand? Real-life experts (myself
           | included, though I don't claim to be an expert in much)
           | sometimes have difficulty explaining domain-specific concepts
           | to other folks; it's not a flaw in anyone, folks just have
           | different ways of assembling mental models.
        
             | kevmo314 wrote:
             | Whenever someone has done that to me, it's clear they
             | didn't read the ChatGPT output either and were sending it
             | to me as some sort of "look someone else thinks you're
             | wrong".
        
               | tharant wrote:
               | Again, is it possible you and the other party have
               | (perhaps significantly) different mental models of the
               | domain--or maybe different perspectives of the issues
               | involved? I get that folks can be contrarian (sadly,
               | contrariness is probably my defining trait) but it seems
               | unlikely that someone would argue that you're wrong by
               | using output they didn't read. I see impedance mismatches
               | regularly yet folks seem often to assume
               | laziness/apathy/stupidity/pride is the reason for the
               | mismatch. Best advice I ever received is "Assume folks
               | are acting rationally, with good intention, and with a
               | willingness to understand others." -- which for some
               | reason, in my contrarian mind, fits oddly nicely with
               | Hanlon's razor but I tend to make weird connections like
               | that.
        
           | tharant wrote:
            | > I really fear that a number of engineers are going to use
            | GPT to avoid thinking. They view it as a shortcut to problem
           | solve and it isn't.
           | 
           | How is this sentiment not different from my grandfather's
           | sentiment that calculators and computers (and probably his
           | grandfather's view of industrialization) are a shortcut to
           | avoid work? From my perspective most tools are used as a
            | shortcut to avoid work; that's kinda the whole point--to give
           | us room to think about/work on other stuff.
        
             | stevage wrote:
              | Did your grandpa think that calculators made engineers worse
             | at their jobs?
        
         | candiddevmike wrote:
         | This happens to me all the time at work. People have turned
          | into frontends for LLMs, even when it's their job to know the
         | answer to these types of questions. We're talking technical
         | leads.
         | 
         | Seems like if all you do is forward questions to LLMs, maybe
         | you CAN be replaced by a LLM.
        
         | mrkurt wrote:
         | Wow that's a wildly cynical interpretation of what someone is
         | saying. Maybe it's right, but I think it's equally likely that
         | people are saying that to give you the right context.
         | 
         | If they're saying it to you, why wouldn't you assume they
         | understand and trust what they came up with?
         | 
         | Do you need people to start with "I understand and believe and
         | trust what I'm about to show you ..."?
        
           | jacksnipe wrote:
            | I do not need people to lead with that. That's precisely _why_
            | leading with "I asked ChatGPT and it said..." makes me trust
           | something less -- the speaker is actively assigning
           | responsibility for what's to come to some other agent,
           | because for one reason or another, they won't take it on
           | themselves.
        
         | laweijfmvo wrote:
         | the problem is that when you ask a ChatBot something, it always
         | gives you an answer...
        
         | JohnFen wrote:
         | I agree wholeheartedly.
         | 
         | "I asked X and it said..." is an appeal to authority and
         | suspect on its face whether or not X is an LLM. But when it's
         | an LLM, then it's even worse. Presumably, the reason for the
         | appeal is because the person using it considers the LLM to be
         | an authoritative or meaningful source. That makes me question
         | the competence of the person saying it.
        
         | Szpadel wrote:
         | I find that only acceptable (only little annoying) when this is
         | some lead in case we're we have no idea what could be the
         | issue, it might help to brainstorm and note that this is not
         | verified information is important.
         | 
          | Most annoying is when people trust ChatGPT more than the experts
          | they pay. We had a case where our client asked us for some
          | specific optimization, and we told him it made no sense; then he
          | asked the other company that we cooperate with and got a similar
          | response; then he asked ChatGPT and it told him it was a great
          | idea. And guess what, he bought a $20k subscription to implement
          | it.
        
           | 38 wrote:
           | > when this is some lead in case we're we have no idea what
           | could be the issue
           | 
           | English please
        
             | jacksnipe wrote:
             | We're was autocorrected from where
        
           | hedora wrote:
           | I do this occasionally when it's time sensitive, and I cannot
           | find a reasonable source to read. e.g., "ChatGPT says cut the
           | blue wire, not the red one. I found the bomb schematics it
           | claims say this, but they're paywalled."
           | 
           | If that's all the available information and you're out of
           | time, you may as well cut the blue wire. But, pretty much any
           | other source is automatically more trustworthy.
        
         | RadiozRadioz wrote:
         | There was a brief period of time in the first couple weeks of
         | ChatGPT existing where people did this all the time on Hacker
         | News and were upvoted for it. I take pride in the fact that I
         | thought it was cringeworthy from the start.
        
         | Frost1x wrote:
         | I work in a corporate environment as I'm sure many others do.
         | Many executives have it in their head that LLMs are this brand
         | new efficiency gain they can pad profit margins with, so you
         | should be using it for efficiency. There's a lot of push for
         | that, everywhere where I work.
         | 
         | I see email blasts suggesting I should be using it, I get peers
         | saying I should be using it, I get management suggesting I
         | should use it to cut costs... and there is _some_ truth there
         | but as usual, it depends.
         | 
          | I, like many others, can't be asked to take on inefficiency in
          | the name of efficiency on top of the currently most efficient ways
         | to do my work. So I too say "ChatGPT said: ..." because I dump
         | lots of things into it now. Some things I can't quickly verify,
         | some things are off, and in general it can produce far more
         | information than I have time to check. Saying "ChatGPT said..."
         | is the current CYA caveat statement around the world of: use
         | this thing but also take liability for it. No, if you
         | practically mandate I use something, the liability falls on you
         | or that thing. If it's a quick verify I'll integrate it into
         | knowledge. A lot of things aren't.
        
           | rippleanxiously wrote:
           | It just feels to me like a boss walking into a car mechanic's
           | shop holding some random tool, walking up to a mechanic, and:
           | 
           | "Hey, whatcha doin?"
           | 
           | "Oh hi, yea, this car has a slight misfire on cyl 4, so I was
           | just pulling one of the coilpacks to-"
           | 
           | "Yea alright, that's great. So hey! You _really_ need to use
           | this tool. Trust me, it's gonna make your life so much
           | easier"
           | 
           | "umm... that's a 3d printer. I don't really think-"
           | 
           | "Trust me! It's gonna 10x your work!"
           | 
           | ...
           | 
           | I love the tech. It's the evangelists that don't seem to
           | bother researching the tech beyond making an account and
           | asking it to write a couple scripts that bug me. And then
           | they proclaim it can replace a bunch of other stuff they
           | don't/haven't ever bothered to research or understand.
        
         | godelski wrote:
         | > Something that really frustrates me about interacting with
         | 
         | Something that frustrates me with LLMs is that they are
         | optimized such that errors are as silent as possible.
         | 
         | It is just bad design. You want errors to be _as loud as
         | possible_. So they can be traced and resolved. On the other
         | hand, LLMs optimize human preference (or some proxy of this).
         | While humans prefer accuracy, it would be naive to ignore all
         | the other things that optimize this objective. Specifically,
          | humans prefer answers that they don't know are wrong over
         | those that they do know are wrong.
         | 
         | This doesn't make LLMs useless but certainly it should strongly
         | inform how we use them. Frankly, you cannot trust outputs, so
         | you have to verify. I think this is where there's a big
         | divergence between LLM users (and non-users). Those that
         | blindly trust and those that don't (extreme case is non-users).
         | If you need to constantly verify _AND_ recognize that
         | verification is extra hard (because it is optimized to be
         | invisible to you), it can create extra work, not less.
         | 
          | It really is two camps and I think it says a lot:
          | 
          | - "Blindly" trust
          | - "Trust" but verify
         | 
         | Wide range of opinions in these two camps, but I think it comes
         | down to some threshold of default trust or default suspicion.
        
         | __turbobrew__ wrote:
         | I had someone at work lead me down a wild goose chase because
         | claude told them to do something which was outright wrong to
         | solve some performance issues they were having in their app. I
          | helped them do this migration and it turned out that claude's
         | suggestions made performance worse! I know for sure the time
         | wasted on this task was not debited from the so called company
         | productivity stats that come from AI usage.
        
         | xnx wrote:
         | I can see why this would be frustrating, but it's probably a
         | good thing to have people be curious and consult an expert
         | system.
         | 
         | Current systems are definitely flawed (incomplete, biased, or
         | imagined information), but I'd pick the answers provided by
         | Gemini over a random social post, blog page, or influencer
         | every time.
        
       | unsnap_biceps wrote:
       | For those of you who don't want to click into linked in,
        | https://hackerone.com/reports/3125832 is the latest example of
        | an invalid curl report
        
         | nneonneo wrote:
         | Good god did they hallucinate the segmentation fault and the
         | resulting GDB trace too? Given that the diffs don't even apply
         | and the functions don't even exist, I guess the answer is yes -
         | in which case, this is truly a new low for AI slop bug reports.
        
           | bluGill wrote:
            | A real report would have a GDB trace that looks like that,
           | so it isn't hard to create such a trace. Many of us could
           | create a real looking GDB trace just as well by hand - it
           | would be tedious, boring, and pointless but we could.
        
           | terom wrote:
           | The git commit hashes in the diff are interesting:
           | 1a2b3c4..d4e5f6a
           | 
           | I think my wetware pattern-matching brain spots a pattern
           | there.
        
             | mitchellpkt wrote:
             | Excellent catch! I had to go back and take a second look,
             | because I completely missed that the first time.
        
             | terom wrote:
             | Going a bit further, it seems like there's a grain of truth
             | here, HTTP/2 has a stream priority dependency mechanism [1]
             | and this report [2] from Imperva describes an actual
             | Dependency Cycle DoS in the nghttp implementation.
             | 
             | Unfortunately that's where it seems to end... I'm not that
             | familiar with QUIC and HTTP/2, but I think the closest it
             | gets is that the GitHub repo exists and has a `class
             | QuicConnection` [3]. Beyond that, the QUIC protocol layer
             | doesn't have any concept of exchanging stream priorities
             | [4] and HTTP/2 priorities are something the client sends,
             | not the server? The PoC also mentions HTTP/3 and
             | PRIORITY_UPDATE frames, but those are from the newer RFC
             | 9218 [5] and lack the stream dependencies used in HTTP/2
             | PRIORITY frames.
             | 
             | I should learn more about HTTP/3!
             | 
             | [1] https://blog.cloudflare.com/adopting-a-new-approach-to-
             | http-...
             | 
             | [2] https://www.imperva.com/docs/imperva_hii_http2.pdf
             | 
             | [3] https://github.com/aiortc/aioquic/blob/218f940467cf25d3
             | 64890...
             | 
             | [4] https://datatracker.ietf.org/doc/html/rfc9000#name-
             | stream-pr...
             | 
             | [5] https://www.rfc-editor.org/rfc/rfc9218.html#name-the-
             | priorit...
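              | 
              | For concreteness, a minimal sketch (not from the report;
              | the stream IDs are invented for illustration) of what a
              | priority dependency, and a cycle, looks like on the wire
              | in HTTP/2, following the PRIORITY frame layout in RFC
              | 7540 section 6.3:
              | 
              |     import struct
              | 
              |     PRIORITY = 0x2  # HTTP/2 PRIORITY frame type
              | 
              |     def priority_frame(stream, depends_on,
              |                        weight=16, exclusive=False):
              |         # 5-byte payload: E bit, 31-bit dependency,
              |         # and (weight - 1) as a single byte
              |         dep = depends_on & 0x7FFFFFFF
              |         if exclusive:
              |             dep |= 0x80000000
              |         body = struct.pack("!IB", dep, weight - 1)
              |         # 9-byte header: 24-bit length, type, flags,
              |         # reserved bit + 31-bit stream identifier
              |         head = struct.pack("!BHBBI", 0, len(body),
              |                            PRIORITY, 0,
              |                            stream & 0x7FFFFFFF)
              |         return head + body
              | 
              |     # Stream 3 depends on 5 and 5 depends on 3: a
              |     # dependency cycle the server's priority tree
              |     # has to detect and break.
              |     cycle = priority_frame(3, 5) + priority_frame(5, 3)
              | 
              | HTTP/3 dropped that dependency tree entirely, which is
              | why the RFC 9218 PRIORITY_UPDATE frames the PoC mentions
              | can't express a dependency at all.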
        
         | harrisi wrote:
         | This is interesting because they've apparently made a couple
         | thousand dollars reporting things to other companies. Is it
         | just a case of a broken clock being right twice a day? Seems
         | like a terrible use of everyone's time and money. I find it
         | hard to believe a random person on the internet using ChatGPT
         | is worth $1000.
        
           | billyoneal wrote:
           | There are places that will pay bounties on even very flimsy
           | reports to avoid the press / perception that they aren't
           | responding to researchers. But that's only going to remain as
           | long as a very small number of people are doing this.
           | 
           | It's easy for reputational damage to exceed $1'000, but if
           | 1000 people do this...
        
           | bluGill wrote:
           | $1000 is cheap... The real question is when will companies
           | become wise to this scam?
           | 
           | Most companies make you fill in expense reports for every
           | trivial purchase. It would be cheaper to just let employees
           | take the cash - and most employees are honest enough. However
           | the dishonest employee isn't why they do expense reports
           | (there are other ways to catch dishonest employees). There
           | used to be a scam where someone would just send a bill for
           | "services" and those got paid often enough until companies
           | realized the costs and started making everyone do the expense
           | reports so they could track the little expenses.
        
       | parliament32 wrote:
       | Didn't even have to click through to the report in question to
       | know it would be all hallucinations -- both the original
       | patchfile and the segfault
       | ("ngtcp2_http3_handle_priority_frame".. "There is no function
       | named like this in current ngtcp2 or nghttp3.") I guess these
       | guys don't bother to verify, they just blast out AI slop and hope
       | one of them hits?
        
         | indigodaddy wrote:
         | Reminds me of when some LLM (might have been Deepseek) told me
         | I could add wasm_mode=True in my FastHTML python code which
         | would allow me to compile it to WebAssembly, when of course
         | there is no such feature in FastHTML. This was even when I had
         | provided it full llms-ctx.txt
        
           | alabastervlog wrote:
           | I had Google's in-search "AI" invent a command line switch
           | that would have been very helpful... if it existed. Complete
           | with usage caveats and warnings!
           | 
           | This was like two weeks ago. These things suck.
        
             | j_w wrote:
             | My favorite is when their in search "AI answer"
             | hallucinates on the Golang standard lib. Always makes me
             | happy to see.
        
               | hedora wrote:
               | You think that's funny? Try using AI help button in
               | Google's office suite the next time you're trying to
               | track down the right button to press.
        
             | sidewndr46 wrote:
             | Isn't there a website that builds git man pages this way?
             | By just stringing together random concepts into sentences
             | that seem vaguely like something Git would implement. I
             | thought it was silly and potentially harmful the first time
             | I saw it. Apparently, it may have just been ahead of the
             | curve.
        
         | spiffyk wrote:
         | > I guess these guys don't bother to verify, they just blast
         | out AI slop and hope one of them hits?
         | 
         | Yes. Unfortunately, some companies seem to pay out the bug
         | bounty without even verifying that the report is actually
         | valid. This can be seen on the "reporter"'s profile:
         | https://hackerone.com/evilginx
        
         | pixl97 wrote:
         | >"ngtcp2_http3_handle_priority_frame"
         | 
         | I wonder if you could use AI to classify the probability factor
         | that something is AI bullshit and deprioritize it?
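          | 
          | As a sketch of what that could look like (the model name,
          | prompt, and threshold below are placeholders, not a tested
          | pipeline, and the scorer obviously shares failure modes with
          | the thing it is scoring):
          | 
          |     from openai import OpenAI
          | 
          |     client = OpenAI()  # assumes OPENAI_API_KEY is set
          | 
          |     def slop_score(text: str) -> float:
          |         """Ask a model for a 0-1 'likely AI slop' score."""
          |         prompt = ("Estimate the probability (0 to 1) that "
          |                   "this bug report is unverified LLM output "
          |                   "(nonexistent functions, diffs that do "
          |                   "not apply). Reply with the number only.")
          |         resp = client.chat.completions.create(
          |             model="gpt-4o-mini",
          |             messages=[
          |                 {"role": "system", "content": prompt},
          |                 {"role": "user", "content": text},
          |             ],
          |         )
          |         return float(resp.choices[0].message.content)
          | 
          |     report = open("report.md").read()
          |     if slop_score(report) > 0.8:  # arbitrary threshold
          |         print("deprioritize: likely unverified LLM output")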
        
           | pacifika wrote:
           | AI red tape.
        
         | soraminazuki wrote:
         | Considering that even the reporter responded to requests for
         | clarification with yet another AI slop, they likely lack the
         | technical background.
        
       | hx8 wrote:
       | It's probably a net positive that ChatGPT isn't going around
       | detecting zero day vulnerabilities. We should really be saving
       | those for the state actors to find.
        
       | vessenes wrote:
        | Reading the straw that broke the camel's back report illustrates
       | the problem really well: https://hackerone.com/reports/3125832 .
       | This shit must be infuriating to dig through.
       | 
       | I wonder if reputation systems might work here - you could give
       | anyone who id's with an AML/KYC provider some reputation, enough
       | for two or three reports, let people earn reputation digging
       | through zero rep submissions and give someone like 10,000
       | reputation for each accurate vulnerability found, and 100s for
       | any accurate promoted vulnerabilities. This would let people
       | interact anonymously if they want to edit, quickly if they found
       | something important and are willing to AML/KYC, and privilege
       | quality people.
       | 
       | Either way, AI is definitely changing economics of this stuff, in
       | this case enshittifying first.
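        | 
        | Read literally, the bookkeeping might look something like the
        | sketch below. The 10,000 / hundreds rewards and the KYC starting
        | balance come from the paragraph above; the submission cost and
        | triage reward are invented just to make the mechanics concrete:
        | 
        |     KYC_START = 300       # enough for two or three reports
        |     REPORT_COST = 100     # invented: a submission spends rep
        |     TRIAGE_REWARD = 10    # invented: for vetting 0-rep reports
        |     VALID_VULN = 10_000   # per accurate vulnerability
        |     PROMOTED_VULN = 100   # per accurate promoted vulnerability
        | 
        |     class Reporter:
        |         def __init__(self, kyc=False):
        |             self.rep = KYC_START if kyc else 0
        | 
        |         def can_submit(self):
        |             return self.rep >= REPORT_COST
        | 
        |         def submit(self):
        |             assert self.can_submit()
        |             self.rep -= REPORT_COST
        | 
        |     # A KYC'd researcher can file right away; an anonymous
        |     # account has to earn its way in by triaging first.
        |     anon, kycd = Reporter(), Reporter(kyc=True)
        |     assert kycd.can_submit() and not anon.can_submit()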
        
         | emushack wrote:
         | Reputation systems for this kind of thing sounds like rubbing
          | some anti-itch cream on a bullet wound. I feel like the problem
         | seems to me to be behavior, not a technology issue.
         | 
         | Personally I can't imagine how miserable it would be for my
         | hard-earned expertise to be relegated to sifting through SLOP
         | where maybe 1 in hundreds or even thousands of inquiries is
         | worth any time at all. But it also doesn't seem prudent to just
         | ignore them.
         | 
         | I don't think better ML/AI technology or better information
         | systems will make a significant difference on this issue. It's
         | fundamentally about trust in people.
        
           | delusional wrote:
           | I consider myself a left leaning soyboy, but this could be
           | the outcome of too "nice" of a discourse. I won't advocate
           | for toxicity, but I am considering if we bolster the self-
           | image of idiots when we refuse to call them idiots. Because
           | you're right, this is fundamentally a people problem,
           | specifically we need people to filter this themselves.
           | 
           | I don't know where the limit would go.
        
             | orthecreedence wrote:
             | Shame is a useful social tool. It can be overused or
             | underused, but it's still a tool and people like this
             | should be made to publicly answer for their obnoxious and
             | destructive behavior.
        
               | squigz wrote:
               | How?
        
           | squigz wrote:
           | I guess I'm confused by your position here.
           | 
           | > I feel like the problem seems to me to be behavior, not a
           | technology issue.
           | 
           | Yes, it's a behavior issue, but that doesn't mean it can't be
           | solved or at least minimized by technology, particularly as a
           | technology is what's exacerbating the issue?
           | 
           | > It's fundamentally about trust in people.
           | 
           | Who is lacking trust in who here?
        
             | me_again wrote:
             | Vulnerability reports are interesting from a trust point of
             | view, because each party has a different financial
             | incentive. You can't 100% trust the vendor to accurately
             | assess the severity of an issue - they have a lot riding on
             | downplaying an issue in some cases. The person reporting
             | the bug is also likely looking for bounty and reputational
             | benefit, both of which are enhanced if the issue is
             | considered high severity. So a user of the supposedly-
             | vulnerable program can't blindly trust either party.
        
           | Analemma_ wrote:
           | > I feel like the problem seems to me to be behavior, not a
           | technology issue.
           | 
           | To be honest, this has been a grimly satisfying outcome of
           | the AI slop debacle. For decades, the general stance of tech
           | has been, "there is no such thing as a behavioral/social
           | problem, we can always fix it with smarter technology", and
           | AI is taking that opinion and drowning it in a bathtub. You
           | can't fix AI slop with technology because anything you do to
           | detect it will be incorporated into better models until they
           | evade your tests.
           | 
           | We now have no choice but to acknowledge the social element
           | of these problems, although considering what a shitshow all
           | of Silicon Valley's efforts at social technology have been up
           | to now, I'm not optimistic this acknowledgement will actually
           | lead anywhere good.
        
         | bflesch wrote:
         | There is a reputation system already. According to HackerOne's
         | reputation system, this is a credible reporter. It's really bad.
        
           | hedora wrote:
           | The vast majority of developers are 10-100x more likely to
           | find a security hole in a random tool than to spend time
           | improving their reputation on a bug bounty site that pays <
           | 10% of their salary.
           | 
           | That makes it extremely hard to build a reputation system for
           | a site like that. Almost all the accounts are going to be
           | spam, and the highest-quality accounts are going to be freshly
           | created and take ~1 action on the platform.
        
       | uludag wrote:
       | I can imagine that most LLMs, if you ask them to find a security
       | vulnerability in a given piece of code, will make something up
       | completely out of thin air. I've (mistakenly) sent valid code
       | along with an unrelated error, and to this day I get nonsense
       | "fixes" for these errors.
       | 
       | This alignment problem between responding with what the user
       | wants (e.g. a security report, flattering responses) and pushing
       | back against the user seems to be a major issue limiting the
       | effectiveness of such systems.
        
       | rdtsc wrote:
       | > evilginx updated the severity from none to high
       | 
       | Well, the reporter stated in the report that they are open for
       | employment: https://hackerone.com/reports/3125832 Anyone want to
       | hire them? They can play with ChatGPT all day and spam random
       | projects with AI slop.
        
         | gorbachev wrote:
         | Growth hack: hire this person to find vulnerabilities in
         | competitors' products.
        
       | bogwog wrote:
       | If I wanted to slip a vulnerability into a major open source
       | project with a lot of eyes on it, using AI to DDoS their
       | vulnerability report queue so they're less likely to find a real
       | report from someone who caught me seems like an obvious (and
       | easy) step.
       | 
       | Looking at one of the bogus reports, it doesn't even seem like a
       | real person. Why do this if you're not trying to gain
       | recognition?
        
         | jsheard wrote:
         | > Why do this if you're not trying to gain recognition?
         | 
         | They're doing it for money; a handful of their reports did
         | result in payouts. Those reports aren't public though, so
         | there's no way to know if they actually found real bugs or the
         | reviewer rubber-stamped them without doing their due diligence.
        
       | zulban wrote:
       | Shame they need to put up with that spam. However, every big open
       | source project has by now had good contributions with "AI help".
       | Many millions of developers are using AI as a tool, much like
       | they use Google.
        
         | eestrada wrote:
         | And that increase in LLM usage has resulted in an enormous
         | increase in code duplication and code churn in said open
         | source projects. Any benefit from new features implemented by
         | LLMs is being offset by the tech debt caused by duplication and
         | the maintenance burden of constantly reverting bad code (i.e.
         | churn).
         | 
         | https://arc.dev/talent-blog/impact-of-ai-on-code/
        
           | zulban wrote:
           | Yes. The internet has also created a ton of email spam but I
           | wouldn't say "we've never seen a single valid contribution to
           | our project that had internet help". Many millions of
           | developers are using AI. Sometimes in a good way. When that
           | results in a good MR, they likely don't even mention they
            | used Google, or stackoverflow, or AI; they just submit.
        
             | Analemma_ wrote:
             | I mean, I certainly _would_ say "I've never seen a single
             | commercial email that was valid and useful to me as a
             | customer", and this is entirely because of spam. Any
             | unsolicited email with commercial intent goes instantly,
             | reflexively, to the trash (plus whatever my spam filters
             | prevent me from ever seeing to begin with). This presumably
             | has cost me the opportunity to purchase things I genuinely
             | would've found useful, and reduced the effectiveness of
             | well-meaning people doing cold outreach for actually-good
             | products, but spam has left me no choice.
             | 
             | In that sense, it has destroyed actual value as the noise
             | crowds out the signal. AI could easily do the same to,
             | like, all Internet communication.
        
             | marcosdumay wrote:
             | If they never got a valid contribution to their project
             | through the internet, yes, they would say exactly that.
             | 
             | They don't say it because the internet provides actual
             | value.
        
         | joaohaas wrote:
         | I unironically can't remember a single case where AI managed to
         | find a vulnerability in an open source project.
         | 
          | And most contributions with 'AI help' tend not to follow the
          | coding practices of the code base itself, while also generally
          | producing worse code.
          | 
          | Also, just as with HTTP, where 'if curl does it, it's probably
          | right', I tend to think that 'if the curl team says something
          | is bullshit, it's probably bullshit'.
        
           | zulban wrote:
           | You wouldn't say "the Google search engine contributed to an
           | open source project". Similarly, many millions of developers
           | are using AI. Sometimes in a good way. When that results in a
           | good MR, they likely don't even mention they used Google, or
            | stackoverflow, or AI; they just submit.
        
       | molticrystal wrote:
       | There is, or at various times was, nitter for twitter, Invidious
       | for youtube, Imginn for instagram, and even many variations for
       | hackernews like hckrnews.com, plus ones that are lighter, work
       | better in terminals, etc.
       | 
       | Anything for linkedin, a light interface that doesn't require
       | logging in?
       | 
       | I pretty much stopped going to linkedin years ago because they
       | started aggressively pushing people to log in. I was shocked
       | this post works without login. I don't know if that is how it has
       | always been, or if that is a recent change, or what. It would be
       | nice to have alternative interfaces.
       | 
       | In case some people are getting gated here is their post:
       | 
       | ===
       | 
       | Daniel Stenberg curl CEO. Code Emitting Organism
       | 
       | That's it. I've had it. I'm putting my foot down on this
       | craziness.
       | 
       | 1. Every reporter submitting security reports on #Hackerone for
       | #curl now needs to answer this question:
       | 
       | "Did you use an AI to find the problem or generate this
       | submission?"
       | 
       | (and if they do select it, they can expect a stream of proof of
       | actual intelligence follow-up questions)
       | 
       | 2. We now ban every reporter INSTANTLY who submits reports we
       | deem AI slop. A threshold has been reached. We are effectively
       | being DDoSed. If we could, we would charge them for this waste of
       | our time.
       | 
       | We still have not seen a single valid security report done with
       | AI help.
       | 
       | ---
       | 
       | This is the latest one that really pushed me over the limit:
       | https://hackerone.com/reports/3125832
       | 
       | ===
        
         | perching_aix wrote:
          | > Anything for linkedin, a light interface that doesn't
          | require logging in?
         | 
         | I just opened the site with JS off on mobile. No issues.
        
       | ianbutler wrote:
       | Counterpoint: we have a CVE attributable to ours, and I suspect
       | the difference is that my co-founder was an offensive kernel
       | researcher, so our system is tuned for this in a way your
       | average...ambulance chaser is unable to match.
       | 
       | https://blog.bismuth.sh/blog/bismuth-found-the-atop-bug
       | 
       | https://www.cve.org/CVERecord?id=CVE-2025-31160
       | 
       | The number of bad reports curl in particular has gotten is
       | staggering, and it's all from people with no background just
       | latching onto a tool that won't elevate them.
       | 
       | Edit: Also, shoutout to one of our old professors, Brendan Dolan-
       | Gavitt, who now works on offensive security agents and has a
       | highly ranked vulnerability agent, XBOW.
       | 
       | https://hackerone.com/xbow?type=user
       | 
       | So these tools are there and doing real work; it's just that
       | there are so many people looking for a quick buck that you
       | really have to tease the signal out from the bs.
        
         | pizzalife wrote:
         | I would try to find a better example than CVE-2025-31160. If
         | you ask me, this kind of 'vulnerability' is CVE spam.
        
           | ianbutler wrote:
            | Except, if you read the blog post, we helped a very confused
            | maintainer after this was dropped on them on hacker news
            | with no explanation except "oooh potential scary heap vuln".
        
       | danielvf wrote:
       | I handle reports for a one million dollar bug bounty program.
       | 
       | AI spam is bad. We've also never had a valid report generated by
       | an LLM (that we could tell).
       | 
       | People using them will take any explanation of why a bug report
       | is not valid, any questions, or any requests for clarification,
       | and run them back through the same confused LLM. The second pass
       | through generates even deeper nonsense.
       | 
       | It's making even responding with anything but "closed as spam"
       | not worth the time.
       | 
       | I believe that one day there will be great code-examining
       | security tools. But people believe in their hearts that that day
       | is today, and that they are riding the backs of fire breathing
       | hack dragons. It's the people that concern me. They cannot tell
       | the difference between truth and garbage.
        
         | VladVladikoff wrote:
         | This sounds more like an influx of scammers than security
         | researchers leaning too hard on AI tools. The main problem is
         | the bounty structure. And I don't think this influx of low-
         | quality reports will go away, or even get any less aggressive,
         | as long as there is money to attract the scammers. Perhaps
         | these bug bounty programs need to develop an automatic
         | pass/fail tester of all submitted bug code, to ensure the
         | reporter really found a bug, before the report is submitted to
         | the vendor.
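         | 
         | A minimal sketch (in Python, with hypothetical names like
         | ./target and crash.txt) of what such a gate might look like,
         | assuming reporters must attach a runnable PoC command; a real
         | program would also need proper sandboxing around it:
         | 
         |   import subprocess
         | 
         |   def poc_triggers_crash(poc_cmd, timeout_s=30):
         |       # Run the reporter-supplied PoC. On POSIX, a negative
         |       # return code means the process died from a signal
         |       # (e.g. -11 for SIGSEGV), a crude sign of a real bug.
         |       try:
         |           result = subprocess.run(poc_cmd, capture_output=True,
         |                                   timeout=timeout_s)
         |       except subprocess.TimeoutExpired:
         |           return False  # a hung PoC proves nothing here
         |       return result.returncode < 0
         | 
         |   # Hypothetical report: "./target segfaults on crash.txt"
         |   if poc_triggers_crash(["./target", "crash.txt"]):
         |       print("Crash reproduced; forward to a human reviewer")
         |   else:
         |       print("Nothing demonstrated; bounce back to reporter")
         | 
         | A crash check like this only catches memory-safety bugs, of
         | course; logic bugs would still need a human, but it would at
         | least filter out the pure slop.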
        
         | datatrashfire wrote:
         | > I believe that one day there will be great code examining
         | security tools.
         | 
          | Based on the current state, what makes you think this is a
          | given?
        
       | meindnoch wrote:
       | The solution is simple. Before submitting a security report, the
       | reporter must escrow $10 which is awarded to the reviewer if the
       | submission turns out to be AI slop.
        
       ___________________________________________________________________
       (page generated 2025-05-06 23:01 UTC)