[HN Gopher] Brandolini's Law
       ___________________________________________________________________
        
       Brandolini's Law
        
       Author : xrd
       Score  : 67 points
       Date   : 2023-04-14 10:47 UTC (12 hours ago)
        
 (HTM) web link (en.wikipedia.org)
 (TXT) w3m dump (en.wikipedia.org)
        
       | dang wrote:
       | Related:
       | 
       |  _Brandolini's Law_ -
       | https://news.ycombinator.com/item?id=25956905 - Jan 2021 (59
       | comments)
        
       | rpastuszak wrote:
       | Aka Gish Gallop (if you do it really, really fast)
        
       | jongjong wrote:
       | That's why I mostly look at incentives.
       | 
       | Scientists funded by special interest groups could easily design
       | studies in a way that shows the results desired by their
       | benefactors. In fact, that's exactly what happened with Big
       | Tobacco to 'disprove' the link between smoking and cancer. It's
       | easier to introduce bias into a study than to prove that bias was
       | introduced. It's like with complex computer simulations: if you
       | change one seemingly insignificant variable, the simulation could
       | give you a completely different result.
       | 
       | Looking at incentives can lead you astray in the short term (due
       | to complexity of incentives or hidden incentives), but in the
       | long run, incentives provide the most predictable framework for
       | figuring out the behavior of people and animals.
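       | 
       | A toy sketch of that sensitivity point (purely illustrative;
       | the logistic map here stands in for "a complex simulation"):
       | 
       |   def run(r, x0=0.5, steps=100):
       |       # iterate the logistic map x -> r * x * (1 - x)
       |       x = x0
       |       for _ in range(steps):
       |           x = r * x * (1 - x)
       |       return x
       | 
       |   print(run(3.9))     # one parameter setting...
       |   print(run(3.9001))  # ...nudged in the 4th decimal ends up
       |                       # somewhere completely different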
        
         | breck wrote:
         | I agree with this.
         | 
         | Especially with multi-agent systems (anything to do with people
         | or biology), which are inherently vastly more complex and
         | harder to predict than mechanical systems, it's very important
         | to know the incentives of the people making the statements.
         | 
         | Most often they are overconfident, with a bias toward their
         | own incentives.
        
         | Paul-Craft wrote:
         | [flagged]
        
       | sorokod wrote:
       | That one order of magnitude number should be revisited given the
       | recent advances in AI.
        
         | Paul-Craft wrote:
         | Hah. In which direction?
        
       | 3pm wrote:
       | I think it can also be called 'Burden of proof' inversion.
       | https://en.wikipedia.org/wiki/Burden_of_proof_(philosophy)
        
       | renewiltord wrote:
       | This is one of those things that didn't exist before LLMs. Unlike
       | AI agents, people usually form an informed opinion of things and
       | don't hallucinate facts with great confidence.
        
         | jen20 wrote:
         | Alberto tweeted this about 10 years ago, and was discussing it
         | before then. I heard it directly 'out of the horse's mouth' in
         | London around that time.
        
         | gowld wrote:
         | Check the date. And it wasn't new then.
         | 
         | See also:
         | 
         | "Gish Gallop."
         | 
         | "A lie gets halfway around the world before the truth puts it's
         | shows on."
         | 
         | The parable of the pillow feathers
         | https://www.chabad.org/library/article_cdo/aid/812861/jewish...
        
           | renewiltord wrote:
           | Pretty astounding that all these people had LLMs back then
           | and didn't release them. It's also crazy that the rabbi was
           | talking with an LLM back then. The folk story precedes modern
           | computers. How did he run inference? There are probably many
           | such secrets in our ancient ancestors' pasts. Who knows how
           | many h100s they had back then?
        
         | jimhefferon wrote:
         | Try cable sometime.
        
           | renewiltord wrote:
           | I watched some cable news channels once at a friend's. It
           | appears they have already embraced the AI revolution since
           | they had newscasters reading things that seemed unlikely and
           | which I later confirmed to be untrue. The only conclusion is
           | that LLMs hallucinated the text and AI-generated newscasters
           | read it.
           | 
           | Humans would never make the mistakes these guys did. I think
           | we should regulate this news that isn't factual.
        
       | fdhfdjkfhdkj wrote:
       | [dead]
        
       | waynesonfire wrote:
       | Same thing applies to fixing tech debt.
        
       | DiscourseFan wrote:
       | It's usually not just random bullshit that gets spread around,
       | but believable stories that follow a similar sort of logic to
       | those we generally agree are true. Though what seems
       | unbelievable to you may be dramatically different from what
       | others consider so, and some things you think are true others
       | might call bullshit. In the end, though, the material
       | application of knowledge always demonstrates its validity in
       | practice.
        
         | naikrovek wrote:
         | yeah. people never (or almost never) make an effort to verify
         | things that they believe are true.
         | 
         | therefore, if it is believable, it is believed.
        
           | airstrike wrote:
           | because learning something that contradicts your beliefs is
           | actually stressful
           | 
           | https://en.wikipedia.org/wiki/Cognitive_dissonance
        
             | tedunangst wrote:
             | I'll see it when I believe it.
        
       | jihadjihad wrote:
       | Kind of like the Mark Twain quote... "A lie can travel halfway
       | around the world while the truth is still putting on its shoes."
        
       | User23 wrote:
       | Adding to the confusion is that now "debunking" is often a
       | euphemism for a kind of bullshitting.
        
       | dtolnay wrote:
       | I think about pull requests the same way:
       | 
       | As a library maintainer, closing and empathetically conveying why
       | a pull request is not a net benefit to the project is an order of
       | magnitude more effort than what it takes to throw up not-well-
       | motivated pull requests on someone else's project.
        
       | pachico wrote:
       | I met him last year when he was invited by the company to talk
       | about Event Storming.
       | 
       | He's a very smart and nice guy.
        
         | davidw wrote:
         | Yeah I met him at a conference in Firenze a few years back.
         | Really nice, bright person.
        
         | paganel wrote:
         | > talk about Event Storming.
         | 
         | Didn't know about it, so I had to check it out [1]. Looks like
         | another bleak corporate thing where they have to gamify and
         | infantilise a so-called "process" in order to fool some execs
         | into paying money to newfound experts in this new business
         | technique. Capitalism is really doomed with these kinds of
         | people running the show, Schumpeter was right.
         | 
         | [1] https://en.wikipedia.org/wiki/Event_storming
        
           | jen20 wrote:
           | If you've ever worked with the kind of organisation where
           | this technique is valuable, you'll understand it's exactly
           | that: fooling people into telling you what they actually need
           | to build, instead of what their Serious Businessperson
           | Cosplay persona tells them they need.
        
       | cs702 wrote:
       | Interestingly, the amount of effort needed to get large language
       | models (LLMs) to generate trustworthy information in a reliable
       | manner often seems to be an order of magnitude bigger than the
       | amount of effort needed to get them to generate bulls#!t.
        
       | rukuu001 wrote:
       | Brandolini is a great guy! He's also behind 'event storming'
       | which can be a pretty nice way of getting shared understanding
       | and design of a system.
        
         | rpastuszak wrote:
         | I've been running event storming sessions for years and just
         | realised that I had two different Brandolinis in my head.
        
       | ucirello wrote:
       | I wonder if Brandolini was referring to
       | https://www.youtube.com/watch?v=ly2BaYeej2Q
        
       | santoshalper wrote:
       | GPT makes this 100x or maybe even 1000x worse. On the other
       | hand, can we potentially train generative AI to detect and
       | refute BS as well? It may be our only hope.
        
         | rpastuszak wrote:
         | Bonus: a quick tutorial on how to use GPT to scale up
         | attribution bias: https://sonnet.io/posts/emotive-conjugation/
        
         | wolfram74 wrote:
         | Neal Stephenson's Anathem [0], which revolves around
         | epistemology a great deal, coined the term Artificial Inanity
         | for AI.
         | 
         | [0]https://englishwotd.wordpress.com/2014/02/17/artificial-
         | inan...
        
         | nerpderp82 wrote:
         | GPT is also pretty good at cutting through BS. It can detect
         | logical fallacies, for instance, or explain a lack of rigor in
         | a discussion. It depends on how you fine-tune it: couple it
         | with an external fact database and you could get it to cite
         | its sources. Couple it with a Prolog engine AND a fact
         | database and it could modus pwnens ur ass.
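         | 
         | A minimal sketch of that coupling (the names and the llm()
         | stub are just placeholders standing in for a real model call):
         | 
         |   FACT_DB = {
         |       "F1": "Brandolini tweeted the principle in 2013.",
         |       "F2": "Refuting bullshit takes 10x the effort of"
         |             " producing it.",
         |   }
         | 
         |   def retrieve(question):
         |       # naive keyword overlap against the fact store
         |       q = set(question.lower().split())
         |       hits = {}
         |       for fid, fact in FACT_DB.items():
         |           if q & set(fact.lower().split()):
         |               hits[fid] = fact
         |       return hits
         | 
         |   def llm(prompt):
         |       # stand-in for a real model call
         |       return "(answer grounded in the cited facts)"
         | 
         |   def answer(question):
         |       facts = retrieve(question)
         |       pairs = [f"[{k}] {v}" for k, v in facts.items()]
         |       prompt = ("Use ONLY these facts, cite their IDs:\n"
         |                 + "\n".join(pairs) + "\nQ: " + question)
         |       return llm(prompt)
         | 
         |   print(answer("Who stated the principle, and when?"))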
        
           | xedrac wrote:
           | That's funny, because ChatGPT feeds me BS quite often. It's
           | only when I call it out that it corrects itself.
        
         | vkou wrote:
         | > On the other hand, can we potentially train generative AI to
         | detect and refute BS as well? It may be our only hope.
         | 
         | LLMs store their training information in an incredibly lossy
         | format. You're going to need some kind of different approach if
         | you want one to tell the difference between plausible-sounding
         | bullshit and implausible-sounding truth.
        
       ___________________________________________________________________
       (page generated 2023-04-14 23:00 UTC)