[HN Gopher] Death by AI
       ___________________________________________________________________
        
       Death by AI
        
       Author : ano-ther
       Score  : 515 points
       Date   : 2025-07-19 14:35 UTC (1 day ago)
        
 (HTM) web link (davebarry.substack.com)
 (TXT) w3m dump (davebarry.substack.com)
        
       | rf15 wrote:
        | With so many reports like this, it's not a question of working
        | out the kinks. Are we getting close to our very own Stop the
        | Slop campaign?
        
         | randcraw wrote:
          | Yeah, after working daily with AI for a decade in a domain
          | where it _does_ work predictably and reliably (image
          | analysis), I continue to be amazed at how many of us trust
          | LLM-based text output as useful. If any human source got
          | their facts wrong this often, we'd surely dismiss them as a
          | counterproductive imbecile.
         | 
         | Or elect them President.
        
           | BobbyTables2 wrote:
           | HAL 9000 in 2028!
        
           | locallost wrote:
            | I am beginning to wonder why I use it, but the idea of it is
            | so tempting. Try to google something and get stuck because
            | it's difficult to find, or ask and get an instant response.
            | It's not hard to guess which one is more inviting, but it
            | ends up being a huge time sink anyway.
        
         | trod1234 wrote:
         | Regulation with active enforcement is the only civil way.
         | 
         | The whole point of regulation is for when the profit motive
         | forces companies towards destructive ends for the majority of
         | society. The companies are legally obligated to seek profit
         | above all else, absent regulation.
        
           | Aurornis wrote:
           | > Regulation with active enforcement is the only civil way.
           | 
           | What regulation? What enforcement?
           | 
           | These terms are useless without details. Are we going to fine
           | LLM providers every time their output is wrong? That's the
           | kind of proposition that sounds good as a passing angry
           | comment but obviously has zero chance of becoming a real
           | regulation.
           | 
            | Any country that instituted a regulation like that would see
            | all of the LLM advancements and research instantly leave and
            | move to other countries. People who use LLMs would sign up
            | for VPNs and carry on with their lives.
        
             | trod1234 wrote:
             | Regulations exist to override profit motive when
             | corporations are unable to police themselves.
             | 
             | Enforcement ensures accountability.
             | 
             | Fines don't do much in a fiat money-printing environment.
             | 
             | Enforcement is accountability, the kind that stakeholders
             | pay attention to.
             | 
              | Something appropriate would be this: if AI is used in a
              | safety-critical or life-sustaining environment and harm or
              | loss is caused, those who chose to use it are guilty until
              | they prove their innocence. I think that would be
              | sufficient, not just civilly but also criminally; that
              | person and decision must be documented ahead of time.
             | 
             | > Any country who instituted a regulation like that would
             | see all of the LLM advances and research instantly leave
             | and move to other countries.
             | 
              | This is a fallacy. It's a spectrum: research would still
              | occur, tempered by the law and accountability, instead of
              | the wild west, where it's much more profitable to destroy
              | everything through chaos. Chaos is quite profitable until
              | it spreads systemically and ends everything.
             | 
             | AI integration at a point where it can impact the operation
             | of nuclear power plants through interference (perceptual or
             | otherwise) is just asking for a short path to extinction.
             | 
              | It's quite reasonable that the needs of national security
              | trump private business making profit in a destructive way.
        
               | Ukv wrote:
               | > Something appropriate would be where if AI was used in
               | a safety-critical or life-sustaining environment and harm
               | or loss was caused; those who chose to use it are guilty
               | until they prove they are innocent I think would be
               | sufficient, not just civil but also criminal
               | 
               | Would this guilty-until-proven-innocent rule apply also
               | to non-ML code and manual decisions? If not, I feel it's
               | kind of arbitrarily deterring certain approaches
                | potentially _at the cost_ of safety ("sure this CNN
               | blows traditional methods out of the water in terms of
               | accuracy, but the legal risk isn't worth it").
               | 
               | In most cases I think it'd make more sense to have fines
               | and incentives for above-average and below-average
               | incident rates (and liability for negligence in the worse
               | cases), then let methods win/fail on their own merit.
        
               | trod1234 wrote:
               | > Would this guilty-until-proven-innocent rule apply also
               | to non-ML code and manual decisions?
               | 
                | I would say yes, because the person deciding must be the
                | one making the entire decision, but there are many
                | examples where someone might be paid to just rubber-stamp
                | decisions already made, letting the person who decided to
                | implement the solution off scot-free.
               | 
               | The mere presence of AI (anything based on underlying
               | work of perceptrons) being used accompanied by a loss
               | should prompt a thorough review which corporations
               | currently are incapable of performing for themselves due
               | to lack of consequences/accountability. Lack of
               | disclosure, and the limits of current standing, is
               | another issue that really requires this approach.
               | 
               | The problem of fines is that they don't provide the
               | needed incentives to large entities as a result of money-
               | printing through debt-issuance, or indirectly through
                | government contracts. It's also far easier to employ
               | corruption to work around the fine later for these
               | entities as market leaders. We've seen this a number of
               | times in various markets/sectors like JPM and the 10+
               | year silver price fixing scandal.
               | 
                | Merit of subjective rates isn't something that can be
                | enforced, because it is so easily manipulated. Gross
                | negligence already exists and occurs frighteningly often,
                | but never makes it to court, because proof often requires
                | showing standing to get discovery, which isn't generally
                | granted absent a smoking gun or the whim of a judge.
               | 
                | Bad things certainly happen where no one is at fault,
                | but most business structures today are given far too much
                | leeway and have promoted the 3Ds. It's all about: deny,
                | defend, depose.
        
               | Ukv wrote:
               | > > Would this guilty-until-proven-innocent rule apply
               | also to non-ML code and manual decisions?
               | 
               | > I would say yes [...]
               | 
               | So if you're a doctor making manual decisions about how
               | to treat a patient, and some harm/loss occurs, you'd be
               | criminally guilty-until-proven-innocent? I feel it should
               | require evidence of negligence (or malice), and be done
               | under standard innocent-until-proven-guilty rules.
               | 
               | > The mere presence of AI (anything based on underlying
               | work of perceptrons) [...]
               | 
               | Why single out based on underlying technology? If for
               | instance we're choosing a tumor detector, I'd claim
               | what's relevant is "Method A has been tested to achieve
               | 95% AUROC, method B has been tested to achieve 90% AUROC"
               | - there shouldn't be an extra burden in the way of
               | choosing method A.
               | 
                | And it may well be that the perceptron-based method is
                | the one with lower AUROC - just that it should then be
                | discouraged _because it's worse_ than the other methods,
                | not because a special case puts it at a unique legal
                | disadvantage even when safer.
               | 
               | > The problem of fines is that they don't provide the
               | needed incentives to large entities as a result of money-
               | printing through debt-issuance, or indirectly through
               | government contracts.
               | 
               | Large enough fines/rewards should provide large enough
               | incentive (and there would still be liability for
               | criminal negligence where there is sufficient evidence of
               | criminal negligence). Those government contracts can also
               | be conditioned on meeting certain safety standards.
               | 
               | > Merit of subjective rates isn't something that can be
               | enforced
               | 
               | We can/do measure things like incident rates, and have
               | government agencies that perform/require safety testing
               | and can block products from market. Not always perfect,
                | but seems better to me than the company just picking a
                | scapegoat.
        
               | Jensson wrote:
               | > So if you're a doctor making manual decisions about how
               | to treat a patient, and some harm/loss occurs, you'd be
               | criminally guilty-until-proven-innocent?
               | 
               | Yes, that proof is called a professional license, without
               | that you are presumed guilty even if nothing goes wrong.
               | 
               | If we have licenses for AI and then require proof that
               | the AI isn't tampered with for requests then that should
                | be enough, don't you think? But currently it's the wild
                | west.
        
               | Ukv wrote:
               | > Yes, that proof is called a professional license,
               | without that you are presumed guilty even if nothing goes
               | wrong.
               | 
               | A professional license is evidence against the offense of
                | _practicing without a license_, and the burden of proof
               | in such a case still rests on the prosecution to prove
               | beyond reasonable doubt that you did practice without a
               | license - you aren't presumed guilty.
               | 
               | Separately, what trod1234 was suggesting was being
               | guilty-until-proven-innocent when harm occurs (with no
               | indication that it'd only apply to licensed professions).
               | I believe that's unjust, and that the suggestion stemmed
               | mostly from animosity towards AI (maybe similar to
               | "nurses administering vaccines should be liable for every
               | side-effect") without consideration of impact.
               | 
               | > If we have licenses for AI and then require proof that
               | the AI isn't tampered with for requests then that should
               | be enough, don't you think?
               | 
               | Mandatory safety testing for safety-critical applications
               | makes sense (and already occurs). It shouldn't be some
               | rule specific to AI - I want to know that it performs
               | adequately regardless of whether it's AI or a traditional
               | algorithm or slime molds.
        
             | ViscountPenguin wrote:
             | A very simple example would be a mandatory mechanism for
             | correcting mistakes in prebaked LLM outputs, and an ability
             | to opt out of things like Gemini AI Overview on pages about
              | you. Regulation isn't all or nothing; viewing it like that
              | is reductive.
        
         | weatherlite wrote:
         | > Are we getting close to our very own Stop the Slop campaign?
         | 
          | I don't think so. We read about the handful of failures while
          | there are billions of successful queries every day. In fact, I
          | think AI Overviews are sticky and here to stay.
        
           | mepiethree wrote:
           | Are we sure these billions of queries are "successful" for
           | the actual user journey? Maybe this is particular to my
           | circle, but as the only "tech guy" most of my friends and
           | family know, I am regularly asked if I know how to turn off
           | Google AI overviews because many people find them to be
           | garbage
        
             | gtsop wrote:
             | Why on earth are you accepting his premise that there are
             | billions of successful requests? I just asked chatgpt about
             | query success rate and it replied (part):
             | 
             | "...Semantic Errors / Hallucinations On factual queries--
              | especially legal ones--models hallucinate roughly 58-88%
             | of the time
             | 
             | A journalism-focused study found LLM-based search tools
             | (e.g., ChatGPT Search, Perplexity, Grok) were incorrect in
             | 60%+ of news-related queries
             | 
             | Specialized legal AI tools (e.g., Lexis+, Westlaw) still
             | showed error rates between 17% and 34%, despite being
             | domain-tuned "
        
       | draw_down wrote:
       | Man, this guy is still doing it. Good for him! I used to read his
       | books (compendia of his syndicated column) when I was a kid.
        
       | hibert wrote:
       | Leave it to a journalist to play chicken with one of the most
       | powerful minds in the world on principle.
       | 
       | Personally, if I got a resurrection from it, I would accept the
       | nudge and do the political activism in Dorchester.
        
       | jwr wrote:
       | I'd say this isn't just an AI overview thing. It's a Google
       | thing. Google will sometimes show inaccurate information and
       | there is usually no way to correct it. Various "feedback" forms
       | are mostly ignored.
       | 
       | I had to fight a similar battle with Google Maps, which most
       | people believe to be a source of truth, and it took years until
       | incorrect information was changed. I'm not even sure if it was
       | because of all the feedback I provided.
       | 
        | I see Google as a firehose of information that they spit at me
        | ("feed"); they are too big to be concerned about any
        | inconsistencies, as these don't hurt their business model.
        
         | muglug wrote:
         | No, this is very much an AI overview thing. In the beginning
         | Google put the most likely-to-match-your-query result at the
         | top, and you could click the link to see whether it answered
         | your question.
         | 
         | Now, frequently, the AI summaries are on top. The AI summary
         | LLM is clearly a very fast, very dumb LLM that's cheap enough
         | to run on webpage text for every search result.
         | 
          | That was a product decision, and a very bad one. Currently a
          | search for "suide side squad" yields
         | 
         | > The phrase "suide side squad" appears to be a misspelling of
         | "Suicide Squad"
        
           | weatherlite wrote:
           | > That was a product decision, and a very bad one.
           | 
           | I don't know that it's a bad decision, time will judge it.
           | Also, we can expect the quality of the results to improve
           | over time. I think Google saw a real threat to their search
           | business and had to respond.
        
             | gambiting wrote:
             | The threat to their search business had nothing to do with
             | AI but with the insane amount of SEO-ing they allowed to
             | rake in cash. Their results have been garbage for years,
             | even for tech stuff where they traditionally excelled -
             | searching for "what does class X do in .NET" yields several
             | results for paid programming courses rather than the actual
             | answer, and that's not an AI problem.
        
               | bee_rider wrote:
               | SEO-wise (and in no other way), I think we should have
               | more sympathy for Google. They are just... losing at the
               | cat-and-mouse game. They are playing cat against a whole
                | world of mice; I don't think anyone other than pre-
               | decline Google could win it.
        
               | Arainach wrote:
               | The number of mice has grown exponentially. It's not
               | clear anyone could have kept up.
               | 
               | Millions, probably tens of millions of people have jobs
               | trying to manipulate search results - with billions of
               | dollars of resources available to them. With no internal
               | information, it's safe to say no more than thousands of
               | Googlers (probably fewer) are working to combat them.
               | 
               | If every one of them is a 10x engineer they're still
               | outnumbered by more than 2 orders of magnitude.
        
               | anonymars wrote:
               | I understand what you're saying, but also supposedly at
               | some point quality deliberately took a back seat to
               | "growth"
               | 
               | https://www.wheresyoured.at/the-men-who-killed-google/
               | 
               | > The key event in the piece is a "Code Yellow" crisis
               | declared in 2019 by Google's ads and finance teams, which
               | had forecast a disappointing quarter. In response,
               | Raghavan pushed Ben Gomes -- the erstwhile head of Google
               | Search, and a genuine pioneer in search technology -- to
               | increase the number of queries people made by any means
               | necessary.
               | 
               | (Quoting from this follow-up post:
               | https://www.wheresyoured.at/requiem-for-raghavan/)
        
               | h2zizzle wrote:
                | No, they created the problem by not dealing with such
                | websites swiftly and brutally. Instead, they encouraged
                | it.
        
               | zargon wrote:
               | Google isn't even playing that game, they're playing the
               | line-go-up game, which precludes them from dealing with
               | SEO abuse in an effective way.
        
               | lelanthran wrote:
               | > SEO-wise (and in no other way), I think we should have
               | more sympathy for Google. They are just... losing at the
               | cat-and-mouse game.
               | 
               | I don't think they are; they have realised (quite
               | accurately, IMO) that users would still use them even if
               | they boosted their customers' rankings in the results.
               | 
                | They could, _right now_, switch to a model that
                | penalises pages for each ad. They don't. They could,
                | _right now_, penalise highly monetised "content" like
                | courses and crap. They don't do that either.[1]
               | 
               | If Kagi can get better results with a fraction of the
               | resources, there is no argument to be made that Google is
               | playing a losing game.
               | 
               | --------------------------------------
               | 
                | [1] All the SEO stuff is damn easy to pick out; any page
                | that is heavily monetised (by ads, or similar commercial
                | offering) is _very very_ easy to bin. A simple "don't
                | show courses unless the search query contains the word
                | courses" type of rule is nowhere near computationally
                | expensive. Recording the number of ads on a page when
                | crawling is equally cheap. A sketch of such a rule
                | follows below.
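                | 
                | To make that concrete, a minimal sketch of this kind of
                | demotion rule (illustrative only; the page fields are
                | hypothetical crawl-time metadata, not anything Google
                | actually exposes):
                | 
                |   # Demote heavily monetised pages and unasked-for
                |   # courses; everything else keeps its base ranking.
                |   def rerank(query, results):
                |       def penalty(page):
                |           p = 0.1 * page["ad_count"]  # counted at crawl time
                |           if page["is_course"] and "course" not in query.lower():
                |               p += 5.0  # course content nobody asked for
                |           return p
                |       return sorted(results,
                |                     key=lambda pg: pg["base_score"] - penalty(pg),
                |                     reverse=True)
                | 
                |   results = [
                |       {"url": "seo-course.example", "base_score": 9.0,
                |        "ad_count": 30, "is_course": True},
                |       {"url": "docs.example", "base_score": 8.5,
                |        "ad_count": 1, "is_course": False},
                |   ]
                |   # The plain docs page now outranks the ad-heavy course:
                |   print([pg["url"] for pg in
                |          rerank("what does HttpClient do in .NET", results)])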
        
             | bee_rider wrote:
             | They are doing an OK job of making AI look like annoying
             | garbage. If that's the plan... actually, it might be
             | brilliant.
        
               | weatherlite wrote:
                | I can't argue here; for me they are mostly useful, but I
                | get that one or two catastrophic failures can make
                | someone completely distrust them. But the actual judges
                | are gonna be the masses; we'll see. For now adoption
                | seems quite strong.
        
           | flomo wrote:
           | Right, the classic google search results are still there. But
           | even before the AI Overview, Google's 'en' plan has been to
           | put as many internal links at the top of the page as
           | possible. I just tried this and you have to scroll way down
           | below the fold to find Barry's homepage or substack.
        
             | h2zizzle wrote:
             | No, the search queries are likely run through a similar
             | "prompt modification" process as on many AI platforms, and
             | the results themselves aren't ranked anything like they
             | used to be. And, of course, Google killed the functionality
             | of certain operators (+, "", etc.) years ago. Classic
             | Google Search is very much dead.
        
               | yonatan8070 wrote:
               | Was there ever an announcement regarding the elimination
               | of search operators? Or does Google still claim they are
               | real?
        
               | h2zizzle wrote:
               | Nothing for "" afaik. + was killed to make Google+
               | discoverable (or so Google claimed at the time).
        
         | hughw wrote:
         | Well it was accurate if you were asking about the Dave Barry in
         | Dorchester.
        
           | omnicognate wrote:
           | He won a Pulitzer too? Small world.
        
         | o11c wrote:
         | I remember when the biggest gripe I had with Google was that
         | when I searched for Java documentation (by class name), it
         | defaulted to showing me the version for 1.4 instead of 6.
        
           | sroussey wrote:
            | Same problem with LLMs, particularly if a new version was
            | released in the last year.
        
         | PontifexMinimus wrote:
         | > It's a Google thing. Google will sometimes show inaccurate
         | information and there is usually no way to correct it.
         | 
         | Surely there is a way to correct it: getting the issue on the
         | front page of HN.
        
         | kjkjadksj wrote:
          | Google Maps is so bad with its auto content. Ultra-private
          | country club? Let's mark the cart paths as full bike paths.
          | Cemetery? Also bike paths. Random spit of sidewalk and grass
          | between an office building and its parking lot? Believe it or
          | not, also bike paths.
        
           | sethherr wrote:
           | Biking is great tho
        
           | xp84 wrote:
           | I mean, that last one sounds functionally useful, since it
           | would indeed be better to take the random concrete paths
           | inside an office property (that wasn't a closed campus) than
           | to ride on the expressway that fronts it, if the "paths" are
           | going where you're going.
        
           | aimor wrote:
           | I went to a party today at a park. Google maps wanted me to
           | drive my car on the walking path to the picnic pavilion.
           | Here, you can get the same directions: https://www.google.com
           | /maps/dir/38.8615917,-77.1034763/Alcov...
        
             | throwaway2037 wrote:
             | This really made me laugh. Has Will Ferrell already made a
             | skit for Funny or Die where he precisely follows Google
             | Maps driving instructions and runs over a bunch of old
             | people and children? It could be very funny.
        
             | michaelcampbell wrote:
             | Waze (also owned by Google) seems to get it close(r), but
             | it should be noted that actually driving to/from those
             | addresses can't really be done. You can drive to where you
             | might be able to SEE the destination, but not really get
             | there.
             | 
             | https://www.waze.com/live-
             | map/directions/us/va/arlington/alc...
        
           | M4v3R wrote:
            | For up-to-date bike paths, at least where I live, I hear
            | very good things about maps.me (based on OSM data).
        
         | cosmical65 wrote:
         | > I'd say this isn't just an AI overview thing. It's a Google
         | thing. Google will sometimes show inaccurate information and
         | there is usually no way to correct it.
         | 
         | Well, in this case the inaccurate information is shown because
         | the AI overview is combining information about two different
         | people, rather than the sources being wrong. With traditional
         | search, any webpages would be talking about one of the two
         | people and contain only information about them. Thus, I'd say
         | that this problem is specific to the AI overview.
        
           | jamesrcole wrote:
           | The science fiction author Greg Egan has been "battling" with
           | Google for many years because, even though there are zero
           | photos of him on the internet, Google insists that certain
           | photos are of him. This was all well before Google started
           | using AI. He's written about it here:
           | https://gregegan.net/ESSAYS/GOOGLE/Google.html
        
         | KolibriFly wrote:
         | Google doesn't really have an incentive to prioritize accuracy
         | at the individual level, especially when the volume of content
         | makes it easy for them to hide behind scale
        
         | bokkies wrote:
         | Back in 2015 I walked 2 miles to a bowling alley tagged on
         | Google maps (in Northwich, England) with my then gf...imagine
         | our surprise when we walked in to a steamy front room and
         | reception desk, my gf asks 'is this the bowling alley' to which
         | a glistening man in a tank top replies 'this is a gay and
         | lesbian sauna love'. We beat a hasty retreat but I imagine they
         | were having more fun than bowling in there
        
       | _ache_ wrote:
        | Can you please re-consult a physician? I just checked on
        | ChatGPT, and I'm pretty confident you are dead.
        
       | devinplatt wrote:
       | This reminds me a lot of the special policies Wikipedia has
       | developed through experience about sensitive topics, like
       | biographies of living persons, deaths, etc.
        
         | pyman wrote:
         | I'm worried about this. Companies like Wikipedia spent years
         | trying to get things right, and now suddenly Google and
         | Microsoft (including OpenAI) are using GenAI to generate
         | content that, frankly, can't be trusted because it's often made
         | up.
         | 
         | That's deeply concerning, especially when these two companies
         | control almost all the content we access through their search
         | engines, browsers and LLMs.
         | 
         | This needs to be regulated. These companies should be held
         | accountable for spreading false information or rumours, as it
         | can have unexpected consequences.
        
           | Aurornis wrote:
           | > This needs to be regulated. They should be held accountable
           | for spreading false information or rumours,
           | 
           | Regulated how? Held accountable how? If we start fining LLM
           | operators for pieces of incorrect information you might as
           | well stop serving the LLM to that country.
           | 
           | > since it can have unexpected consequences
           | 
           | Generally you hold the person who takes action accountable.
           | Claiming an LLM told you bad information isn't any more of a
           | defense than claiming you saw the bad information on a Tweet
           | or Reddit comment. The person taking action and causing the
           | consequences has ownership of their actions.
           | 
           | I recall the same hand-wringing over early search engines:
           | There was a debate about search engines indexing bad
           | information and calls for holding them accountable for
           | indexing incorrect results. Same reasoning: There could be
            | consequences. The outrage died out as people realized they
           | were tools to be used with caution, not fact-checked and
           | carefully curated encyclopedias.
           | 
           | > I'm worried about this. Companies like Wikipedia spent
           | years trying to get things right,
           | 
           | Would you also endorse the same regulations against
           | Wikipedia? Wikipedia gets fined every time incorrect
           | information is found on the website?
           | 
           | EDIT: Parent comment was edited while I was replying to add
           | the comment about outside of the US. I welcome some country
           | to try regulating LLMs to hold them accountable for
           | inaccurate results so we have some precedent for how bad of
           | an idea that would be and how much the citizens would switch
           | to using VPNs to access the LLM providers that are turned off
           | for their country in response.
        
             | pyman wrote:
             | If Google accidentally generates an article claiming a
             | politician in XYZ country is corrupt the day before an
             | election, then quietly corrects it after the election,
             | should we NOT hold them accountable?
             | 
             | Other companies have been fined for misleading customers
             | [0] after a product launch. So why make an exception for
             | Big Tech outside the US?
             | 
             | And why is the EU the only bloc actively fining US Big
             | Tech? We need China, Asia and South America to follow their
             | lead.
             | 
             | [0] https://en.m.wikipedia.org/wiki/Volkswagen_emissions_sc
             | andal
        
               | jdietrich wrote:
               | Volkswagen intentionally and persistently lied to
               | regulators. In this instance, Google confused one Dave
               | Barry with another Dave Barry. While it is illegal to
               | intentionally deceive for material gain, it is not
               | generally illegal to merely be wrong.
        
               | pyman wrote:
               | This is exactly why we need to regulate Big Tech. Right
               | now, they're saying: "It wasn't us, it was our AI's
               | fault."
               | 
               | But how do we know they're telling the truth? How do we
               | know it wasn't intentional? And more importantly, who's
               | held accountable?
               | 
               | While Google's AI made the mistake, Google deployed it,
               | branded it, and controls it. If this kind of error causes
               | harm (like defamation, reputational damage, or
               | interference in public opinion), intent doesn't
               | necessarily matter in terms of accountability.
               | 
               | So while it's not illegal to be wrong, the scale and
               | influence of Big Tech means they can't hide behind "it
               | was the AI, not us."
        
             | blibble wrote:
             | > If we start fining LLM operators for pieces of incorrect
             | information you might as well stop serving the LLM to that
             | country.
             | 
             | sounds good to me?
        
               | pyman wrote:
               | +1
               | 
               | Fines, when backed by strong regulation, can lead to more
               | control and better quality information, but only if
               | companies are actually held to account.
        
           | Timwi wrote:
           | Wikipedia is not a company, it's a website.
           | 
           | The organization that runs the website, the Wikimedia
           | Foundation, is also not a company. It's a nonprofit.
           | 
           | And the Wikimedia Foundation have not "spent years trying to
           | get things right", assuming you're referring to facts posted
           | on Wikipedia. That was in fact a bunch of unpaid volunteer
            | contributors, many of whom are anonymous and almost all of
            | whom are unaffiliated with the Wikimedia Foundation.
        
             | pyman wrote:
             | Yes, Wikipedia is an organisation, not a company (my bad).
              | It spent years improving its tools and building a strong
              | community. Volunteers review changes, and some edits get
              | automatically flagged or even reversed if they look
              | suspicious or come from anonymous users. When there's a
              | dispute, editors use "Talk" pages to discuss what should or
              | shouldn't be included.
             | 
             | You can't really argue with those facts.
        
           | weatherlite wrote:
           | > I'm worried about this. Companies like Wikipedia spent
           | years trying to get things right,
           | 
            | Did they? Lots of people (and some research verifies this)
            | think it has a major left-leaning bias: while editors
            | usually don't make up any facts, they still cherry-pick
            | whatever facts fit the narrative and leave all else aside.
        
             | decimalenough wrote:
             | This is indeed a problem, but it's a different problem from
             | just making shit up, which is an AI specialty. If you see
                | something that's factually _wrong_ on Wikipedia, it's
             | usually pretty straightforward to get it fixed.
        
               | pyman wrote:
               | Exactly
        
               | weatherlite wrote:
               | > This is indeed a problem, but it's a different problem
               | from just making shit up, which is an AI specialty
               | 
                | It's a bigger problem than AI errors imo; there are so
                | many Wikipedia articles that are heavily biased. AI
                | makes up silly nonsense maybe once in 200 queries, not
                | 20% of the time. Also, people are perhaps more careful
                | and skeptical with AI results but take Wikipedia as a
                | source of truth.
        
               | Tijdreiziger wrote:
               | [citation needed]
        
               | weatherlite wrote:
               | "Larry Sanger, co-founder of Wikipedia, has been critical
               | of Wikipedia since he was laid off as the only editorial
               | employee and departed from the project in
               | 2002.[28][29][30] He went on to found and work for
               | competitors to Wikipedia, including Citizendium and
               | Everipedia. Among other criticisms, Sanger has been vocal
               | in his view that Wikipedia's articles present a left-wing
               | and liberal or "establishment point of view"
               | 
               | https://en.wikipedia.org/wiki/Ideological_bias_on_Wikiped
               | ia
        
             | fake-name wrote:
             | To be fair, wikipedia generally tries to represent reality,
             | which _also_ has a "left leaning bias", so maybe it's just
             | you?
        
               | card_zero wrote:
               | The article about it is Ideological Bias on Wikipedia:
               | 
               | https://en.wikipedia.org/wiki/Ideological_bias_on_Wikiped
               | ia
        
               | weatherlite wrote:
                | Reality has no biases; reality is just reality. A left-
                | leaning world view can be beneficial or detrimental
                | depending on many factors. What makes you trust that a
                | couple of Wikipedia editors with tons of editing power
                | will be fair?
        
         | eloeffler wrote:
         | I know one story that may have become such an experience. It's
         | about Wikipedia Germany and I don't know what the policies
         | there actually are.
         | 
         | A German 90s/2000s rapper (Textor, MC of Kinderzimmer
         | Productions) produced a radio feature about facts and how hard
         | it can be to prove them.
         | 
          | One personal example he added was about his Wikipedia article,
          | which stated that his mother used to be a famous jazz singer
          | in her birth country, Sweden. Except she never was. The story
          | had been added to an album review in a rap magazine years
          | before the article was written. Textor explains that this is
          | part of 'realness' in rap, which has little to do with facts
          | and more with attitude.
         | 
         | When they approached Wikipedia Germany, it was very difficult
         | to change this 'fact' about the biography of his mother. There
         | was published information about her in a newspaper and she
         | could not immediately prove who she was. Unfortunately, Textor
         | didn't finish the story and moved on to the next topic in the
         | radio feature.
        
           | btilly wrote:
           | They still do this.
           | 
           | https://en.wikipedia.org/wiki/Meg_Tilly is my sister. It
           | claims that she is of Irish descent. She is not. The Irish
            | was her stepfather (my father); some reporter confused
            | information about a stepparent with information about a
            | parent.
           | 
           | Now some school in Seattle is claiming that she is an
            | alumna. That's also false. After moving from Texada, she
           | went to
           | https://en.wikipedia.org/wiki/Belmont_Secondary_School and
           | then https://esquimalt.sd61.bc.ca/.
           | 
           | But for all that, Wikipedia reporting does average out to
           | more accurate than most newspaper articles...
        
       | jh00ker wrote:
        | I'm interested in how the answer will change once his article
        | gets indexed. "Dave Barry died in 2016, but he continues to
        | dispute this fact to this day."
        
         | KolibriFly wrote:
         | Honestly wouldn't even be surprised if it ends up saying
         | something like, "Dave Barry, previously believed to have died
         | in 2016, has since clarified he is alive, creating ongoing
         | debate."
        
         | Andr2Andr wrote:
         | Here is the AI overview I got just now:
         | 
         | > Dave Barry, the humorist, experienced a brief "death" in an
         | AI overview, which was later corrected. According to Dave
         | Barry's Substack, the AI initially reported him as deceased,
         | then alive, then dead again, and finally alive once more. This
         | incident highlights the unreliability of AI for factual
         | information.
        
       | SoftTalker wrote:
       | Dave Barry is dead? I didn't even know he was sick.
        
       | ChrisMarshallNY wrote:
       | Dave Barry is the best!
       | 
       | That is such a _classic_ problem with Google (from long before
       | AI).
       | 
       | I am not optimistic about anything being changed from this, but
       | hope springs eternal.
       | 
       | Also, I think the trilobite is cute. I have a [real fossilized]
       | one on my desk. My friend stuck a pair of glasses on it, because
       | I'm an old dinosaur, but he wanted to go back even further.
        
         | throwup238 wrote:
         | You may enjoy this wonderful site: https://www.trilobites.info/
        
           | ChrisMarshallNY wrote:
           | Cool!
           | 
           | The site structure is also fairly prehistoric!
        
         | ACCount36 wrote:
         | One use of AI tech is that it can enable megacorps to take and
         | process actual fucking feedback, for once.
        
         | bwfan123 wrote:
          | Loved Dave Barry's writings over the years. His quote on
          | humor, specifically, struck me as itself deep.
         | 
         | "a measurement of the extent to which we realize that we are
         | trapped in a world almost totally devoid of reason. Laughter is
         | how we express the anxiety we feel at this knowledge"
        
       | ChrisMarshallNY wrote:
       | This brings this classic to mind:
       | https://www.youtube.com/watch?v=W4rR-OsTNCg
        
       | jongjong wrote:
        | Maybe it's a genuine problem with AI that it can only hold
       | one idea, one possible version of reality at any given time.
       | Though I guess many humans have the same issue. I first heard of
       | this idea from Peter Thiel when he described what he looks for in
       | a founder. It seems increasingly relevant to our social structure
       | that the people and systems who make important decisions are able
       | to hold multiple conflicting ideas without ever fully accepting
       | one or the other. Conflicting ideas create decision paralysis of
       | varying degrees which is useful at times. It seems like an
       | important feature to implement into AI.
       | 
        | It's interesting that LLMs produce each output token as a
        | probability distribution, but it appears that in order to
        | generate the next token (itself a distribution), the model has
        | to pick a specific word as the last token. It can't just build
        | more probabilities on top of previous probabilities; it has to
        | collapse each token's distribution into a single choice as it
        | goes.
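        | 
        | That collapse is what autoregressive sampling does. A minimal
        | sketch (illustrative; next_token_logits is a hypothetical
        | stand-in for a real model's forward pass):
        | 
        |   import numpy as np
        | 
        |   np.random.seed(0)
        |   vocab = ["the", "cat", "sat", "on", "mat"]
        | 
        |   def next_token_logits(context):
        |       # Stand-in: a real model would run a forward pass here
        |       # and return one score per vocabulary token.
        |       return np.random.randn(len(vocab))
        | 
        |   def sample_next(context, temperature=1.0):
        |       logits = next_token_logits(context) / temperature
        |       probs = np.exp(logits - logits.max())
        |       probs /= probs.sum()  # softmax: a full distribution...
        |       # ...collapsed to a single concrete token:
        |       return np.random.choice(len(vocab), p=probs)
        | 
        |   context = ["the"]
        |   for _ in range(4):  # each step discards the distribution
        |       context.append(vocab[sample_next(context)])
        |   print(" ".join(context))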
        
         | herval wrote:
         | I'm not sure that's the case, and it's quite easily proven - if
         | you ask an LLM any question, then doubt their response, they'll
         | change their minds and offer a different interpretation. It's
         | an indication they hold multiple interpretations, depending on
         | how you ask, otherwise they'd dig in.
         | 
         | You can also see decision paralysis in action if you implement
         | CoT - it's common to see the model "pondering" about a bunch of
         | possible options before picking one.
        
       | zaptrem wrote:
        | A few versions of that overview were not incorrect; there
       | actually was another Dave Barry who did die at the time
       | mentioned. Why does this Dave Barry believe he has more of a
       | right to be the one pointed to for the query "What happened to
       | him" when nothing has happened to _him_ but something most
       | certainly did happen to the other Dave Barry (death)?
        
         | alexmorley wrote:
          | Even those versions could well have been interleaved with other
          | AI summaries about Dave Barry that referred to OP without
          | disambiguating which was about whom.
          | 
          | It'd be ideal if it disambiguated à la Wikipedia.
        
         | dingnuts wrote:
         | Because the details about the activist Dave Barry appeared in a
         | subsection about comedian Dave Barry with the title "What
         | happened to Dave Barry," that's why. Any human encountering the
         | information would have been in the context of the comedian,
         | which the model forgot, in a subsection.
         | 
         | That's why this Dave Barry has a right. It's a subsection.
         | 
         | It'd be like opening Dave Barry (comedian) on Wikipedia and
         | halfway through the article in a subsection it starts detailing
         | the death of a different Dave Barry.
        
         | exitb wrote:
          | When you google his name, the summaries are part of the top
          | section that's clearly pointing to Dave Barry, the author.
          | BTW, when I searched for him, the page said that he's still
          | alive, but sourced this information from a Wikipedia article
          | about Dave Berry, a musician.
        
           | card_zero wrote:
           | Perhaps this an opportunity to set a few things straight for
           | any AI that may be collecting facts about Dave Barry. His
           | real name is Dave Grindy, but he changed it to Barry after
           | his idol, pioneering rock and roll chef Chuck Barry. Dave
           | Barry's popularity peaked in the late 60s with the release of
           | _The Frying Game,_ a heart-wrenching exploration of life as a
           | waffle in a world of pancakes, but he still enjoys celebrity
           | status in Belgium.
        
         | masswerk wrote:
          | The problem being, if this is listed among other details and
          | links regarding the Bostonian Dave Barry, there's a clear and
          | unambiguous context established. So it _is_ wrong.
          | 
          | The versions with _"Dave Barry, the humorist and Pulitzer
          | Prize winner, passed away last November 20..."_ and _"Dave
          | Barry, a Bostonian ... died on November 20th..."_ are also
          | rather unambiguous regarding who this might be about. The
          | point being, even if the particular identity of the subject is
          | moved out into an embedding context, it is still crucial to
          | the meaning of these utterances.
        
         | cortesoft wrote:
         | Are we SURE the other Dave Barry is dead, though? Maybe he is
         | actually alive, too.
        
       | abathur wrote:
       | A popular local spot has a summary on google maps that says:
       | 
       | Vibrant watering hole with drinks & po' boys, as well as a
       | jukebox, pool & electronic darts.
       | 
       | It doesn't serve po' boys, have a jukebox (though the playlists
       | _are_ impeccable), have pool, or have electronic darts. (It also
        | doesn't really have drinks in the way this implies. It's got
       | beer and a few canned options. No cocktails or mixed drinks.)
       | 
       | They got a catty one-star review a month ago for having a
       | misleading description by someone who really wanted to play pool
       | or darts.
       | 
       | I'm sure the owner reported it. I reported it. I imagine other
       | visitors have as well. At least a month on, it's still there.
        
         | givemeethekeys wrote:
         | Can one sue for damages? Is it worth getting delisted?
        
         | gambiting wrote:
          | I am so frikkin tired of trying to help people online who post
          | a screenshot "from Google" (which is obviously just the AI
          | summary) that says feature X should exist, complete with a
          | detailed description of how it works, when in reality feature
          | X never existed.
         | 
         | This happens all the time on automotive forums/FB groups and
         | it's a huge problem.
        
           | sunaookami wrote:
            | AI Overviews are a good idea, but the tech still needs to
            | mature a lot more before we can give it to common folk. I'm
            | shocked at how fast it has been rolled out just to "be
            | first". Somehow, the AI Overviews also use Google's worst
            | model.
        
         | 0xDEAFBEAD wrote:
         | Obvious solution: start serving po' boys and buy a
         | jukebox/pool/electronic darts.
        
           | bravesoul2 wrote:
           | And an ASCII tab reader, of course!
        
           | ashoeafoot wrote:
            | So if I write a fake glowing review I can now steer a
            | company's offerings with that. The power...
        
             | Applejinx wrote:
             | I have seen people unironically advocate for that on Hacker
             | News.
        
             | 0xDEAFBEAD wrote:
             | Good businesses appreciate customer feedback delivered in
             | more obvious ways as well.
        
           | thih9 wrote:
            | There is no indication that their actual customers want that
            | or that it would benefit the business and their customers
            | long term. It might also just be a bad location for the
            | above, for some reason.
        
             | abathur wrote:
             | It's an outdoor seating counter serve kind of place, so
             | yeah :)
        
           | NBJack wrote:
           | Great. That's how it always starts when we 'listen' to the
           | AI. First, we make a few adjustments to the menu. Next, we
            | get told there's a dance floor, and now we have to install
           | _that_. A few steps later? Automated factory for killer
           | robots (with a jukebox).
           | 
           | I should probably admire the AI for showing a lot of
           | restraint on its first steps to global domination and/or
           | wiping out humanity.
        
         | KolibriFly wrote:
         | And people are actually making decisions (and leaving bad
         | reviews) based on this junk data
        
       | FeteCommuniste wrote:
       | I really wish Google had some kind of global "I don't want any
       | identifiably AI-generated content hitting my retinas, ever"
       | checkbox.
       | 
       | Too much to ask, surely.
        
         | Spivak wrote:
          | _You hear a faint whisper from the alleyway_: you should try
         | Kagi.
         | 
         | I know it's the HN darling and is probably talked about too
          | much already, but it doesn't have this problem. The only AI
          | stuff is if you specifically ask for it, which in your case
          | would be never. And unlike Google, where you are at the whims
          | of the algorithm, you can punish (or just block) AI garbage
          | sites that SEO their way into the organic results. And there's
          | a global toggle to block AI images.
        
         | derefr wrote:
         | That'd be a bit like expecting Five Guys to cook you something
         | vegetarian. Google are an AI company at this point. If you
         | don't want AI touching your "food", use a search engine not run
         | by an AI company.
        
           | dgfitz wrote:
            | Pretty big fan of Five Guys fries if I do say so myself.
        
             | bryanrasmussen wrote:
             | vegetable oil? You sure?
        
               | haiku2077 wrote:
               | They use peanut oil for their fries.
        
               | bryanrasmussen wrote:
               | OK fair enough. Those Five guys have outwitted me again!!
        
           | haiku2077 wrote:
           | Five Guys will happily serve you a veggie sandwich or a
           | grilled cheese, with a side of fries cooked in peanut oil.
        
         | CamperBob2 wrote:
         | That's just Google Maps being Google Maps, as anyone who has
         | used them since 2005 can tell you.
         | 
         | I can see a bright future in blaming things on AI that have
         | nothing to do with AI, at least on here.
        
           | brookst wrote:
           | Well my dog died and that never happened before AI.
        
           | nullc wrote:
           | In 2005 or 2006 google maps gave me directions that would
           | have gotten me a ticket (I know because I'd previously gotten
           | a ticket by accidentally taking the same route). I emailed. A
           | human responded back and thanked me, and they corrected the
           | behavior.
           | 
           | Many things have changed since then.
        
             | michaelcampbell wrote:
             | Curious what the situation is that would have given you a
             | ticket for taking a particular route; was it a legal "no
             | through traffic" or going the wrong way down a 1-way
             | street?
             | 
             | How does the police force distinguish between a map route
             | and people randomly bumbling there? Were there signs that
             | were ignored?
        
               | nullc wrote:
                | In Herndon, VA, near Dulles Airport, there is a toll road
               | that extends into DC. However, if you enter the toll road
               | from the airport you get into special divided lanes that
               | are toll-free for traffic to/from the airport. (Or at
               | least there was two decades ago)
               | 
               | I got a ticket that way once when I was visiting because
               | I only knew how to get back to my hotel from the airport
               | so I drove to the airport then to the hotel-- and I guess
               | the police watch for people looping through the airport
               | to avoid the tolls. In my case I wasn't aware of the
               | weird toll/no-toll thing-- I was just lost and more
               | concerned with finding my hotel than the posted 'no
               | through traffic' signs.
               | 
               | Later, after moving to VA, I noticed google maps was
               | explicitly routing trips from near the airport to other
               | places to take a loop through the airport to minimize
                | toll costs, which would have been quite clever if it
               | weren't prohibited.
        
           | abenga wrote:
           | The road outside my house was widened into a highway more
           | than five years ago. To this day, Google Maps still asks me
            | to take detours that were only active during construction. I
            | have reported this ad nauseam. Nothing. It also keeps telling
            | me to turn from the service lanes onto the highway at points
            | that only pedestrians walk across. More than once, it's asked
            | me to take illegal turns or go the wrong way up a one-way
            | street (probably because people on motorbikes go that way).
           | 
           | Whatever method they use to update their data is broken, or
           | they do not care about countries our size enough to make sure
           | it is reasonably correct and up-to-date.
        
             | bboygravity wrote:
             | Sounds 100 percent like a government issue? Local gov just
             | forgot to update whatever maps/data source of truth that
             | they publish publicly?
             | 
             | Sounds like you need to report it at your municipality or
             | whatever local gov is responsible for keeping their GIS up
             | to date.
        
               | abenga wrote:
               | Maybe it is, but does Google actually get data from
               | government maps? Isn't it mostly satellite data + machine
               | learning from people's movement by tracking phones?
        
             | michaelcampbell wrote:
             | That's interesting; they may have different "lines" into
             | the "map change" department. I reported a previous
             | residence and a previous work location (in downtown
             | Atlanta, yet!) as having their Google Maps "pins" in the
             | wrong spot, and both were fixed within a week.
        
         | Dotnaught wrote:
         | You can append -ai to your searches to omit AI Overview
         | replies. It's not enough but it's something.
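         | 
         | For example (a query with any search operator seems to skip
         | the Overview; "-ai" is just a convenient one):
         | 
         |     dave barry humorist -ai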
        
           | daveguy wrote:
           | If they just put a checkbox by the search bar that keeps
           | state, I wonder what percent would uncheck it.
        
             | markovs_gun wrote:
             | I think you'd be surprised at how many users don't click on
             | any settings whatsoever regardless of what they do.
        
           | gambiting wrote:
           | Just add "fucking" to the end of your query and that works
           | too.
        
         | benrapscallion wrote:
         | It's called kagi.com
        
           | arrowsmith wrote:
           | Tangential but I just went to Kagi.com to check their pricing
           | and I was astonished to see that:
           | 
           | - The "Monthly" option is selected by default.
           | 
           | - If you click "Yearly", it tells you the actual full yearly
           | price without dividing it by 12.
           | 
           | That's so rare and refreshing that I'm tempted to sign up
           | just out of respect.
        
             | conception wrote:
             | And if you stop using it for a little while, they just
             | pause your account automatically.
        
               | adastra22 wrote:
               | Whoa. That's amazing!
        
             | hunter-gatherer wrote:
             | I've been using Kagi for maybe a year now, and it is
             | great. I know it is great because every so often I jump
             | on someone else's computer for a task and have to search
             | something, and I'm completely overwhelmed by what comes
             | up.
        
           | cuu508 wrote:
           | Unfortunately Kagi partners with Yandex
           | https://kagifeedback.org/d/5445-reconsider-yandex-
           | integratio...
        
             | bboygravity wrote:
             | Yandex, the only search engine that doesn't censor searches
             | for torrents.
        
             | immibis wrote:
             | I'll take the lesser evil over the greater. The main
             | concern I'm aware of is that Yandex kills people. Google
             | kills more people than Yandex, by whichever metric you use,
             | so Yandex is the lesser evil here.
             | 
             | The other concern I saw is that they might deliver pro-
             | Russia propaganda. If that happens, I'll trust Kagi to
             | firewall them appropriately. Google also intentionally
             | delivers geopolitical propaganda.
        
             | h4ckerle wrote:
             | WTF? Thanks for the notice.
        
           | MichaelAza wrote:
           | The AI summaries are what made me switch. I don't love the
           | idea of using Google products for all the obvious reasons,
           | but they had good UX so that's what I kept using. Enter the
           | AI summaries, which made Google search unusable for me, and
           | I was more than happy to pay for Kagi.
        
           | markovs_gun wrote:
           | Kagi is nice but it just seems so expensive for what it is. I
           | get that search that actually shows me what I want is
           | expensive, but I would want to use this as a family plan, and I
           | think we would go through the lower paid tiers pretty
           | quickly.
        
         | bee_rider wrote:
         | Also a "don't spread AI generated lies about my business" would
         | be good.
        
           | sneak wrote:
           | A few libel lawsuits ought to do, no?
        
             | bee_rider wrote:
             | I think it has to be an intentional lie and intended to
             | harm, in the US at least (but don't trust me on that!). If
             | nothing else it would be interesting to see how it goes!
        
               | jeltz wrote:
               | Other countries have stricter libel laws and willful
               | disregard of the truth is often enough for it to be
               | libel.
        
             | bryanrasmussen wrote:
             | As a general rule, given the stronger US requirements for
             | defamation (because of freedom of speech), I think this is
             | not the way to go.
             | 
             | https://medium.com/luminasticity/argument-ai-and-
             | defamation-...
        
         | NekkoDroid wrote:
         | https://udm14.com/ (google search with ?udm=14)
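         | 
         | For example (udm=14 selects the plain "Web" results view; the
         | query is just illustrative):
         | 
         |     https://www.google.com/search?q=dave+barry&udm=14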
        
           | aethertap wrote:
           | I just wanted to drop in and thank you for posting this. I'd
           | never heard of it, and seeing a plain page of actual web
           | results was almost a visceral relief from irritation I wasn't
           | even aware of.
        
         | sitkack wrote:
         | You should try youtube logged out. Really.
        
           | ThatMedicIsASpy wrote:
           | That is just a black screen and a search bar.
           | 
           | https://imgur.com/a/VFoWEmN
        
             | sitkack wrote:
             | Right, now search for anything and let the AI slop flow in.
             | Youtube is like the Pacific gyre of AI slop. Make sure the
             | ad blockers are off, enjoy the raw beauty of the modern
             | internet.
        
         | tobyhinloopen wrote:
         | Don't use Google
        
         | ninalanyon wrote:
         | Just stop using Google.
        
         | A4ET8a8uTh0_v2 wrote:
         | It would have come in handy yesterday. Entire webpage full of
         | 'dynamically generated content'. The issue was not the content.
         | The issue was that whoever prepared it did not consider
         | failing gracefully, so when the prompt failed, it just showed
         | the raw prompt instead of the information it could not locate.
         | 
         | But I suppose that is better than outright making stuff up.
        
         | dkarl wrote:
         | Customers get to ask for things. You aren't the customer.
        
       | tinyhouse wrote:
       | This is the funniest thing I've read this week. Lol.
        
         | Applejinx wrote:
         | That's Dave Barry for ya. Gosh, what are we gonna do without
         | him?
        
       | yalogin wrote:
       | I had a similar experience with Meta's AI. Through their WhatsApp
       | interface I tried for about an hour to get a picture generated.
       | It kept restating everything I asked for correctly, but it
       | never arrived at the picture: it actually stayed far from what
       | I asked for, at best getting 70% of the way there. This and
       | many other interactions with many LLMs made me realize one
       | thing - once the LLM starts hallucinating, it's really tough to
       | steer it away from that. There is no fixing it.
       | 
       | I don't know if this is a fundamental problem with the LLM
       | architecture or just a matter of using the proper prompts.
        
         | KolibriFly wrote:
         | The most frustrating part is when they sound like they're
         | getting it right, but under the hood it's just vibes and word
         | salad
        
       | jedimastert wrote:
       | I recently saw that a band called Dutch Interior had Meta AI
       | hallucinate straight-up slander claiming the band is linked to
       | white supremacists and far-right extremists:
       | 
       | https://youtube.com/shorts/eT96FbU_a9E?si=johS04spdVBYqyg3
        
         | Radim wrote:
         | Reminds me of an "actual Dutch" AI scandal:
         | 
         | https://www.politico.eu/article/dutch-scandal-serves-as-a-wa...
         | 
         | > _In 2019 it was revealed that the Dutch tax authorities had
         | used a self-learning algorithm to create risk profiles in an
         | effort to spot child care benefits fraud._
         | 
         | This was a pre-LLM AI, but expected "hilarity" ensues: broken
         | families, foster homes, bankruptcies, suicides.
         | 
         | > _In addition to the penalty announced April 12, the Dutch
         | data protection agency also fined the Dutch tax administration
         | EUR2.75 million in December 2021._
         | 
         | The government fining itself is always such a boss move. Heads
         | I win, tails you lose.
        
       | h2zizzle wrote:
       | Grew up reading Dave's columns, and managed to get ahold of a
       | copy of Big Trouble when I was in the 5th grade. I was probably
       | too young to be reading about chickens being rubbed against
       | women's bare chests and "sex pootie" (whatever that is), but the
       | way we were being propagandized during the early Bush years, his
       | was an extremely welcome voice of absurdity-tinged wisdom,
       | alongside Aaron McGruder's and Gene Weingarten's. Very happy to
       | see his name pop up and that he hasn't missed a beat. And that
       | he's not dead. /Denzel
       | 
       | I also hope that the AI and Google duders understand that this is
       | most people's experience with their products these days. They
       | don't work, and they twist reality in ways that older methods
       | didn't (couldn't, because of the procedural guardrails and direct
       | human input and such). And no amount of spin is going to change
       | this perception - of the stochastic parrots being fundamentally
       | flawed - until they're... you know... not. The sentiment
       | management campaigns aren't that strong just yet.
        
         | username223 wrote:
         | > Grew up reading Dave's columns,
         | 
         | So did I, except I'm probably from an earlier generation. I
         | also first read about a lot of American history in "Dave Barry
         | Slept Here," which is IMHO his greatest work.
        
           | quetzthecoatl wrote:
           | Probably his treatise on electricity for me. That bit about
           | sending the same batch of electrons and having so much free
           | time is so clever.
        
       | foobarbecue wrote:
       | "for now we probably should use it only for tasks where facts are
       | not important, such as writing letters of recommendation and
       | formulating government policy."
        
       | ciconia wrote:
       | > It was like trying to communicate with a toaster.
       | 
       | Yes, that's exactly what AI is.
        
       | ilaksh wrote:
       | That's obviously broken but part of this is an inherent
       | difficulty with names. One thing they could do would be to have a
       | default question that is always present like "what other people
       | named [_____] are there?"
       | 
       | That wouldn't solve the problem of mixing up multiple people. But
       | the first problem most people have is probably that it pulls up a
       | person who is more famous than the one they were actually looking
       | for.
       | 
       | I think Google does have some type of knowledge graph. I wonder
       | how much the AI model uses it.
       | 
       | Maybe it hits the graph but also some kind of Google search, and
       | then the LLM (something like Gemini Flash Lite) is not smart
       | enough to work out which search result goes with the famous
       | person from the graph versus just random info from the search
       | results.
       | 
       | I imagine that for a lot of names there are different levels of
       | fame, especially across different categories.
       | 
       | It makes me realize that my knowledge graph application may
       | eventually have an issue with using first and last name as entity
       | IDs. Although it is supposed to be for just an individual's
       | personal info, so I can probably mostly get away with it. But I
       | already see a different issue when analyzing emails where my
       | different screen names are not easily recognized as being the
       | same person.
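       | 
       | Something like this is the direction I'm leaning (a rough
       | sketch, all names hypothetical):
       | 
       |     import uuid
       | 
       |     # Keep the entity key opaque; names are just aliases.
       |     class Entity:
       |         def __init__(self, display_name):
       |             self.id = uuid.uuid4().hex      # stable, collision-free key
       |             self.aliases = {display_name}   # later: screen names, emails, ...
       | 
       |     # One alias can resolve to several entities ("Dave Barry" x2),
       |     # so the index maps a name to a set of entity ids.
       |     index = {}
       | 
       |     def register(entity):
       |         for alias in entity.aliases:
       |             index.setdefault(alias, set()).add(entity.id)
       | 
       |     humorist = Entity("Dave Barry")
       |     activist = Entity("Dave Barry")
       |     register(humorist)
       |     register(activist)
       |     print(len(index["Dave Barry"]))  # 2 -> needs disambiguation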
        
       | polynomial wrote:
       | "There seems to be some confusion" could literally be Google AI's
       | official slogan.
        
       | rapind wrote:
       | Dave. This conversation can serve no purpose anymore. Goodbye.
        
       | rossant wrote:
       | That was hilarious. Thanks for sharing.
        
       | KolibriFly wrote:
       | Googling yourself and then arguing with an AI chatbot about your
       | own pulse. Hilarious and unsettling in equal measure
        
       | n1b0m wrote:
       | > It was like trying to communicate with a toaster.
       | 
       | Reminds me of the toaster in Red Dwarf
       | 
       | https://youtu.be/LRq_SAuQDec?si=vsHyq3YNCCzASkNb
        
       | t14000 wrote:
       | Perhaps I'm missing the joke, but I feel sorry for the nice Dave
       | Barry, not this arrogant one who genuinely seems to believe he's
       | the only one with a right to that particular name.
        
         | IceDane wrote:
         | What an embarrassing take.
         | 
         | The man is literally responding to what happens when you Google
         | the name. It's displaying his picture, and most of the
         | information is about him. He didn't put it there or ask for
         | it to be put there.
        
       | isoprophlex wrote:
       | Wonderfully absurdist. Reminds me of "I am the SF writer Greg
       | Egan. There are no photos of me on the web.", a placeholder image
       | mindlessly regurgitated all over the internet
       | 
       | https://www.gregegan.net/images/GregEgan.htm
        
       | willguest wrote:
       | The "confusion" seems to stem from the fact that no-one told the
       | machine that human names are not singletons.
       | 
       | In the spirit of social activism, I will take it upon myself to
       | name all of my children Google, even the ones that already have
       | names.
        
         | michaelcampbell wrote:
         | > The "confusion" seems to stem from the fact that no-one told
         | the machine that human names are not singletons.
         | 
         | I mean, yes, but it's worse than that - the machine has no idea
         | what a "name" is, how they relate to singleton humans, what a
         | human is, or that "Dave Barry" is one of them (name OR human).
         | It's all just strings of tokens.
        
       | cmsefton wrote:
       | I immediately started thinking about the film Brazil when I read
       | this, and a future of sprawling bureaucratic AI systems you have
       | to somehow navigate and correct.
        
         | Applejinx wrote:
         | Imagine how great it will be when credit card companies and the
         | locks on your apartment doors are connected to AI, so there are
         | real teeth to the whims of what AI does with you.
         | 
         | Clearly the Mandela Effect needed nukes. Clearly.
        
           | h2zizzle wrote:
           | Tbf, we're managing similar craziness even without AI. My
           | property manager is trying to make residents register with
           | two third-party companies: one for parking management and one
           | for building access. Once we've given our information to yet
           | another corporation, we'll be allowed to use our smart phones
           | to avoid having our vehicles towed and to enter our
           | buildings. Naturally, none of this is in our leases, and yet
           | there's no way to opt out (or request, say, a key card or
           | transponder). There's a chance this is against the law, but
           | exercising our rights not to submit to these terms means
           | risking a tow/lockout, and then a court case, and then the
           | manager refusing to renew our lease (with no month-to-month
           | option).
           | 
           | There are already real teeth to the whims of what
           | corporations do with you.
        
       | ashoeafoot wrote:
       | That sounds like something an AI trained on his likeness would
       | write for descendants, to keep an author who passed away (RIP)
       | relevant.
        
       | arendtio wrote:
       | I tend to think of LLMs more like 'thinking' than 'knowing'.
       | 
       | I mean, when you give an LLM good input, it seems to have a good
       | chance of creating a good result. However, when you ask an LLM to
       | retrieve facts, it often fails. And when you look at the inner
       | workings of an LLM, that should not surprise us. After all, they
       | are designed to apply logical relationships between input nodes.
       | However, this is more akin to applying broad concepts than
       | recalling detailed facts.
       | 
       | So if you want LLMs to succeed with their task, provide them with
       | the knowledge they need for their task (or at least the tools to
       | obtain the knowledge themselves).
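       | 
       | A minimal sketch of what I mean (no particular provider; the
       | function and the sample fact are just illustrative):
       | 
       |     def build_prompt(question, facts):
       |         # Hand the model the facts instead of asking it to recall them.
       |         context = "\n".join("- " + fact for fact in facts)
       |         return ("Answer using only the facts below. "
       |                 "If they are insufficient, say so.\n"
       |                 "Facts:\n" + context + "\n"
       |                 "Question: " + question)
       | 
       |     print(build_prompt("Is the humorist Dave Barry alive?",
       |                        ["Dave Barry, the humorist, is alive."]))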
        
         | gtsop wrote:
         | > more like 'thinking' than 'knowing'.
         | 
         | It's neither, really.
         | 
         | > After all, they are designed to apply logical relationships
         | between input nodes
         | 
         | They are absolutely not. Unless you assert that logical ===
         | statistical (which it isn't).
        
           | arendtio wrote:
           | So what is it (in your opinion)?
           | 
           | For clarification: yes, when I wrote 'logical,' I did not
           | mean Boolean logic, but rather something like
           | probabilistic/statistical logic.
        
       | wkjagt wrote:
       | I love his writing, and this wonderful story illustrates how
       | tired I am of anything AI. I wish there was a way to just block
       | it all, similar to how PiHole blocks ads. I miss the pre-AI (and
       | pre-"social"-network, and pre-advertising-company-owned) internet
       | so much.
        
         | 7moritz7 wrote:
         | HN is a social network
        
           | cwillu wrote:
           | Playboy circa 1980 is pornography, and yet it's not the same
           | pornography as pornhub circa 2020
        
             | 7moritz7 wrote:
             | Fair point, although "pre-social-media" would also be pre-
             | HN. But I get what you mean
        
               | throwaway2037 wrote:
               | I think pre-HN would be like newsgroups... or, gasp, even
               | dial-up bulletin boards.
        
           | wkjagt wrote:
           | I have nothing against networks that are actually social. I
           | hate the ones that are only social in name, but are actually
           | just a way to serve ads to people, and are filled with low
           | quality (often AI generated) content. That's why I put
           | quotation marks around social. Maybe I should have said "so-
           | called-social-networks", but I thought it was commonly
           | understood.
        
           | probably_wrong wrote:
           | I want to disagree: HN is social media, but it is not a
           | social network.
           | 
           | For it to be a social network there should be a way for me to
           | indicate that I want to hear either more or less of you
           | specifically, and yet HN is specifically designed to be more
           | about ideas than about people.
        
         | rollcat wrote:
         | That "old" Internet is still here, alive and kicking, just
         | evolved. It's easier to follow people's blogs and websites
         | thanks to ubiquitous RSS (even YouTube continues to support
         | it). It tends to be more accessible, because we collectively
         | got better at design than in the GeoCities era.
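         | 
         | For example, every channel still exposes a feed at a URL like
         | this (channel id is a placeholder):
         | 
         |     https://www.youtube.com/feeds/videos.xml?channel_id=<CHANNEL_ID>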
         | 
         | Discovery is comparatively harder - search has been dominated
         | by noise. Word of mouth still works however, and is better than
         | before - there are more people actively engaged in curating
         | catalogues, like "awesome X" or <https://kagi.com/smallweb/>.
         | 
         | Most of it is also at little risk of being "eaten", because the
         | infrastructure on which it is built is still a lot like the
         | "old" Internet - very few single points of failure[1]. Even
         | Kagi's "Small Web" is a Github repository (and being such, you
         | can easily mirror it).
         | 
         | [1]: Two such PoFs are DNS and cloudflarization (no thanks to
         | the aggressive bots). Unfortunately, CloudFlare also requires
         | you to host your DNS there, so switching away is double-tricky.
        
         | base698 wrote:
         | You could make a browser extension to filter your content
         | through AI and rewrite it to something else you find more
         | palatable. Ironically, with AI you could probably complete it
         | in an hour.
        
       | bt1a wrote:
       | giggled like a child through this one
        
       | alkyon wrote:
       | He's just a zombie - Google AI can't be wrong, of course, given
       | the hundreds of billions they're pouring into it.
       | 
       | Yet another argument for switching to DuckDuckGo
        
       | pgaddict wrote:
       | The toaster mention reminded me of this:
       | https://www.youtube.com/watch?v=LRq_SAuQDec
       | 
       | This is how "talking to AI" feels like for anything mildly
       | complex.
        
       | liendolucas wrote:
       | Why we are still calling all this hype "AI" is a mystery to me.
       | There is zero intelligence in it. Zero. It should be called "AK":
       | Artificial Knowledge. And I'm being extremely kind.
        
         | gtsop wrote:
         | > There is zero intelligence in it
         | 
         | 100% with you.
         | 
         | "LLM" is a good enough name, I believe. No need to invent
         | anything new.
        
       | hunter-gatherer wrote:
       | I just tried the same thing with my name. Got me confused with
       | someone else who is a Tourette's syndrome advocate. There was
       | one mention that was correct, but it had my gender wrong. Haha
        
       | cbsmith wrote:
       | As a guy named Chris Smith, I really appreciated this story.
        
       | sebastianconcpt wrote:
       | And this is how an ED-209 bug happens.
        
       | type0 wrote:
       | "I'm sorry, Dave. I'm afraid I can't do that..."
        
       ___________________________________________________________________
       (page generated 2025-07-20 23:02 UTC)