[HN Gopher] Google's New AI-Powered Browser Could Mark the End o...
___________________________________________________________________
Google's New AI-Powered Browser Could Mark the End of the Human
Internet
Author : leotravis10
Score : 58 points
Date : 2024-01-26 19:51 UTC (3 hours ago)
(HTM) web link (nymag.com)
(TXT) w3m dump (nymag.com)
| Giorgi wrote:
| If it is Bard, there is no issue. One can spot ChatGPT-generated
| text from miles away, and Bard is even dumber.
| tonydev wrote:
| Ignoring how ignorant this comment is about the rate of
| improvement, Bard and ChatGPT are about on par when it comes to text output
| evaluation: https://huggingface.co/spaces/lmsys/chatbot-arena-
| leaderboar...
| Zambyte wrote:
| The level of almost-relevance of this comment is beautiful
| given the context.
| bastardoperator wrote:
| Outside of your own assumptions, how do you verify this?
| tgv wrote:
| If it sounds like a sales person on Adderall, it is ChatGPT.
| 6gvONxR4sf7o wrote:
| This could be the weirdest kind of moat yet. If you crawled all
| the things and built a model before everything became bot-
| generated, you can get clean post-2024 human data from the human
| inputs to your tool. If you haven't, then maybe you're stuck with
| the 2023-and-earlier crawls, limiting your models' relevance.
| We've already seen that the feedback loops of training models on
| model outputs aren't nearly as valuable, and can get wacky fast.
| It'll be weird to see how that plays out.
| HeatrayEnjoyer wrote:
| >We've already seen that the feedback loops of training models
| on model outputs aren't nearly as valuable, and can get wacky
| fast.
|
| IIRC this is less true with the very largest SOTA models, and
| OpenAI is now using synthetic data with success.
| baq wrote:
| See also a physical analog:
| https://en.m.wikipedia.org/wiki/Low-background_steel
| carlosjobim wrote:
| The shadow libraries are the largest collection of human
| knowledge to date, and completely untainted by AI. Any search
| engine that crawls and indexes them will have a tenfold
| increase in quality and be as revolutionary as the invention of
| the internet. No LLM model needed.
|
| On top of that, there is no incentive for AI generated content
| to enter the shadow libraries at all.
| DaiPlusPlus wrote:
| > On top of that, there is no incentive for AI generated
| content to enter the shadow libraries at all.
|
| I think you underestimate just how many
| people/entities/forces exist that would love to see
| further decline, division, and discord in the Anglosphere...
| vjulian wrote:
| In seriousness, are other languages faring any better or
| differently in all this?
| saintfire wrote:
| Beyond Western destabilization, there are people who cause
| issues just because. Not to mention that people who are
| anti-AI are motivated to weaken AI.
|
| There's no reason people wouldn't taint any source of AI-
| free information if it became clear that is what it was.
| carlosjobim wrote:
| What does the "anglosphere" have to do with online
| libraries? Will I regret asking that?
|
| There is no incentive for AI content or spam in shadow
| libraries, because why would anybody risk prison to
| illegally copy spam?
| ilaksh wrote:
| What makes you assume they have not already been used by
| OpenAI, Google, or Baidu, etc?
| carlosjobim wrote:
| I don't assume that, and I haven't said anything to that
| effect.
| CuriouslyC wrote:
| Except that human-generated doesn't really seem to matter; all
| that seems to matter is some basic guard rails on the data you
| choose. Meta has models generating training data, grading it,
| and selecting the best examples to reincorporate into the
| training set, and it's improving benchmarks.
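|
| Roughly, that generate-grade-select loop is something like the
| sketch below (a minimal sketch; generateSample and gradeSample
| are hypothetical stand-ins, not Meta's actual pipeline):
|
|   // A model proposes examples, a judge model scores them, and
|   // only the top fraction is fed back into the training set.
|   type Example = { prompt: string; completion: string; score: number };
|
|   // Placeholder stubs; a real setup would call an actual
|   // generator model and an actual judge/reward model here.
|   async function generateSample(prompt: string): Promise<string> {
|     return `candidate completion for: ${prompt}`;
|   }
|   async function gradeSample(prompt: string,
|                              completion: string): Promise<number> {
|     return (prompt.length + completion.length) % 10; // fake score
|   }
|
|   async function curateRound(prompts: string[],
|                              keepTop = 0.1): Promise<Example[]> {
|     const graded: Example[] = [];
|     for (const prompt of prompts) {
|       const completion = await generateSample(prompt);
|       const score = await gradeSample(prompt, completion);
|       graded.push({ prompt, completion, score });
|     }
|     graded.sort((a, b) => b.score - a.score); // best candidates first
|     // the survivors get reincorporated into the next training round
|     return graded.slice(0, Math.ceil(graded.length * keepTop));
|   }
|
|   // e.g. curateRound(['explain model collapse']).then(console.log);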
| kromem wrote:
| The problem with model collapse is that you reinforce the
| means at the cost of the edges of your distribution curve,
| particularly on repeat.
|
| One of the things being overlooked is that, offsetting the
| job loss from AI replacing mean work, there are going to be
| new markets for edge-case creation and curation.
|
| Jackson Pollock and Hunter S Thompson for the AI generation
| with a primary audience of AI vs humans, sponsored by large
| tech and data companies like the new Renaissance Vatican.
| kjkjadksj wrote:
| Reminds me of how they need to raise sunken WWI ships to get
| clean steel for certain applications after all the nuclear
| weapon testing happened.
| croon wrote:
| In the example screenshot, the assistant takes this input:
|
| > im interested in this place - do you allow dogs?
|
| and writes this output:
|
| > I'm interested in your property. Its exactly what I've been
| looking for. To make it perfect for me, I just need to know if
| the unit is pet-friendly. Thank you for your time and
| consideration. I look forward to your response.
|
| The input is concise and to the point; the output is
| infuriatingly verbose and formulaic. But I guess it'll be easy to
| filter out the humans I would actually be willing to communicate
| with.
| coffeebeqn wrote:
| It's a BSifyer
| jvanderbot wrote:
| My wife makes a living asking people for things.
|
| She writes like the latter example. I find myself continuously
| frustrated by people. She loves them. I find that I'm
| constantly rejected when suggesting things; she isn't.
|
| I'm with you, but I think we're wrong.
| sirspacey wrote:
| This is it. This is why I think AI is a better writer than I
| am.
| mega_dingus wrote:
| I was talking to somebody who worked in HR at a multi-
| disciplinary shop, and she said you could always identify the
| emails coming from programmers.
|
| It was a complaint, definitely not a compliment. She said
| programmers listed things out in bullet points and were bluntly
| to the point. She complained they were dry and intimidating, and
| she hated dealing with them.
|
| I still write concisely and with bullet points when writing
| to other programmers. But I now expand things when talking to
| everybody else. And I've found I get better responses.
| croon wrote:
| If the HR person wanted to recruit programmers, I feel like
| that's a feature.
| kristjansson wrote:
| It shouldn't be terribly surprising that humans incorporate
| signals beyond the pure denotational content of a message. Text is
| a pretty low-bandwidth channel, so we infer as much meaning
| as possible from the bits of information we receive. All the
| stylistic choices encode additional information about the
| sender; part of one's job as an effective communicator is
| evaluating the effect of all those choices and adapting the
| entire message (not just its content) to convey the intended
| impression (not just the meaning).
|
| Incidentally, this is why AI-writing isn't necessarily better
| communication. The robot can help translate intentions into
| prose, but it can't decide what one should actually intend to
| say.
| anon373839 wrote:
| This reminds me of Craigslist. When I get a response that's
| written in a terse and grammatically incorrect style, I
| ignore it. Experience tells me these transactions don't tend
| to go well.
| mega_dingus wrote:
| Why is this downvoted? I consider it and its replies
| interesting and relevant
|
| If there's an HN policy violation in this post, I'm legit
| curious what it is
| JohnFen wrote:
| The latter also says quite a lot that was just made up and
| wasn't even implied by the original.
| RandomLensman wrote:
| When I take the output apart: The first sentence is to the
| point and short. The second is potentially redundant but might
| increase the likelihood of a reply. The third one is perhaps a
| bit over the top and could be merged with the second and
| shortened (e.g., "... looking for, but I was wondering if ...").
| The next one is just basic politeness. The last one feels optional
| but might marginally increase the likelihood/speed of a reply.
|
| Not perfect but not bad either (assuming a human reader on the
| receiving side).
| achrono wrote:
| Well, we obviously then need a de-verbosifier. In which case,
| how _do_ you filter for your aforementioned humans?
| stonogo wrote:
| It's not only pointlessly verbose, it ruins the intention
| behind the input! The user wants to know if they allow _dogs_,
| not _pets_. They can get a "yes we allow some pets" response
| and now they have to start all over to figure out _which_ pets
| those are, whether dogs are included, etc.
|
| This is a shitload of computational expenditure to make things
| objectively worse by introducing an entirely new class of
| problem to the original message. It's literally "I had a
| problem, so I used AI, and now I have two problems."
| neilv wrote:
| That screenshot about renting and dogs...
|
| Who would think that's a good idea?
|
| * Is it people who have trouble with reading comprehension, and
| don't understand that other people can read a lot more into
| writing than they do?
|
| * People who are insincere?
|
| * People who think corporate-BS language like "for your
| protection" and "due to unusually high call volumes" is
| professional- and smart-sounding?
|
| * People who want to create more utter BS filler in the world for
| some reason. (See SEO, or the eBay seller feature to bulk-create
| lies like "the total solution for all your computing needs",
| etc.)
|
| The only scenario I can think of to which I'm sympathetic is non-
| native speakers who aren't fluent, and who need a translator, or
| are afraid of politeness faux pas. But even that has pitfalls: a
| reader with basic reading comprehension is going to infer things
| about the 'writer' that simply aren't true. For example, a
| milquetoast LLM like ChatGPT hits some native idioms, and the
| reader doesn't realize that there's a huge cultural disconnect in
| awareness and meaning. Even if the text is superficially saying
| what the non-fluent person intended (and even that isn't a
| given, since they're not fluent enough to check).
| skywhopper wrote:
| Yeah, nothing about this looks necessary or advisable. The only
| people who want this are the Google PMs who have to "integrate
| AI" by Q2.
| neilv wrote:
| Are we leaving the era when adtech and surveillance by Google
| were things we could look past, because Google mitigated them
| with some older-era good things?
|
| (I still love the 3D view in Google Maps.)
| hathawsh wrote:
| Both of the texts are suboptimal in different ways. The
| original text is: "im interested in this place - do you allow
| dogs?"
|
| Some readers will assume the writer is not well educated
| because "im" should be capitalized and there should be an
| apostrophe. Other readers will notice the use of a hyphen,
| which is not very common in written text; it reveals that the
| writer may actually be educated but writing quickly. A well
| educated reader will see both of those signals and recognize
| that this is too little information to reliably judge the
| writer.
|
| The AI version of the text is overly formal and verbose, making
| it clear that the writer does not wish to reveal their level of
| education. I think that's the reason people might be interested
| in this.
| saintfire wrote:
| It's interesting when all you seem to factor in is level of
| education, implying it's the most important metric for
| selecting tenants. I'd contend it's hardly relevant.
|
| My only gut impression is that the first seems rather
| nonchalant, which is sort of strange when entering a presumably
| expensive contract. The longer response just feels very
| boilerplate, to the point where I'd question if it's not the
| opening to a blanket scam message.
|
| I guess an important takeaway is that everyone perceives
| interactions in different ways and that's really why this
| whole thing is relevant.
| hathawsh wrote:
| I think of it more from the writer's perspective. In my
| experience, a large number of people shy away from writing
| anything because they feel they cannot write in a way that
| makes them sound smart. (And if they're not smart, they
| believe the recipient will not be interested in helping
| them.) I think there's value in tech companies helping
| people overcome that fear.
| neilv wrote:
| I think you've found a legitimate use for this. (As a
| stopgap measure, for better education.)
|
| Sadly, I don't think that will be the majority of the
| use.
|
| Also, if this were the target use case, the use case
| could be adapted to the larger problem of
| tutoring/coaching feedback, to help the person learn and
| improve, not "write my essay for me, I don't much care
| what it says, just make me look smart".
| axegon_ wrote:
| "X could mark the end of Y" is a ridiculously outworn headline.
| It's practically Betteridge's law of headlines for the tech
| industry.
| ben_w wrote:
| The internal combustion engine could mark the end of buggy whip
| manufacturers.
|
| I jest, but only a bit. New inventions can and do wipe out old
| sectors, but it's hard to tell in advance if you're seeing a
| real transition or a pointless flash in the pan, and people
| make mistakes in both directions.
| tenpoundhammer wrote:
| I find it interesting that the Edge browser already has this
| feature. I wonder if Chrome feels pressured to have feature
| parity specifically with AI or if they believe this change will
| actually improve their usage metrics?
| kjkjadksj wrote:
| Little keeping-up-with-the-Joneses moves like these are always
| great for a bump in the stock price; it's not always about
| shooting for some metric or business profit.
| AlexandrB wrote:
| Assuming that, like ChatGPT, the model runs on Google's servers,
| doesn't this vastly increase the cost to Google of offering
| Chrome for free? Now you have to provide AI compute time to every
| 4chan poster and forum warrior.
|
| The economics of AI still seems nuts to me. Feels like another
| bait and switch in the making when all these "free" services need
| to start showing some revenue.
| notaustinpowers wrote:
| We're gonna start getting ads when you open a new tab and a
| 5-second unskippable ad while a website loads! /s
| mulmen wrote:
| Or brands can buy weight in the model.
| cowboyscott wrote:
| This seems like a plausible and powerful business model.
| Hopefully people reject it.
| pixl97 wrote:
| >Hopefully people reject it
|
| With how humanity is going so far with the ad-driven web,
| outlook not so good.
| AlexandrB wrote:
| Or maybe users will just get "subtle" product placement in
| their AI assisted output.
| DaiPlusPlus wrote:
| Yes, you make an excellent point, almost as excellent as
| this crisp and refreshing Pepsi I was drinking as I read
| your post.
| EdwardDiego wrote:
| Well one assumes it was the choice of a new generation
| for a reason.
| lawlessone wrote:
| Imaginary products. If you click the advert they forward it to
| some Gaussian splatting tool and 3D print it.
| ilaksh wrote:
| It's a direct evolution of the search paradigm. You go from
| entering a few keywords roughly related to what you want and
| then clicking on ads to continue the search, to having a short
| conversation with the AI homing in precisely on what you need and
| then having the AI complete the transaction or even generate
| the content for you, optionally with a transaction attached.
|
| The direct interactions with AI increase the fidelity of the
| customer model of you that Google has and uses to optimize
| sales to you for its customers.
|
| Even further, the most common source of inspiration for
| purchases is the behavior of other people. If the AI can
| sufficiently emulate humans and ingratiate itself enough to you
| then it can directly influence your behavior just by suggesting
| that it would make certain decisions in your place or that
| others have already.
|
| This is actually not far removed from the existing situation,
| just the next level of technological capability.
|
| By actually generating responses for you, it starts training
| you to allow it to make decisions on your behalf. This may
| readily extend into purchase decisions.
| rozim wrote:
| With WASM or tf.js, the models, or smaller "good enough"
| versions of them, might be able to run in the browser.
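|
| Libraries like transformers.js already do this for small models
| (a rough sketch, not anything Google has announced; the model
| name and the rewriteLocally helper are just illustrations):
|
|   // Run a small text-to-text model entirely client-side, with a
|   // WASM/ONNX backend, and use it to rewrite a draft message.
|   import { pipeline } from '@xenova/transformers';
|
|   async function rewriteLocally(draft: string): Promise<string> {
|     const generate = await pipeline('text2text-generation',
|                                     'Xenova/LaMini-Flan-T5-248M');
|     const out = await generate(`Rewrite politely: ${draft}`,
|                                { max_new_tokens: 80 });
|     const first = (Array.isArray(out) ? out[0] : out) as unknown as
|       { generated_text: string };
|     return first.generated_text;
|   }
|
|   rewriteLocally('im interested in this place - do you allow dogs?')
|     .then(console.log);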
| jchw wrote:
| The example is that it can make your writing more long-winded
| without adding any important details, so that it takes more
| effort for the person to respond? Why? I'm already overly verbose
| as it is.
|
| > Could Mark the End of the Human Internet
|
| Man... what does that even really mean? Popping over to ChatGPT
| to do this kind of shit is already mainstream enough to have been
| the subject of a South Park episode. There are probably _hundreds_
| of similar browser extensions for Chrome alone. I guess this is
| more convenient, but what problem does it really solve?
|
| Call me crazy, but I somehow imagine this browser feature will
| not lead to some AI Internet singularity. It's just going to
| slide the crap-factor up a few more notches than it already is,
| making the Internet even less enticing to use.
| kirykl wrote:
| > I'm interested in your property. Its exactly what I've been
| looking for.
|
| The AI may be giving up some of the user's negotiating leverage
| there.
| pixl97 wrote:
| Are younger generations, at least in the US, interested that
| much in negotiating?
|
| I'm kind of in that age gap where the world started converting
| to barcodes and computer-driven prices, and at least to me it
| seems a lot less haggling occurs now. Again, a lot more of our
| purchases occur with corporate entities where this haggling
| doesn't occur. Transactions now are more based on smoothness
| and speed of transaction. You have X for $Y. Here is $Y. Good
| day.
| CrypticShift wrote:
| It is true that Bard/ChatGPT is just two clicks away. But never
| underestimate the power of defaults. This is definitely not a
| good default for writing anywhere on the web. Google could at
| least have made this an extension instead.
| bluerooibos wrote:
| Huh, maybe this is why big-G hasn't been too concerned about the
| rise of ChatGPT. As long as they have Chrome, they still have
| direct access to a huge portion of web users - even if said users
| have shifted from using their search engine.
| aquajet wrote:
| https://archive.is/3AAQl
| gabev wrote:
| The end of the human internet is far-fetched.
|
| LLMs won't destroy human thought since LLMs are an average
| approximation of human thought. Sure, this might elevate those
| who are fresh and are just looking for generic copy, though the
| best writers are secretly just the best thinkers, as writing is a
| medium to exercise thought.
|
| I'm a bit biased, having built an AI writing tool myself
| (https://zenfetch.com), though it's for this very reason that we
| aren't interested in generating new content on your behalf. We
| simply want to make it easier for you to recall information to
| augment your work.
| krajzeg wrote:
| I can already see the wonderful cyberpunk future, where people
| writing e-mails use Gmail's AI assistant to add all the polite
| boilerplate, while the recipients trying to get through their
| overflowing inbox use the Gmail-integrated AI summarizer to pare
| it all back down.
| fivre wrote:
| 2001: what is this nonsense plot? why in the hell would anyone
| fill the world with mass-produced nonsense information? what
| purpose would it serve!?
|
| 2015: what is this nonsense plot? how would you even create a
| virus that destroys a language? it's inconceivable! it makes no
| sense! why!?
|
| someone please find whoever it is that's feeding Hideo Kojima
| advance knowledge of exactly what the next poison trend in the
| information industry will be
| altruios wrote:
| So wait... are you referencing anything other than MGS with the
| 2015 comment... Have I missed a big thing?
___________________________________________________________________
(page generated 2024-01-26 23:01 UTC)