[HN Gopher] I spent an evening on a fictitious web
___________________________________________________________________
I spent an evening on a fictitious web
Author : kinlan
Score : 66 points
Date : 2024-08-29 09:05 UTC (13 hours ago)
(HTM) web link (paul.kinlan.me)
(TXT) w3m dump (paul.kinlan.me)
| GaryNumanVevo wrote:
| No more dead links; instead, Chrome will hallucinate what that
| website should have looked like.
| nxobject wrote:
| Thought experiment: think of a potential use case (perhaps
| offline) where some smart-aleck product manager thinks this
| might be useful.
| GaryNumanVevo wrote:
| Come to think of it, I'm kind of surprised that Google Chrome
| doesn't have a "this link is broken, would you like to see
| what we have cached for this URL?" feature.
| nxobject wrote:
| Or - some day when this runs on-device - a "smart preload"
| feature from Google Chrome. On slow connections,
| hallucinate a wireframe of a heavy website until it loads.
| gryfft wrote:
| Resulting in web elements bouncing around and changing
| wholesale as the site loads? Hitting the wrong button
| because some UI element loaded late is already a painful
| UX stubbed toe.
| gryfft wrote:
| Google Cache used to be so useful. But they've left that
| all to the Internet Archive now.
|
| https://arstechnica.com/gadgets/2024/02/google-search-kills-...
| GJim wrote:
| > Chrome will _hallucinate_ what that website should have
| looked like
|
| I find the term 'bullshitting' more apt than 'hallucinating'.
|
| As something of an aside: that the purveyors of AI use the
| latter, whereas those who interact with it use the former,
| speaks volumes.
| sva_ wrote:
| I find the term 'bullshitting' less fitting, as it seems to
| anthropomorphize LLMs in a way that attributes to them an
| agency they seem to lack. As in, someone who bullshits
| presumably does this for some personal gain, which doesn't
| seem like something an LLM is capable of atm.
|
| It might (currently) be most apt to characterize these
| occurrences as shifts out of the training data distribution.
| djeastm wrote:
| > as it seems to anthropomorphize LLMs in a way
|
| "hallucinate" does the same thing, fwiw
| krapp wrote:
| It seems impossible to come up with language to describe
| why LLMs are both convincing and unreliable
| ("hallucinate","confabulate","bullshit) or why the
| ability to converse in natural language does not denote
| intelligent cognition ("stochastic parrot") without
| anthropomorphizing them to a degree, given that these
| things are designed to anthropomorphize themselves.
| burner_fyllms wrote:
| Yes, true, but saying "displaying the inherent flaws in the
| design that make them unsuitable for serious purposes"
| every time gets tedious.
| vundercind wrote:
| "Hallucinate" connotes consciousness and self to me.
| Bullshit does not. Markov chain text generators bullshit,
| they don't hallucinate. I'm not aware of anything in LLM
| tech that warrants implying any sort of awareness,
| understanding, or consciousness. Not even close.
| burner_fyllms wrote:
| About the same time that LLMs were starting to make the
| news, I was spending a lot of time with an elderly relative
| with severe dementia, and was struck by the fact that LLMs
| are doing the same thing she is: the word is
| "confabulating", meaning to come up with stories and
| rationalizations to fill in gaps in knowledge and memory.
| WorldMaker wrote:
| The equal problem with "hallucinate" is that it also has
| far too many anthropomorphic connotations (a person having
| creative fun, a person on some form of drugs, a person in
| some sort of "sleep state").
|
| So far I'm coming around to the growing use of "slop",
| originally meant as an alternative to "spam" and to imply
| spam-like intent, but the great thing about this word
| choice is that the closest anthropomorphic connotation is
| to "pig feeding". Pigs can be highly intelligent, of
| course, but that's not the first image one has when
| thinking of a pig at a slop trough.
| patapong wrote:
| I quite like "confabulation" as a term.
|
| From Merriam-Webster: to fill in gaps in memory by
| fabrication
| WorldMaker wrote:
| "Confabulation" also sounds too anthropomorphic to my
| tastes. Especially because "fabrication" often implies
| "intent to" by the actor in question. It's the exact same
| problem as "bullshit", just the G-rated grandiloquent
| version. To be fair, human languages were built to
| anthropomorphize almost everything so finding the right
| terms here is hard.
| romanobro56 wrote:
| "WorldMaker" seems to be a little anthropomorphized to
| me.
| hunter2_ wrote:
| > someone who bullshits presumably does this for some
| personal gain
|
| That's one definition of bullshitting, but not the one
| being used here. If someone says "I think you're
| bullshitting me" then yes, you're being accused of
| consciously seeking personal gain. But if someone says "we
| were standing around bullshitting" then no, it refers to
| killing time with mindless communication, which is a quite
| good analogy for LLM output.
| jrm4 wrote:
| I actually _like_ your first definition a bit better;
| very in line with the way the term was used when it was
| in academic vogue a few years ago: the idea that you're
| expressing information intended to appear factual without
| regard for how factual it is.
|
| The LLM does it because it's programmed to, and the human
| does it for some self-serving reason, but both the process
| and the results are very similar.
| hunter2_ wrote:
| > the idea that you're expressing information intended to
| appear factual without regard for how factual it is.
|
| That's my second definition! Sorry if I wasn't clear. My
| first definition (which aligns with the comment I had
| originally quoted) is that the speaker is aware that
| they're saying false things, and therefore has intent to
| deceive, typically for personal gain (they are
| bullshitting another person). My second definition is
| that the speaker has no regard for whether what they're
| saying is true or false (they are bullshitting _with_
| another person).
|
| An LLM does not bullshit you, it bullshits with you. It's
| fluff, not a bluff.
| kinlan wrote:
| It's not Chrome hallucinating; it's websim that is
| generating the content.
| rng-concern wrote:
| I think they mean a hypothetical future version of Chrome,
| not what websim currently is.
| latexr wrote:
| I went to the website, clicked on the search bar, and was
| immediately stopped from proceeding unless I provided a Google
| login.
|
| Why is that necessary? No idea, they don't say.
|
| Doesn't seem like a "web" I'd want to partake in, with Google as
| a gatekeeper even if they're not the authors of the content.
| GaryNumanVevo wrote:
| They're just using Google for easy user onboarding, nothing
| to get all up in arms about. Websim is trying to sell plans
| for more than 30 generations a day.
| jeroenhd wrote:
| Makes a little sense to show a "register with Google" button,
| but clicking the main UI element and being redirected to a
| Google sign-in screen is bad form.
|
| Also weird that apparently you can sign in via Discord, but
| you can only sign up via Google?
| GaryNumanVevo wrote:
| Not sure, maybe it's some growth hacking trick I'm not
| aware of.
| sva_ wrote:
| Websim uses an LLM to generate a fictional website based on a
| domain name you provide, and displays the result in its
| virtual browser.
|
| LLM generations are quite costly, so it is difficult to offer
| them without some kind of anti-abuse strategy in place; I
| think that's fair.
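A minimal sketch of the mechanism sva_ describes, assuming a
generic chat-completion API; the model name, prompt, and the
imagine_page helper are illustrative placeholders, not websim's
actual implementation:

    # Ask an LLM to invent the HTML that a fictitious URL "would"
    # serve, then hand the result to a browser pane for rendering.
    # Assumes the openai Python package with OPENAI_API_KEY set in
    # the environment.
    from openai import OpenAI

    client = OpenAI()

    def imagine_page(url: str) -> str:
        """Return plausible HTML for a URL that may not exist."""
        response = client.chat.completions.create(
            model="gpt-4o",  # placeholder; the real model is unknown
            messages=[
                {"role": "system",
                 "content": "You are a web server. Respond only with "
                            "a complete HTML document that this URL "
                            "could plausibly serve."},
                {"role": "user", "content": url},
            ],
        )
        return response.choices[0].message.content

    print(imagine_page("https://teapot-reviews.example/best-of-2024"))

Rate-limiting a loop like this is presumably what the sign-in
requirement pays for: every imagined page costs a full model
generation.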
| tomcam wrote:
| Thank you. I had no idea what TFA was trying to get across.
| ThinkingGuy wrote:
| Yeah, the requirement to create a Google account was a show-
| stopper for me.
| lewispollard wrote:
| So the purpose of the website wasn't clear to you; you then
| figured it out and decided to write an article about it, but
| didn't explain to the reader what the website actually is?
| kinlan wrote:
| That's a fair point. I chose rather to document the experience
| and how I felt... I can update the article if that helps.
| nuancebydefault wrote:
| Well, when you review a movie, don't you try not to spoil it
| by explaining the plot?
| superultra wrote:
| This is interesting, but if you're genuinely interested in
| recapturing the feelings many of us had at the beginning of the
| web, I would suggest playing Hypnospace Outlaw. It's of course
| quite different from websim, but it's really fun.
| csixty4 wrote:
| I just watched a YouTube video of it and I DEFINITELY need to
| check this out. They seem to have captured the feel of the old
| web perfectly.
| crtasm wrote:
| It's a fantastic game. Just now noticing they've added mod
| support, so I'll be trying that later!
|
| https://jay-tholen.itch.io/hypnospace-outlaw/devlog/111175/h...
| autokad wrote:
| I struggle with the idea of monetization. In one sense, I think
| it's great that people can get paid for doing what they like to
| do, and it can encourage more content creation. On the other
| hand, everything becomes disingenuous and people become
| perversely incentivized; you get people gluing things to
| turtles to make videos of them rescuing turtles with 'lichens'
| on their shells.
|
| So I don't know, I am really torn on how I should feel about
| it.
| teg4n_ wrote:
| I wonder why they aren't straightforward that this is just AI
| generating websites based on a URL prompt? It seems like they
| go out of their way to not say AI.
| jschveibinz wrote:
| You know, there are a lot of negative comments here, but I
| found this post to be enlightening, so thank you.
|
| This YT video explains how this simulation could be useful to the
| startup community:
|
| https://youtu.be/pdWS-ZJ3K8Y?feature=shared
___________________________________________________________________
(page generated 2024-08-29 23:02 UTC)