[HN Gopher] PhysicsForums and the Dead Internet Theory
       ___________________________________________________________________
        
       PhysicsForums and the Dead Internet Theory
        
       Author : TheCog
       Score  : 194 points
       Date   : 2025-01-24 19:38 UTC (3 hours ago)
        
 (HTM) web link (hallofdreams.org)
 (TXT) w3m dump (hallofdreams.org)
        
       | Terr_ wrote:
       | Ooof. The idea--or reality--that humans' accounts would be
       | hijacked by site-owners to make impersonating slop (presumably to
       | bring in ad-revenue) is somehow both infuriating and energy-
       | sapping-depressing.
       | 
       | Issues of trust and attribution have always existed for the web,
       | but for many reasons it feels _so much worse now_ --how bad must
       | it get before some kind of sea-change can occur?
       | 
       | I'm not sure what the solution would be here.
       | 
        | * Does one need to establish a friggin' _trademark_ for their own
        | name/handle [0], just so they can threaten to sue using money
        | they probably don't have?
       | 
        | * Is it finally time for PKI, where everybody signs their posts
        | with private keys and wastes CPU cycles verifying signatures to
        | check for impersonation?
       | 
       | * Is there some set of implied collective expectations which need
       | to be captured and formalized into the global patchworks of law?
       | 
       | [0] Ex: By establishing a small but plausible "business" selling
       | advice and opinions under that name, and going after the
       | impersonator for harming that brand.
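The "everybody signs their posts" idea can be sketched with a deliberately toy RSA-style signature. The tiny primes, missing padding, and bare hash are all simplifications; real systems use 2048-bit keys and a vetted scheme like Ed25519 or RSA-PSS:

```python
import hashlib

# Toy RSA signing sketch: tiny primes, no padding -- illustration only.
p, q = 61, 53
n = p * q                   # public modulus
phi = (p - 1) * (q - 1)
e = 17                      # public exponent
d = pow(e, -1, phi)         # private exponent (kept secret by the poster)

def h(post: str) -> int:
    """Hash a post down to a residue mod n."""
    return int.from_bytes(hashlib.sha256(post.encode()).digest(), "big") % n

def sign(post: str) -> int:
    """The author applies the private exponent to the post's hash."""
    return pow(h(post), d, n)

def verify(post: str, sig: int) -> bool:
    """Anyone holding the public pair (n, e) can check the signature."""
    return pow(sig, e, n) == h(post)

msg = "Posted by the real me."
s = sign(msg)
assert verify(msg, s)                # genuine signature checks out
assert not verify(msg, (s + 1) % n)  # a tampered signature does not
```

Verification at this scale is one modular exponentiation per post, so the "wasted CPU cycles" are modest; the hard part, as the thread notes, is distributing keys and deciding whom to trust.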
        
         | UltraSane wrote:
         | I exchange public keys with close friends in person. A large
         | scale solution would be very Orwellian. You would need a
         | national ID that is a smart card to connect to an ISP and
         | possible biometric verification.
        
           | hooverd wrote:
           | Do you exchange public keys with your non-computer-toucher
           | close friends?
        
             | arccy wrote:
             | if you convince them to use signal that's close enough...
        
           | afpx wrote:
           | Could I buy a physical device like RSA SecurID from my bank
           | branch or post office and log into a closed VPN-like network
           | where all the servers are run by verified users? I know there
           | are problems with that idea.
        
           | bawolff wrote:
           | We already have e-passports and zero knowledge proofs to show
           | you have one without revealing who you are.
           | 
            | If all else fails, there is always the web of trust (I think
            | the web of trust has a lot of issues, but establishing that
            | someone is human seems like a much lower bar than
            | establishing identity).
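The "prove you hold an e-passport without revealing who you are" idea can be illustrated with a toy Schnorr proof of knowledge: the prover shows they know the secret x behind a public value y = g^x mod p without disclosing x. The parameters and the passport framing are illustrative assumptions; real deployments use standardized groups and vetted protocols:

```python
import hashlib
import secrets

# Toy Schnorr proof of knowledge of a discrete log (illustrative sizes).
p = 2 ** 127 - 1        # a Mersenne prime, standing in for a real group modulus
g = 3

x = secrets.randbelow(p - 1)   # secret (say, derived from a passport chip)
y = pow(g, x, p)               # public commitment: y = g^x mod p

def challenge(t: int) -> int:
    """Fiat-Shamir: derive the challenge by hashing the transcript."""
    digest = hashlib.sha256(f"{g}|{y}|{t}".encode()).digest()
    return int.from_bytes(digest, "big") % (p - 1)

def prove(x: int):
    r = secrets.randbelow(p - 1)
    t = pow(g, r, p)               # commitment
    c = challenge(t)
    s = (r + c * x) % (p - 1)      # response; reveals nothing about x by itself
    return t, s

def verify(y: int, t: int, s: int) -> bool:
    # g^s == t * y^c (mod p) holds exactly when the prover knew x with y = g^x.
    return pow(g, s, p) == (t * pow(y, challenge(t), p)) % p

t, s = prove(x)
assert verify(y, t, s)
```

The verifier learns only that some valid x exists behind y, which is exactly the "is human / holds a credential" bar described above, without learning which credential.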
        
         | m463 wrote:
         | It is sad. I have been putting a copyright notice on my resume
         | at the bottom to prevent some nonsense.
         | 
          | I have always wondered if people could attach some sort of
          | cryptographic marker to their posts that could link to an
          | archive somewhere. Mostly I was thinking of backups of posts to
          | Yelp that couldn't be taken down, but I wonder if it would also
          | work for disproving posts someone never made.
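The "cryptographic marker" idea can be sketched as a content fingerprint: append a short hash of (author, timestamp, body) to each post and mirror the full record in an archive. Note this only detects divergence from the archived copy; proving authorship would additionally need a signature. The names and values here are hypothetical:

```python
import hashlib

def post_marker(author: str, timestamp: str, body: str) -> str:
    """Short fingerprint appended to a post; the archive keeps the full record."""
    record = f"{author}|{timestamp}|{body}"
    return hashlib.sha256(record.encode()).hexdigest()[:16]

def matches_archive(author: str, timestamp: str, body: str, marker: str) -> bool:
    """Recompute the fingerprint from the archived record and compare."""
    return post_marker(author, timestamp, body) == marker

m = post_marker("m463", "2008-05-01T12:00Z", "Original text of the post.")
assert matches_archive("m463", "2008-05-01T12:00Z", "Original text of the post.", m)
assert not matches_archive("m463", "2008-05-01T12:00Z", "Edited text.", m)
```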
        
           | Terr_ wrote:
           | > I have been putting a copyright notice on my resume at the
           | bottom to prevent some nonsense.
           | 
           | I expect the bad-actors will feed it into an LLM and say:
           | "Rephrase this slightly", and they will get away with it
           | because the big-money hucksters will have already convinced
           | courts to declare it transformative or fair-use.
        
         | scotty79 wrote:
          | Shouldn't we invent a protocol that keeps the content you
          | produce under your control, so that places like forums or
          | Facebook are only discovery devices and interaction
          | facilitators, not custodians of all communication? Being able
          | to independently reach the source of a piece of information is
          | increasingly important.
        
         | hooverd wrote:
         | Don't sign your posts!
        
           | Terr_ wrote:
           | Are you saying nothing should be key-signed because you want
           | some kind of deniability later?
           | 
            | Or do you mean people should avoid using a pseudonym in
            | favor of posts that are anonymous, so that there's never any
            | created identity to exploit/defend?
        
             | hooverd wrote:
              | -----BEGIN PGP SIGNED MESSAGE-----
              | Hash: SHA512
              | 
              | Sorry, it was a bad joke; there's a phrase "don't sign your
              | posts" used when someone ends one with an insult. I support
              | signing your posts with digital signatures if you want.
              | 
              | -----BEGIN PGP SIGNATURE-----
              | 
              | iHUEARYKAB0WIQQC37hdRRO1LtrTQY8AXxvbqjG5KgUCZ5QRXwAKCRAAXxvbqjG5
              | Kth4AQCccNygglcSyEiMAqQyw6cXH54fnqBT9rJO9TSIqH14rgEAyUwxiQlV05XV
              | Du2ftMk3DwiUZLKDxVI+ODCn4osf2wM=
              | =XZhX
              | -----END PGP SIGNATURE-----
        
         | bee_rider wrote:
         | Impersonating somebody to make it look like they said something
         | they didn't really ought to be considered defamation or
         | something.
         | 
         | Also there's something really uncomfortable about the phrasing
         | of a lot of those answers. I mean, even as somebody with an
         | engineering degree, I try not to ever answer a question "as a
         | <field> engineer" because when screwing around online I haven't
         | done the correct amount of analysis to provide answers "as an
         | engineer" ethically (acknowledging the irony of using the
         | phrase here, but, clearly this is not a technical statement so
         | I think it is fine). The bot doesn't seem to have this
         | compunction.
         | 
         | This ravenprp guy was an engineering student a couple years
         | ago. I guess it's less of a thing because he wasn't commenting
         | under his real name. But it seems like this site, given the
         | type of content it hosts, could easily end up impersonating
         | somebody "as an engineer" in the field they work and have a
         | professional reputation in. And the site even has a historical
         | record of them asking and answering questions through their
         | education, so it does a really good job of misleading people
         | into thinking an engineer is answering their questions.
         | 
         | I know the idea of an individual professional reputation has
         | taken a beating in the modern hyper-corporate world. But the
         | more I think of it, the more I think... this seems incredibly
         | shitty and actually borderline dangerous, right?
        
       | Rodeoclash wrote:
        | The ShackNews forum: https://www.shacknews.com/chatty was similar
        | - go back in time on it and you can find posts about 9/11
        | unfolding.
        
         | tomrod wrote:
         | Ars Technica started with comms forums + this new idea to
         | report tech news. The forums are still there but not nearly the
         | camaraderie of the early days.
        
           | r58lf wrote:
           | Same with siliconinvestor.com
           | 
           | It was an early stock discussion forum. It grew rapidly when
           | search engines started indexing everything and this forum had
           | a URL for each message that was easily indexable.
           | 
           | It's still around, but nothing like the old days.
        
             | 0xDEAFBEAD wrote:
             | I find these old school forums fascinating. How does that
             | even work, to have a thread of 192,211 posts about
             | Qualcomm?
             | 
              | https://www.siliconinvestor.com/subject.aspx?subjectid=36035
             | 
             | Suppose the average post is about 1 paragraph long. One
             | paragraph is about 150 words. So 192211 * 150 = about 29
             | million words. For comparison, the Lord of the Rings
             | trilogy is only around half a million words.
             | 
             | It wouldn't surprise me if there are more words about
             | Qualcomm in that thread than the total amount of internal
             | and external documentation and financial guidance that
             | Qualcomm itself has ever produced.
             | 
             | Surely users aren't expected to read the entire thread
             | before adding a post? But I think I remember seeing old
             | forums where that basically is the expectation. And
             | honestly... that's pretty cool. It seems better than the
             | new social media, where we keep having low-effort recurring
             | debates. I like the idea of adding to an enormous pile of
             | scholarship in cyberspace. A Ship of Theseus discussion
             | which may outlive any individual participant, but has a
             | semblance of continuity all the same, like an undergraduate
             | college society with a 100+ year history.
             | 
             | Time for a cyberpunk revival. Retro-cyberpunk, we could
             | call it.
        
           | Aurornis wrote:
           | > The forums are still there but not nearly the camaraderie
           | of the early days.
           | 
           | I remember visiting those forums when I was young and feeling
           | like part of a big group of friendly people hanging out
           | online together.
           | 
           | I tried creating a new account recently and it had a very
           | different vibe. Felt like the old guard had been established
           | and the forums I looked at were dominated by a couple of
           | posters who just wanted to talk, but not discuss anything.
           | 
           | Some of the post counts of those people were eye-watering.
        
             | StefanBatory wrote:
             | > Felt like the old guard had been established and the
             | forums I looked at were dominated by a couple of posters
             | who just wanted to talk, but not discuss anything.
             | 
              | I think this is the case for most places, I'm afraid. I use
              | mainly Discord - there are certainly a lot of servers where
              | I stay purely because I'm talking to people I met there,
              | and I don't even play that game anymore.
              | 
              | Their solution is simpler - after a while we create private
              | servers or channels for the old guard, but even then the
              | places deteriorate.
              | 
              | It's a thing I don't know how to solve.
        
               | sdwr wrote:
               | The problem is when the old guard becomes an exclusive
               | clique. Sometimes it's by accident ("I'm happy with the
               | friends I already have"), but usually there's a portion
               | of the inner circle that validate themselves by
               | gatekeeping newcomers.
               | 
               | There has to be an active commitment to include
               | (annoying, tactless, socially-impoverished) newbies, or
               | the snake eats its tail and collapses under its own
               | weight.
        
           | 0xDEAFBEAD wrote:
           | reddit had camaraderie in the early days too.
           | 
           | Is there anywhere on the internet that still has camaraderie?
        
           | encom wrote:
           | It's been a long time since I visited the Ars forums, but the
           | news article commenters today are absolutely deranged. It
           | makes me want to not engage with the forums again.
        
       | EA-3167 wrote:
       | I don't quite understand the issue of "back-dating" or hijacking
       | accounts. How is this being done exactly? I came away from this
       | article wondering if I was missing something.
        
         | Evidlo wrote:
         | The last section mentions that the PhysicsForums admins are
         | experimenting with LLM-generated responses, so I think the site
         | owners are responsible.
         | 
         | > We reached out to Greg Bernhardt asking for comment on LLM
         | usage in PhysicsForums, and he replied:
         | 
         | > "We have many AI tests in the works to add value to the
         | community. I sent out a 2024 feedback form to members a few
         | weeks ago and many members don't want AI features. We'll either
         | work with members to dramatically improve them or end up
         | removing them. We experimented with AI answers from test
         | accounts last year but they were not meeting quality standards.
         | If you find any test accounts we missed, please let me know. My
         | documentation was not the best."
         | 
         | Why they would recycle old human accounts as AI "test
         | accounts", I have no idea.
        
           | emmelaich wrote:
           | Looks like Greg now does SEO for Shopify. That fits I guess.
           | 
           | https://gregbernhardt.com/
           | 
           | https://www.linkedin.com/in/gregbernhardt
           | 
           | https://www.physicsforums.com/insights/author/greg-
           | bernhardt...
           | 
           | https://x.com/GregBernhardt4/status/1875287174205374533
           | 
           | > _" The dead internet theory is coming to fruition. This is
           | a large reason I'm starting to cut back on social media and
           | take back my time."_
        
             | Gooblebrai wrote:
             | Oh, I thought he would be a physicist
        
         | Terr_ wrote:
         | > How is this being done exactly?
         | 
         | Presumably it's being done by the site-owner, whether that
         | means new-management or original management getting
         | desperate/greedy.
        
           | EA-3167 wrote:
           | Oh that's so disappointing to hear about PhysicsForums.
           | Thanks for the answer to you, and the others who replied.
        
         | roywiggins wrote:
         | Whoever runs the site/database is just inserting rows with fake
         | datestamps under existing (presumably abandoned) account names.
        
           | aaron_m04 wrote:
           | How could anyone possibly think it'd be OK to impersonate
           | real humans?
        
             | jeremyjh wrote:
                | They don't give a fuck if it's "ok". They are just trying to
             | scrape up some additional ad revenue, like 99% of the rest
             | of the internet.
        
               | simplicio wrote:
                | I don't really get the revenue angle though. The AI posts
                | don't seem to be trying to drive traffic to ads or
                | anything. I really don't understand the point of auto-
                | generating a bunch of AI gibberish under the names of old
                | users on one's own site.
        
               | roywiggins wrote:
               | A misguided attempt at SEO?
        
         | guynamedloren wrote:
          | Wondering the same. I couldn't make it through the article.
          | Fascinating discovery, but poorly written, and difficult to
          | follow the author's thoughts. The interstitial quotes were
          | particularly disorienting.
        
       | segasaturn wrote:
       | Money quote:
       | 
       | > There's also a social contract: when we create an account in an
       | online community, we do it with the expectation that people we
       | are going to interact with are primarily people. Oh, there will
       | be shills, and bots, and advertisers, but the agreement between
       | the users and the community provider is that they are going to
       | try to defend us from that, and that in exchange we will provide
       | our engagement and content. This is why the recent experiments
       | from Meta with AI generated users are both ridiculous and
       | sickening. When you might be interacting with something
       | masquerading as a human, providing at best, tepid garbage, the
       | value of human interaction via the internet is lost.
       | 
       | It is a disaster. I have no idea how to solve this issue, I can't
       | see a future where artificially generated slop doesn't eventually
       | overwhelm every part of the internet and make it unusable. The
       | UGC era of the internet is probably over.
        
         | lumost wrote:
         | I suspect that the honest outcome will be that platforms where
         | AI content is allowed/encouraged will begin to appear like a
         | video game. If everyone in school is ai-media famous - then no
         | one is. There is most assuredly a market for a game where you
         | are immediately influencer famous, but it's certainly much
         | smaller than the market for social media.
        
         | thatguy0900 wrote:
         | Invite only forums or forums with actual identity checking of
         | some sort. Google and Facebook are in prime position to
         | actually provide real online identity services to other
         | websites, which makes Facebook itself developing bots even
         | funnier. Maybe we'll eventually get bank/government issued
         | online identity verification.
        
           | segasaturn wrote:
           | Online identity verification is the obvious solution, the
           | only problem is that we would lose the last bits of privacy
           | we have on the internet. I guess if everyone was forced to
           | post under our real name and identity, we might treat each
           | other with better etiquette, but...
        
             | thatguy0900 wrote:
             | Optimistically, if all you want to do is prove you are, in
             | fact, a person, and not prove that you are a specific
             | person, there's no real reason to need to lose privacy. A
             | service could vouch that you are a real person, verified on
             | their end, and provide no context to the site owner as to
             | what person you are.
        
               | roywiggins wrote:
               | That doesn't stop Verified Humans(TM) from copying and
               | pasting AI slop into text boxes and pressing "Post." If
               | there's really good pseudonymity, and Verified Humans can
               | have as many pseudonyms as they like and they aren't
               | connected to each other, one human could build an entire
               | social network of fake pseudonyms talking to each other
               | in LLM text but impeccable Verified Human labels.
        
               | thatguy0900 wrote:
                | The identity provider doesn't need to tell the forum that
                | you are 50 different people. They could have a system
                | where, if the forum bans you, the forum would know it's
                | the same person on reapplication. As for people making a
                | real-person account and then using it to do AI stuff:
                | yes, there will have to be a way to persistently ban
                | someone through anonymous verification, but that's
                | possible. Both the identity verifier and the forum will
                | be incentivized to play nice with each other. If an
                | identity provider is allowing one person to make 50 spam
                | accounts, the forum can stop accepting verification from
                | that provider.
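The scheme sketched above, an identity provider that lets a forum recognize a returning banned user without learning who they are, is essentially a pairwise pseudonym. A minimal sketch with a hypothetical provider key and user IDs (real designs, such as pairwise subject identifiers in OpenID Connect, are more involved):

```python
import hashlib
import hmac

PROVIDER_SECRET = b"hypothetical-provider-key"  # known only to the identity provider

def site_pseudonym(user_id: str, site: str) -> str:
    """Stable per-site ID: the same user on the same site always maps to
    the same pseudonym, but pseudonyms across sites cannot be linked."""
    return hmac.new(PROVIDER_SECRET, f"{user_id}|{site}".encode(),
                    hashlib.sha256).hexdigest()

# A banned user who re-registers surfaces under the same pseudonym...
assert site_pseudonym("alice", "physicsforums") == site_pseudonym("alice", "physicsforums")
# ...while the same person on another site gets an unlinkable ID.
assert site_pseudonym("alice", "physicsforums") != site_pseudonym("alice", "example-forum")
```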
        
               | crdrost wrote:
               | I just want to semi-hijack this thread to note that you
               | can actually peek into the future on this issue, by just
               | looking at the present chess community.
               | 
               | For readers who are not among the _cognoscenti_ on the
               | topic: in 1997 supercomputers started playing chess at
               | around the same level as top grandmasters, and some PCs
               | were also able to be competitive (most notably, Fritz
               | beat Deep Blue in 1995 before the Kasparov games, and
               | Fritz was not a supercomputer). From around 2005, if you
               | were interested in chess, you could have an engine on
               | your computer that was more powerful than either you or
                | your opponent. Since about 2010, there's been a decent
               | online scene of people playing chess.
               | 
               | So the chess world is kinda what the GPT world will be,
               | in maybe 30ish years? (It's hard to compare two different
               | technology growths, but this assumes that they've both
               | hit the end of their "exponential increase" sections at
               | around the same time and then have shifted to
               | "incremental improvements" at around the same rate. This
               | is also assuming that in 5-10 years we'll get to the
               | "Deep Blue defeats Kasparov" thing where transformer-
               | based machine learning will be actually better at
               | answering questions than, say, some university
               | professors.)
               | 
               | The first thing is, proving that someone is a person, in
               | general, is small potatoes. Whatever you do to prove that
               | someone is a real person, they might be farming some or
               | all of their thought process out to GPT.
               | 
               | The community that cares about "interacting with real
               | humans" will be more interested in continuous
               | interactions rather than "post something and see what
               | answers I get," because long latencies are the places
               | where GPT will answer your question and GPT will give you
               | a better answer anyways. So if you care about real
               | humanity, that's gonna be realtime interaction. The chess
               | version is, "it's much harder to cheat at Rapid or Blitz
               | chess."
               | 
               | The second thing is, privacy and nonprivacy coexist. The
               | people who are at the top of their information-spouting
               | games, will deanonymize themselves. Magnus Carlsen just
               | has a profile on chess.com, you can follow his games.
               | 
               | Detection of GPT will look roughly like this: you will be
               | chatting with someone who putatively has a real name and
               | a physics pedigree, and you ask them to answer physics
               | questions, and they appear to have a _really vast_
               | physics knowledge, but then when you ask them a simple
               | question like  "and because the force is larger the
               | accelerations will tend to be larger, right?" they take
               | an unusually long time to say "yep, F = m a, and all
               | that." And that's how you know this person is pasting
               | your questions to a GPT prompt and pasting the answers
               | back at you. This is basically what grandmasters look for
               | when calling out cheating in online chess; on the one
               | hand there's "okay that's just a really risky way to play
               | 4D chess when you have a solid advantage and can just
               | build on it with more normal moves" -- but the chess
               | engine sees 20 moves down the road beyond what any human
               | sees, so it knows that these moves aren't actually risky
               | -- and on the other hand there's "okay there's only one
               | reason you could possibly have played the last Rook move,
               | and it's if the follow up was to take the knight with the
               | bishop, otherwise you're just losing. You foresaw all of
               | this, right?" and yet the "person" is still thinking
               | (because the actual human didn't understand why the
               | computer was making that rook move, and now needs the
               | computer to tell them that the knight has to be taken
               | with the bishop as appropriate follow-up).
        
               | aleph_minus_one wrote:
               | > you will be chatting with someone who putatively has a
               | real name and a physics pedigree, and you ask them to
               | answer physics questions, and they appear to have a
               | really vast physics knowledge, but then when you ask them
               | a simple question like "and because the force is larger
               | the accelerations will tend to be larger, right?" they
               | take an unusually long time to say "yep, F = m a, and all
               | that." And that's how you know this person is pasting
               | your questions to a GPT prompt and pasting the answers
               | back at you.
               | 
                | Honestly, (even) in my area of expertise, if the
                | "abstraction/skill level" or the kind of wording changes
                | (in your example: much less scientifically precise
                | wording, more like how a 10-year-old child asks), it
                | often takes me quite some time to adjust (it completely
                | takes me out of my flow).
                | 
                | So your criterion would yield an insane number of false
                | positives on me.
        
             | StefanBatory wrote:
             | My parents use a lot of Facebook - and things some people
             | say under their real name are really mindblowing.
        
             | yjftsjthsd-h wrote:
             | > I guess if everyone was forced to post under our real
             | name and identity, we might treat each other with better
             | etiquette, but...
             | 
             | But Facebook already proved otherwise.
        
             | numpad0 wrote:
             | Posting with IRL identity removes the option to back down
             | after a mistake and leads to much worse escalations,
             | because public reputations will be at stake by default.
        
           | 1659447091 wrote:
           | > with actual identity checking of some sort
           | 
           | I am hoping OpenID4VCI[0] will fill this role. It looks to be
           | flexible enough to preserve public privacy on forums while
           | still verifying you are the holder of a credential issued to
           | a person. The credential could be issued from an issuer that
           | can verify you are an adult (banks) for example. Then a site
           | or forum etc, that works with a verifier that can verify
           | whatever combination of data of one or more credentials
           | presented. I haven't dug into the full details of
           | implementation and am skimming over a lot but that appears to
           | be the gist of it.
           | 
           | [0] https://openid.net/specs/openid-4-verifiable-credential-
           | issu...
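One way to get the "verify only what's needed" property the credential flow above aims at is salted hash commitments over individual claims, loosely in the spirit of SD-JWT selective disclosure. The claims and flow here are hypothetical simplifications, not the OpenID4VCI wire format:

```python
import hashlib
import secrets

# Issuer side: commit to each claim separately with a fresh salt.
claims = {"name": "Alice Example", "birth_year": "1990", "country": "SE"}
salts = {k: secrets.token_hex(8) for k in claims}
commitments = {k: hashlib.sha256(f"{salts[k]}|{v}".encode()).hexdigest()
               for k, v in claims.items()}  # in practice the issuer signs these

# Holder side: disclose only 'birth_year' to a forum, keeping other claims hidden.
disclosed = ("birth_year", salts["birth_year"], claims["birth_year"])

def verify_disclosure(commitments: dict, claim: str, salt: str, value: str) -> bool:
    """The forum checks the revealed claim against the issuer's commitment."""
    return hashlib.sha256(f"{salt}|{value}".encode()).hexdigest() == commitments[claim]

assert verify_disclosure(commitments, *disclosed)
```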
        
         | kevinventullo wrote:
         | Ironically, on Facebook itself I am only friends with people I
         | actually know in real life. So, most of the stuff I see in my
         | feed is from them.
        
           | bee_rider wrote:
            | I'm only friends with people I know on Facebook, so I mostly
            | see ads on that site. There's a feed to just see stuff your
            | friends post, but for some reason the site defaults to this
            | awful garbage ad spam feed (no surprise really).
        
         | jgilias wrote:
          | Oh, there are solutions. One is a kind of socialized trust
          | system. I know that the Lyn Alden I follow on Nostr is
          | actually her not only because she says so, but also because a
          | bunch of other people follow her too. There are bot accounts
          | that impersonate her, but it's easy to block those, as it's
          | pretty obvious from the follower count. And once I know a
          | public key that Lyn posts under, I can be sure it's her.
         | 
         | She could start posting LLM nonsense, but people will be quick
         | to point it out, and start unfollowing. An important part is
         | that there's no algorithm deciding what I see in my feed
         | (unless I choose so), so random LLM stuff can't really get into
         | my feed, unless I chose so.
         | 
          | Another option is zero-knowledge identity proofs that can be
          | used to attest that you're a human without exposing PII or
          | relying on some centralized server being up to "sign you in
          | on your behalf"
         | 
         | https://zksync.mirror.xyz/kWRhD81C7il4YWGrkDplfhIZcmViisRe3l...
        
           | roywiggins wrote:
           | How can ZK approaches prevent people from renting out their
           | human identity to AI slop producers?
        
             | jgilias wrote:
             | By just making it more expensive. We're never going to get
             | rid of spam fully, but the higher we can raise the costs,
             | the less spam we get.
             | 
             | EDIT: Sorry, I didn't answer your question directly. So it
             | doesn't, but makes spam more expensive.
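"Making spam more expensive" is exactly what proof-of-work schemes like Hashcash do: producing a post requires a brute-force search, while checking it takes a single hash. A minimal sketch (the difficulty and encoding are arbitrary choices here):

```python
import hashlib
from itertools import count

BITS = 12  # difficulty: minting costs about 2**BITS hash attempts on average

def mint(post: str) -> int:
    """Search for a nonce whose hash with the post clears the difficulty target."""
    target = 1 << (256 - BITS)
    for nonce in count():
        digest = hashlib.sha256(f"{post}:{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def check(post: str, nonce: int) -> bool:
    """Verification is one hash, so honest readers pay almost nothing."""
    digest = hashlib.sha256(f"{post}:{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - BITS))

nonce = mint("hello forum")
assert check("hello forum", nonce)
```

Raising BITS raises the per-post cost exponentially, which is the knob the comment above is describing: spam isn't eliminated, just priced.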
        
         | cess11 wrote:
         | If you think about historical parallels like advertising and
         | the industrialisation of entertainment, where the communication
         | is sufficiently human-like to function but deeply insincere and
         | manipulative, I think you'll find that you absolutely can see
         | such a future and how it might turn out.
         | 
          | Many or most people will adapt and accept these conditions,
         | because compared to the constant threat of misery and precarity
         | of work, or whatever other way to sustenance and housing, it
         | will be very tolerable. Similar to how so called talk shows
         | flourished, where fake personas pretend to get to know other
         | fake personas they are already very well acquainted with and so
         | on, while selling slop, anxieties or something. Like Oprah, the
         | billionaire.
        
         | RiverCrochet wrote:
         | Well, the end of open, public UGC content anyway.
         | 
         | I have heard of Discord servers where admins won't assign you
         | roles giving you access to all channels unless you've
         | personally met them, someone in the group can vouch for you, or
         | you have a video chat with them and "verify."
         | 
         | This is the future. We need something like Discord that also
         | has a webpage-like mechanism built into it (a space for a whole
         | collection of documents, not just posts) and is accessible via
         | a browser.
         | 
         | Of course, depending on discovery mechanisms, this means this
         | new "Internet" is no longer an easy escape from a given reality
         | or place, and that was a major driver of its use in the 90's
         | and 00's - curious people wanting to explore new things not
         | available in their local communities. To be honest, the old,
         | reliable Google was probably the major driver of that.
         | 
         | And it sucks for truly anti-social people who simply don't want
         | to deal with other people for anything, but maybe those types
         | will flourish with AI everywhere.
         | 
         | If the gated hubs of a possible new group-x-group human
         | Internet maintain open lobbies, maybe the best of both worlds
         | can be had.
        
       | Scoundreller wrote:
       | > It had fairly steady growth until 2012, before petering out
       | throughout the 2010s and 2020s in lieu of more centralized sites
       | like StackExchange, and by 2025, only a small community was left
       | 
       | This timeline tracks with my own blogging. Google slowly stopped
       | ranking traditional forum posts and blogs as well around that
       | time, regardless of quality, unless it was a "major".
       | 
       | > But, unlike so many other fora from back in the early days, it
       | went from 2003 to 2025 without ever changing its URLs, erasing
       | its old posts, or going down altogether.
       | 
       | I can also confirm if you have a bookmark to my blog from 2008,
       | that link will still work!
       | 
       | The CMS is gone; it's all static now... something too few orgs
       | take the short amount of time to set up when "refreshing" their
       | web presence :(
        
         | Lammy wrote:
         | > Google slowly stopped ranking traditional forum posts and
         | blogs as well around that time
         | 
         | IMO the true inflection point was 2014 when Google first hid
         | (from the UI) and then fully removed (no longer accessible by
         | magic URL) the "Blogs" and especially the "Discussions"
         | filters. Some contemporary discussions on "Discussions":
         | 
         | - https://techcrunch.com/2014/01/23/googles-search-filters-now...
         | 
         | - http://googlesystem.blogspot.com/2014/03/bring-back-forum-se... (details the briefly-working magic URLs)
         | 
         | - https://www.ghacks.net/2014/01/23/search-discussions-blogs-p...
         | 
         | - https://www.seroundtable.com/google-search-filters-gone-1799...
         | 
         | - https://www.webmasterworld.com/google/4687960.htm
         | 
         | - https://www.thecoli.com/threads/i-cant-google-search-by-disc...
         | 
         | - https://www.neogaf.com/threads/anyone-else-annoyed-google-re...
         | 
         | - https://webapps.stackexchange.com/questions/57249/has-the-op...
         | 
         | - https://www.bladeforums.com/threads/how-to-do-google-discuss...
         | 
         | - https://browsermedia.agency/blog/alternatives-discussion-sea...
        
           | layer8 wrote:
           | I recently noticed that there is now a not-visible-by-default
           | "Forums" option in Google Search. It is selected by
           | specifying the query parameter udm=18:
           | 
           | https://www.google.com/search?q=hp+50g&udm=18
        
             | lkramer wrote:
             | This is interesting. I wonder why it's not visible by
             | default.
        
               | layer8 wrote:
               | Maybe it is/was an A/B test, to see if it hurts ad
               | revenue (it probably does).
               | 
               | The option appeared randomly for me on a search, and I
               | immediately took note of the udm number. :)
        
               | Scoundreller wrote:
               | Ran through the lower numbers that hit something
               | interesting:
               | 
               | 8 = jobs (but doesn't return any results)
               | 15 = attractions (but doesn't return any results)
        
         | nateglims wrote:
         | I remember several traditional programming forums I frequented
         | in the 00s getting hit hard by the Google Panda update around
         | 2013. It ruined their SEO and they started to go into decline.
         | Forums and blogs had a culture that isn't replicated by reddit,
         | social media, etc. It's a shame to lose it.
        
       | COAGULOPATH wrote:
       | Something I'm increasingly noticing about LLM-generated content
       | is that...nobody wants it.
       | 
       | (I mean "nobody" in the sense of "nobody likes Nickelback". ie,
       | not _literally_ nobody.)
       | 
       | If I want to talk to an AI, I can talk to an AI. If I'm reading a
       | blog or a discussion forum, it's because I want to see writing by
       | _humans_. I don't want to read a wall of copy+pasted LLM slop
       | posted under a human's name.
       | 
       | I now spend dismaying amounts of time and energy avoiding LLM
       | content on the web. When I read an article, I study the writing
       | style, and if I detect ChatGPTese ("As we dive into the ever-
       | evolving realm of...") I hit the back button. When I search for
       | images, I use a wall of negative filters (-AI, -Midjourney,
       | -StableDiffusion etc) to remove slop (which would otherwise be
       | >50% of my results for some searches). Sometimes I filter
       | searches to before 2022.
       | 
       | If Google added a global "remove generative content" filter that
       | worked, I would click it and then never unclick it.
       | 
       | I don't think I'm alone. There has been research suggesting that
       | users immediately dislike content they perceive as AI-created,
       | regardless of its quality. This creates an incentive for
       | publishers to "humanwash" AI-written content--to construct a
       | fiction where a human is writing the LLM slop you're reading.
       | 
       | Falsifying timestamps and hijacking old accounts to do this is
       | definitely something I haven't seen before.
        
         | robswc wrote:
         | 100%.
         | 
         | So far (thankfully) I've noticed this stuff get voted down on
         | social media but it is blowing my mind people think pasting in
         | a ChatGPT response is productive.
         | 
         | I've seen people on reddit say stuff like "I don't know but
         | here's what ChatGPT said." Or worse, presenting ChatGPT copy-
         | paste as their own. It's funny because you can tell: the text
         | reads like an HR person wrote it.
        
           | Trasmatta wrote:
           | I've noticed the opposite actually, clearly ChatGPT written
           | posts on Reddit that get a ton of upvotes. I'm especially
           | noticing it on niche subreddits.
           | 
           | The ones that make me furious are on some of the mental
           | health subreddits. People are asking for genuine support from
           | other people, but are getting AI slop instead. If someone
           | needs support from an AI (which I've found can actually
           | help), they can go use it themselves.
        
         | asddubs wrote:
         | I was googling a question about OpenGraph last week. So much
         | useless AI drivel in the results now.
        
         | Gracana wrote:
         | Yup, I'm the same, and I love my LLMs. They're fun and
         | interesting to talk to and use, but it's obvious to everyone
         | that they're not very reliable. If I think an article is LLM-
         | generated, then the signal I'm getting is that the author is
         | just as clueless as I am, and there's no way I can trust that
         | any of the information is correct.
        
           | Sharlin wrote:
           | > but it's obvious to everyone that they're not very
           | reliable.
           | 
           | Hopefully to everyone on HN, but definitely not to everyone
           | on the greater Internet. There are plenty of horror stories
           | of people who apparently 100% blindly trust whatever ChatGPT
           | says.
        
             | Gracana wrote:
             | Yeah that's fair, I suppose I see that sort of thing on
             | reddit fairly regularly, especially in the "here's a story
             | about my messed-up life" types of subreddits.
        
         | rapind wrote:
         | > If Google added a global "remove generative content" filter
         | that worked, I would click it and then never unclick it.
         | 
         | It's not just generated content. This problem has been around
         | for years. For example, google a recipe. I don't think the
         | incentives are there yet. At least not until Google search is
         | so unusable that no one is buying their ads anymore. I suspect
         | any business model rooted in advertising is doomed to the
         | eventual enshittification of the product.
        
         | jchw wrote:
         | Exactly. Why in the hell would I want someone to use ChatGPT
         | _for_ me? If I wanted that, I could go use that instead.
        
           | Self-Perfection wrote:
           | I believe most times such responses are made on the
           | assumption that people are just lazy, like how we used to
           | provide links to https://letmegooglethat.com/ before.
        
         | MrPowerGamerBR wrote:
         | > If I'm reading a blog or a discussion forum, it's because I
         | want to see writing by humans. I don't want to read a wall of
         | copy+pasted LLM slop posted under a human's name.
         | 
         | This reminds me of the time around ChatGPT 3's release when
         | Hacker News's comments were filled with users saying "Here's
         | what ChatGPT has to say about this"
        
         | Aerroon wrote:
         | I can understand it for AI generated text, but I think there
         | are _a lot_ of people that like AI generated images. Image
         | sites like get a _ton_ of people that like AI generated images.
         | Civitai gets a lot of engagement for AI generated images, but
         | so do many other image sites.
        
           | earnestinger wrote:
           | I don't understand the problem with AI generated images.
           | 
           | (I very much would like any AI generated text to be marked as
           | such, so I can set my trust accordingly)
        
             | cogman10 wrote:
             | > I don't understand the problem with AI generated images.
             | 
             | Depends on what they are used for and what they are
             | purporting to represent.
             | 
             | For example, I really hate AI images being put into kids'
             | books, especially ones trying to be pseudo-educational. A
             | big problem with those images is that, from one prompt to
             | the next, it's basically impossible to get consistent
             | designs, which means any sort of narrative story will end
             | up with pages of characters that don't look the same.
             | 
             | Then there's the problem that some people are trying to
             | sell and pump this shit like crazy into Amazon, which
             | creates a lot of trash books that squeeze out legitimate
             | lesser-known authors and illustrators in favor of this
             | pure garbage.
             | 
             | Quite similar to how you can't really buy general products
             | from Amazon because drop shipping has flooded the market
             | with 10 billion items under different brands that are
             | ultimately the same Wish garbage.
             | 
             | The images can look interesting sometimes, but often on
             | second glance there's just something "off" about the image.
             | Fingers are currently the best sign that things have gone
             | off the rails.
        
           | tayo42 wrote:
           | Despite what people think, there is a sort of art to getting
           | interesting images out of an AI model.
        
             | onemoresoop wrote:
             | That's not the issue though. It should be marked as such,
             | or live in a section where people looking for it can
             | easily find it, instead of being shoved everywhere. To me,
             | placing generated content in human spaces is a strong
             | signal of low effort. On the other hand, generated content
             | can be extremely interesting and useful, and indeed
             | there's an art to it.
        
               | daveguy wrote:
               | I agree. AI generated text and images should be marked as
               | such. In the US there was a push to set standards on
               | watermarking AI generated content (feasible for
               | images/video, but more difficult for text, because it's
               | easier to delete). Unfortunately, the effort to study
               | potential watermarking standards was rescinded as of Jan
               | 22 2025.
        
               | numpad0 wrote:
               | They know everyone, especially the ones they seek
               | attention from, has such labels in their muted keywords
               | list.
        
           | egypturnash wrote:
           | People who submit blog posts here sure do love opening their
           | blogs with AI image slop. I have taken to assuming that the
           | text is also AI slop, and closing the tab and leaving a
           | comment saying such.
           | 
           | Sometimes this comment gets a ton of upvotes. Sometimes it
           | gets indignant replies insisting it's real writing. I need to
           | come up with a good standard response to the latter.
        
             | daveguy wrote:
             | > I need to come up with a good standard response to the
             | latter.
             | 
             | How about, "I'm sorry, but if you're willing to use AI
             | image slop, how should I know you wouldn't also use AI text
             | slop? AI text content isn't reliable, and I don't have time
             | to personally vet every assertion."
        
               | numpad0 wrote:
               | Trying to gaslight your enemy is certainly an option for
               | something, though not always the best one, nor in line
               | with the HN guidelines. Frankly, it rarely reduces
               | undesirable behaviors even if you're in the mood to be
               | manipulative.
        
         | agumonkey wrote:
         | nobody wants to see other's ai generated images, but most
         | people around me are drooling about generating stuff
         | 
         | wait for the proof-of-humanity decade where you're paid to be
         | here and slow and flawed
        
           | ijk wrote:
           | Most AI generated images are like most dreams: meaningful to
           | you, but not something other people have much interest in.
           | 
           | Once you have people sorting through them, editing them, and
           | so on, the curation adds enough additional interest... and
           | for many people, what they get out of looking at a gallery
           | of AI images is ideas for what prompts they want to try.
        
           | onemoresoop wrote:
           | Most AI generated visuals span a myriad of styles, but you
           | can mostly tell it's something not seen before, and that's
           | what people may be drooling over. The same drooling happened
           | for things that eventually found their utility after a long
           | time and that we're now used to. For example, 20 years ago
           | Photoshop filters were all the rage and you'd see them
           | everywhere. I think this AI-gen phase will lose its
           | enthusiasm over time but will enter and stay in the toolbox
           | for the right things, whatever people decide those to be.
        
         | ijk wrote:
         | The problem with "provide LLM output as a service," which is
         | more or less the best case scenario for the ChatGPT listicles
         | that clutter my feed, is that if I wanted an LLM result...I
         | could have just asked the LLM. There's maybe a tiny value
         | proposition if I didn't have access to a good model, but a
         | static page that
         | takes ten paragraphs to badly answer one question isn't really
         | the form factor anyone prefers; the actual chatbot interface
         | can present the information in the way that works best for me,
         | versus the least common denominator listicle slop that tries to
         | appeal to the widest possible audience.
         | 
         | The other half of the problem is that rephrasing information
         | doesn't actually introduce new information. If I'm looking for
         | the kind of oil to use in my car or the recipe for blueberry
         | muffins, I'm looking for something backed by actual data, to
         | verify that the manufacturer said to use a particular grade of
         | oil or for a recipe that someone has actually baked to verify
         | that the results are as promised. I'm looking for more
         | information than I can get from just reading the sources
         | myself.
         | 
         | Regurgitating text from other data sources mostly doesn't add
         | anything to my life.
        
           | tayo42 wrote:
           | Rephrasing can be beneficial. It can make things clearer to
           | understand and learn from. Like in math something like khan
           | academy or the 3blue 1 brown YouTube channel isn't presenting
           | anything new, just rephrasing math in a different way that
           | makes it easier for some to understand.
           | 
           | If LLMs could take the giant, overwhelming manual in my car
           | and pull out the answer to which oil to use, that would be
           | useful without being new information.
        
             | krapp wrote:
             | >If llms could take the giant overwhelming manual in my
             | car and get out the answer to what oil to use, that would
             | be useful and not new information
             | 
             | You can literally just google that or use the appendix
             | that's probably at the back of the manual. It's also
             | probably stamped on the engine oil cap. It also probably
             | doesn't matter and you can just use 10w40.
        
               | tayo42 wrote:
               | I'm just reusing the example in the comment I responded
               | to. Fill in something else then...
        
               | ziddoap wrote:
               | Illustrative examples are illustrative, not literal.
        
             | chowells wrote:
             | I have to protest. A lot of 3b1b _is_ new. Not the math
             | itself, but the animated graphical presentation is. That's
             | where the value from his channel comes in. He provides a
             | lot of tools to visualize problems in ways that haven't
             | been done before.
        
               | tayo42 wrote:
               | I guess I think of the visualizations, and the video as
               | a whole, as a type of rephrasing. He's not the first
               | person to try to visualize math concepts.
        
         | carlosjobim wrote:
         | I think a good comparison is when you go to a store and there
         | are salesmen there. Nobody wants to talk to a salesman. They
         | can almost never help a customer with any issue, since even an
         | ignorant customer usually knows more about the products in the
         | store than the salesmen. Most customers hate salesmen, and a
         | substantial portion of customers choose to leave the store or
         | not enter at all because of the salesmen, meaning the store
         | loses income. Yet this has been going on forever. So just
         | prepare for the worst when it comes to AI, because that's what
         | you are going to get, and neither ethical sense, business
         | sense, nor any rationality is going to stop companies from
         | shoving it down your throat. They don't give a damn if they
         | lose income or even bankrupt their companies, because annoying
         | the customer is more important.
        
         | Balgair wrote:
         | > (I mean "nobody" in the sense of "nobody likes Nickelback".
         | ie, not literally nobody.)
         | 
         | Reminds me of the old Yogi Berra quote: "Nobody goes there
         | anymore, it's too crowded."
        
       | elashri wrote:
       | It is sad that this is happening to PhysicsForums. It was one of
       | the first websites I used frequently 15 years ago, when I
       | started my physics passion (later career). I was an active
       | reader and contributed on a few occasions, but I still remember
       | some members who I thought I would one day be as smart and
       | knowledgeable as. Over the years, with the move to social media
       | following the Arab Spring, things started to change (as part of
       | the overall transition away from forums being the dominant place
       | for discussions). I stopped visiting around 2018 unless I came
       | through a Google search (later Kagi). I still find the archive
       | useful for answering some questions, and I would disagree with
       | the author of the article that because no one is sharing links
       | on Twitter, no one cares.
        
       | LordShredda wrote:
       | Don't give out your real name online, the server admin might
       | change your posts.
        
       | paulpauper wrote:
       | Dead internet theory is one of those ideas that keeps resurfacing
       | or being revived by articles like this, even though the evidence
       | amounts to little more than confirmation bias. It ignores that
       | there are huge parts of the internet that are not dead. I think
       | it's more that the quality of discourse has fallen, for reasons
       | that are not clear.
        
         | NotYourLawyer wrote:
         | > there are huge parts of the internet that are not dead
         | 
         | Such as?
        
         | datadrivenangel wrote:
         | The article looked at the PhysicsForums and found that 92% of
         | the text is AI or machine generated...
        
           | paulpauper wrote:
           | The internet is way bigger than PhysicsForums. That was my
           | point, but your response seems to confirm what I said about
           | discourse declining though.
        
       | inasio wrote:
       | Talk about burying the lede! Near the bottom of the story, the
       | site owner confirms that it was him who added the backdated AI
       | comments (perhaps it should have been obvious...)
        
         | firesteelrain wrote:
         | I couldn't find it. Was he trying to seed the site?
        
           | pbronez wrote:
           | Experimenting with using AI bots to respond to questions that
           | had been open for a long time with no response.
        
       | econ wrote:
       | I like the assumption that it was a real account originally.
       | 
       | It all seems so unthinkable, but when running a forum or a blog
       | with an active comment section... what would you do or think if
       | your users showed up, browsed around, and didn't say anything
       | for a week? You start out by making topics in your own name and
       | writing helpful replies... until you look like an idiot talking
       | to yourself.
       | 
       | Forums with good traffic and lots of spammy advertising no doubt
       | consider it when visitors leave because nothing new happened.
       | 
       | Once upon a time, on a rather stale forum, I created two
       | similarly named accounts from the same IP and argued with
       | myself. At first I thought the owner or one of the other users
       | would notice, but I quickly learned that no behaviour is weird
       | enough to ever be noticed.
        
       ___________________________________________________________________
       (page generated 2025-01-24 23:01 UTC)