[HN Gopher] Bots, so many bots
___________________________________________________________________
Bots, so many bots
Author : welder
Score : 244 points
Date : 2024-10-01 14:21 UTC (8 hours ago)
(HTM) web link (wakatime.com)
(TXT) w3m dump (wakatime.com)
| bediger4000 wrote:
| Excellent detective work. The trends for bots vs humans are kind
| of disturbing in that humans (as detected) seem to be casting fewer
| votes and leaving fewer comments with time, while bots are doing
| the opposite. Is this another indication that the Dead Internet
| Theory is true?
| immibis wrote:
| Related, a real human on HN is limited to 5 comments per 6
| hours, while bad actors simply need to create hundreds of
| accounts to avoid this limit.
| tivert wrote:
| > Related, a real human on HN is limited to 5 comments per 6
| hours, while bad actors simply need to create hundreds of
| accounts to avoid this limit.
|
| I don't think that's true. I think that's an anti-abuse mode
| some accounts fall into.
| apercu wrote:
| I have always assumed it's sort of an algorithm that
| manages this, some combination of account duration,
| "points", etc..
| tivert wrote:
| Or just recent activity. I think I've seen it happen to
| accounts that were digging in their heels and vigorously
| defending some wrong/incorrect/unpopular point they made
| upthread. Then all of a sudden they're mentioning the post
| limit and editing posts to reply. I'm guessing it's a
| combination of high (recent) post volume and down-votes
| that triggers it.
| unethical_ban wrote:
| I get the rationale for it. My only gripe with it is that
| it says "you're posting too fast, wait a few minutes"
| when I wasn't posting fast, and it blocks account
| activity for hours. I don't like automated messages that
| lie.
| immibis wrote:
| Too fast is more than 5 posts or comments every 6 hours.
| I think it's a rolling window.
| SoftTalker wrote:
| It's part of the philosophy of HN. Arguments that are
| just people repeating the same points back and forth
| aren't interesting or enlightening.
| AnimalMuppet wrote:
| I think there are two things at play here. One is when
| dang rate-limits your account because you're violating
| the site guidelines. It's a less drastic step than having
| all your posts show up as dead.
|
| Second, the further you are "to the right" in a
| discussion - the more parents you have to go through to
| get to a top-level comment - I think you eventually hit
| a delay there, just to stop threads from marching off
| to infinity with two people (who absolutely will not stop
| and will not agree, or even agree to disagree) going on
| forever. I'm not sure what indent level triggers this,
| but I would expect some sort of exponential backoff.
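|
| A speculative sketch of the kind of depth-based delay being
| guessed at here (the trigger depth and growth factor are
| assumptions, not HN's known behaviour):
|
|     BASE_DELAY = 60        # seconds, assumed
|     TRIGGER_DEPTH = 6      # indent level where delays start
|
|     def reply_delay(depth):
|         if depth < TRIGGER_DEPTH:
|             return 0
|         # Delay doubles with each extra level of nesting.
|         return BASE_DELAY * 2 ** (depth - TRIGGER_DEPTH)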
| reaperducer wrote:
| It's even tougher for some accounts, too.
| pixl97 wrote:
| DIT was misnamed... Dead Internet Prophecy would have been a
| better term, something that hadn't happened yet but would
| come true in the future.
| 082349872349872 wrote:
| It was misnamed because (centralised sites != internet).
|
| Lie down with social media dogs, get up with fleas.
| tomthecreator wrote:
| I wonder the same about HN. Has anyone done this kind of
| analysis? Me good LLM
| reaperducer wrote:
| In the seven years I've been on HN, it has gone through
| different phases, each with a noticeable change in the quality
| of the comments.
|
| One big shift came at the beginning of COVID, when everyone
| went work-from-home. Another came when Elon Musk bought X.
| There have been one or two other events I've noticed, but those
| are the ones I can recall now. For a short while, many of the
| comments were from low-grade Russian and Chinese trolls, but
| almost all of those are long gone. I don't know if it was a
| technical change at HN, or a strategy change externally.
|
| I don't know if it's internal or external or just fed by
| internet trends, but while it is resistant, HN is certainly not
| immune from the ills affecting the rest of the internet.
| diggan wrote:
| How are you so sure these users are actually bots? Just
| because someone disagrees with you about Russia or China
| doesn't mean that's evidence of a bot, no matter how stupid
| their opinion is.
| JohnMakin wrote:
| Anyone who's spent any amount of time in this space can
| spot them pretty quickly/easily. They tend to stick to
| certain scripts and themes and almost never deviate.
| yodon wrote:
| Ten years ago those accounts existed, too. Back then we
| called them "people."
| JohnMakin wrote:
| Not at all - ten years ago russian misinformation
| campaigns on twitter and meta platforms were alive and
| well. There was an entire several hundred page report
| about it, even.
| diggan wrote:
| How is that different from humans? Humans have
| themes/areas they care more about, and are more likely to
| discuss with others. It's not hard to imagine there are
| Russians/Chinese people caring deeply about their
| country, just like there are Americans who care deeply
| about US.
| JohnMakin wrote:
| C'mon. When you have an account that is less than a year
| old and has 542 posts, 541 of which are repeating very
| specific kremlin narratives verbatim, it isn't difficult
| to make a guess. Is your contention that they are
| actually difficult to spot, or that they don't exist at
| all? Because both of those views are hilariously false.
| diggan wrote:
| I feel like you're speaking about specific accounts here,
| since it's so obvious and exact. Care to share the HN
| accounts you're thinking about here?
|
| My contention is that people jump to "It's just a bot"
| when they parrot obvious government propaganda they
| disagree with, when the average person is as likely to
| parrot obvious propaganda without involving computers at
| all.
|
| People are just generally stupid by themselves, and
| reducing it to "Robots be robotting" doesn't feel very
| helpful when there is an actual problem to address.
| JohnMakin wrote:
| No, I'm not. And I don't/won't post any specific
| accounts. I'm speaking more generally - and no one is
| jumping to anything here, you're projecting an argument
| that absolutely no one is making. The original claim was
| that russian/chinese bots were on this platform and left.
| I've only been here about 1.5 years, so I don't know the
| validity of that claim, but I have a fair amount of
| experience and research in the last ten years or so on
| the topic of foreign misinformation campaigns on the web,
| so it sounds like a very valid claim, given how
| prolific these campaigns were across the entire web.
|
| It isn't an entirely new concept or unknown, and that
| isn't what is happening here. You're making a lot of
| weird assumptions, especially given the fact that the US
| government wrote several hundred pages about this exact
| topic years ago.
| diggan wrote:
| > and no one is jumping to anything here, you're
| projecting an argument that absolutely no one is making
|
| You literally claimed "when you have accounts with these
| stats, and they say these specific things, it isn't
| difficult to guess..." which ends with "that they're
| bots" I'm guessing. Read around in this very submission
| for more examples of people doing "the jump".
|
| I'm not saying there aren't any "foreign misinformation
| campaigns on the web", so not sure who is projecting
| here.
| dcminter wrote:
| A human aggressively taking a particular line and a bot
| doing so may be equivalent; do we need to differentiate
| there?
| diggan wrote:
| If the comment is off-topic/breaking the
| guidelines/rules, it should be removed, full stop.
|
| The difference is that the bot's comment should be
| removed regardless of whether that particular comment
| breaks the rules or not, as HN specifically is a forum
| for humans. The human's comment, provided it doesn't
| break the rules, shouldn't be, no matter how shitty their
| opinion/view is.
| dcminter wrote:
| If posts make HN a less interesting place to converse I
| don't see why humans should get a pass & I don't see
| anything in the guidelines to support that view either.
| dang wrote:
| In my experience, that's not true. Rather, people are
| much too quick to jump to the conclusion that so-and-so
| is a bot (or a troll, a shill, a foreign agent, etc.),
| when the other's views are outside the range of what
| feels normal to them.
|
| I've written a lot about this dynamic because it's so
| fundamental. Here are some of the longer posts (mini
| essays really):
|
| https://news.ycombinator.com/item?id=39158911 (Jan 2024)
|
| https://news.ycombinator.com/item?id=35932851 (May 2023)
|
| https://news.ycombinator.com/item?id=27398725 (June 2021)
|
| https://news.ycombinator.com/item?id=23308098 (May 2020)
|
| Since HN has many users with different backgrounds from
| all over the world, it has a lot of user pairs (A, B)
| where A's views don't seem normal to B and vice versa.
| This is why we have the following rule, which has held up
| well over the years:
|
| " _Please don 't post insinuations about astroturfing,
| shilling, bots, brigading, foreign agents and the like.
| It degrades discussion and is usually mistaken. If you're
| worried about abuse, email hn@ycombinator.com and we'll
| look at the data._" -
| https://news.ycombinator.com/newsguidelines.html
| JohnMakin wrote:
| In my research and experience, it is. I'm making no
| comment about bots/shills on this site, either, I'm
| responding to the plausibility of the original comment.
| diggan wrote:
| > I'm making no comment about bots/shills on this site,
| either, I'm responding to the plausibility of the
| original comment.
|
| The original comment:
|
| > I wonder the same about HN. Has anyone done this kind
| of analysis? Me good LLM
|
| Slightly disingenuous to argue from the standpoint of
| "I'm talking about the whole internet" when this thread
| is specifically about HN. But whatever floats your boat.
| imiric wrote:
| Hey, I would appreciate if you could address some of my
| questions here[1].
|
| I do think it's unrealistic to believe that there is
| absolutely zero bot activity, so at least some of those
| accusations might be true.
|
| [1]: https://news.ycombinator.com/item?id=41711060
| dang wrote:
| The claim is not "zero bot activity" - how would one even
| begin to support that?
|
| Rather, the claim is that accusations about other users
| being bots/shills/etc. overwhelmingly turn out, when
| investigated, to have zero evidence in favor of them. And
| I do mean overwhelmingly. That is perhaps the single most
| consistent phenomenon we've observed on HN, and it has
| strong implications.
|
| If you want further explanation of how we approach these
| issues, the links in my GP comment
| (https://news.ycombinator.com/item?id=41710142) go into
| it in depth. If you read those and still have a question
| that isn't answered there, I can take a crack at it.
| Since you ask (in your other comment) whether HN has any
| protections against this kind of thing at all, I think
| you should look at those past explanations--for example
| the first paragraph of
| https://news.ycombinator.com/item?id=27398725.
| imiric wrote:
| Alright, thanks. I read your explanations and they do
| answer some of my questions.
|
| I'm still surprised that the percentage of this activity
| here is so low, below 0.1%, as you say. Given that the
| modern internet is flooded by bots--over 60% in the case
| of ProductHunt as estimated by the article, and a third
| of global internet traffic[1]--how do you a) know that
| you're detecting all of them accurately (given that it
| seems like a manual process that takes a lot of effort),
| and b) explain that it's so low here compared to most
| other places?
|
| [1]: https://investors.fastly.com/news/news-
| details/2024/New-Fast...
| intended wrote:
| From what I understand - users accuse others of being
| shills and bots, and are largely wrong.
|
| Dang and team use other tools to remove the actual bots
| that they can find evidence for.
|
| So yes, there are bots, but human reports tend to be
| more about disagreements than actual bot identification.
| dang wrote:
| intended's reply is correct.
|
| Most of the bot activity we know about on HN has to do
| with voting rings and things like that, people trying to
| promote their commercial content. To the extent that they
| post things, it's mostly low-quality stuff that either
| gets killed by software, flagged by users, or eventually
| reported to us.
|
| When it comes to political, ideological, nationalistic
| arguments and the like, that's where we see little (if
| any) evidence. Those are the areas where users are most
| likely to accuse each other of not being human, or
| posting in bad faith, etc., so that's what I've written
| about in the posts that I linked to.
|
| There's still always the possibility that some bad actors
| are running campaigns too sophisticated for us to detect
| and crack down on. I call this the Sufficiently Smart
| Manipulator problem and you can find past takes on it
| here: https://hn.algolia.com/?dateRange=all&page=0&prefix
| =true&que....
|
| I can't say whether or not this exists (that follows by
| definition--"sufficiently" means smart enough to evade
| detection). All I can tell you is that in specific cases
| people ask us to look into, there are usually obvious
| reasons not to believe this interpretation. For example
| would a sufficiently smart manipulator be smart enough to
| have been posting about Julia macros back in 2017, or the
| equivalent? You can always make a case for "yes" but
| those cases end up having to stretch pretty thin.
| imiric wrote:
| Thank you. I appreciate your positive outlook on these
| things. It helps counteract my negative one. :)
|
| For example, when you say "The answer to the Sufficiently
| Smart Manipulator is the Sufficiently Healthy Community",
| that sounds reasonable, but I see a few issues with it.
|
| 1. These individuals are undetectable by definition. They
| can infiltrate communities and direct conversations and
| opinions without raising any alarms. Sometimes these are
| long-term operations that take years, and involve
| building trust and relationships. For all intents and
| purposes, they may seem like just another member of the
| community, which they partly are. But they have an agenda
| that masquerades as strong opinions, and are protected by
| tolerance and inclusivity, i.e. the paradox of tolerance.
|
| 2. Because they're difficult to detect, they can easily
| overrun the community. What happens when they're a
| substantial percentage of it? The line between fact and
| fiction becomes blurry, and it's not possible to
| counteract bad arguments with better ones, simply because
| they become a matter of opinion. Ultimately those who
| shout harder, in larger numbers, and are in a better
| position to, get heard the most.
|
| These are not some conspiracy theories. Psyops and
| propaganda are very real and happen all around us in ways
| we often can't detect. We can only see the effects like
| increased polarization and confusion, but are not able to
| trace these back to the source.
|
| Moreover, with the recent advent of AI, how long until
| these operations are fully autonomous? What if they
| already are? Bots can be deployed by the thousands, and
| their capabilities improve every day.
|
| So I'm not sure that a Sufficiently Healthy Community
| alone has a chance of counteracting this. I don't have
| the answer either, but can't help but see this trend in
| most online communities. Can we do a better job at
| detection? What does that even look like?
| iterateoften wrote:
| Hackernews isn't the place to bring that up regardless of
| your opinion. So out of context political posts should be
| viewed with at least some scrutiny.
| diggan wrote:
| This I agree with, off-topic is off-topic and should be
| removed/flagged. But I'm guessing we're not talking about
| simple rule/guidelines-breaking here.
| imiric wrote:
| That's true, but maybe there should be a meta section of
| the site where these topics can be openly discussed?
|
| While I appreciate dang's perspective[1], and agree that
| most of these are baseless accusations, I also think that
| it's inevitable that a site with seemingly zero bot-
| mitigation techniques, where accounts and comments can be
| easily automated, has some or, I would wager _a lot_, of
| bot activity.
|
| I would definitely appreciate some transparency here.
| E.g. are there any automated or manual bot detection and
| prevention techniques in place? If so, can these accounts
| and their comments be flagged as such?
|
| [1]: https://news.ycombinator.com/item?id=41710142
| dang wrote:
| We're not going to have a meta section for reasons I've
| explained in the past:
|
| https://news.ycombinator.com/item?id=22649383 (March
| 2020)
|
| https://news.ycombinator.com/item?id=24902628 (Oct 2020)
|
| I've responded to your other point here:
| https://news.ycombinator.com/item?id=41713361
| intended wrote:
| There are a few horsemen of the online community
| apocalypse,
|
| 1) Politics 2) Religion 3) Meta
|
| Fundamentally - Productive discussion is problem solving.
| A high signal to noise ratio community is almost always
| boring, see r/Badeconomics for example.
|
| Politics and religion are low-barrier-to-entry topics,
| and always result in flame wars that then proceed to drag
| all other behavior down.
|
| Meta is similar: To have a high signal community, with a
| large user base, you filter out thousands of accounts and
| comments, regularly. Meta spaces inevitably become the
| gathering point for these accounts and users, and their
| sheer volume ends up making public refutations and
| evidence sharing impossible.
|
| As a result, meta becomes impossible to engage with at
| the level it was envisioned.
|
| In my experience, all meta areas become staging grounds
| to target or harass moderation. HN is unique in the level
| of communication from Dang.
| metalliqaz wrote:
| I don't know about anyone else, but to me a lot of bot
| traffic is very obvious. I don't have the expertise to be
| able to describe the feeling that low quality bot text
| gives me, but it sticks out like a sore thumb. It's too
| verbose, not specific enough to the discussion, and so on.
|
| I'm sure there are real pros who sneak automated propaganda
| in front of my eyes without my notice, but then again I
| probably just think they are human trolls.
| diggan wrote:
| > but it sticks out like a sore thumb
|
| Could you give some examples of HN comments that "sticks
| out like a sore thumb"?
|
| > It's too verbose, not specific enough to the
| discussion, and so on.
|
| That to me just sounds like the average person who feels
| deeply about something, but isn't used to productive
| arguments/debates. I come across this frequently on HN,
| Twitter and everywhere else, including real life where I
| know for a fact the person I'm speaking to is not a robot
| (I'm 99% sure at least).
| metalliqaz wrote:
| sorry, I didn't mean to give the impression that I was
| talking about HN comments specifically. I was talking
| about spotting bot content out on the open Internet.
|
| as for verbosity, I don't mean simply using a lot of
| text, but rather using a lot of superfluous words and
| sentences.
|
| people tend not to write in comments the way they would
| in an article.
| simion314 wrote:
| [flagged]
| diggan wrote:
| > If the account is new and promoting Ruzzian narrative
| by denying the reality I can be 99% sure it is a paid
| person copy pasting arguments from a KGB manual, 1% is a
| home sovieticus with some free time.
|
| I'm not as certain as you about that. Last time the US
| had a presidential election, it seemed like almost half
| the country was either absolutely bananas and out of
| their minds, or half the country were robots.
|
| But reality turns out to be less exciting. People are
| just dumb, and spew whatever propaganda they happen to
| come across "at the right time". Same is true for
| Russians as it is for Americans.
| consteval wrote:
| I think it's mostly a timing thing. It's one thing for
| someone to say something dumb, but it's another for
| someone to say it immediately on a new account. That, to
| me, screams bot behavior. Also if they have a laser
| focus. Like if I open a twitter account and every single
| tweet is some closely related propaganda point.
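|
| A toy illustration of those heuristics (account age, time
| to first post, topical concentration); the thresholds are
| made up and no real detector is this simple:
|
|     def botlike_score(age_days, mins_to_first_post,
|                       share_on_one_topic):
|         score = 0
|         if age_days < 7:
|             score += 1
|         if mins_to_first_post < 5:
|             score += 1
|         if share_on_one_topic > 0.9:
|             score += 1
|         return score    # 3 = every red flag present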
| simion314 wrote:
| [flagged]
| dang wrote:
| Nationalistic flamewar will get you banned here,
| regardless* of which country you have a problem with. No
| more of this, please.
|
| https://news.ycombinator.com/newsguidelines.html
|
| * https://hn.algolia.com/?dateRange=all&page=0&prefix=tru
| e&que...
| simion314 wrote:
| Will I be allowed to just provide some links instead and
| let the community inform themselves, if I am not allowed
| to share my observations? Or are links to real news
| events also not allowed?
| reaperducer wrote:
| _How are you so sure these users are actually bots?_
|
| I stated nothing about bots. Re-read what I wrote.
| diggan wrote:
| Bots, trolls, foreign agents, a dear child has many
| names. Point is the same, name calling without evidence
| does nothing to solve the problem.
| SoftTalker wrote:
| HN has mechanisms to detect upvotes and comments that seem to
| be promoting a product or coordinated in some other way. I'm
| not sure what they do behind the scenes or how effective it
| is but it's something. Also other readers downvote bot spam.
| Obvious bot/LLM-generated comments seem to be "dead" quite
| often, as are posts that are clearly just content/ad farm
| links or product promotions or way off-topic.
| jperras wrote:
| 16 year HN vet here.
|
| This place has both changed a _lot_ and also very little,
| depending on which axis you want to analyze. One thing that
| has been pretty consistent, however, is the rather minimal
| amount of trolls/bots. There are some surges from time to
| time, but they really don't last that long.
| datadrivenangel wrote:
| HN remains good primarily because of the moderators. Thanks
| Dang!
| wickedsight wrote:
| Agreed. There's some Dang good work being done here. Hope he
| gets rewarded well for it.
| bediger4000 wrote:
| Wait you mean absolute freedom of speech, countering bad
| speech with more good speech doesn't work?
| imiric wrote:
| The mods certainly do a great job of keeping things running
| smoothly here, but I wouldn't say it's _primarily_ because of
| them.
|
| I think it's primarily due to the self-moderation of the
| community itself, who flag and downvote posts, follow the
| community guidelines, and are still overall relatively civil
| compared to other places.
|
| That said, any community can be overrun by an Eternal
| September event, at which point no moderation or community
| guidelines can save it. Some veteran members would argue that
| it's already happened here. I would say we've just been lucky
| so far that it hasn't. The brutalist UI likely plays a part
| in that. :)
| conductr wrote:
| I think it has happened actually. Early on HN was almost
| purely entrepreneurial although through a tech POV. These
| days, it's much more general or broadly tech-related.
| From the discussion, I gather most people here are tech
| employees and not necessarily entrepreneurs.
|
| It obviously has not gone to hell like the bot-ridden
| examples, but it's drastically different IMO.
| ffsm8 wrote:
| The bots aren't completely dominating here _yet_,
| because the price/benefit isn't really there yet.
|
| Twitter is a source of news for some journalists of
| varying quality, which gives them a motivation to
| influence.
|
| On HN, who are you going to convince and what for?
|
| The only thing that would come to mind would be to
| convince venture capital to invest in your startup, but
| you'd have to keep it up while convincing the owners of
| the platform that you're not faking it - which is gonna
| be extra hard as they have all usage data available,
| making it significantly harder to fly under the radar.
|
| Honestly, I just don't see the cost/benefit of spamming
| HN to change until it gets a lot cheaper so that mentally
| ill ppl get it into their head that they want to "win" a
| discussion by drowning out everything else
| pixl97 wrote:
| >On HN, who are you going to convince and what for?
|
| Eh, following individuals and giving them targeted
| attacks may well be worth it. There are plenty of tech
| purchasing managers here that are responsible for
| hundreds of thousands/millions in product buys. If you
| can follow their accounts and catch posts where they are
| interested in some particular technology it's possible
| you could bid out a reply to it and give a favorable
| 'native review' for some particular product.
| imiric wrote:
| > On HN, who are you going to convince and what for?
|
| There are plenty of things bots would be useful for here,
| just as they are on any discussion forum. Mainly,
| whenever someone wants to steer the discussion away from
| or towards a certain topic. This could be useful to
| protect against bad PR, to silence or censor certain
| topics from the outside by muddying up the discussion, or
| to influence the general mindset of the community. Many
| people trust comments that seem to come from an expert,
| so pretending to be one, or hijacking the account of one,
| gets your point across much more easily.
|
| I wouldn't be so sure that bots aren't already dominating
| here. It's just that it's frowned upon to discuss such
| things in the comments section, and we don't really have
| a way of verifying it in any case.
| 0cf8612b2e1e wrote:
| Plenty of early stage startups who would love to
| discreetly shill their tool. Even better, bad-mouth the
| competition.
| Loughla wrote:
| This feels like a ChatGPT response.
|
| Restatement of op's point. Small reason of agreement based
| on widely public information. Last paragraph indicating the
| future cannot be predicted and couching the entire thing in
| terms of a guess or self-contradiction.
|
| This is how chatgpt responds to generic asks about things.
| aio2 wrote:
| daddy dang to the rescue
| yodon wrote:
| Real question for those convinced HN is awash in bots: What
| actual value do you believe there is, other than curiosity,
| that is driving people to build the HN spam bots you think
| you're seeing?
|
| Karma doesn't help your posts rank higher.
|
| There is no concept of "friends" or "network."
|
| Karma doesn't bring any other value to your account.
|
| My personal read is it's just a small steady influx of clueless
| folks coming over from Reddit and thinking what works there
| will work here, but I'm interested in your thoughts.
| hashmap wrote:
| Karma is a number that can go up. Numbers going up is a
| supernormal stimulus for humans.
| criddell wrote:
| They should get rid of the number and change it to be only
| "low" or "high".
| diggan wrote:
| Get rid of karma + get rid of ranking comments at all.
| Just render them in a tree-format with oldest/newest
| first, everyone has equal footing :)
| vunderba wrote:
| True, but at least for Hacker News you have to click
| through to the member profile to see how many banana
| stickers and external validation they've accrued.
| jppope wrote:
| In theory you could Show HN and have your bots upvote it...
| that would indeed be good marketing.
| diggan wrote:
| Vote-rings are trivial to detect though, automated or
| manual. I'd be surprised if HN hasn't figured out ways
| to counter it during the time it's been online.
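|
| One crude way such detection could look, e.g. flagging
| voters whose upvotes concentrate on a single submitter
| (real systems presumably also use IPs, timing and shared
| history; the thresholds here are invented):
|
|     from collections import Counter, defaultdict
|
|     def suspicious_pairs(votes, min_votes=20, threshold=0.8):
|         # votes: iterable of (voter, submitter) upvote events
|         per_voter = defaultdict(Counter)
|         for voter, submitter in votes:
|             per_voter[voter][submitter] += 1
|         flagged = []
|         for voter, counts in per_voter.items():
|             total = sum(counts.values())
|             top_sub, top = counts.most_common(1)[0]
|             if total >= min_votes and top / total >= threshold:
|                 flagged.append((voter, top_sub, top / total))
|         return flagged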
| BobbyJo wrote:
| Social media accounts with high engagement, and a long life,
| have monetary value. This is true of most social media
| platforms.
|
| HN generally does a good job of minimizing the value of
| accounts, thus discouraging these kinds of games, but I
| imagine it still happens.
| Narhem wrote:
| The type of engagement and audience arguably matters more.
| zoeysmithe wrote:
| To promote political views and startups and scams, and other
| things that benefit the bot operators.
|
| This is a small but highly influential forum and absolutely
| is gamed. Game theory dictates it will be.
| lompad wrote:
| Hype. HN is _the_ platform to create hype among the early
| adopter, super-spreader/tech exec kind of people and because
| of that has an absolutely massive indirect reach.
|
| Just look how often PR reps appear here to reply to
| accusations - they wouldn't bother at all if this was just
| some random platform like reddit.
| DowagerDave wrote:
| maybe 10 years ago, but this is not the case today.
| panarky wrote:
| I'm not convinced HN is awash in bots, but there are
| certainly some inauthentic accounts here.
|
| What if you want to change public opinion about $evilcorp
| or $evilleader or $evilpolicy? You could explain to people
| who love contrarian narratives how $evilcorp, $evilleader
| and $evilpolicy are actually not as bad as mainstreamers
| believe, and how their competitors and alternatives are
| actually more evil than most people understand.
|
| HN is an inexpensive and mostly frictionless way to run an
| inception campaign on people who are generally better
| connected and respected than the typical blue check on X.
|
| Their objective probably isn't to accumulate karma because
| karma is mostly worthless.
|
| They really only need enough karma to flag posts contrary
| to their interests. Even if the flagged posts aren't
| flagged to death, it doesn't take much to downrank them off
| the front page.
| duckmysick wrote:
| I've hardly seen any proselytizers here from Oracle,
| Salesforce, or IBM, and they are doing just fine. Ditto for
| Amazon/Google/Microsoft/Facebook - they used to be
| represented more here, but their exodus hardly made any
| difference.
|
| Gartner has more influence on tech than Hacker News.
| bediger4000 wrote:
| If you turn on show dead, you'll see that some accounts just
| post spam or weird BS that ends up instantly dead. I think
| Evon LaTrail is gone now, but for years posted one or more
| links to his/her/their YouTube videos about personal
| sanitation and abortion per day.
|
| There is a stream of clueless folks, but there are also
| hardcore psychos like LaTrail. The Svelte magazine spammer
| fits in this category.
| vunderba wrote:
| I've definitely seen comments that feel very authentically
| posted (not LLM generated) but are a weird mixture of
| vitriol and spite, and when you list other comments from
| that user it's 90% marked dead.
|
| I often wonder if the user is even aware that they're just
| screaming into the void.
| cryptonector wrote:
| > What actual value do you believe there is, other than
| curiosity, that is driving people to build the HN spam bots
| you think you're seeing?
|
| Testing.
|
| And as siblings say, karma is more valuable than you might
| think. If you can herd a bunch of karma via botting, you can
| then [maybe] use that karma to influence all sorts of things.
| yodon wrote:
| > If you can herd a bunch of karma via botting, you can
| then [maybe] use that karma to influence all sorts of
| things.
|
| How? Karma on HN is not like Karma elsewhere. The idea of
| [maybe] monetizing HN Karma reads like the old southpark
| underpants gnome meme[0].
|
| [0]https://imgflip.com/memetemplate/49245705/Underpants-
| Gnomes
| cryptonector wrote:
| Assuming karma helps you get posts on the front page
| (does it?) then karma helps spam.
|
| At any rate, HN attracts trolls. I'm sure it will also
| attract trolls who use AI to increase their rate of
| trolling.
| yodon wrote:
| >Assuming karma helps you get posts on the front page
| (does it?)
|
| No, karma does not help you get posts on the front page.
| vunderba wrote:
| I'd like to think we have enough of a proactive community to
| mitigate this issue for the most part - just set your profile
| back to Show Dead / etc. if you want to see the amount of chaff
| that gets discarded.
|
| Also, wasn't the initial goal of lobste.rs to be a sort of even
| more "mensa card carrying members only" exclusive version of
| Hacker News?
| dom96 wrote:
| I expect that nowadays many online are speaking with GenAI
| without even realising it.
|
| It's already been bad enough that you may be unknowingly
| conversing with the same person pretending to be someone else via
| multiple accounts. But GenAI is crossing the line in making it
| really cheap for narratives to be influenced by just building
| bots. This is a problem for all social networks and I think the
| only way forward is to enforce validation of humanity.
|
| I'm currently building a social network that only allows
| upvote/downvote and comments from real humans.
| VyseofArcadia wrote:
| Good bot (/s)
|
| I don't know that "real humans" is good enough. You can do
| plenty of manipulation on social networks by hiring lots of
| real humans to upvote/downvote/post what you want. It's not
| even expensive.
| dom96 wrote:
| Yeah. But the cost is significantly higher than ramping up
| GenAI to make you thousands of accounts.
|
| There is no foolproof solution to this. But perfect is the
| enemy of the good. Right now social media is pretty far from
| being "good".
| pixl97 wrote:
| >I'm currently building a social network that only allows
| upvote/downvote and comments from real humans.
|
| And how exactly do you do that? At the end of the day there is
| no such thing as a comment/vote from a real human, it's all
| mediated by digital devices. At the point the signal is
| digitized a bot can take over the process.
| dom96 wrote:
| By validating each account with a passport and ensuring that
| only one account per passport is allowed.
| lobsterthief wrote:
| What about people who buy stolen passports on the dark web?
| Or passport details that get leaked in data breaches
| unethical_ban wrote:
| We need a German to chime in with whatever word describes
| this scenario, when someone suggests an action is not
| worth doing because of corner cases or inability to
| perfectly execute a process.
|
| In this hypothetical, let's say we'd tackle the dark web
| passport market issue when we get there.
| dom96 wrote:
| hehe yeah, it's funny how many are focusing on the small
| edge case that makes a certain solution not 100% perfect.
|
| There is also another issue: people can have more than
| one passport if they're dual citizens. But you know
| what... I think that's fine.
| mike_hearn wrote:
| Passports have digitally signed certificates in them,
| readable by any device with an NFC radio. It's easy
| enough to extract that data with a mobile app, and now
| you have an unforgeable file that can be hashed. Of
| course whether users will bother to sign up for something
| like a social network if they have to go rummage around
| and find a passport, install a mobile app etc, I don't
| know. But it's technically possible and I've proposed
| such a scheme years ago. For bonus points do the hashing
| inside a secure enclave so the ePassport data is never
| uploaded to a remote server in the clear.
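|
| A minimal sketch of the client-side step described above:
| hash the chip's data groups in the app so only a digest
| ever leaves the device. Reading the chip (BAC/PACE over
| NFC) is assumed to be done by a separate library; dg1_bytes
| and sod_bytes stand in for its output:
|
|     import hashlib
|
|     def chip_digest(dg1_bytes, sod_bytes):
|         h = hashlib.sha256()
|         h.update(dg1_bytes)   # DG1: the MRZ data group
|         h.update(sod_bytes)   # signed Document Security Object
|         return h.hexdigest()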
| metalliqaz wrote:
| who is going to upload their passport to use social media?
| dom96 wrote:
| People already upload their passport for lots of things,
| why not social media?
| metalliqaz wrote:
| > People already upload their passport for lots of things
|
| I sure don't.
|
| > why not social media?
|
| privacy?
| dom96 wrote:
| For me, sacrificing some privacy is worth it to be in a
| community with users who I know are real people. If
| that's not something that is important to you then that's
| fine.
|
| In my case, right now it's very easy for you to figure
| out my real name by just googling my nickname.
| Registering on a website like the one I am implementing
| won't sacrifice much more of my privacy.
| kjkjadksj wrote:
| Why would I ever give my passport to your website or anyone
| elses?
| dom96 wrote:
| Because it enables you to be a part of a social media
| that is guaranteed to be majority human-centred. Why
| wouldn't you give your passport to a website? Don't you
| already do so for crypto or other banking?
| SoftTalker wrote:
| Aside from the practical and technical problems you're
| greatly limiting your audience. Most people who don't
| travel internationally don't have a passport. In Europe
| this might be a smaller number but in the USA and Canada I
| would guess this is a majority of people. Non-citizens
| won't have a passport. Most young adults will not have one.
| Many older people will have let theirs expire.
| dom96 wrote:
| Yeah, that's the challenge. My bet is that there are
| enough people out there with passports to make an
| interesting social network.
|
| Of course, getting someone to share their passport will
| be another filter. But I hope that I can convince people
| that the benefits are worth it (and that I will be able
| to keep their data safe, by only storing hashes of
| everything).
| pixl97 wrote:
| Ok, so you get some critical amount of 'humans' to share
| "a" passport. Now you've built an expensive and high
| quality lead finder for scammers/spammers. You've
| increased the value floor for people submitting fake
| passports. Also, how are you paying for verification of
| the passports? VC money to get from the loss phase to
| making enough to support itself on ads? How are you
| dealing with government data requests, especially in the
| case where said data can be attributed to a real human?
|
| Maybe I'm wrong, but just a social network of 'real
| people' doesn't seem like enough in itself. What is going
| to bring people there with the restrictions and potential
| risks you're creating.
| dom96 wrote:
| You could very well be right. I'm willing to give it a
| shot and see.
|
| All I can say is that I personally see huge value in a
| social network for real people. Personally I am sick of
| arguing with what are likely legions of duplicate
| accounts/bots/russian trolls online. I want some
| reassurance that I'm not wasting my time and am actually
| reaching real humans in my interactions online.
|
| Success to me is 1000 MAU. There are companies out there
| that do passport verification for a reasonable fee with a
| fair amount of free verifications to start with (which
| will handle 1000 MAU just fine). If the number of users
| wishing to take part is significantly higher then I will
| explore either ads or charging a small fee during
| registration.
|
| I'm still very far from needing to cross that bridge
| though. Same for some of the other questions you've
| raised. I'd have to do a lot more research to come to a
| solid stance of what to do when government data requests
| come in. But I would guess that there isn't much choice
| but to abide by the request. If you want true anonymity
| from the government then this place will not be for you
| (but then I'd say not many social networks are for you in
| that case)
| ddtaylor wrote:
| I see a lot of value in a network where I can be
| confident I am talking to real people.
|
| As a user I can't do anything related to the passport
| stuff and I know many people who likewise wouldn't be
| interested in doing that, because we live in the states.
| A more "normal" approach here would be to use one of the
| government ID verification systems for your state ID.
| Most of us are willing to expose that information, since
| it's what you show when you go to the store to prove your
| age/identity.
| cynicalpeace wrote:
| No passport, but perhaps a face picture that is also
| encoded with an on-device verifiable token.
|
| I think there is actually a use case for blockchain (don't
| pile on!) for this. I have a vague idea of a unique token
| that goes on every camera and can be verified from the
| blockchain. If a picture has the token you know it's
| real... like I said, it's a vague idea but I think it's
| coming.
| dom96 wrote:
| No need for blockchain. All you need is this:
| https://contentauthenticity.org/.
|
| The problem with this is that it's still easy to forge.
|
| I'll certainly consider playing with ways to identify
| human uniqueness that don't require passports. But
| passports are the most obvious route to doing so.
| cynicalpeace wrote:
| Seems like it's sorta what I'm talking about? Hard to
| judge because they need to work on their communication
| skills. It reads like techno-corpo-babble and I'm a
| professional software engineer.
| n_ary wrote:
| How do I know that you will handle my passport data with
| care? Banks I can trust (despite numerous leaks); you, as
| a random social media or online service with zero
| regulation, I won't. Plus this opens up immense ways to
| sue you for collecting unnecessary data and personal
| information, unless you are massive and have an army of
| lawyers or have a KYC requirement.
| dom96 wrote:
| I plan to outline exactly what data I store. I don't plan
| to store raw passport number/name details, rather a hash
| so I can verify the same passport isn't used twice.
|
| So even if the DB leaks no one (except the government)
| will be able to tie your real life identity to the
| account.
| gregw134 wrote:
| Why not require a non-voip phone number instead of
| passport?
| dom96 wrote:
| SIM cards are as cheap as £1 where I live. £1 per
| account is nothing and I wouldn't be surprised if there
| are ways to get unique numbers for even less.
| wavemode wrote:
| I agree with the other commenters that you'll face a lot of
| challenges, but personally I hope you succeed. Sounds like an
| idea worth pursuing.
| dom96 wrote:
| Thank you :)
| tomalaci wrote:
| This is pretty much progress on dead internet theory. The only
| thing I think can stop this and ensure genuine interaction
| is a strong, trusted identity that has consequences if
| abused/misused.
|
| This trusted identity should be something governments need to
| implement. So far big tech companies still haven't fixed it and I
| question if it is in their interests to fix it. For example, what
| happens if Google cracks down hard on this and suddenly 60-80% of
| YouTube traffic (or even ad-traffic) evaporates because it was
| done by bots? It would wipe out their revenue.
| joseda-hg wrote:
| This still breaks some parts of the internet, where you
| wouldn't want to associate your identity with your thoughts or
| image
| cryptonector wrote:
| Think attribute certificates.
| AnthonyMouse wrote:
| There are only two real ways to implement that. One is the
| "attribute certificate" is still tied to your full identity
| and then people won't be willing to associate them. The
| other is that the attribute certificates are fully generic
| (e.g. everyone over 18 gets the same one) and then someone
| will post it on the internet and, because there is no way
| to tie it back to a specific person, there is no way to
| stop them and it makes the system pointless.
| brookst wrote:
| > It would wipe out their revenue.
|
| Disagree. YouTube's revenue comes from large advertisers who
| can measure real impact of ads. If you wiped out all of the
| bots, the actual user actions ("sign up" / "buy") would remain
| about the same. Advertisers will happily pay the same amount of
| money to get 20% of the traffic and 100% of the sales. In fact,
| they'd likely pay more because then they could reduce
| investment in detecting bots.
|
| Bots don't generate revenue, and the marketplace is somewhat
| efficient.
| Veuxdo wrote:
| > In fact, they'd likely pay more because then they could
| reduce investment in detecting bots.
|
| A lot more. Preventing bots from eating up your entire
| digital advertising budget takes a lot of time and money.
| speed_spread wrote:
| Making advertising more efficient would also open up
| opportunities for smaller players. Right now only the big
| guys have the chops to carpet-bomb the market regardless of
| bots. Noise benefits those who can afford to stand above
| it.
| netcan wrote:
| Yes... but maybe also no. Well measured advertising budgets
| are definitely part of the game. But so are poorly measured
| campaigns. Type B often cargo cult A. It's far from a perfect
| market.
|
| In any case, Adwords is at this point a very established
| product... very much an incumbent. Disruption, generally,
| does not play in their favor by default.
| mbesto wrote:
| > YouTube's revenue comes from large advertisers who can
| measure real impact of ads.
|
| Not necessarily. First, attribution is not a solved problem.
| Second, not all advertisement spend is on direct
| merchandising, but rather for branding/positioning where
| "sign up" / "buy" metrics are meaningless to them.
| kibwen wrote:
| What on Earth has given so many people in this thread the
| confidence to assert that marketing departments actually have
| any real way to gauge the effectiveness of a given ad
| campaign? It's effectively impossible to adjust for all the
| confounding variables in such a chaotic system, so ad spend
| is instead determined by internal politicking,
| pseudoscientific voodoo, and the deftness of the marketing
| department's ability to kiss executive ass. This ain't
| science, it's perversely-incentivized emotion.
| pilgrim0 wrote:
| I think along the same lines. Digital identity is the hardest
| problem we've been procrastinating on solving since forever,
| because it has the most controversial trade offs, which no two
| persons can agree on. Despite the well known risks, it's
| something only a State can do.
| ethbr1 wrote:
| There was a great post on HN about this problem, about a year
| ago.
|
| Think this was it:
| https://news.ycombinator.com/item?id=37092319
|
| Interesting paper and exploration of the "pick two" nature of
| the problem.
| solumunus wrote:
| > For example, what happens if Google cracks down hard on this
| and suddenly 60-80% of YouTube traffic (or even ad-traffic)
| evaporates because it was done by bots? It would wipe out their
| revenue.
|
| Nonsense. Advertisers measure results. CPM rates would simply
| increase to match the increased value of a click.
| romanovcode wrote:
| > This trusted identity should be something governments need to
| implement.
|
| I'd rather live with a dead internet than this oppressive trash.
| dom96 wrote:
| What do governments need to implement? They already give you a
| passport which can be used as a digital ID.
| JimDabell wrote:
| Services need the ability to obtain an identifier that:
|
| - Belongs to exactly one real person.
|
| - That a person cannot own more than one of.
|
| - That is unique per-service.
|
| - That cannot be tied to a real-world identity.
|
| - That can be used by the person to optionally disclose
| attributes like whether they are an adult or not.
|
| Services generally don't care about knowing your exact
| identity, but being able to ban a person without having
| them simply register a new account, and being able to stop
| people from registering thousands of accounts, would go a
| long way towards wiping out inauthentic and abusive
| behaviour.
|
| I think DID is one effort to solve this problem, but I
| haven't looked into it enough to know whether it's any good:
|
| https://www.w3.org/TR/did-core/
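|
| One common construction for an identifier with those
| properties (a sketch, not DID itself): the identity
| provider keeps one secret per verified person and derives
| a different opaque ID for each service, so services can't
| correlate a person across sites but each person maps to
| exactly one ID per service:
|
|     import hmac, hashlib
|
|     def service_scoped_id(person_secret, service):
|         return hmac.new(person_secret, service.encode(),
|                         hashlib.sha256).hexdigest()
|
|     alice = b"secret-held-by-the-identity-provider"
|     service_scoped_id(alice, "news.ycombinator.com")
|     service_scoped_id(alice, "example.org")  # unlinkable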
| dom96 wrote:
| Agreed that offering an identifier like this would be
| ideal. We should be fighting for this. But in the meantime,
| using a passport ticks most of the boxes in your list.
|
| I'm currently working on a social network that utilises
| passports to ensure account uniqueness. I'm aware that
| folks can have multiple passports, but it will be good
| enough to ensure that abuse is minimal and real humans are
| behind the accounts.
| JimDabell wrote:
| The main problem with this is that a hell of a lot of
| people don't want to give sensitive personal documents to
| social media platforms.
| dom96 wrote:
| Yeah. That will be the challenge.
|
| I hope that enough are willing to if the benefits and
| security are explained plainly enough. For example, I
| don't intend to store any passport info, just hashes. So
| there should be no risk, even if the DB leaks.
| fwip wrote:
| First, not everyone has passports - there are roughly
| half as many US passports as Americans.
|
| Second, how much of the passport information do you hash
| that it's not reversible? If you know some facts about
| your target (imagine a public figure), could an attacker
| feasibly enumerate the remaining info to check to see if
| their passport was registered in your database? For
| example, there are only 2.6 billion possible American
| passport numbers, so if you knew the rest of Taylor
| Swift's info, you could conceivably use brute-force to
| see if she's in your database. As a side effect, you'd
| now know her passport number, as well.
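|
| Back-of-the-envelope on that attack: 2.6 billion SHA-256
| hashes at ~10 million/s (well within reach of one modern
| CPU, never mind a GPU) is a few minutes' work, so an
| unkeyed hash of known fields plus the passport number
| offers little protection. A sketch (the field layout is
| invented for illustration):
|
|     import hashlib
|
|     def find_number(target_hash, known_fields):
|         # 9-digit numeric space shown; the real space is
|         # larger but still trivially enumerable offline.
|         for n in range(10**9):
|             guess = f"{known_fields}|{n:09d}".encode()
|             if hashlib.sha256(guess).hexdigest() == target_hash:
|                 return n
|         return None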
| AnthonyMouse wrote:
| > Second, how much of the passport information do you
| hash that it's not reversible?
|
| That doesn't even matter. You could hash the whole
| passport and the passport could contain a UUID and the
| hash db would still be usable to correlate identities
| with accounts, because the attacker could separately have
| the victim's complete passport info. Which is
| increasingly likely the more sites try to use passports
| like this, because some won't hash them or will get
| breached sufficiently that the attackers can capture
| passport info before it gets hashed and then there will
| be public databases with everybody's complete passport
| info.
| mrybczyn wrote:
| A passport might be a bit onerous - it's an expensive and
| painful process and many don't need one.
|
| But it's a hilarious sign of worldwide government
| incompetence that social insurance or other citizen
| identification cards are not standard, free, and uniquely
| identifiable and usable for online ID purposes (presumably
| via some sort of verification service / PGP).
|
| Government = people and laws. Government cannot even reliably
| ID people online. You had one job...
| secabeen wrote:
| In the United States, the lack of citizen identification
| cards is largely due to Republican opposition. People who
| lack ID are more likely to be democratic voters, so there
| is an incentive to oppose getting them ID. There's also a
| religious element for some people, connected to Christian
| myths about the end of the world.
| cryptonector wrote:
| This is utter nonsense.
| consteval wrote:
| It's kind of half true - there is an association between
| not having an ID and being blue. Because people without
| IDs are more likely to be people of color or of other
| marginalized groups, which then are more likely to be
| blue.
|
| In addition, there's a strong conservative history of
| using voter id as a means of voter suppression and
| discrimination. This, in turn, has made the blue side
| immediately skeptical of identification laws - even if
| they would be useful.
|
| So, now the anti-ID stuff is coming from everywhere.
| cryptonector wrote:
| It's absolutely not true. People have to supply IDs for
| tons of activities. They have IDs. We know who they are.
| They are registered to vote -- how did that happen w/o
| ID? Of course they have IDs.
| consteval wrote:
| The statistics just don't back this up. Plenty of,
| predominantly poor, people don't have driver's licenses.
| And that's typically the only ID people have. Also,
| poorer people may work under the table or deal in cash.
| cryptonector wrote:
| Link the stats please. There are ID types other than
| driver's licenses. In fact, the DMVs around the country
| issue non-driver IDs that are every bit as good as driver
| licenses as IDs.
| fwip wrote:
| Where do you get this idea that you need to have an ID
| card in order to register to vote? It's certainly not a
| federal requirement.
|
| In NY, you can register with ID, last 4 digits of your
| social, or leave it blank. If you leave it blank, you
| will need to provide some sort of identification when
| voting, but a utility bill in your name and address will
| suffice.
| JimDabell wrote:
| > But it's a hilarious sign of worldwide government
| incompetence that social insurance or other citizen
| identification cards are not standard, free, and uniquely
| identifiable and usable for online ID purposes (presumably
| via some sort of verification service / PGP).
|
| Singapore does this. Everybody who is resident in Singapore
| gets an identity card and a login for Singpass - an OpenID
| Connect identity provider that services can use to obtain
| information like address and visa status (with user
| permission). There's a barcode on the physical cards that
| can be scanned by a mobile app in person to verify that
| it's valid too.
| int_19h wrote:
| When it comes to government-issued IDs, "standard" and
| "free" is a solved problem in almost every country out
| there. US is a glaring exception in this regard,
| particularly so among developed countries. And it is
| strictly a failure of policy - US already has all the
| pieces in place for this, they just need to be put together
| with official blessing. But the whole issue is so
| politicized that both major parties view it as unacceptable
| deviation from their respective dogmas on the subject.
| secabeen wrote:
| Less than half of Americans have passports, and of the
| remaining half, a significant fraction do not have the
| necessary documents to obtain one. Many of these people are
| poor, people of color, or marginalized in other ways.
| Government ID is needed, but you generally find the GOP
| against actually building a robust, free, ubiquitous system
| because it would largely help Americans who vote Democratic.
| This is also why the GOP pushes Voter ID, but without
| providing any resources to ensure that Americans can get said
| ID.
| int_19h wrote:
| To be fair, you generally don't see Dems pushing for such a
| free and ubiquitous system, either - "voter ID is bad" is
| so entrenched on that side of the aisle that any talk about
| such a system gets instant pushback, details be damned.
| datadrivenangel wrote:
| You would assume that Advertising companies with quality ad
| space would be able to show higher click through rates and
| higher impression to purchase rates -- overall cost per
| conversion -- by removing bots that won't have a business
| outcome from the top of the funnel.
|
| But attribution is hard, so showing larger numbers of
| impressions looks more impressive.
| carlosjobim wrote:
| Attribution is extremely easy, it is a solved problem.
|
| Companies keep throwing away money on advertising for bots
| and other non-customers because they either:
|
| A) Are small businesses where the owner doesn't care about
| what he's doing and enjoys the casino like experience of
| buying ads online and see if he gets a return, or
|
| B) Are big businesses where the sales people working with
| online ads are interested in not solving the problem, because
| they want to keep their salaries and budget.
| nxobject wrote:
| I think that's also part of Facebook's strategy of being as
| open with Llama as possible - they can carve out a niche as
| the "okay if we're going to dive head first into the dead
| internet timeline, advertisers will be comforted by the fact
| that we're a big contributor to the conversation on the harms
| of AI - by openly providing models for study."
| kjkjadksj wrote:
| On the other hand I think the best social media out there today
| is 4chan. Entirely anonymous. Also, the crass humor and nsfw
| boards act as a great filter to keep out advertising bot
| networks from polluting the site like they did with reddit.
| No one wants to advertise on 4chan or have their brand
| associated with it, which is great for quality discussion on
| technical topics and niche interests.
| dom96 wrote:
| 4chan is actually one of the worst social media out there.
| They are responsible for a hell of a lot of hate campaigns
| out there. Anonymity breeds toxicity.
| AnthonyMouse wrote:
| Anonymity breeds veracity. As soon as you force people to
| identify themselves they start lying to you whenever the
| truth would be controversial. They refuse to concede when
| someone proves them wrong because now they're under
| pressure to save face. It's why Facebook's real name policy
| causes the place to be so toxic.
| bityard wrote:
| I've been thinking about how AI will affect ad-supported
| "content" platforms like YouTube, Facebook, Twitter, porn
| sites, etc. My prediction is that as AI-generated content
| improves in quality, or at least believability, they will not
| prohibit AI-generated content, they will embrace it whole-
| heartedly. Maybe not at first. But definitely gradually and
| definitely eventually.
|
| We know that these sites' growth and stability depends on
| attracting human eyeballs to their property and KEEPING them
| there. Today, that manifests as algorithms that analyze each
| person's individual behavior and level of engagement and use
| that data to tweak that user's experience to keep them latched
| (some might say addicted, via dopamine) to their app on the
| user's device for as long as possible.
|
| Dating sites have already had this down to a science for a long
| time. There, bots are just part of the business model and have
| been for two decades. It's really easy: you promise users that
| you will match them with real people, but instead show them
| only bots and ads. The bots are programmed to interact with the
| users realistically over the site and say/do everything short
| of actually letting two real people meet up. Because whenever a
| dating site successfully matches up real people, they lose
| customers.
|
| I hope I'm wrong, but I feel that social content sites will
| head down the same path. The sites will realize that, for users
| who enjoy watching Reels of women in swimsuits jumping on
| trampolines, they can simply generate as many as they need, and
| tweak the parameters of the generated video based on the user's
| (perceived) preferences: age, size, swimsuit color, height of
| bounce, etc. But they will still provide JUST enough variety to
| keep the user from getting bored enough to go somewhere else.
|
| It won't just be passive content that is generated, all those
| political flamewars and outrage threads (the meat and potatoes
| of social media) could VERY well ALREADY be LLM-generated for
| the sole purpose of inciting people to reply. Imagine happily
| scrolling along and then reading the most ill-informed, brain-
| dead comment you've ever seen. You know well enough that
| they're just an idiot and you'll never change their mind, but
| you feel driven to reply anyway, so that you can at LEAST point
| out to OTHERS that this line of thinking is dangerous, then
| maybe you can save a soul. Or whatever. So you click Reply but
| before you can type in your comment, you first have to watch a
| 13-second ad for a European car.
|
| Of course the comment was never real, but you, the car, and
| your money definitely are.
| cryptonector wrote:
| > This trusted identity should be something governments need to
| implement.
|
| Granting the premise for argument's sake, why should
| governments do this? Why can't private companies do it?
|
| That said, I've long thought that the U.S. Postal Service (and
| similarly outside the U.S.) is the perfect entity for providing
| useful user certificates and attribute certificates (to get
| some anonymity, at least relative to peers, if not relative to
| the government).
|
| The USPS has:
|   - lots of brick and mortar locations
|   - staffed with human beings
|   - who are trained and able to validate various forms of
|     identity documents for passport applications
|
| UPS and FedEx are also similarly situated. So are grocery
| stores (which used to, and maybe still do have bill payment
| services).
|
| Now back to the premise. I want for anonymity to be possible to
| some degree. Perhaps AI bots make it impossible, or perhaps
| anonymous commenters have to be segregated / marked as
| anonymous so as to help everyone who wants to filter out bots.
| consteval wrote:
| I think the main argument for having the government do it as
| opposed to the private sector is that the gov has a lot more
| restrictions and we, the people, have a say. At least
| theoretically.
|
| Imagine if Walmart implemented an identity service and it
| really took off and everyone used it. Then, imagine they ban
| you because you tweeted that Walmart sucks. Now you can't get
| a rental car, can't watch TV, maybe can't even get a job. A
| violation of the first amendment in practice, but no such
| amendment exists for Walmart.
| cryptonector wrote:
| We're already there. Apple and Google know who we all are
| because we had to pay for our devices.
|
| The government has no real restrictions.
| throwway120385 wrote:
| I used to think that, but recently had a really bad
| experience with a lot of runaround with them when we had to
| have our mail held for a few weeks while we sorted out a
| mailbox break-in. We would go to one post office that was
| supposed to have our mail and be told to go to another post
| office, then get redirected back to the first post office
| multiple times. And they kept talking about how they had to
| work out the logistics and everything was changing over and
| over. Some of the managers seemed to give my wife the wrong
| information to get rid of her.
|
| There were a few managers who tried to help and eventually we
| got our mail but the way everything worked out was absurd. I
| think they could handle national digital identity except that
| if you ever have a problem or need special treatment to
| address an issue buckle up because you're in for a really
| awful experience.
|
| The onboarding and day-to-day would probably be pretty good
| given the way they handle passport-related stuff though.
| rurp wrote:
| > why should governments do this? Why can't private companies
| do it?
|
| A private company will inevitably be looking to maximize
| their profit. There will always be the risk of them
| enshittifying the service to wring more money out of citizens
| and/or shutting it down abruptly if it's not profitable.
|
| There's also the accountability problem. A national ID system
| would only be useful if one system was widely used, but free
| markets only function well with competition and choice. It
| could work similar to other critical services like power
| companies, but those are very heavily regulated for these
| same reasons. A private system would only work if it was
| stringently regulated, which I don't think would be much
| different from having the government run it internally.
| internet101010 wrote:
| It could be done similar to how car inspections are done in
| Texas: price is set statewide, all oil change places do the
| service, and you redeem a code after.
|
| The problem with this though is the implications of someone
| at whatever the private entity is falsely registering
| people under the table - this would need to be considered a
| felony in order for it to work.
| AnthonyMouse wrote:
| > A national ID system would only be useful if one system
| was widely used, but free markets only function well with
| competition and choice.
|
| Isn't this also a problem with having the government do it?
| E.g. it's supposed to prevent you from correlating a
| certification that the user is over 18 with their full
| identity, but it's insecure and fails to do so; meanwhile the
| government won't fix it, because the administrative
| bureaucracy is a monopoly with limited accountability, or
| because the corporations abusing it for mass surveillance
| lobby them to keep the vulnerability.
| gregw134 wrote:
| What's best practice for preventing bot abuse, for mere mortal
| developers? Would requiring a non-voip phone number at
| registration be effective?
| AnthonyMouse wrote:
| There is no such thing as a "non-VoIP phone number". All
| phone numbers are phone numbers. Some people try to ban
| blocks assigned to small phone providers, but some actual
| humans use those. Meanwhile major carriers are leasing
| numbers to anyone who pays from the same blocks they issue to
| cellular customers. Also, number portability means even
| blocks don't mean anything anymore.
|
| Large companies sometimes claim to do this "to fight spam"
| because it's an excuse to collect phone numbers, but that's
| because most humans only have one or two and it serves as a
| tracking ID, not because spammers don't have access to a
| million. Be suspicious of anyone who demands this.
| changing1999 wrote:
| Unfortunately, every anti-bot feature also harms real people.
| As a voip user, I wouldn't be able to sign up for your app.
| spacebanana7 wrote:
| If it's really important to you then use Apple / Google /
| GitHub login.
|
| Obviously this has many downsides, especially from a privacy
| perspective, but it quickly allows you to stop all but the
| most sophisticated bots from registering.
|
| Personally I just stick my sites behind Cloudflare until
| they're big enough to warrant more effort. It prevents most
| bots without too much burden on users. Also relatively simple
| to move away from.
| gregw134 wrote:
| Does that really work? I'm trying to build a site with
| upvotes--wouldn't it be really easy for someone with 100
| bought Google accounts to make 100 accounts on my site?
| mike_hearn wrote:
| Well you're about to find out, because YouTube is doing a
| massive bot/unofficial client crackdown right now. YTDL,
| Invidious etc are all being banned. Perhaps Google got tired of
| AI competitors scraping YouTube.
|
| In reality, as others have pointed out, Google has always
| fought bots on their ad networks. I did a bit of it when I
| worked there. Advertisers aren't stupid, if they pay money for
| no results they stop spending.
| internet101010 wrote:
| yt-dlp works just fine for me. Or are you saying that they are
| limiting those that do downloads in bulk?
| mike_hearn wrote:
| Probably the latter. yt-dlp can be detected and it yields
| account / IP bans, it seems. They've been working around the
| blocks for weeks, but only by claiming to be different devices;
| each time they do, checks are added for the new client and they
| have to move on to the next. There's a finite number of those.
|
| Here's a comment from Invidious on the matter:
|
| https://github.com/iv-org/invidious/issues/4734#issuecomment...
| jenny91 wrote:
| > This trusted identity should be something governments need to
| implement.
|
| I have been thinking about this as well. It's exactly the kind
| of infrastructure that governments should invest in _to enable
| new opportunities for commerce_. Imagine all the things you
| could build if you could verify that someone is a real human
| somehow with good accuracy (without necessarily verifying their
| identity).
| ddoolin wrote:
| What is the primary point(s) of building bots that do this kind
| of thing, that seemingly flood the internet with its own Great
| Internet Garbage Patch?
| corytheboyd wrote:
| It is always money. You can sell "I will get your launch to top
| 10 in ProductHunt." Yes if/when taken too far, everyone will
| ditch ProductHunt and it will die, but until then, a quick buck
| can be made.
| metalliqaz wrote:
| by the looks of the graphs in the linked article, it appears
| ProductHunt is already a zombie
| criddell wrote:
| It isn't always money. Sometimes it's just lulz.
|
| Reply All did a podcast (ep #178) about people who are
| running bots on Counter-Strike that ruin the game. They
| tracked down a person who does this and they just basically
| do it to be annoying.
|
| > [... ]what's the point of running them? Like, what do you
| get out of the exercise?
|
| > There are many reasons to run them. Most if not [all]
| casual players dislike bots (which is a reason to run them)
| datadrivenangel wrote:
| Existing businesses did this with live humans.
|
| Ranking/review sites for B2B services would work with paying
| customers to solicit interviews and reviews from their
| customers, and of course only the 5 star reviews get posted.
|
| Heck, a lot of these "bots" may actually be a real human
| working a table of 100 cell phones in some cheaper country.
| intended wrote:
| Why not do it? It's essentially spam / pollution, and there are
| no consequences.
| oaklander wrote:
| Since I know you personally, I know how much work you put into
| this and it shows. Nicely done
| welder wrote:
| Thanks Siri! Yep back to normal work now.
| api wrote:
| We are in the twilight of the open Internet, at least for useful
| discourse. The future is closed enclaves like private forums,
| Discord, Slack, P2P apps, private networks, etc.
|
| It won't be long before the entire open Internet looks like
| Facebook does now: bots, AI slop, and spam.
| xnorswap wrote:
| The second histogram looks more human than the "not bot" first
| one?
|
| Second user clearly takes a look before work, during their lunch-
| break and then after work?
| welder wrote:
| The frequency of "game-changing" in comments would say
| otherwise. It's probably cron running at those intervals, not a
| work schedule.
| rozab wrote:
| I hate game-changing as much as the next guy, and was ranting
| about it on here just the other day, but some people really
| do talk like that.
|
| Have you tried running any network analysis on these bots? I
| would expect to see strong clustering, and I think that's
| usually the primary way these things are identified. The
| prompt injection is an awesome approach though!
| welder wrote:
| Yes I did on subsets of the data because cupy and cudf
| haven't implemented intersection functions yet for the GPU.
| But the clustering is weak because new signups are cheap so
| they burn/throwaway accounts after one targeted vote.
| Normally clustering works with more than one common vote
| between users?
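|
| A rough CPU-side sketch of the co-vote idea (this is not the
| actual GPU pipeline; the vote layout and the >= 2 threshold are
| assumptions for illustration):
|
|     import numpy as np
|     from scipy.sparse import csr_matrix
|     from scipy.sparse.csgraph import connected_components
|
|     # votes: (user_id, post_id) pairs, ids already mapped to 0..N-1
|     votes = [(0, 10), (0, 11), (1, 10), (1, 11), (2, 12)]
|     users, posts = zip(*votes)
|     n_users, n_posts = max(users) + 1, max(posts) + 1
|
|     # users x posts incidence matrix of votes
|     M = csr_matrix((np.ones(len(votes)), (users, posts)),
|                    shape=(n_users, n_posts))
|
|     # co[i, j] = number of posts users i and j both voted on
|     co = (M @ M.T).tolil()
|     co.setdiag(0)
|     co = co.tocsr()
|
|     # keep only pairs with at least 2 common votes, then cluster
|     co.data[co.data < 2] = 0
|     co.eliminate_zeros()
|     n_clusters, labels = connected_components(co, directed=False)
|     print(n_clusters, labels)   # users 0 and 1 land in one cluster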
| ImPostingOnHN wrote:
| There's also the point that one histogram is an order of
| magnitude or two larger than the other one. Larger samples of
| normally distributed data will tend to better resemble a
| normal distribution.
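|
| A quick numpy illustration of that point (the hour-of-day
| distribution and the sample sizes here are invented):
|
|     import numpy as np
|
|     rng = np.random.default_rng(0)
|
|     # pretend activity clusters around 2pm for both samples
|     small = rng.normal(loc=14, scale=3, size=50)       # one account
|     large = rng.normal(loc=14, scale=3, size=50_000)   # many accounts
|
|     bins = np.arange(0, 25)
|     h_small, _ = np.histogram(small, bins=bins, density=True)
|     h_large, _ = np.histogram(large, bins=bins, density=True)
|
|     # the large sample hugs the bell curve; the small one is lumpy
|     print(np.round(h_small, 3))
|     print(np.round(h_large, 3))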
| cynicalpeace wrote:
| I wonder how much of Meta and other social media ad revenue is
| based on bot activity.
|
| You can set up a campaign where you pay for comments and you're
| actually paying Meta to show your ad to a bunch of bots.
|
| Does anyone have more resources/inside info that confirms/denies
| this suspicion?
| jajko wrote:
| If people are paying, let's say, morally questionable
| companies like Meta for ad campaigns, they deserve what they
| get. I don't want to condone any criminal behavior, but this
| whole business of mass manipulation of people is a vastly
| immoral bunch of white (or not-so-white) lies.
| cynicalpeace wrote:
| Who do you suggest as an alternative for paid ads?
| kjkjadksj wrote:
| Building relationships with clients, same as it ever was.
| There are companies today that have been selling for
| example very specific machined parts for 100 years. You
| have never heard of them. They don't advertise on facebook.
| Yet they bring in enough work to stay in business without
| these paid campaigns. The secret sauce? The rolodex and
| actually calling potential clients directly.
| cynicalpeace wrote:
| This answer is a good answer for some companies, but for
| other companies it's very hand-wavy. Paid ads have value
| and you can make a pretty penny even on Meta (actually,
| especially on Meta compared to others) if you do it
| right.
|
| Still curious about alternatives for paid ads
| jajko wrote:
| Sure, you can make a penny; you can make a lot of pennies in
| various amoral businesses, and often the deeper this shit
| goes, the more gold is in it.
|
| I call it amoral, with nobody even trying to object since we
| all know the reality, and I stand by it. It slowly but surely
| destroys our kids' future and makes it bleaker and objectively
| worse. Maybe not massively (and maybe yes, I don't know and
| neither do you), and it's hard to pinpoint a single actor, so
| let's pin it on the ad business.
|
| But I guess as long as you have your 'pretty penny', that's
| all you care about? I don't expect much sympathy on a forum
| where the better half of participants work for the worst
| offenders, 'pretty penny' it is as we all know, but I'm
| curious about a single good argument on that pesky morality.
| cynicalpeace wrote:
| That response was not to your comment, but a different
| comment.
|
| I don't see why advertising is particularly moral or
| immoral. Depends on the platform, content, product, etc.
| Which is why I asked you for suggestions about other ad
| platforms.
| 9dev wrote:
| Advertising is amoral because its end game is always
| sacrificing things humans generally regard as valuable--our
| attention, leisure time, savings--for shareholder revenue.
| Advertising always has an incentive to increase revenue by
| being ever more invasive, corrupting anything it touches. As
| it goes, it shifts our perception of normal--just imagine
| asking someone from the 1920s whether they'd be okay with ads
| blasting from gas station pumps, elevators, or toilets. Or
| whether they'd be okay with someone watching their every move
| and deducing what to offer them when they're exhausted or
| miserable and easy prey. Advertisers have convinced us this is
| normal. It's not. And it will only ever get worse.
| FredPret wrote:
| What?
|
| How do you meet these clients in the first place?
|
| How do you get them to answer their phone?
|
| How do you get word-of-mouth if you're just starting out?
|
| Edit: reduced level of snark
| n_ary wrote:
| Usually they cold call or email with prices/quotes/offers
| (see my previous comment to the parent), and somehow they
| harvest or buy the contacts of businesses and employees.
|
| I sometimes suspect that there are ways to collect these from
| LinkedIn, or that the business card printers sell the contact
| info under the table (due to the strict data privacy laws in
| the EU). Because the only two places my work email & work
| phone number are available are the business card printer and
| LinkedIn (we need to use work email to access some elearning
| things, don't ask).
| n_ary wrote:
| Well, about that thing. Some of our local companies in
| machining and other fields somehow buy private emails and
| phone numbers. While I work at a place that does not directly
| need such services, my spam box and my work phone (mobile)
| blocklist are full of services calling to offer their latest
| prices and asking if I can forward them to my boss or
| whatever. So, either online ads or other forms of spamming.
| dzink wrote:
| Facebook has become full of generic-name accounts posting
| generic AI-generated content. Despite all the flagging it is
| incessant, which tells me it's likely sanctioned or even
| created by the company to fill content gaps. I'd say 30-50% of
| the content that shows for me is suspicious.
| doctorpangloss wrote:
| > You can set up a campaign where you pay for comments
|
| You cannot set up a campaign where you pay for comments
| (https://www.facebook.com/business/help/1438417719786914#). But
| maybe you mean other user generated content like messages. You
| ought to be able to figure out pretty quickly if those are
| authentic.
| kjkjadksj wrote:
| Do you care if they are authentic? Probably not. You are in
| the business of getting clicks on an ad. Your client has to
| worry about actually converting that to product sales. That's
| their problem and not yours as the ad firm.
| doctorpangloss wrote:
| Meta and ad agencies are definitely incentivized to make sure
| your ads convert and that engagement is authentic. Otherwise why
| deal with them at all? SMBs are like 40% of Meta's
| revenues, the entire world finds the product valuable. It
| is a little glib to try to make the contrarian case on
| conversion.
|
| The toxic part of their incentives is that they want
| businesses to commit large budgets on campaigns whose
| performance you can measure with very little money spent
| and very little data.
| cynicalpeace wrote:
| Yes definitely. But then that last sentence sorta leads
| to the first question you had- why deal with them at all?
|
| I think it's because despite that toxicity it still seems
| to be the best ad platform in town. Haven't seen anybody
| suggest a better alternative. Feels almost monopolistic.
| cynicalpeace wrote:
| I've heard it said that Meta prioritizes like this:
|
| 1. Users 2. Meta 3. Advertisers
|
| I have a feeling it's actually:
|
| 1. Meta 2. Users 3. Advertisers
|
| But in the end, advertisers always end up on the bottom.
| Especially since advertisers need Meta more than Meta needs
| any one of them.
| cynicalpeace wrote:
| My bad, I guess I was thinking of engagement campaigns and
| yes messaging campaigns
| ToastBackwards wrote:
| I often wonder this as well. Would love to have more insight
| into that
| FredPret wrote:
| I do - I think the effect on Meta's ad revenue is nil.
|
| Advertisers measure ad campaigns by ROAS (return on ad spend).
| This is driven by actual dollars spent, cutting out all bots
| right away.
|
| Clicks / views / comments are irrelevant except as far as
| getting the ad to show for actual buyers.
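|
| For reference, the metric being described is roughly this (a
| minimal sketch; the function and numbers are illustrative):
|
|     def roas(attributed_revenue: float, ad_spend: float) -> float:
|         """Return on ad spend: campaign-attributed revenue / spend."""
|         return attributed_revenue / ad_spend
|
|     # Bot engagement doesn't generate attributed revenue, so it can
|     # only drag this metric down; it never inflates it.
|     print(roas(attributed_revenue=5_000.0, ad_spend=1_000.0))  # 5.0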
| cynicalpeace wrote:
| I've advertised quite a bit on Meta, and also hired people to
| do it for me. ROAS, for me, is the most important metric, but
| it's not the only metric people look at. Speaking from
| experience, plenty of people like to look at and optimize
| other metrics.
| FredPret wrote:
| For interest's sake, what other metrics do you find most
| valuable?
| smileybarry wrote:
| I've said this in another thread here, but Twitter is borderline
| unusable because of this. I have 5,000+ blocked accounts by now
| (not exaggerating), and the first few screenfuls of replies are
| still bots upon bots upon bots. All well-behaved $8-paying
| netizens, of course.
| dewey wrote:
| Why bother if you have to spend that much time curating a good
| feed?
| smileybarry wrote:
| Unfortunately it's still the main "shortform" social network
| here for local stuff. Not enough people in my country have moved to
| Mastodon or Bluesky. (Referenced HN comment:
| https://news.ycombinator.com/item?id=41586643)
|
| And no, it's _definitely_ not worth it if you're joining now /
| new enough. Anyone who asks me about Twitter, I immediately
| tell them not to bother and that I'm just "stuck" there. My
| Following feed and most of the algorithmic feed is fine, it's
| just the replies & interaction that took a huge hit.
| evantbyrne wrote:
| I'm curious what kind of engagement people who aren't
| prolific posters are seeing on Twitter these days. Before I
| left I noticed that engagement went off a cliff to near
| zero immediately following the aggressive algorithm changes
| with blue check spam being promoted, but remained normal on
| my other social channels. It didn't seem like there were
| any normal people talking with each other on Twitter.
| smileybarry wrote:
| It's down there, below the bluecheck promotion. Or in
| (non-English) circles where the LLM bots haven't
| proliferated yet, which are also (mostly) why I stuck
| around.
| EasyMark wrote:
| I tried using an extension that filters out blue checks
| and it's still about 90% garbage from troll accounts who
| can't afford $8. The only way is to just follow those who
| you enjoy, although you'll likely have to find them
| outside of twitter, because their useful posts are
| drowning in a sea of trash
| jjkaczor wrote:
| Zero. I post the same things to Mastodon, Threads,
| BlueSky, and other places and get plenty of engagement.
|
| However - because I don't pay for a "blue checkmark",
| that's my best guess as to why I get zero engagement.
|
| That's fine - I have always treated Twitter as a "post-
| only", "fire & forget" medium.
| imiric wrote:
| I haven't used Twitter in ages, but what happened to Musk's
| paywall idea? That might actually deter bots and real users, so
| it seems like a win-win for everyone to stop using it.
|
| Otherwise, I doubt spammers/scammers are really paying $8/mo
| for a verified account. How are they getting them then?
| schmidtleonard wrote:
| Great, so instead of selling me overpriced intimate shaving
| accessories the bots will be trying to convince me that
| Europe will freeze without Russian natural gas.
| vizzier wrote:
| I read this comment as a bit tongue in cheek, but to be
| clear, the aim of the bots isn't generally to advocate for a
| specific viewpoint but to flood the information
| landscape with bad information making it challenging for
| normal people to discern what is real and useful.
| cdrini wrote:
| That's fascinating to me, I've never blocked a single account
| on Twitter and see very few bots. The most annoying thing about
| twitter for me are the folks monetising the platform, that keep
| posting rage-bait to game the algorithm! But note I also
| generally only use the "for you" tab.
| schmidtleonard wrote:
| You sure about that?
|
| https://youtu.be/WEc5WjufSps?t=193
|
| Dr. Egon Cholakian sends its regards. That is to say, the
| bots are getting good. LLMs made this technologically easy a
| few years ago; it would take a couple of years to develop and
| deploy a sophisticated bot network like this (not for you or
| me, but for an org with real money behind it that timeline is
| correct), and now we are seeing them start to appear. The
| video I linked is proof that bots already deployed in the
| wild can take 40 minutes of dedicated effort from a capable,
| suspicious person to identify with high conviction. Maybe it
| would have taken you 10, I'm not here to argue that, but I am
| here to argue that it is starting to take real effort to
| identify the best bots and this is the worst they will ever
| be.
|
| I don't care how smart, capable, or suspicious you are,
| within 3 years it will not be economical for you to curate
| your list of non-bot contacts on the basis of content (as
| opposed to identity).
| cdrini wrote:
| Well on my "for you" page, I also follow a pretty niche
| audience of tech people, which helps :P They're a little
| easier to verify since they generally also have blogs, or
| websites, or github accounts, or youtube videos, etc that
| help verify they're not bots.
|
| I also think people create bots for some purpose --
| instability, political divisiveness, financial gain, etc.
| And I'm kind of inherently not using twitter for any of
| that. I don't think I could find an account on my twitter
| thread that mentions the word "liberal", "trump",
| "conservative", or any of that if I tried! I agree that's a
| muuuuch more likely place to find bots. What sort of bots
| do you notice the most in your twitter?
| schmidtleonard wrote:
| Yeah I suppose if you are already vetting based on
| identity from outside the network that probably does
| scale. Most people aren't as careful about this as you
| are, though, so it'll still be a problem and it will have
| to get much worse before it gets better.
|
| I'm not on twitter. I left when the tidal wave of right-
| wing spam started to outweigh the entertainment value of
| seeing Yann LeCun dunk on Elon Musk.
| intended wrote:
| I have a theory, which accounts like yours would be very
| interesting for.
|
| Instead of looking at it as a per user basis, if you look
| at it as a network or ecosystem, the issue is that the
| network is being flooded with spam.
|
| Since nothing happens all at once, over time different
| filters will get overwhelmed and eventually impact the
| less networked accounts.
|
| It would be VERY interesting to find out when, or if
| ever, you begin to suspect some accounts you follow.
| akomtu wrote:
| That's Darwin's theory for bots: only the fittest survive in
| the Twitter lands.
| EasyMark wrote:
| Using the "for you" tab is the only way to use twitter these
| days. Their suggestion algo is complete garbage. I spent a
| couple of days trying various ways to train it and still all I
| got was complete garbage, so I accepted the reality that
| twitter doesn't really have an algo for the feed, just a
| firehose of crazy people and engagement trolls
| jrhizor wrote:
| You mean the "following" tab, right?
| cdrini wrote:
| Oh shoot I flipped it!! Darnit, thanks for pointing that
| out. I also meant the following tab on my post!
| blitzar wrote:
| > Using the "for you" tab is the only way to use twitter
| these days
|
| The first 20 posts of my "for you" tab is Elon Musk, then
| it goes on to show me more useful content. I am wondering
| if following him or blocking him will make any difference.
| kimixa wrote:
| I have an account I purely use to follow other accounts. I
| haven't posted anything aside from a "so, this is twitter
| then?" years ago.
|
| I get multiple bots requesting to follow me every day, and
| maybe 10% of my "for you" timeline is right-wing political
| "discourse" engagement bots, despite never having followed or
| interacted with anything similar, aside from slowly
| increasing my block list when I see them.
| Eddy_Viscosity2 wrote:
| There is no twitter, only X.
|
| edit: What I meant by this is not the name thing but more
| fundamentally that what twitter was is no longer so. It's a
| different thing now; it has similarities to twitter, but it's
| not twitter.
| stronglikedan wrote:
| Seriously. The same people that deadname X would be up in
| arms about deadnaming other things.
| EasyMark wrote:
| If you only use the "following" tab twitter is fine, if you try
| to use the "for you" tab then you are expecting too much of
| someone who posts known nazis and says "hmmm..." or
| "interesting..."
| doctorpangloss wrote:
| Do you think TikTok view counts are real?
|
| Alternatively, is there anything stopping TikTok from making up
| view count numbers?
|
| Facebook made up video view counts. So what?
|
| TikTok can show a video to as many, or as few, people as it
| wants, and the number will go up and down. If retention is
| high enough for some users, it shows ads, which are the videos
| that the rules I'm describing certainly apply to; so why can't
| it apply those rules to organic videos too?
|
| It's interesting. You don't need bots to create the illusion of
| engagement. Unless you work there, you can't really prove or
| disprove that user activity on many platforms is authentic.
| CalRobert wrote:
| I weep at the thought that every site will require login with sso
| from google (and maybe Apple if you're lucky). We're close to
| that already.
|
| If only micropayments had taken off or been included in the
| original spec. Or there were some way to prove I am human without
| saying _which_ human I am.
| wickedsight wrote:
| > Or there were some way to prove I am human without saying
| _which_ human I am.
|
| I'm sure at some point a sort of trust network type thing will
| take off. Will be hard to find a way to make it both private
| and secure, but I guess some smart people will figure that out!
| n_ary wrote:
| I am still very curious about why micropayments failed. I
| recall mass outrage at Brave for tying the concept to
| "cryptocurrency", but at the time the concept (minus the
| crypto and Brave holding the tip unannounced if the site
| didn't join in) seemed decent.
|
| Would the concept work if it were unbundled from
| cryptocurrency and made into something like PayPal? You add
| money (prepaid), visit some site, and if the site is
| registered you see a donate button and can donate a few
| cents/dollars/euros/yen (whatever the author's native currency
| is). At the end of the month, if the donations collected were
| more than enough to cover the fees plus some excess, they
| would get paid out to the author's desired mode of withdrawal.
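|
| A minimal sketch of that flow (the fee rate, payout threshold,
| and function names are all invented for illustration):
|
|     FEE_RATE = 0.05          # assumed platform fee
|     MIN_PAYOUT = 1.00        # assumed minimum monthly payout
|
|     wallets = {"reader1": 10.00}   # prepaid balances
|     pending = {}                   # author -> donations this month
|
|     def donate(reader: str, author: str, amount: float) -> None:
|         if wallets.get(reader, 0.0) < amount:
|             raise ValueError("insufficient prepaid balance")
|         wallets[reader] -= amount
|         pending[author] = pending.get(author, 0.0) + amount
|
|     def month_end_payouts() -> dict:
|         """Pay out authors whose donations cover fees plus excess."""
|         payouts = {}
|         for author, total in pending.items():
|             net = total * (1 - FEE_RATE)
|             if net >= MIN_PAYOUT:
|                 payouts[author] = round(net, 2)
|         for author in payouts:
|             pending[author] = 0.0
|         return payouts
|
|     donate("reader1", "blog_author", 0.50)
|     donate("reader1", "blog_author", 0.75)
|     print(month_end_payouts())     # {'blog_author': 1.19}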
| joshdavham wrote:
| > I weep at the thought that every site will require login with
| sso from google (and maybe Apple if you're lucky)
|
| I think that's where we're going. Not only is it a decent way
| of filtering out bad accounts, it's also often easier to
| implement on the dev side.
| eikenberry wrote:
| Can't the bots sign up for google accounts like anyone
| else?
| joshdavham wrote:
| They certainly could, but there's usually a bit of extra
| authentication with some of these third parties. For
| example, they usually request a phone number.
| cryptonector wrote:
| It would be nice if there were identity providers that could
| vend attribute certificates with no PII besides the desired
| attributes, such as:
|   - is_human
|   - is_over_18
|   - is_over_21
|   - is_over_65
|   - sex/gender?
|   - marital status?
|   - ...?
|   - device_number (e.g., you might be allowed N<4 user
|     attribute certs, one per device)
|
| and naturally the issuer would be the provider.
|
| The issuer would have to keep track of how many extant
| certificates any given customer has and revoke old ones when
| the customer wants new ones due to device loss or whatever.
|
| Any company that has widespread physical presence could provide
| these. UPS, FedEx, grocery stores, USPS, etc.
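|
| A minimal sketch of what such an attribute certificate could
| look like, assuming an Ed25519-signing issuer (the field names
| are invented, and a real system would also need revocation and
| unlinkability on top of this):
|
|     import json
|     from cryptography.hazmat.primitives.asymmetric.ed25519 import (
|         Ed25519PrivateKey,
|     )
|
|     # the issuer (USPS, a grocery chain, ...) holds a signing key
|     issuer_key = Ed25519PrivateKey.generate()
|     issuer_pub = issuer_key.public_key()
|
|     # the certificate carries only attributes, no name or address
|     attrs = {"is_human": True, "is_over_18": True, "device_number": 2}
|     payload = json.dumps(attrs, sort_keys=True).encode()
|     signature = issuer_key.sign(payload)
|
|     # a relying site verifies against the issuer's public key;
|     # verify() raises InvalidSignature if the payload was forged
|     issuer_pub.verify(signature, payload)
|     print("attributes accepted:", attrs)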
| tobias3 wrote:
| European eID solutions can do some of those (e.g. is over
| 18). Let's see if usage becomes more wide-spread.
| tjalfi wrote:
| Micropayments failed because users hate them[0]. They would
| rather pay more for flat rate plans. Here's an excerpt from
| _The Case Against Micropayments_ [1]. It's an old paper, but
| human behavior hasn't changed.
|
| _Behavioral economics, the study of what had for a long time
| been dismissed as the economically irrational behavior of people,
| is finally becoming respectable within economics. In marketing,
| it has long been used in implicit ways. One of the most
| relevant findings for micropayments is that consumers are
| willing to pay more for flat-rate plans than for metered ones.
| This appears to have been discovered first about a century ago,
| in pricing of local telephone calls [13], but was then
| forgotten. It was rediscovered in the 1970s in some large scale
| experiments done by the Bell System [3]. There is now far more
| evidence of this, see references in [13], [14]. As one example
| of this phenomenon, in the fall of 1996, AOL was forced to
| switch to flat rate pricing for Internet access._
|
| _The reasons are described in [19]:_
|
| _What was the biggest complaint of AOL users? Not the widely
| mocked and irritating blue bar that appeared when members
| downloaded information. Not the frequent unsolicited junk
| e-mail. Not dropped connections. Their overwhelming gripe: the
| ticking clock. Users didn't want to pay by the hour anymore.
| ... Case had heard from one AOL member who insisted that she
| was being cheated by AOL's hourly rate pricing. When he checked
| her average monthly usage, he found that she would be paying
| AOL more under the flat-rate price of $19.95. When Case
| informed the user of that fact, her reaction was immediate. 'I
| don't care,' she told an incredulous Case. 'I am being cheated
| by you.'_
|
| _The lesson of behavioral economics is thus that small
| payments are to be avoided, since consumers are likely to pay
| more for flat-rate plans. This again argues against
| micropayments._
|
| [0]
| https://web.archive.org/web/20180222082156/http://www.openp2...
|
| [1] https://www-users.cse.umn.edu/~odlyzko/doc/case.against.micr...
| dewey wrote:
| PH has always been a weird place. The comments are always
| LinkedIn levels of boring (copy-paste positivity about
| basically every product), and it always felt like people were
| just commenting there to funnel people to their own profile.
| WD-42 wrote:
| I feel the same. I'm not surprised by this, a sleazy site is
| going to attract sleazy actors. Me good llm.
| mulhoon wrote:
| Somebody needs to sell t-shirts "me good LLM"
| novoreorx wrote:
| I will wear it every time I meet AI investors
| welder wrote:
| I'll print some, you can pick them up for free in SF. Hoodies
| and tees.
| jenny91 wrote:
| How will my bot buy one if it has to turn up in person.
| welder wrote:
| Soon the bot will send a human to pick it up for them.
| mrjay42 wrote:
| Partially unrelated: "Me good LLM" is the Post-GPT "Ok boomer" :3
| lofaszvanitt wrote:
| EU needs to regulate this too.
| ChrisArchitect wrote:
| Related:
|
| _Product Hunt isn't dying, it's becoming gentrified_
|
| https://news.ycombinator.com/item?id=41700517
| delichon wrote:
| I have a year old lurker account on X. I've never made a single
| comment with it. But 35 attractive women are now following me.
| Zero men, zero unattractive women. I doubt that it is the result
| of the animal magnetism of my likes.
|
| It's a microcosm of the whole darned web.
| 93po wrote:
| I feel like looking at this sort of behavior would make it
| really easy to spot bot accounts. I have the same thing
| happening on my account.
| EasyMark wrote:
| I blocked a bunch of those accounts and the number of rando hot
| girls trying to get me to follow them dropped really quickly.
| You may give that a try. Maybe they have some very rough algo
| that attempts to stop the kinds of follows that you block a
| lot of.
| bryanrasmussen wrote:
| probably the rando hot girls are all run by the same big bot
| farms, or bot farms sell suckers lists so if you are a sucker
| for one rando hot girl bot the others soon find out.
| sdenton4 wrote:
| They're trying to simulate real accounts by following people.
| Blocks are likely a strong signal for anti spam. So if the
| spammers notice that you block bots, they won't want to burn
| accounts by following you, and will instead follow accounts
| that don't block bots.
| tim333 wrote:
| I expect if you chat to them you'll find they have some
| interesting crypto opportunities to invest in, from my
| experience. There's a lot of pig butchering stuff out there
| (https://news.ycombinator.com/item?id=39778486).
| imiric wrote:
| I do wonder if ProductHunt uses any CAPTCHA solution.
|
| In spite of the flak that CAPTCHAs usually get, I still think
| they have a lot of value in fighting the majority of these spam
| attacks.
|
| The common criticisms are:
|
| - They impact usability, accessibility and privacy. Users hate
| them, etc.
|
| These are all issues that can be improved. In the last few years
| there have been several CAPTCHAs that work without user input at
| all, and safeguard user privacy.
|
| - They're not good enough, sophisticated (AI) bots can easily
| bypass them, etc.
|
| Sure, but even traditional techniques are useful at stopping low-
| effort bots. Sophisticated ones can be fought with more advanced
| techniques, including ML. There are products on the market that
| do this as well.
|
| - They're ineffective against dedicated attackers using
| mechanical turks, etc.
|
| Well, sure, but these are entirely different attack methods.
| CAPTCHAs are meant to detect bots, and by definition, won't be
| effective against attackers who decide to use actual humans.
| Websites need different mechanisms to protect against that, but
| those are also edge cases and not the main cause of the spam we
| see today.
| Terr_ wrote:
| Lately I've been pondering how one might create a "probably a
| human"/skin-in-the-game system. For example, imagine visiting
| an "attestor" site where you can make a one-time donation of $5
| to a charity of your choice, and in exchange it gives you some
| proof-you-spent-money tokens. Those tokens can be spent
| (burned) by some collaborating site (e.g. HN) to mark your
| account there as _likely_ a human, or at least a bot whose
| owner will feel pain if it is banned.
|
| This would be _far_ more privacy-preserving than dozens of
| national-ID lookup systems, and despite the appearance of
| "money for speech" it could actually be _cheaper_ than whatever
| mix of time and bus-fare and paperwork in a "free" system.
|
| ____________
|
| I imagine the big problems would be things like:
|
| * How to handle fraudulent payments, e.g. someone buying tokens
| with a stolen credit card. Easiest fix would be some long
| waiting-period before the token becomes usable.
|
| * How to protect against a fraudulent attestor site that just
| takes your money, or one whose tokens are value-less.
|
| * How to protect against a fraudulent destination site that
| secretly harvests your proof-token for its own use, as opposed
| to testing/burning it properly. Possible social fix: Put in a
| fake token, if the site "accepts" then you know it's
| misbehaving.
|
| * Handling decentralization, where multiple donation sites may
| be issuing their own tokens and multiple account-sites that may
| only want to support/trust a subset of those tokens.
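|
| A minimal sketch of the issue/burn flow described above (the
| token format, waiting period, and storage are all assumptions):
|
|     import secrets, time
|
|     WAITING_PERIOD = 14 * 24 * 3600   # assumed anti-fraud delay, seconds
|
|     issued = {}   # token -> activation timestamp, kept by the attestor
|
|     def issue_token(donation_confirmed: bool) -> str:
|         """Mint a proof-of-spend token once a charity donation clears."""
|         if not donation_confirmed:
|             raise ValueError("no confirmed donation")
|         token = secrets.token_urlsafe(32)
|         issued[token] = time.time() + WAITING_PERIOD
|         return token
|
|     def burn_token(token: str) -> bool:
|         """A collaborating site spends the token; valid at most once."""
|         activates_at = issued.pop(token, None)
|         return activates_at is not None and time.time() >= activates_at
|
|     t = issue_token(donation_confirmed=True)
|     print(burn_token(t))   # False until the waiting period has passed
|     print(burn_token(t))   # False: already burned, single use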
| doctorpangloss wrote:
| > Lately I've been pondering how one might create a "probably
| a human"/skin-in-the-game system.
|
| This has the same energy as the "we need benchmarks for LLMs"
| startups. Like sure it's obvious and you can imagine really
| complex cathedrals about it. But nobody wants that. They
| "just" want Apple and Google to provide access to the same
| APIs their apps and backends use, associating authentic phone
| activity with user accounts. You already get most of the way
| there by supporting iCloud login, which should make it clear
| that what you are really asking for is to play outside of
| Apple's ecosystem, a totally different ask.
| tim333 wrote:
| There is the much slagged off but maybe effective Worldcoin.
| mandibles wrote:
| Have you checked out the L402[0] protocol?
|
| It's basically using the HTTP 402: Payment Required status
| code and serving up a Lightning Network payment invoice.
|
| Edit to add: it basically solves all of the caveat issues you
| identified.
|
| [0]: https://l402.org/
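|
| A rough sketch of the shape of such a paywall response (the
| challenge header format here is an assumption based on the
| L402 docs, and real payment/preimage validation is omitted):
|
|     from http.server import BaseHTTPRequestHandler, HTTPServer
|
|     # placeholders: a real server would mint a macaroon and a
|     # Lightning invoice per request
|     MACAROON = "<base64-macaroon>"
|     INVOICE = "<bolt11-invoice>"
|
|     class PaywalledHandler(BaseHTTPRequestHandler):
|         def do_GET(self):
|             if self.headers.get("Authorization"):
|                 # sketch only: a real server verifies the payment proof
|                 self.send_response(200)
|                 self.end_headers()
|                 self.wfile.write(b"paid content")
|             else:
|                 # challenge the client to pay before retrying
|                 self.send_response(402)   # Payment Required
|                 self.send_header(
|                     "WWW-Authenticate",
|                     f'L402 macaroon="{MACAROON}", invoice="{INVOICE}"',
|                 )
|                 self.end_headers()
|
|     # HTTPServer(("", 8402), PaywalledHandler).serve_forever()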
| m463 wrote:
| I wonder if this is like the recent article about people not
| buying from locked display cabinets:
|
| https://news.ycombinator.com/item?id=41630482
|
| how many humans does captcha send away?
| class3shock wrote:
| As someone who already often runs into them due to VPN use
| being flagged: please, no more. Think about how much human
| time has been wasted on these things.
| arnaudsm wrote:
| What's the endgame of a dead internet? Everyone leaves and most
| interactions happen in private group chats?
|
| It's the serendipity of the original internet I'll miss the most.
| causal wrote:
| One of two possibilities I foresee, unsure which will play out:
|
| 1) People surrender their perceived anonymity in favor of real
| interactions, embracing some kind of digital ID that ensures
| some platforms are human-only.
|
| or
|
| 2) AI gets good enough that people stop caring whether they're
| real or not.
| 9dev wrote:
| I bet my money on 1). Verifiable credentials are currently in
| the making, I can build products around this in my head
| immediately (a good sign that someone smarter than me has it
| figured out already), and huge platforms know so much about
| you, they're almost there. It's going to make interactions
| online safe, solve fraud, make everything personalised and
| wholesome. At least that's going to be the narrative. Just
| wait for it.
| jen729w wrote:
| It just gets a bit smaller. See Mastodon, and why most people's
| criticisms of it are in fact its strengths. For ~5% of the
| current internet.
| welder wrote:
| I'm reposting this [0] because it got flagged by the HN
| algorithm thinking I'm posting spam [1] ¯\_(ツ)_/¯
|
| [0] https://news.ycombinator.com/item?id=41711410
|
| [1] https://hnrankings.info/41708837/
| welder wrote:
| Now it's back but the damage was done, lost 4 hrs of votes and
| won't recover
|
| https://hnrankings.info/41708837
| amiantos wrote:
| I have a couple posts on reddit that didn't receive a lot of
| comments but every week or so it'll get a comment that is some
| GPT-powered bot going, "<topic of post on reddit>? Wow! That's
| really thought provoking, I wonder about why <topic of post on
| reddit> is important," and so on, asking me very obvious
| questions in an attempt to get me to feed the system more data.
|
| I wouldn't be surprised to find out these bots are actually being
| run by reddit to encourage engagement.
| akincisor wrote:
| See the history of Reddit. It was manually curated sock puppets
| before bots were viable, and now that bots are viable, I
| strongly believe the bulk of comments and posts in the popular
| subreddits are bots (and many are run by reddit themselves).
| mirekrusin wrote:
| There you go, start AntiAI, ppl will love it.
| catinblack wrote:
| When I posted my product on producthunt (and that was about 5
| years ago) I got dozens of props with a first place guarantee.
| Literally an hour after posting, I was bombarded with messages.
| Now it's probably even worse.
| zurtri wrote:
| When I first launched my SaaS I used one of these online review
| websites to help get testimonials and SEO and backlinks and
| stuff.
|
| Went fine for about 3 months and then the bots came. 2 months
| after that the GPT bots came.
|
| The site didn't do anything about the obviously fake reviews.
| How did I know they were fake? Well, 95% of my customer base is
| in Australia - so why are there Indians leaving reviews when
| they are not even customers? (Yes, I cross-referenced the
| names.)
|
| So yeah, I just need to get that off my chest. Thanks for
| reading.
___________________________________________________________________
(page generated 2024-10-01 23:00 UTC)