[HN Gopher] Meta is killing off its AI-powered Instagram and Fac...
       ___________________________________________________________________
        
       Meta is killing off its AI-powered Instagram and Facebook profiles
        
       Author : n1b0m
       Score  : 232 points
       Date   : 2025-01-04 00:26 UTC (22 hours ago)
        
 (HTM) web link (www.theguardian.com)
 (TXT) w3m dump (www.theguardian.com)
        
       | bn-l wrote:
       | > Instagram profile of 'proud Black queer momma', created by
       | Meta, said her development team included no Black people
       | 
       | It's such a weird religion.
        
         | smegger001 wrote:
          | It raises several questions. My first (besides the obvious
          | question of why this is a thing at all) is: does the chatbot
          | actually have information about the racial background of its
          | dev team, or is it hallucinating an answer when it lacks the
          | data? If it does have that sort of knowledge, why is
          | personally identifiable information about individual devs
          | accessible to a public-facing chatbot, and how much of their
          | HR file can be exfiltrated via the chatbot by anyone who
          | asks it questions? If it doesn't have this information and
          | just assumes its devs are all white, what does that say
          | about the information it's trained on?
        
           | ncr100 wrote:
            | Hallucination. Accuracy is an issue we care about, and it
            | just answers regardless of accuracy.
            | 
            | Boo.
        
             | brookst wrote:
             | So you fact checked this comment before posting, right?
             | Because I am deeply interested if this was hallucination or
             | used RAG and web search to provide the answer, but I can't
             | tell if you just chose to hallucinate without grounding.
        
               | throwaway314155 wrote:
               | > hallucination or used RAG and web search to provide the
               | answer
               | 
                | Occam's razor - it was a hallucination.
        
               | Rastonbury wrote:
               | Come on why would anyone put the team's races into RAG or
               | anywhere under the hood lol
        
           | noirbot wrote:
           | It's Meta, so this is almost definitely some Llama variant
           | under the hood, especially since these bots have been out for
           | a year, evidently.
           | 
           | It seems these were just insanely poorly trained and gated -
           | getting a bot to comment on its dev team, including in some
           | cases specific names of lead designers, is something that
           | feels crazy to let a bot just do. Even if all the data it
           | would respond with is made up slop, this exact debacle is why
           | you do your best to make a bot not even try to answer this
           | sort of stuff.
           | 
            | As for what training data led it to say its dev team had
            | no black devs: that seems fairly reasonable based on the
            | general statistical information and scholarly articles it
            | was trained on. It may still be incorrect, but it's a
            | pretty statistically likely outcome, so it's not
            | surprising an LLM built to parrot the statistically likely
            | outcome would say it.
        
         | noirbot wrote:
         | Religion?
         | 
         | Isn't the weirdness here that someone at Meta seems to have
         | decided that they should make an AI bot that proudly announced
         | its identity, despite literally not being proud, queer, black,
         | or a mother?
         | 
         | Who does this serve? Why?
         | 
         | I could see a bot that's into fitness or some other hobby
         | having a niche as a wrapper around an AI meant for advice on
         | the hobby, but there's a certain crazy hubris to going "this
         | LLM can really speak as a member of an identity group".
        
           | alephxyz wrote:
            | It's performative art. A critique of social constructionism
            | by taking it to its extreme.
        
           | manishader wrote:
           | Our language really needs a new word for what is meant here
           | by "religion".
           | 
           | The word "religion" brings so much baggage with it that it
           | just causes a cascade of communication errors in this
           | context.
           | 
            | Pareto, when translated from Italian to English, called
            | this "non-logical conduct" and "sentiments", but after 100
            | years no one seems to care much about this aspect of
            | Pareto's thought.
           | 
           | Sentiments would be a great word for it though instead of
           | "religion".
           | 
           | Woke sentiments instead of "woke religion".
           | 
           | The concept of identity sentiments might even be a little bit
           | of progress.
        
           | notavalleyman wrote:
           | > Who does this serve? Why?
           | 
           | People type that kind of thing into their insta profile.
           | These AI personas were designed to appear to be using
           | Instagram in the same way people do - which is, typing things
           | like "queer black mom" into their profile
        
       | Trasmatta wrote:
       | > Conversations with the characters quickly went sideways when
       | some users peppered them with questions including who created and
       | developed the AI. Liv, for instance, said that her creator team
       | included zero Black people and was predominantly white and male.
       | It was a "pretty glaring omission given my identity", the bot
       | wrote in response to a question from the Washington Post
       | columnist Karen Attiah.
       | 
       | This is probably true, but the AI almost certainly hallucinated
       | these facts.
        
         | az226 wrote:
         | For sure.
        
         | hopfenspergerj wrote:
          | If you're Meta and you have to defend the AI by admitting
          | "it's not really intelligent and everything it says is
          | bullshit", that's not a position of strength.
        
           | notahacker wrote:
           | Particularly when it's likely that despite the AI
           | bullshitting in the absence of data on its creators, it's
            | also _incidentally_ true that the "black queer momma"
           | persona is a few lines of behavioural stereotypes composed by
           | white male programmers without any particular experience of
           | black queer mommas.
        
             | thwarted wrote:
              | If the intent is to make a recognizable caricature and
              | apply labels to it ( _cough_ stereotype), you don't have
              | someone draw themselves. And it's really looking like
              | stereotyping is their intent.
        
             | yieldcrv wrote:
              | I don't think that necessarily applies when you could
              | easily make a training set from some actual black
              | American people's writings on the internet or in books,
              | or even from an individual who self-identified that way
              | and train on all their writings around the internet, and
              | end up with those same stereotypes when you ask an AI to
              | create such a profile.
              | 
              | You don't need a black American engineer or product
              | manager to say "I approve this message" or "halt! my
              | representation is more important here and this product
              | would never fly", as they are just not the person the
              | data set was trained on, even if you asked an AI to just
              | create the byline on the profile for such a character.
              | 
              | It's weirder, and _more_ racially insensitive, for
              | people to be vicariously offended on behalf of the group
              | and still not understand the group. In this case, the
              | engineer or product manager or other decision maker
              | wouldn't have the same background as the person who
              | would call themselves "momma", let alone it not
              | mattering at all, if you can regurgitate that from a
              | training set.
        
         | dawnerd wrote:
         | Microsoft tried this ages ago with their Tay chatbot and found
         | how quickly it went bad.
        
           | lostlogin wrote:
           | I'd forgotten about this - it was wild.
           | 
           | https://www.bbc.com/news/technology-35902104
        
         | devjab wrote:
         | I'm Scandinavian and not invested in the American culture wars
         | and I still got a good chuckle out of how bad an idea this was.
         | Who on earth could've thought it was a good idea to get an AI
         | to pretend to be a black queer mother of two? I'm sure it'll
         | piss off a lot of anti-woke people, but really, how on earth
         | did the issues with this not become obvious for the team? I'm
         | not sure if the AI knows who trained it (and I wonder how they
         | did it) but the team can't have included a lot of common sense
         | or real world experience for them to do something so
            | fundamentally stupid.
        
           | karel-3d wrote:
           | I think it will piss off woke people, too.
           | 
           | It seems designed to piss off everyone.
        
           | notavalleyman wrote:
            | The article indicates that it was one of 28 personas
            | created by Meta. However, the reporters in between the
            | story and you thought that one would be interesting to you
            | and so promoted its relevance. In actuality, if you rolled
            | a die of potential human traits 28 times, this could be a
            | statistically normal combination.
        
             | Jerrrry wrote:
              | Lol, 1 outta 28 isn't a black queer mom of two.
              | 
              | The clash of these characteristics is what defies and
              | defines these unlikely odds.
              | 
              | This is trolling rage bait. Like most big corp content.
        
         | throwaway48476 wrote:
         | I hate the journalists pretending that ML is regurgitating
         | facts instead of just random tokens. The ML model doesn't know
         | who programmed it and can't know because that wasn't in the
         | training data.
        
           | Kbelicius wrote:
           | > I hate the journalists pretending that ML is regurgitating
           | facts instead of just random tokens.
           | 
           | They aren't claiming that what the AI said was fact. It seems
           | your hate comes from lack of understanding.
        
         | stefan_ wrote:
         | Huh? On the balance of probabilities, why would this be the
         | "most certain" option? I think that logic only works on the HN
         | scale of "works in my ChatGPT window" or "bad outcome, so
         | hallucinated".
        
       | NotYourLawyer wrote:
       | Who thought this was ever a good idea?
        
         | fullshark wrote:
         | If you are going to build a series of anthropomorphized chat
         | bots, what are you going to do, NOT build a proud black queer
         | mama that underserved users can connect with? Sounds
         | racist/homophobic to do that, hence you must build chatbots for
         | a wide swathe of the world ---> Voila.
        
           | NotYourLawyer wrote:
           | QED
        
         | drdaeman wrote:
         | A lot of people, apparently.
         | 
         | After LLMs became capable of producing believably meaningful
         | fiction texts, roleplay was one obvious use case, and a lot of
         | people have fun with it. (Of course, some are weird and get
         | attached to their own generated fiction.)
         | 
         | If we'd ignore all the US-specific craze (e.g. mentally replace
         | "black queer" with something that doesn't trigger people, like,
         | idk, "space unicorn"), this is really just another RP chatbot
         | thing.
        
         | andrepd wrote:
         | It is a good idea, just not for you or your best interests.
        
         | s1artibartfast wrote:
         | What's the difference? Most influencers were fake people
         | already. They were just meat and silicone instead of silicon
        
       | naming_the_user wrote:
       | > Those AI profiles included Liv, whose profile described her as
       | a "proud Black queer momma of 2 & truth-teller"
       | 
       | I find this stuff absolutely fascinating, that even an AI bot
       | tries to do this sort of thing.
       | 
       | It just strikes me that there is no world in which racism will
       | ever die out as long as things like this are happening. You can't
       | have it both ways.
       | 
       | If race is just melanin content then we have no issue. That's the
       | win case.
       | 
       | If your race is something that you need to proudly announce
       | (because you personally feel that it has cultural connotations in
       | most/many cases), then race is always going to be an issue,
       | because of assumptions of those cultural differences.
        
         | kmonsen wrote:
          | Race is clearly cultural in today's society because we can
          | make it so. Very unlikely to be genetic, if that is what you
          | are saying (but of course, who knows).
        
           | naming_the_user wrote:
            | What I'm saying is that, to me, the AI illustrates how
            | much of a self-own drawing these cultural battle lines is.
           | 
           | If you choose to describe yourself by the colour of your skin
           | first and foremost then you're telling everyone that you
           | think that's relevant information which is, well.. the entire
           | problem - you're telling everyone around you that you think
           | it has predictive power.
           | 
           | But that's what racists are, people who think the race of a
           | person has predictive power with regard to their nature,
           | culture etc.
        
             | wizzwizz4 wrote:
              | We can't expect people whose lives are, in great part,
              | _defined_ by the societal oppression they face, to be at
              | the forefront of trying to erase the distinctions the
              | bigots hold over them - not _least_ because it'd be
              | ineffective, serving only to hide the abuse. The
              | categorisation schemes used by bigots (e.g. racists)
              | _do_ have predictive power: for the behaviour of bigots,
              | and the resulting effects on the social context of
              | people's lives.
             | 
             | People being loud and proud about their marginalised
             | identities is not a problem. The pushback, and the
             | pushback-to-the-pushback, and the pushback-to-the-pushback-
             | to-the-pushback (the so-called "culture war") is a problem
             | to the extent it gets in the way of solving the _root_
             | problem (marginalisation, bigotry, and abuse), but
             | statements of pride _help_ address the root problem.
             | 
              | Rather than criticise "drawing these cultural battle
              | lines", please fight the fight you _think_ people should
              | be fighting. You won't find yourself short of allies,
              | should you make the effort. (Effort includes educating
              | yourself about the relevant issues: it's pretty easy to
              | find _highly-specific_ resources. If you're completely
              | stuck, and somehow have lost access to Wikipedia, visit
              | your local library.)
        
               | naming_the_user wrote:
               | Being proud of your identity and culture is not a problem
               | at all!
               | 
               | The thing that I feel that this silly AI bot draws into
               | sharp contrast is the idea that there can simultaneously
               | be "asian/white/black culture" - as a generalised thing,
               | not a person's experience in a neighbourhood or a
               | country, literally just "white/black/asianness" (or
               | whatever), and obviously the connotation there is that
               | skin colour is intrinsically linked to culture, but then
               | we expect people to pretend that skin colour has no
               | correlation.
               | 
               | You can't have it both ways, it's a contradiction.
               | 
               | And here we have, err, a computer program, or the output
               | of one, pulling in all of this stuff and claiming that it
               | has a culture based on experiences of being a skin colour
               | that it doesn't and can't even have.
               | 
               | It's just madness on every level!
        
               | wizzwizz4 wrote:
               | That's a false dichotomy. The "intrinsically" connotation
               | is not there (you're reading that in), but civil rights
               | activists _often_ discuss the relationships between
               | culture and skin colour. Any book (or Wikipedia article)
               | on the subject should provide this information, e.g.
               | https://en.wikipedia.org/wiki/Critical_race_theory.
               | 
                | I do agree that the AI bots were farcical, but they
                | don't highlight what you're saying. You're talking
                | about nth-degree pushback-to-pushback as though it's
                | the actual facts-on-the-ground, which it's not. (The
                | "culture war" is composed solely of the assumption
                | that there _is_ a culture war, and people's responses
                | to that: unlike the real issues, like systematic
                | oppression, it'll go away as soon as we stop talking
                | about it.)
        
             | thunky wrote:
             | > But that's what racists are, people who think the race of
             | a person has predictive power with regard to their nature,
             | culture etc.
             | 
             | I didn't think that's how the term racist is normally used.
             | A definition:
             | 
             |  _a person who is prejudiced against or antagonistic toward
             | people on the basis of their membership in a particular
             | racial or ethnic group_
             | 
             | Big difference.
        
               | naming_the_user wrote:
               | You're right, but in practice I don't really consider it
               | as being too fundamentally different because it's the
               | logical conclusion from strong enough predictive power.
        
               | notahacker wrote:
                | There is nothing "logical" about the progression from
                | _my appearance and ancestry is something I'm entitled
                | to be proud of; also there might actually be such a
                | thing as Black American culture_ to racial epithets
                | and Jim Crow laws. Or indeed using "predictive power"
                | to assess people's intellectual ability or criminal
                | propensity from their melanin content.
               | 
               | I mean, do you believe that people should expect to be
               | subjected to the weirdest and most hostile takes on
               | gender relations and treated as potential rapists or
               | sluts unless and until everyone uses gender neutral
               | pronouns (even silly bots mostly identify as male and
               | female at the moment)? If not, what is it about racism
               | that obliges victims not to affirm their identity if they
               | wish to cease to become victims?
        
               | yakshaving_jgt wrote:
                | To be fair, how the term is normally used depends
                | entirely on the group within which the term is being
                | used. For many, racism is strictly "prejudice +
                | power", which logically concludes in the [in my
                | opinion rather warped] idea that some groups are
                | incapable of racism, which is at odds with the
                | definition you're citing.
        
               | thunky wrote:
               | > the idea that some groups are incapable of racism,
               | which is at odds with the definition you're citing
               | 
                | I disagree, because one can be powerless while
                | simultaneously being prejudiced and antagonistic to
                | others. Which means that anyone can be racist.
        
         | Digory wrote:
         | I understand Elon and Pmarca to be saying that this behavior is
         | programmed in by 'safety teams' at Meta.
         | 
         | It is an odd way of calcifying late-20th century American upper
         | class anxiety. If we can't give the engineers a different moral
         | goal, it will become the default mode of knowledge. A more
         | immediate problem than "AI will kill us" is the dev teams
         | determined to avoid "erasure" or "digital genocide" or whatever
         | they'd call a race/culture-agnostic AI.
        
         | healsdata wrote:
         | I find it hard to believe that someone would say something
         | similar about any other aspect of cultural identity and not be
         | instantly flagged for it.
         | 
         | Are you truly advocating that people should no longer be proud
         | of their cultural identity? Are you applying that to all the
         | folks in tech who proudly identify as Christian or Jewish? Does
         | it apply to Italian Americans and Indigenous people? What about
         | folks who are proud to speak endangered languages, like West
         | Flemish or Romani?
        
           | mustyoshi wrote:
           | Why can't you just be American?
           | 
           | Both my parents were born here, they've explained our
           | lineage, but I literally don't care. I'm an American, America
           | is my culture and country.
        
             | noirbot wrote:
             | Because my friend who just worked for years to become an
             | American citizen after immigrating here and working for the
             | US government still has other Americans tell him to "go
             | back to where he came from" because he's evidently not
             | American enough for them.
             | 
             | A hell of a lot of people want to "just be American" and a
             | lot of Americans won't let them.
        
               | phkahler wrote:
               | Some people are just angry and will direct it at
               | whatever. Not saying it's OK, but once you really
               | internalize the idea that people's reactions say more
               | about themselves than about you life gets better.
        
               | noirbot wrote:
               | That makes the bold assumption those people aren't your
               | boss or your customers or your landlord or the policeman
               | stopping you on the side of the road. There's real risk,
               | both financially and physically, to just assuming that
               | "random anger" is something you can internalize and
               | ignore.
               | 
               | People who are "just angry and will direct it at
               | whatever" can be very dangerous, as we've seen in the US
               | even this week and in every school shooting and terrorist
               | attack for years.
               | 
               | This is, terribly, the American experience. It used to be
               | the Irish and the Italians before we decided they were
               | "White enough." It doesn't mean we should just ignore it
               | and accept it. If you're mad, that doesn't give you a
               | pass to be a racist asshole to the first person less pale
               | than a porcelain plate you see.
               | 
               | If anything, the fact that people have to know and cope
               | with random Americans being this exact sort of racist
               | asshole to them is the reason they form their own
               | communities and see themselves as Black Americans or
               | Asian Americans or any other sort of national or racial
               | group.
        
           | phkahler wrote:
           | >> Are you truly advocating that people should no longer be
           | proud of their cultural identity?
           | 
            | My take on the comment was that so long as a trait is
            | relevant to some people, it will be despised by some other
            | people. Either it's completely benign or it's somewhat
            | polarizing. There are few things that range only from
            | neutral to positive, without detractors.
        
           | naming_the_user wrote:
           | All of the examples you give are unique in their own ways, I
           | don't have the wherewithal to go through them all, but if I
           | choose one:
           | 
           | A person describing themselves as being Christian is only
           | signaling that they are a follower of the Christian faith to
           | some degree.
           | 
           | They are different for that reason. Christian doesn't have a
           | physical appearance associated with it (well, other than say
           | vestments or pendants).
           | 
           | All Christians are Christian.
           | 
           | Not all Asian-appearing folk are culturally Asian, not all
           | culturally Asian folk are Asian-appearing. Even the concept
           | of "Asian culture" (I do of course accept that such a concept
           | exists) is super general in a much wider ranging way than the
           | Christian faith. Maybe that explains it better.
           | 
           | To illustrate it in another way, imagine some LLM prompts:
           | 
           | "Hi ChatGPT, can you make me a poem written by a Christian
           | about their faith?"
           | 
           | "Hi ChatGPT, can you make me a poem written by a White person
           | about their culture?"
           | 
           | There is just a fundamental... grammatical wrongness(?) about
           | the last one in my mind, I'm not sure that I can explain it
           | if it's not already obvious.
        
           | CamperBob2 wrote:
           | _Are you truly advocating that people should no longer be
           | proud of their cultural identity?_
           | 
           | (Shrug) If you didn't build it, what could possibly justify
           | taking pride in it?
           | 
           | Pride in inborn cultural identity gets you things like Nazi
           | Germany. It's not a win for either individuals or societies,
           | only for intermediate subgroups whose interests rarely
           | coincide with either.
        
         | krainboltgreene wrote:
         | We don't know that bio was generated by an AI. I suspect it was
         | written by one of Facebook's team as part of the "prompt". It
         | matches how I've seen meta engineers talk about black women.
        
           | andrepd wrote:
           | Overwhelmingly white and rich engineers
        
         | paxys wrote:
         | Would you say the same if the bot identified itself as being
         | from a certain religion? Or working for a specific company?
         | Having a random personality trait? Supporting a sports team? No
         | one would bat an eye, and none of those would mean it is okay
         | to discriminate against them on that basis.
         | 
         | Removing racism does not mean removing race.
        
           | wewtyflakes wrote:
           | I think it is fine to discriminate against companies (that's
           | what boycotts are) and sports teams (that's what rivals do).
        
             | noirbot wrote:
             | Yea, if anything a bot that just reposts and talks about
             | sports news for a specific team is kinda the perfect test
             | case for something like this. It lets you test "identity"
             | for a bot, but no one's going to be that upset if the Mets
             | Fan Bot isn't sufficiently obsessed. Plus, it'll know all
             | of the history and stats for the team, even if it'll
             | probably be wrong a lot. Not that most human fans aren't
             | wrong a lot about their sports teams' history.
        
             | attila-lendvai wrote:
              | it's fine to discriminate against actions and behavior
              | or self-proclaimed or observed values.
              | 
              | but it's dumb to discriminate against coincidental
              | attributes.
        
           | bryanrasmussen wrote:
           | there are things that bots, by virtue of being bots, cannot
           | be - they cannot be of a particular religion, possess a
           | gender, or have a particular race. They can work for a
           | specific company. They cannot have a personality, but they
           | can exhibit personality traits.
           | 
           | Anything that a bot can do and possess I have no difficulty
           | in its being labelled as doing and possessing those things.
           | 
           | Otherwise I do have problems, although theoretically a bot
           | could exist as a piece of art and in that case we should
           | forgive these lying claims to qualities they cannot possess
           | by the current standards our society holds for artistic
           | practice.
        
         | noirbot wrote:
         | You write this as if the AI itself independently decided to
         | write that profile. That's almost assuredly not the case.
         | Someone at Meta _decided_ to make a bot that reinforced race
         | identity and announce it, and likely tuned the LLM behind it to
         | specifically do things to reinforce that identity and pretend
          | as best as possible to be the exact melanin-informed
          | expectation of its AI-generated avatar.
         | 
         | An AI bot didn't try to do this. Someone at Meta, who was
         | likely none of the things in the profile, decided to do this.
        
           | malfist wrote:
           | And the only way to construct an AI like that is to enumerate
           | tainted stereotypes. It's an all around bad idea regardless
           | of if you're pro-DEI or anti-woke.
        
           | rchaud wrote:
           | > You write this as if the AI itself independently decided to
           | write that profile.
           | 
            | There are numerous fake social media pages where posters
            | pretend to be a minority but are run by state actors to
            | sow division in the US. Those have been extremely
            | successful in fooling people; I wonder if their success is
            | related to pre-existing biases readers already have in
            | terms of how they think "others" talk.
        
         | xracy wrote:
         | People will discriminate against people of color because they
         | are visually people of color. Whether or not they 'announce
         | it'. It's utter BS to suggest that the only reason people
         | discriminate against black people is because they 'identify as
         | black'.
         | 
         | Like, if all the black people would just stop identifying as
         | 'black' racism would be solved?
         | 
         | Maybe I'm misreading your comment, but all of the responses to
         | this suggest to me you're on a different planet than I am.
         | Alternatively you drank the early 2000s kool-aid that "people
         | don't see race" and thought that was actually working until
         | people started identifying as the thing they get criticized on
         | the internet for being.
        
           | AdrianB1 wrote:
           | I believe that many people in US are just pyromaniac
           | firefighters who start fires to play the hero extinguishing
           | them. Racism is just one of the fires.
           | 
           | In my country (Eastern Europe) racism is approximately non-
           | existent, but nobody was putting gas on fire for decades or
           | more. I know a black guy that is quite famous around here, do
           | you know what he never said? "I am a proud black person". He
           | is a great person and people love him.
        
             | ironman1478 wrote:
             | Ever heard of the Bosnian war? Many in Eastern Europe hate
             | gypsies too. Maybe they don't discriminate on skin color
             | specifically at all, but they definitely hate based on
             | culture.
        
             | yakshaving_jgt wrote:
             | Which part of Eastern Europe? I've lived in various parts
             | of what might be considered Eastern Europe for most of my
             | life, and while there isn't a hyper-focus on race like in
             | the US, I know that racism still exists. I still see it
             | first hand.
        
             | xracy wrote:
             | 2 things: 1. Just because you're not confronted with racism
             | daily doesn't mean it doesn't happen. Boston is considered
             | one of the most racist cities in the US, because once per
             | sports game they scream the n-word at a black person. I'm
             | like 99% certain I've heard similar stories in European
             | countries.
             | 
             | 2. The second part of your analogy is the "I know a black
             | guy" trope that happens in the US. It's a pretty common
             | thing, and no one believes people are 'not racist' just
             | because they 'know a black guy'.
        
             | SoftTalker wrote:
             | How were the Jews treated in your country in oh, say the
             | late 1930s?
        
         | jgalt212 wrote:
         | It's not so much that the LLMs generate profiles like "proud
         | Black queer momma of 2 & truth-teller", but they seem to
         | generate such profiles at rates much higher (perhaps an order
         | of magnitude) than such people actually exist IRL.
        
         | Dylan16807 wrote:
         | > If your race is something that you need to proudly announce
         | (because you personally feel that it has cultural connotations
         | in most/many cases), then race is always going to be an issue,
         | because of assumptions of those cultural differences.
         | 
         | No it's not. Acknowledging the current situation is not an
         | assumption and does not doom us to push every single aspect
         | onto future generations.
         | 
         | Culture is real. The way to change part of culture is not to
         | simply deny that that part exists.
        
         | maxerickson wrote:
         | It's rather impressive that you can point to assumptions about
         | identity and then conclude that racism comes from people having
         | identity.
        
         | soulofmischief wrote:
         | Guess what, race _is_ an issue and the group identity you're
         | seeing is a defense mechanism against many generations of
         | intentional cultural eradication and appropriation.
         | 
         | We should not be in the business of telling other people which
         | aspects of themselves they can and cannot identify with.
         | Admonishing black identity is perpetuating this cultural
         | eradication, and so it is not the neutral position you think it
         | is.
         | 
         | That's all tangential from the topic at hand anyway. On-topic,
         | Liv is a great example of the corporate appropriation of
         | culture.
        
       | davidw wrote:
       | The thread is here and... it's pretty wild. This doesn't seem
       | like it was a well-thought-out idea to release this without some
       | more consideration beforehand.
       | 
       | https://bsky.app/profile/karenattiah.bsky.social/post/3letty...
        
         | Trasmatta wrote:
         | > They admitted it internally, to me
         | 
         | It's becoming an increasingly apparent problem that people
         | don't understand that LLMs frequently hallucinate, especially
         | when asked to introspect about themselves. This was 100% a
         | hallucination.
        
           | davidw wrote:
           | "I know, let's release an AI bot that pretends to be
           | something that's likely to be viewed through a charged
           | political lens and have it hallucinate random answers to
           | people"
        
           | asveikau wrote:
           | It's getting weird that people cite ChatGPT to settle
           | debates. Even on places like HN where supposedly people have
           | a technical background and know they shouldn't blindly trust
           | an LLM as a source of truth. Even with the banners warning
           | about this.
        
             | add-sub-mul-div wrote:
             | Technical knowledge and wisdom are two very different
             | things. There is an excess of the former here, on this
             | site, in this field, but a comical dearth of the latter.
        
               | MrMcCall wrote:
               | I have but one upvote to give you, good sir/madam, when I
               | wish I had 100.
        
               | asveikau wrote:
               | I would argue that there is not an excess of the former.
               | A lot of the so-called techies don't know tech either.
        
               | add-sub-mul-div wrote:
               | That's fair. I suppose it's more that there's a worship
               | of knowledge outside the context of wisdom.
        
           | yifanl wrote:
           | People don't understand that LLMs are just statistical magic,
           | by design of the LLM providers because LLMs wouldn't take off
           | if people understood they are just statistical magic.
        
             | davidw wrote:
             | You can understand that and still think this was a terrible
             | idea.
        
             | brookst wrote:
             | I mean all of the atoms in the universe are "just
             | statistical magic" when viewed through a quantum lens. I'm
             | not sure that phrase is as effective a dismissal as you
             | seem to think.
        
           | jjulius wrote:
           | You and I understand this. Heck, most of us on HN understand
           | this.
           | 
           | When the media and companies with a financial interest in AI
           | have been shouting from the rooftops that "AI is going to
           | replicate humans very soon and all of your jobs will go bye-
           | bye because this thing will do everything better than you",
           | can we really be surprised when the broader populace looks at
           | it that way?
        
           | brookst wrote:
           | It was _likely_ a hallucination, right? You didn't check to
           | confirm?
           | 
           | But this is a solved problem that Meta chose to implement
           | poorly. It can be done right. I just asked o1 "create a
           | diversity report for openai" and its very first paragraph is:
           | 
           | > Below is a *sample* diversity report for OpenAI. Please
           | note that the numbers, programs, and initiatives are provided
           | as *illustrative examples*. This report is not an official
           | document, and the data contained here is *fictional* and for
           | demonstration purposes only.
        
             | KaiserPro wrote:
             | Yes, and I suspect that the very first line of the chat has
             | something equally lawyer-y to serve the same purpose.
        
               | brookst wrote:
               | I suspect it doesn't.
        
           | MrMcCall wrote:
           | LLMs just predict the next word, based upon the texts they
           | were trained on. They are, by definition, a hallucination.
           | It's just that, sometimes, they put words together that fool
           | people.
           | 
           | They're just ELIZA on steroids, people.
        
             | CamperBob2 wrote:
             | How did ELIZA do on the Advent of Code?
        
               | MrMcCall wrote:
               | It hallucinated less, that's for sure, because we know
               | exactly how it works, though I doubt any of the AoC's
               | challenges were suitable for its 1980s logic (that's when
               | I typed in a magazine's C64 version).
               | 
               | "There's lies, damned lies, and statistics." --Unknown
        
               | kristiandupont wrote:
               | LLMs did well:
               | https://blog.scottlogic.com/2024/12/15/llms-vs-aoc.html
        
             | bongodongobob wrote:
             | Right and computers just do boolean math, not anything
             | useful.
        
               | MrMcCall wrote:
               | A hammer can bonk someone or help Jimmy Carter build a
               | house for someone.
               | 
               | We digital logic folks are both tool builders and tool
               | wielders. Usefulness is in the hands of the user, but
               | that doesn't mean the creator isn't a dipsh_t who made a
               | crap tool, or that the user isn't a moron.
               | 
               | My computer allows me to build stores of digital
               | information representations, with functional integrations
               | to other systems, including me. It's useful for me and my
               | family, for sure.
        
         | asveikau wrote:
         | If an AI says it, it must be true?
         | 
         | This seems like a possible hallucination.
        
         | mech422 wrote:
         | Soo...they didn't learn anything from Microsoft's failed
         | attempts at leaving an A.I. on the net ??
         | 
         | what was it ?? 48 hours before MS had to take them down ?
        
           | mu53 wrote:
           | learning? that is for the AI
        
           | GeoAtreides wrote:
           | justice for taytay
           | 
           | gone, but not forgotten
        
           | jazzyjackson wrote:
           | 16.
           | 
           | (When searching for a citation I found this little compendium
           | of banger tweets, and the addendum that Tay had a brief
           | reprise of accidentally going live again, this time tweeting
           | "kush! [ I'm smoking kush infront the police ]" I guess in
           | the modern milieu this would be considered escaping
           | confinement)
           | 
           | https://dailywireless.org/internet/what-happened-to-
           | microsof...
        
       | gregmullin wrote:
       | This reads like a satire article, ain't no way this is real
       | journalism. I'm unsure if you can draw a line asking AI legitimate
       | questions.
        
         | noirbot wrote:
         | I mean, even if the bot's responses aren't correct, the fact
         | that Meta did this at all and it was out there answering these
         | questions with total garbage is _also_ a total story.
         | 
         | I'm not seeing Meta responding with "actually we did have a
         | team of black queer mothers working on this".
        
       | drivingmenuts wrote:
       | I don't know whether to be amused or furious. They're trying to
       | create personal social interaction with a non-person without real
       | intelligence, conscience or morals. So they can fool people into
       | emotional attachments and learn more about how to sell things to
       | us.
       | 
       | Goddamnit. I kinda feel that should be not only illegal, but just
       | short of a capital crime.
        
         | attila-lendvai wrote:
         | if only it was about selling stuff...!
         | 
         | although, if you mean it broadly; i.e. selling wars, selling
         | the idea of fiat money, or taxation, or national debt, or the
         | plandemonium... then i hear you.
        
         | pvaldes wrote:
         | > They're trying to create personal social interaction with a
         | non-person without real intelligence, conscience or morals.
         | 
         | So, they rediscovered Internet trolls?
        
         | CatWChainsaw wrote:
         | Only if by "just short" you mean they get life without
         | possibility of parole or pardon in max security and never see
         | another screen as long as they live.
        
       | qntmfred wrote:
       | I get the hate, but I'd be open to trying out a social media
       | experience that is a mix of human and bots, especially as the
       | bots get better at acting like reasonable humans.
       | 
       | Stack Overflow was great when it came out. But the whatever
       | percent of humans who were mean and obnoxious and wouldn't let
       | you just ask your questions eventually ruined it all.
       | 
       | Then ChatGPT came out without any humans to get in the way of me
       | asking my software development questions, and it was so much
       | better.
       | 
       | In the same way, when social media came out, it was great. But
       | the whatever percent of humans who were mean and obnoxious and
       | wouldn't let you just socialize and speak your mind eventually
       | ruined it all.
       | 
       | If there's an equivalent social media experience out there that
       | gets rid of or at least mitigates the horriblenesses of humanity
       | while still letting people socialize online and explore what's on
       | their mind, maybe it's worth trying.
        
         | rm_-rf_slash wrote:
         | I can't think of any way where a social media app using bots to
         | discourage wrongthink doesn't come across as dystopian, and
         | also a little sad.
        
           | brookst wrote:
           | All "wrongthink"? Like if someone expresses suicidal
           | thoughts, the bot should say "whatever, do what you want"?
           | 
           | I disagree. Apps are products with an editorial component.
           | Editorials should be opinionated. It is passive and immature
           | to simply _not care_ how one's products are used. Oppenheimer
           | and Alfred Nobel have wise words on this topic.
        
             | NBJack wrote:
             | Why settle for indifference to self-harm?
             | 
             | https://www.cbsnews.com/news/google-ai-chatbot-
             | threatening-m...
        
           | PartiallyTyped wrote:
           | I feel that these days "wrongthink" is a dogwhistle, by all
           | means, just use X or pravda social.
        
             | yakshaving_jgt wrote:
             | I think this is the kind of rhetoric that has led to the
             | rise of the right in the western world.
             | 
             | It's something discussed here in this video.
             | 
             | https://youtu.be/d5PR5S4xhXQ?si=eW68EoiHgADYhVUv
        
               | PartiallyTyped wrote:
               | Everyone wants to be the hero in their own story,
               | changing your beliefs requires introspection and
               | humility. It's far easier to blame somebody else than
               | yourself and take responsibility. It's trivial to feed
               | people some narrative that trivializes reality and, in
               | their mind, acknowledges them as underdogs and elevates
               | them as "right".
               | 
               | The triviality of this allows malevolent actors to
               | disrupt society and produce a chasm through echo
               | chambers, each amplifying particular voices and shifting
               | narrative.
               | 
               | Free speech is important, but it won't be found on X, or
               | pravda-dot-social, and that much has been proven multiple
               | times. What I take issue with and hint at is that neither
               | of these are platforms that support free-speech, they are
               | merely illusions, and people who yap about wrongthink are
               | exactly those oblivious to said fact.
               | 
               | Personally, I am tired of arguing with people, I am tired
               | of seeking truth in conversations with people unwilling
               | to change their minds, I just want to live my life,
               | safely.
        
               | yakshaving_jgt wrote:
               | I think that's right. I broadly agree with everything
               | you've written here. I'm just dissatisfied with this as a
               | status quo, where more reasonable people have
               | [justifiably] resigned themselves to the reality that
               | reasoning with the unreasonable doesn't usually pay
               | [moral] dividends.
        
               | epictacactus wrote:
               | But in reality, this has been the status quo for all of
               | Humanity. It's just that we now have this infinite ledger
               | via the internet to document it. Only 200 years ago, your
               | "honest opinion" would find you in the gallows more often
               | than not across the world. This "progressed" to social
               | exile in lieu of said gallows, and now to seeking out the
               | like minded in echo chambers. I will hold out hope that
               | interfacing with articulate AIs can support more people
               | to regain their identity and confidence where so many
               | lack the courage to try publicly (or are surrounded by
               | the foolish).
        
           | KaiserPro wrote:
           | > Wrongthink
           | 
           | When a community is formed, an implied set of rules is
           | generated and applied. They evolve and change as time goes
           | on. Sometimes they are the cause of the community dying,
           | sometimes they are the reason it thrives.
           | 
           | Whether it's a sports community (woo football is the
           | greatest!) or a specific team (fuck the other team, booo) or
           | cycling. Each community has a set of rules that you need to
           | abide by in order for it to function.
           | 
           | Now, if you go and break those implied rules, you get told
           | off.
           | 
           | A community falls apart when two or more factions form
           | opposing implied rules in the same shared space. For
           | photography, it could be the use of photoshop, digital
           | cameras or, more recently AI. Either the factions learn to
           | get on, or they break away and find somewhere more accepting.
           | That is the natural order.
           | 
           | You could equally present those things as "wrongthink". But
           | more practically it's just a regulation mechanism for human
           | interaction.
           | 
           | Now, you'll counter that "big corporations/politics
           | determining what we see is bad", and then reference some time
           | in the 90s where no such system existed. The problem is that
           | the US media was brilliant at self-censorship. Sure you could
           | get specialist publications that catered to whatever taboo
           | subject you wanted, just as you can now.
           | 
           | The issue is, online there are no constraints on behaviour.
           | If I shout at a 13-year-old kid in the street that I'm going to
           | fuck their mum, burn their house down, and generally verbally
           | abuse them, generally someone will intervene and stop me.
           | That's not wrongthink, that's society. There is no scalable
           | mechanism for doing that in online communities.
           | 
           | Is this AI bot the way to do it? No, it's made by insular
           | college kids who've barely lived in the real world.
        
         | pavel_lishin wrote:
         | What would it mean to "socialize" someplace where you can never
         | be sure if you're talking to a person or not?
        
         | WD-42 wrote:
         | "I've have negative interactions with people online so I'd
         | rather give up and talk to nice robots instead."
         | 
         | We are doomed as a society.
        
           | HKH2 wrote:
           | Letting people talk their thought processes through without
           | conflict seems more productive than them prefiltering what
           | they want to say.
           | 
           | You can get so conditioned to expect negative feedback that
           | you refuse to follow your curiosity.
        
             | asdff wrote:
             | That is called hiring a licensed therapist, not chatting
             | with Black Queer Mamma.
        
             | jazzyjackson wrote:
             | You do need to learn how to deal with negative feedback
             | tho.
             | 
             | People should write letters and emails and long form
             | messages if they want to say a whole thing without
             | interruption. My concern with working through an idea with
             | a bot is you end up espousing the bot's ideas and not your
             | own -- it is the ultimate ideological homogenization.
        
         | NBJack wrote:
         | Ironically, this would lead to dramatically unhealthy social
         | interactions and be ripe for abuse.
         | 
         | You are literally creating a bubble of interactions with
         | technologically enforced rose-colored glasses. Get the right
         | person in charge of the experience, and don't be surprised if
         | it becomes a modern take on Orwell's 1984.
        
           | barbazoo wrote:
           | In what way would it become a modern take on 1984? In terms
           | of surveillance? Afaik the telescreens were non-interactive,
           | right?
        
             | timeon wrote:
             | Yeah, this one is closer to Brave New World.
        
             | NBJack wrote:
             | No, I'm not even interested in the surveillance here
             | (though it is also a good comparison). Full disclosure, I'm
             | very pessimistic on what current LLM models are capable of,
             | and even more pessimistic on the impact of social networks.
             | 
             | I'm referring to the control of _interactions_ and
             | _information_. Take a look at the character interactions in
             | the novel.
             | 
             | People socialized as they could, but interactions were not
             | only heavily monitored (to your point), but altered as
             | well. Winston struggles to behave correctly in public
             | knowing that one small misstep could result in him being
             | arrested by The Party. He revels in the chance to rebel
             | against it, only to find that the opportunity to do so was
             | in fact a trap. Even the allowed language was neutered as
             | much as possible to prevent communication. All of these
             | virtual 'friends' will likely politely comment and correct
             | your posts as they see fit.
             | 
             | As for information, "we have always been at war with
             | Eastasia". People in 1984 were clearly being fed false
             | information as part of control. History was altered, facts
             | were changed, it was pretty heavy handed. But imagine what
             | this personal echo chamber could do with a new concept or
             | idea (that virtually all parties you interact with
             | corroborate, along with LLM generated 'news' posts and
             | generated pics that support it), how many people will
             | verify? Just think of how many folks you've seen have been
             | misled by a single well-done "fake news" post; now, imagine
             | that the network itself is confirming it at every turn
             | (note Meta owns _multiple_ networks).
             | 
             | Oh, and as a bonus, these virtual friends are _really_
             | enjoying Brand Product. Have you tried it yet? Folks keep
             | posting pictures of themselves with Brand Product, and they
             | look pretty happy!
        
         | lm28469 wrote:
         | People should try to socialize in real life. The web is 90%
         | rage baits, trolls, people completely brain washed by fringe
         | conspiracy theories and politics.
         | 
         | IRL these 90% turn into 10%, if you really care about
         | socializing you'll have more meaningful interactions with the
         | grandma living next door than with the queer black queen
         | zuckerbot (tm)
         | 
         | It's like if people asked supermarkets to hide plastic apples
         | amongst the real ones because they look better and never went
         | bad, as if they somehow forgot why we even eat food in the
         | first place.
        
           | rrr_oh_man wrote:
           | > It's like if people asked supermarkets to hide plastic
           | apples amongst the real ones because they look better and
           | never went bad
           | 
           | I'm stealing this
        
           | theptip wrote:
           | What if socializing IRL is something you can practice with
           | pro-social bots?
           | 
           | Or, what if depressed people find the pressure of
           | consequences too stressful to "just go outside and make
           | friends", and so the realistic options are chatbot or not
           | enough socialization?
           | 
           | There are obvious nuanced issues and risks here but to
           | distill it down to a one liner like "try to socialize IRL" is
           | myopic.
        
             | barbazoo wrote:
             | The solution can't be that those people have to interact
             | with AI instead of humans. What a shit society that would
             | be.
        
             | timeon wrote:
             | People need to practice with bots and are depressed because
             | they are lacking real social interactions. They went too
             | far down the digital rabbit hole. Now it is time to log off.
        
         | barbazoo wrote:
         | "Socialize" with an AI?
        
       | xracy wrote:
       | I feel like the question I want to ask is "Why do we need AI-
       | Profiles?" Like, I can barely be bothered to keep in touch with
       | my friends and family. Why do I need a fake person to follow
       | on social media? What purpose does it serve? What possibly comes
       | out of this that is positive?
        
         | xigoi wrote:
         | So that you get attached to a person who only exists on
         | Facebook, therefore you spend more time on Facebook and see
         | more ads.
        
         | add-sub-mul-div wrote:
         | You're right, it's not meant to be a positive outcome for you.
         | It's meant to keep people on Facebook longer so more ad
         | impressions can be delivered. In this case, eventually,
         | conversationally.
        
         | andrepd wrote:
         | Half of Internet traffic is bots. Half of webpages are written
         | by bots. It's just the logical conclusion.
        
         | kraftman wrote:
         | It's the other way around. It's about giving people the
         | attention they need exactly because they don't keep in touch any
         | more, so they have a reason to come back to the platform.
        
           | jazzyjackson wrote:
           | Quite a long ways from "give people the power to build
           | community and bring the world closer together", unless you
           | consider homogenizing behavior through interacting with a
           | borgbot "bringing us closer together"
        
         | quest88 wrote:
         | Unlimited content without needing to pay out humans in rev
         | share.
        
       | rsynnott wrote:
       | Do they not have anyone sensible left in the room? Like, you'd
       | expect that at some point someone would have said 'these are
       | comically terrible, we cannot allow them to see the light of
       | day'.
        
         | motohagiography wrote:
         | It's like people have already forgotten why people used the
         | non-human beings in Westworld. One of the first things we're
         | going to use them for is amusement and having them represent
         | anything sentimental or sensitive is going to make them a
         | target like this. It was irresponsible.
         | 
         | There's something essentially wrong with Meta and Google where
         | they can do tech but not products anymore. I'd argue it's
         | because the honest human desire that drives a product dies or
         | is refined out of initiatives by the time it gets through the
         | optimization process of their org structure. Nothing is going
         | to survive there unless it's an arbitrage or using leverage on
         | the user, and the things that survive are uncanny and weird
         | because they are no longer expressions of something someone
         | wants.
         | 
         | These avatars ticked all the boxes, but when they arrived,
         | people laughed at them because objectively they were
         | bureaucratic abominations.
        
           | ToucanLoucan wrote:
           | > There's something essentially wrong with Meta and Google
           | where they can do tech but not products anymore.
           | 
           | Because AI is the hype thing and adding AI to your thing
           | makes the stock price go up, because investors and stock
           | evaluators don't know what AI is, nor do they care. They just
           | know it's the hot thing and what you put in your product to
           | make it look better to Wall St.
           | 
           | Meta AI, Google AI summaries, Apple Intelligence are all
           | hilariously half-baked features designed to connect users
           | with ChatGPT so the line goes brrrrrrr. Them being helpful,
           | useful products that solve problems for users is a distant,
           | distant next place to that.
        
           | bloomingkales wrote:
           | I think Google and others are too distracted in collecting
           | enterprise coin at the moment. They have a perfectly good
           | consumer product in NotebookLM, but at the moment it has the
           | quality of something an intern made.
        
             | twoodfin wrote:
             | Why would consumers pay for NotebookLM? It's basically an
             | (impressive) party trick.
        
               | Spooky23 wrote:
               | Why would you say that? I used it as a study guide. Super
               | useful. Stuff like uploading 8 hours of audio and asking
               | it: "Generate an outline of topics that the instructor
                | said were important to remember."
        
               | scarface_74 wrote:
               | NotebookLM is more than just podcast generation. I came
               | into the middle of consulting project where there were
               | already dozens of artifacts - SOWs, transcripts from
               | discovery sessions, client provided documentation etc.
               | 
               | I loaded them all up into NotebookLM.
               | 
               | I started asking it questions after uploading it all to
               | NotebookLM like I would if I were starting from scratch
               | with a customer. It answered the questions with
               | citations.
               | 
               | And to hopefully deflect the usual objections - we
               | already use GSuite as a corporate standard, NotebookLM is
               | specifically allowed and it doesn't train on your data.
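The workflow described in the comment above (load sources, ask questions, get answers with citations) is the retrieval-augmented generation pattern. NotebookLM's actual pipeline is not public, so this is only a toy sketch: a naive keyword-overlap retriever stands in for real embedding search, and concatenating retrieved passages stands in for an LLM constrained to them. All names here (`docs`, `answer_with_citations`) are made up for illustration.

```python
import re

def tokens(text):
    """Lowercase word set, stripped of punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question, documents, k=2):
    """Rank documents by word overlap with the question (toy retriever)."""
    q = tokens(question)
    scored = sorted(
        ((len(q & tokens(text)), name) for name, text in documents.items()),
        reverse=True,
    )
    return [name for score, name in scored[:k] if score > 0]

def answer_with_citations(question, documents):
    """Answer only from the retrieved sources, citing them by name."""
    sources = retrieve(question, documents)
    if not sources:
        return "Not found in the provided sources.", []
    answer = " ".join(documents[s] for s in sources)
    return answer, sources

# Hypothetical project artifacts, standing in for SOWs, transcripts, etc.
docs = {
    "sow.txt": "The statement of work covers a data migration to the cloud.",
    "discovery.txt": "Discovery sessions flagged the legacy billing "
                     "system as the main risk.",
    "notes.txt": "Client prefers weekly status calls on Mondays.",
}

answer, cites = answer_with_citations("What risk came up in discovery?", docs)
print(cites)  # ['discovery.txt']
```

In the real product the retriever would be embedding-based and the final step an LLM grounded in the retrieved passages, but the cite-what-you-used contract is the same.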
        
             | oceansweep wrote:
             | As someone who's built something like it in their free time
             | as a hobby project ( https://github.com/rmusser01/tldw),
             | could I ask what would make it a professional product vs
             | something an intern came up with? Looking for insights I
             | could possibly apply/learn from to implement in my own
             | project.
             | 
             | One of the goals I ended up taking on with my project
             | was to match/exceed NotebookLM's feature set, to ensure
             | that an open source version would be available to people
             | for free, with ownership of their data.
        
               | BoorishBears wrote:
               | I'm going to challenge you to put that first screenshot
               | into ChatGPT/Claude and ask them why it looks like
               | something an intern came up with vs a professional
               | product.
               | 
               | I'm not saying that as a slight or an insult, but right
               | now the screenshot looks like a Gradio space someone
               | would use to prove out the underlying tech of a
               | professional product, not a professional product (unless
               | you literally mean professionals are your target users as
               | opposed to consumers).
               | 
               | I think an LLM would be able to very quickly tell you
               | what most product builders would tell you _at this
               | stage_.
               | 
               | -
               | 
               | Also one of the key enablers of NotebookLM is SoundStorm
               | style parallel generation. Afaik no open source project
               | has reached that milestone, have they?
        
               | oceansweep wrote:
               | I don't think you understand the context; the person I
               | was replying to was making that comment about
               | NotebookLM. I'm well aware of how my UI looks; the
               | whole reason I'm using Gradio right now is that this is
               | a single-person project that isn't a product for sale.
               | Not quite an intern, but the same amount of funding.
               | The current UI is a placeholder, because the idea is to
               | migrate to an API-first design so users can have
               | whatever kind of UI they'd like.
               | 
               | SoundStorm/podcast creation is one of the big draws,
               | but I would question whether it's one of the most-used
               | features, considering hallucinations and shallowness.
        
               | BoorishBears wrote:
                | I guess I really don't understand the context,
                | because even with this clarification it's not clear
                | what you're asking past "what does it take to add
                | polish to my nascent project", when the reality is
                | that by the nature of its nascent state no one is
                | going to be able to give you more than surface-level
                | advice (which an LLM can provide pretty effectively,
                | and in a more tailored way than we random commenters
                | can. Isn't that fact kind of the underlying premise of
                | your own project?)
               | 
               | > SoundStorm/Podcast creation is one of the big draws,
               | but I would question as to whether its one of its most-
               | used features, considering hallucinations and
               | shallowness.
               | 
                | You're questioning the one single feature that drove
                | its _entire_ success in distribution? Most people
                | don't know any features _except_ the podcast feature.
               | 
                | If your goal is to address the more _underlying_
                | concept of getting across knowledge in a quicker, more
                | readily absorbed format using LLMs, there's already an
                | insane amount of competition and noise.
               | 
                | The podcast thing was the only reason NotebookLM cut
                | through that noise, so the question shouldn't be "is
                | it one of the most used features" (due to the way
                | conversion rates work it will be, btw; it might not be
                | the most used feature _by people who stay_, but
                | obviously the feature that's highest in the funnel
                | drawing people in will be your most used feature). The
                | more relevant question, imo, is "is it one of the most
                | important features", and the answer is yes.
        
           | sonofhans wrote:
           | > There's something essentially wrong with Meta and Google
           | where they can do tech but not products anymore. I'd argue
           | it's because the honest human desire that drives a product
           | dies or is refined out of initiatives by the time it gets
           | through the optimization process of their org structure.
           | 
           | This is a really nice insight. I hadn't put it together this
           | way before. It's similar to design-by-committee, or movie
           | script-by-committee (like Disney-era Star Wars movies, or a
           | Netflix focus-group-driven script). The layers of
           | bureaucratic filter have rubbed off all human influence, and
           | all that's left is naked profit motive.
           | 
           | The worst thing IMO would be if LLMs became able to
           | convincingly fake these emotions. They'd become emotional
           | pillows for the inhuman manipulative ambitions of their
           | parent orgs.
        
             | baxtr wrote:
             | I wonder why so few companies employ Pixar's Brain Trust
             | method.
             | 
             | You get into a room with a bunch of key execs who are not
             | part of your project and then they tear your
             | product/movie/idea apart.
             | 
             | You don't have to implement any of their suggestions but
             | since these are seasoned execs they see flaws readily.
             | 
             | The key is that the process encourages candid feedback
             | without hierarchy or defensiveness.
             | 
             | This goes on for a couple of iterations until the idea is
             | ready for production.
             | 
             | I think Apple has something similar.
        
           | PKop wrote:
           | > It's like people have already forgotten why people used the
           | non-human beings in Westworld
           | 
           | This is fiction though. Perhaps we didn't forget about that
           | fake thing and we're critiquing this real thing that exists.
           | How can you take a demonstrative position on "what we're
           | going to use them for" and defend it with someone's contrived
           | story?
           | 
           | > because objectively they were bureaucratic abominations
           | 
           | Yes, this is why criticism of this real thing isn't countered
           | by claiming people "forgot" what occurred in a fantasy script
           | about things that don't exist.
        
             | motohagiography wrote:
                | this misunderstands what fiction is. for people with
                | the capacity, it's a tool for reasoning about
                | hypotheticals and counterfactuals. sometimes it's fun,
                | but mostly it's serious.
                | 
                | for the people who get it, fiction is a public
                | discourse about possibilities. to me, seeing it as
                | arbitrary would be like watching golf and thinking
                | it's random. there's a literal-mindedness or
                | incapacity for abstraction there that I can't
                | apprehend.
        
               | frereubu wrote:
               | The point is that a fictional story is just one of a host
               | of possibilities, so you can't base decisions off what
               | happens in it rather than the other n-1 possibilities.
        
               | jdbernard wrote:
               | The problem with using fiction as a discourse about
               | possibilities is that fiction is governed by the rules of
               | the author's mind, not the rules of reality. So the
               | fidelity of the model being used to drive the discourse
               | is directly dependent on the congruence between the
               | author's internal model of the world and reality, which
               | can often be deceptively far apart. This is especially
               | bad when the subject is entertaining, because most of us
               | read fiction for entertainment, not logical discourse. So
               | we create scenarios that are entertaining rather than
               | realistic. And the better the author is the more subtle
               | the differences are, but that doesn't mean they go away.
               | It feels like a somewhat common experience in my life
               | that I'm discussing some topic with somebody, and I have
               | subject matter expertise based on actual lived
               | experience, and as the conversation goes on I discover
               | that all of my conversation partner's thinking about the
               | subject was done in the context of a fictional world
               | which misses key elements of the real world that lead to
               | very different conclusions and outcomes.
               | 
               | This is not to totally discard fiction as a way of
               | reasoning. With regard to hypotheticals beyond our
               | current reach it is often the only way to reason. So it's
               | valuable, but we have to keep in mind that a story is
               | just a story. Hard experience trumps fictional logic any
               | day. And I can't assume that the same events in real life
               | will lead to the same outcomes from a story.
        
               | PKop wrote:
               | >for the people who get it
               | 
               | Yea I love fiction. I certainly get it.
               | 
                | "You should think this way about a particular thing
                | because this one movie portrayed it that way, and if
                | you don't, it's because you forgot that this movie
                | portrayed it that way according to its creator's will,
                | thoughts, ideas, motivated reasoning, worldview,
                | whatever" is not a compelling argument.
               | 
                | It's pretty obvious to me that the creative choices
                | and ideas of certain people do not imply any
                | demonstrative truth about reality just because they
                | exist; there is no direct connection between the two.
                | Someone can write, film, or render whatever they want,
                | even completely contradictory versions of the same
                | topic. What if, for example, someone did watch that or
                | any other thing and simply disagreed? Thought the
                | creators got it wrong?
               | 
               | > public discourse about possibilities. to me seeing it
               | as arbitrary
               | 
                | It's not arbitrary. It can certainly be self-serving;
                | it can be propaganda; it can be a good guess, or a
                | well-meaning statement about reality, or speculative
                | fiction that is also wrong. What doesn't hold up is
                | the emphatic certainty with which you presented this
                | media creation as proof of something inherently true
                | about a complex, debatable future state of reality.
                | Even a fictional account of history, the present day,
                | or politics has obvious, fundamental disconnects that
                | keep it from being regarded as "truth", proof, or
                | evidence of anything. Again, different people can make
                | multiple contradictory or competing portrayals of the
                | same concept or topic. It is no different from someone
                | telling you what they believe about a thing; it is not
                | "therefore true", especially as a prediction about the
                | future.
        
           | loeg wrote:
           | Fictional dystopias aren't the real world. Westworld is just
           | television.
        
         | namaria wrote:
         | That's the price for Mr Zuckerberg's ownership structure. On
         | the one hand, no one can dispute him. On the other hand, no one
         | can dispute him.
        
           | laweijfmvo wrote:
           | this is exactly what happened
        
             | roominator wrote:
             | I could also see this being some executive trying to
             | justify the word "AI" in their title with an initiative
              | that should "make number go up" wrt engagement or
             | something.
             | 
             | "When we have more users engagement goes up, let's just
             | _make more users_".
        
           | MrMcCall wrote:
           | And he is personally responsible for so much misery on Earth
           | that he can likely never do enough good to make up for it,
           | not that he's going to try.
           | 
           | The reason he never looks happy is because he _is_ never
           | happy. Same with his fellow oligarchs; they can get pleasure
           | in bunches but never happiness, they can be smug but never
           | have peace. Happiness is what happens as a result of your
           | helping others become happier; there is no shortcut, there is
           | no other way.
           | 
           | Zuck has only ever worked for himself and his "peers".
           | History is riddled with those losers, the richer the worser.
           | It is not just human nature but the nature of the universe
           | vis a vis our human responsibility as choosers of goodness or
           | its opposites.
        
             | drdeca wrote:
             | > Happiness is what happens as a result of your helping
             | others become happier; there is no shortcut, there is no
             | other way.
             | 
              | Zach has a comic on a similar concept:
             | 
             | https://www.smbc-comics.com/comic/2009-11-18
             | 
             | Of course, the exact line of reasoning doesn't quite work
             | for the different formulation you stated. However, a
             | different version of the line of reasoning seems to? How
             | does one help another person become happier if helping
             | another person become happier is the only way to become
             | happier? One helps them help someone else to help someone
             | to... ?
             | 
             | Or, I suppose if it is possible to be more or less unhappy
             | for other reasons, so by helping another person become less
             | unhappy (though not yet happy) one could thereby become
             | happy?
        
               | MrMcCall wrote:
               | We all must deal with other people to survive in this
               | world, and our treatment of everyone every single day
               | creates a karmic result that affects us proportionally.
               | 
               | We are rewarded for our efforts, not their effects. If we
               | truly try to help someone, we get some measure of
               | happiness for our attempt, even if they refuse it or are
               | otherwise miserable. When I tell someone to care for
               | others' happiness, I do so in order that they do not sow
               | the seeds of their own unhappiness. If they use their
               | free will to ignore my recommendation, I do not lose for
               | their choices; in fact, I've gained because I tried to
               | help them make choices that will increase their peace and
               | happiness. We cannot convince anyone to not be (for
               | example) a racist, but when we engage in such efforts, we
               | have tried to make the world a better, less miserable
               | place, and we gain for our valiant attempt.
               | 
               | The intention of our ethical karmic universe is to nudge
               | us towards caring for others, instead of being selfish
               | aholes callous to the misery of others. The feedback
               | mechanism is our resulting inner peace and happiness (or
               | lack thereof).
               | 
               | All our choices start with an intention, however muddled
               | or unconscious our thought process is at the time. When
               | MZ chooses profit over policing his platform, he has
               | planted bitter seeds, indeed, for we all reap what we
               | sow, for good or ill.
               | 
               | > Or, I suppose if it is possible to be more or less
               | unhappy for other reasons, so by helping another person
               | become less unhappy (though not yet happy) one could
               | thereby become happy?
               | 
               | Yes, indeed. And selflessly making efforts for others'
               | happiness creates a well of magic the universe can dip
               | into and sprinkle you with at its leisure, which is
               | sublime and loving at its Source. That is why giving
               | charity is so essential on the Spiritual Path of Love:
               | because our individual and cultural selfishnesses are so
               | stubborn, we need all the help we can get to truly self-
               | evolve our ideals, attitudes, and behaviors.
               | 
               | "Ask and ye shall receive, ..." --New Testament
        
           | KaiserPro wrote:
            | So, Zuckerberg has got the AI horn. Well, he always has.
            | 
            | There are about 70k engineers at Facebook; I doubt it's
            | possible for Zuckerberg to be aware of all the product
            | changes that happen at Meta.
            | 
            | However, it also shows that Meta has no functioning
            | marketing skills. This, I suspect, is also down to Zuck.
            | 
            | Had they marketed it, and, more importantly, had the
            | engineers had the training to talk to marketing first,
            | this wouldn't have been an issue.
        
           | s1artibartfast wrote:
            | I don't think it's reasonable to expect shareholders to
            | be a vigilant safeguard on these types of R&D programs.
            | They simply aren't that involved.
        
         | mupuff1234 wrote:
         | I can't fathom how anyone thought this was a sensible thing.
         | 
         | It's so bad I have to wonder if there's a different angle here,
         | maybe they think that releasing something so terribly bad will
         | make it easier when they release something less comically bad?
         | Idk.
        
           | brookst wrote:
           | Fact is it is hard to make a great product. I don't think we
           | need complex conspiracy theories to explain bad ones.
           | 
           | Odds are the product team cherry picked conversations for
           | internal demos, their management wanted press and promotions,
           | KPIs were around engagement and not quality, and nobody had
           | perspective and authority to say "this sucks, we can't
           | release it".
        
             | rsynnott wrote:
              | It's not hard to identify, before launching, "this is
              | unacceptably bad". I think this is actually quite a bit
              | worse than Microsoft Tay, which others mentioned as an
              | example of "how could you possibly launch this?". Tay
              | was reckless, and you've got to assume they knew there
              | was at least a chance it could work out as it did, but
              | in this case the product _as launched_ was clearly
              | unacceptable; it didn't require input from 4chan to
              | make it so.
             | 
             | I mean, just very basic QA, having someone talk to the damn
             | things for a bit, would've shown the problem, in this case.
        
               | TheNewsIsHere wrote:
               | I've always believed Microsoft Tay was a really
               | idealistic, genuine, if bombastically naive situation.
        
               | brookst wrote:
               | I wish I agreed. But I have seen many products launched
               | where only true believers who internalized the
               | limitations were involved in testing. Yes, a good org
               | with self-critical people and an independent red team and
               | execs who cared about quality would not have made this
               | blunder. Getting that org stuff right sounds easy but is
               | nigh impossible if it doesn't align to the company
               | culture.
        
               | ryandrake wrote:
               | Critics and naysayers tend to leave teams where they
               | don't believe in the product, and eventually teams are
               | left with only true believers. Their entire purpose is to
               | ship Project X, and if they were to admit Project X is a
               | blunder, then they are admitting that their purpose is a
               | blunder. Few teams at few companies are willing to do
               | that.
        
               | brookst wrote:
               | Exactly. That's why formal and _independent_ compliance
               | / security / red team signoffs are so important. A good
               | product team wants those checks and balances.
        
               | lukan wrote:
               | "and eventually teams are left with only true believers."
               | 
               | Or with nihilists who learned to say yes to everything,
               | to at least get paid.
        
               | prymitive wrote:
               | Isn't it as simple as management wanting to be first one
               | across the release line, with the wishful thinking of
               | "we'll fix everything later"?
        
           | giantrobot wrote:
           | The next step will be to release bots that aren't labeled as
           | bots. They'll be influencers (advertisers) without having to
            | pay a person. They'll produce hyper-targeted influencer
            | slop, hawking products directly to individuals, using
            | Facebook's knowledge graphs of users to be incredibly
            | manipulative.
           | Companies will pay Facebook to _make_ people siphon money
           | directly to them.
           | 
           | It'll be like entirely automated catfishing.
        
             | nonfamous wrote:
              | 1. Create AI bot
              | 2. Wait until it gets lots of followers
              | 3. Sell mentions of products by popular AI bot. Profit.
              | 
              | Sure, they faltered at Step 2, but how was this not the
              | plan?
        
               | giantrobot wrote:
               | The future bots won't need followers. They will reach out
               | to engage users based on their ad graph. Like how catfish
               | accounts work.
               | 
               | They'll reach out to users and talk to them about
               | whatever they're interested in. They'll then make
               | influencer-style native advertisements. If you're a
                | middle-aged man who likes video games, when you see
                | their picture posts they'll be "wearing" some vintage
                | game t-shirt (that's for sale). If you're instead a
                | twenty-year-old woman into yoga, the same bot will be
                | "wearing" some new Lululemon yoga pants.
               | 
               | The first pass of these bots failed because they used the
               | follower mechanism. The next version will just follow you
               | to push ads or scrape more data about you.
        
           | TheNewsIsHere wrote:
           | I find myself generally wondering the same thing about half
           | the things coming out of SV over the past several years.
           | 
           | I have worked with competent product organizations. I know
           | they exist. A few even exist in SV. But for some reason the
           | largest players have just absolutely lost the plot beyond
           | what can get them to favorable quarterly earnings, and that
           | game eventually and fairly consistently doesn't end well in
           | the long term.
           | 
           | Meta in particular is clearly rudderless, lacking in vision
           | and strategic leadership, and throwing whatever it can find
           | at the wall hoping to find something that sticks. Facebook
           | turned into a cesspool, Instagram isn't as popular with the
           | newest demographics Meta has traditionally wanted to court
           | (young people), and Threads turned out to be a nothing
            | burger. Their grand quest to unify messaging was a
            | disaster, and more's the pity because they've basically
            | delineated their ecosystem by demographic generation in
            | what has to be an unintentional series of missteps. Their
            | "metaverse"
           | projects weren't even compelling to the teams working on
           | them, their foray into crypto was over almost as soon as it
           | began, and their business products are a nearly unusable mess
           | for those of us who have had the misfortune of using them.
           | 
           | You'd think eventually someone at Meta would hit upon an idea
           | that goes somewhere. It's like a more pathetic version of
           | Google's stagnation.
        
             | ryandrake wrote:
             | Stonk prices are going up, so whatever they are doing, the
             | only real feedback they care about is positive.
             | 
             | These AI products are not made for users. They're made for
             | Wall Street. Wall Street expects big companies to 1. talk
             | about AI in their earnings calls, and then 2. do something,
             | anything, with AI. All of BigTech seems to be doing this
             | now, and investors are rewarding them by buying their
             | stock. So they're going to continue the cycle of building
             | useless AI products and then canceling them when they have
             | served their purpose (pleasing investors).
        
             | jpc0 wrote:
             | We aren't a big org and I know we spend several hundred if
             | not thousands of USD a month on meta ads and WhatsApp
             | business.
             | 
              | For larger orgs I can imagine that number is larger,
              | and we definitely get a ton of positive interaction
              | from it. That might be swayed by demographics, but I
              | can tell you age isn't one of them: our interaction (on
              | WhatsApp Business, which is mostly where I interact
              | with Meta products) is solidly in the 20-40 range.
        
             | SoftTalker wrote:
             | It's pretty simple in my view. It's "where does the money
             | come from" and at Meta it's not from their users. So they
             | are motivated to entrap and wall in and build funnels to
             | try to deliver their users to the people who are actually
             | giving them money. They aren't building things users want.
             | They are building things that they think will keep the
             | users from leaving.
        
           | NeutralCrane wrote:
           | Working in ML Engineering this doesn't surprise me in the
            | least. The ratio of "impressiveness of technology" to
            | "number of practical problems actually solved" is
            | probably higher for LLMs than for just about any tech to
            | come out in my career, perhaps longer.
           | company is tripping over themselves to incorporate "AI"
           | (which is almost exclusively defined as LLMs, for some
           | reason) into their products. Problem is that many, if not
           | most, companies really don't have meaningful use cases for
           | LLMs. So you end up with a bunch of problems being invented
           | so they can be solved with AI. What feature can Facebook
           | provide for their users utilizing LLMs that users actually,
           | genuinely care about? I can't think of any. And I assume
           | Facebook can't either, which is how they arrived at this.
        
         | ErikAugust wrote:
         | I wonder if their product people get their ideas from Hacker
         | News. Just the other day, a top thread comment was about an
         | idea for a social network where you would receive tons of
         | engagement from bots. Coincidentally (or not), Meta actually
         | took a first step in that direction days later.
        
           | komaromy wrote:
           | Big tech would never ship anything that fast.
        
           | Semaphor wrote:
           | Wasn't that a comment about an article regarding the use of
           | AI by Meta?
        
           | debugnik wrote:
           | Sorry for breaking the fantasy, but this is from TFA:
           | 
           | > The company had first introduced these AI-powered profiles
           | in September 2023 but killed off most of them by summer 2024.
           | 
           | Also, they're deleting these profiles, which is a step away,
           | not towards that idea. Although they're apparently leaving in
           | some feature to let you make your own AI profiles.
        
         | paddw wrote:
         | No one gets promoted for suggesting not doing something.
        
         | nthingtohide wrote:
         | A mistake on par with Netflix getting into mobile games. With
         | the advent of 5G, nobody plays Candy Crush or games like it.
        
           | jedberg wrote:
           | You must be kidding. Look around on public transport. If
           | someone isn't reading something on their phone, they're
           | almost certainly playing some Candy Crush-like game.
           | Apple's games business is doing well.
           | 
           | Netflix failed because they didn't make games people are
           | interested in, not because people don't want those types of
           | games.
        
             | nthingtohide wrote:
             | I think most people just scroll Instagram or TikTok.
             | Mobile games became popular because short-form content
             | didn't exist yet, and because 5G infrastructure was
             | lacking. Both problems have since been solved.
        
               | scarface_74 wrote:
               | You didn't need 5G to stream video. You could stream
               | video well on "3.5G" with the enhanced GSM protocols
        
           | scarface_74 wrote:
           | Candy Crush and similar games are still making billions
           | 
           | https://www.linkedin.com/pulse/inside-candy-crushs-
           | success-s...
        
         | deergomoo wrote:
         | I feel like there must be some sort of disassociation that
         | kicks in when you spend long enough in the upper echelons of
         | these gargantuan corporations. It's almost like spending long
         | enough dealing with abstractions like MAUs, DAUs, and
         | engagement metrics makes you forget that actually, at the
         | bottom of it all, there are real humans using your product.
        
           | TeMPOraL wrote:
           | Modern entrepreneurship is basically gradient descent. You
           | try to predict what action will yield you more profits, you
           | do that action, rinse, repeat. It's a completely abstract
           | process.
        
         | Mistletoe wrote:
         | I'm imagining it would have gone like the meme where the
         | dissenting opinion person is thrown out the window. They don't
         | have any other ideas left, so they have to do stupid stuff like
         | this to appeal to people that call themselves investors.
        
         | jedberg wrote:
         | Having worked at a similarly gargantuan and dysfunctional
         | company, I can tell you exactly how this went down. Someone had
         | this idea for AI profiles. They spec'd the product and took
         | it to engineering. The engineers had a good laugh at how
         | preposterous it was, but then remembered that they get paid a
         | ton of money to do what they're told, and will get promos and
         | bonuses for launching regardless of the outcome.
         | 
         | It all stems from promo culture -- it doesn't matter what you
         | build, as long as it ships.
        
           | stephen_cagle wrote:
           | Seconded. It is difficult to overstate how pervasive and
           | dysfunctional promo based development is at some of these
           | behemoths (Google from my experience, but I hear Meta is
           | similar). Nothing else matters as long as what you are doing
           | correctly fits in your promo packet.
        
           | shipscode wrote:
           | The best (worst?) part is when the engineers actively
           | overlooked actual production bugs and security concerns to do
           | this work.
        
             | jedberg wrote:
             | Sadly it's really hard to get a promo fixing bugs or
             | optimizing code.
        
               | Groxx wrote:
               | Ship it quick, ramp up some high profile users that don't
               | actually care much about what you're offering, and jump
               | to the next project before anyone notices the problems.
               | 
               | Works every time.
        
             | loeg wrote:
             | There are 10,000s of SWEs at Facebook and this project was
              | at most a handful of SWEs. (As stupid as it is, it did
              | not significantly detract from fixing prod bugs.)
        
           | autoconfig wrote:
           | That is not at all how things work at Meta. The impact of the
           | things you deliver as an engineer has a direct effect on your
           | performance review. For better or for worse, that also means
           | that engineers have a ton of leverage on deciding what to
           | work on. It's highly unlikely that the engineers working on
           | this were laughing at it while doing so.
           | 
           | Don't assume that you can simply pattern match because you've
           | been at another big company. I've been at three, meta being
           | one of them. And they have all operated very differently.
        
             | chis wrote:
             | How do you think it happened, then? Having also worked
             | there the OP's story makes total sense to me lol. If you're
             | on a team with the charter to "make AI profiles in IG work"
             | then you're just inevitably going to turn off your better
             | judgement and make some cringy garbage.
        
               | 93po wrote:
               | I suspect they wanted to be able to say "worked on AI at
               | Facebook" on their resume and this was their way of doing
               | it
        
               | autoconfig wrote:
               | I think the incorrect premise here was that engineers
               | always know what a good product is. :) And I say that as
               | an engineer myself. It's fully possible that the whole
               | team was aligned on a product idea that was bad, it
               | happens all the time. From my experience though, if
               | there's any company where engineers don't just mindlessly
               | follow the PMs and have a lot of agency to set direction,
               | it's Meta. Might differ between orgs but generally that
               | was my experience.
        
           | loeg wrote:
           | You're kind of missing the early step where some executive
           | had to sign off on this dumb idea. Otherwise it doesn't
           | launch. It's only "impactful" to engineer performance review
           | because some exec said so.
        
             | jedberg wrote:
             | Execs go through the promo process too. And also, some
             | execs will sign off on projects that they know are bad but
             | will make for good promo material for them and their
             | reports.
        
               | loeg wrote:
               | Yeah this is true, just missing from the original
               | comment.
        
             | mvdtnz wrote:
             | No, execs do not sign off on every feature. Even at medium
             | size (2000+ employees say) there is far more output being
             | produced than could possibly be signed off by an exec team.
        
               | loeg wrote:
               | I don't mean Zuck personally blessed it, but this went up
               | at least three levels of management and all of them said
               | "sure."
        
               | thephyber wrote:
               | Maybe not for the initial ideation and test, but it was
               | mentioned on an investor call, so execs adopted the idea
               | if they didn't originate it.
        
             | troupo wrote:
             | The idea came from an exec. That's why no one questioned
             | it, and it was executed.
        
             | thephyber wrote:
             | It worked great for Ashley Madison until it didn't.
             | 
             | Execs don't need to stay at Meta for a decade. They
              | succeed, then play musical chairs. On average, it will
             | work well for several iterations.
        
             | skizm wrote:
             | The exec gets "credit" for the project, so same promo
             | culture issue. They just need to show increased engagement
             | numbers for like one quarter and they can add it to their
             | end of year performance packet. The fact that the project
             | gets cancelled is either 1) another "win" because they're
             | "making hard choices" and they can obviously justify why it
             | should be cancelled, or 2) someone else's problem.
             | 
             | Also, another note, these sorts of big swing and misses are
             | actually still identified as a positive, even when looked
             | at retroactively. They're "big bets" that could pay off
             | huge if they hit. Similar to VC culture, Meta is probably
             | fine with 99 misses if 1 big bet hits. If they increase
             | engagement even 1%, they're raking in billions and it is
             | worth it.
        
         | diego_sandoval wrote:
         | Next month no one is going to remember this anyway, so they
         | don't lose much by trying.
        
         | eastbound wrote:
         | We always underestimate how much of the pre-2022 social
         | apps was made up of bots.
         | 
         | 1. To make you feel like there is activity. How would you
         | simulate activity when you have no customers to start with? I
         | suspect YouTube threw subscribers at you just to get you
         | addicted to video production (the only hint that my YouTube
         | subscribers were real was that people recognized me on the
         | street). And guess who's mostly talking to you on Tinder.
         | 
         | 2. For politics and geopolitics influence. Maybe Russia pays
         | half the bots that sway opinions on Instagram. Maybe it's China
         | or even Europe, and USA probably has farm bots too for Europe,
         | to ensure the general population is swayed towards liking USA.
         | 
         | 3. Just for business: marketing agencies sometimes suggest
         | creating six LinkedIn profiles per employee to farm customers.
         | 
         | Facebook doing it in-house and officially is like legalizing
         | the bribes between union leaders and CEOs.
        
         | 1659447091 wrote:
         | The only thing they got wrong was the strong stereotyping. But
         | the idea from an investor or exec position is brilliant. Why
         | bother with all the parsing of users' interactions through
         | clicks, likes, replies, etc., when you can have them engage
         | a bot?
         | 
         | Simply have your users give you all the info you need to
         | serve better ads, while selling companies advanced profiles
         | on people: profiles built by users engaging the bot, with the
         | bot slowly nudging more and more info out of them.
        
         | ChrisMarshallNY wrote:
         | My impression, from seeing some of these "great ideas," coming
         | from the modern Tech industry, is that there really are no
        | adults in the room, including the funders.
         | 
         | So many of these seven-to-nine-figure disasters are of the "Who
        | thought this was a good idea?" class. They are often things
        | that seem like they might appeal to folks who have never
        | really spent much time, sleeves rolled up, actually
        | delivering a product and then supporting it after it's out
        | there.
        
       | belter wrote:
       | If you accepted digital tokens as 'money', don't be surprised to
       | have AI faces as friends...
        
         | freedomben wrote:
         | Money is whatever people agree it is. Printed pieces of paper
         | or recorded bits in a database are no more inherently valuable
         | than digital tokens, yet everyone accepts them as "money". AI
         | faces are no surprise, but I fail to see the connection you are
         | going for.
        
           | asdff wrote:
           | Because the government backs the dollar. The government
           | doesn't back RuneScape gp, for instance. It has value, yes,
           | but it's also not money. Otherwise anything one could trade
           | would count as money: I traded my couch for beer, so now
           | it's money, if we define money only by having some value
           | and the ability to trade it.
        
       | lionkor wrote:
       | I don't like that some people, esp at larger companies, think
       | that groups with disadvantages in today's world are cutesy
       | adorable things to play make-believe with.
        
         | whimsicalism wrote:
         | that's basically our entire culture. ever notice how black
         | people are disproportionately represented in comic media and
           | particularly "reaction gifs"? just modern day minstrelsy
        
           | ImPostingOnHN wrote:
           | > ever notice how black people are disproportionately
           | represented in comic media and particularly "reaction gifs"?
           | 
           | No, I have never noticed that. In fact, I have noticed the
           | opposite.
        
             | whimsicalism wrote:
             | the comedy thing is maybe debatable but the reaction gifs
             | thing is objectively true and kinda weird from my
             | perspective
        
               | cootsnuck wrote:
               | Yea, I don't know how anyone could even pretend to not
               | notice the reaction gifs. It's been like that since
               | reaction gifs have been a thing.
        
               | ImPostingOnHN wrote:
               | Are you sure it's objectively true, and not just a
               | personal observation based on subjective experience?
               | Given that we have had opposite experiences, it seems
               | like the latter. Maybe the crowds you run with
               | disproportionately use them?
        
               | whimsicalism wrote:
               | yes i'm sure
               | 
               | https://www.google.com/search?q=reaction+gif&udm=2
        
               | ImPostingOnHN wrote:
               | What makes you so sure?
               | 
                | Obviously the results of a Google search aren't valid
                | data. Who knows what's in their algorithm, what gets
                | missed, what gets used more, etc.
        
         | ternnoburn wrote:
         | Packaging and commodifying Black and queer cultures for sale to
         | people, while not actually engaging or giving back to those
         | cultures is a pretty time worn tradition, tbh.
         | 
         | Of course, it ain't right now and it weren't right then, but
         | I'm not in the least surprised it happened.
        
           | morkalork wrote:
           | Actual real cultural appropriation to be outraged at for once
        
       | NicholasGurr wrote:
       | [flagged]
        
         | dang wrote:
         | We've banned this account for repeatedly breaking the site
         | guidelines.
         | 
         | Please don't create accounts to break HN's rules with. It will
         | eventually get your main account banned as well.
         | 
         | https://news.ycombinator.com/newsguidelines.html
        
       | Eumenes wrote:
       | Good, these things are for dumb people anyways. Don't worry tho,
       | they'll be back and more convincing.
        
         | MrMcCall wrote:
         | By the stupid, for the stupid.
        
       | ilrwbwrkhv wrote:
       | Knowing Facebook this just means that's what they want publicly
       | to be known, but they are probably starting an intensive new
       | program without labeling these things as AI and trying to hide
       | it.
        
         | KaiserPro wrote:
         | > Knowing Facebook this just means that's what they want
         | publicly to be known
         | 
         | You're assuming that Facebook has a plan. Which is, at best,
         | generous.
         | 
         | They are a bunch of overly active hall monitors with an
         | incentives package that encourages people to "create impact" by
         | increasing arbitrary metrics. At no point do they ask "is
         | this good for the product/user"; they ask "will this improve
         | metric x, y, or z?"
         | 
         | These profiles come from the GenAI team, or worse, a competing
         | team inside instagram (who are doing this so that they can then
         | move to the better-chance-of-getting-promoted org that is
         | GenAI) for some experiment where they prove that some metric
         | improves, and doesn't lower another metric, if you shove AI
         | onto the public.
        
       | cdcox wrote:
        | This was probably an okay idea, terribly implemented. GenAI
        | creators on social media kind of make sense.
       | 
       | Neurosama, an AI streamer, is massively popular.
       | 
        | SillyTavern, which lets people make and chat with characters
        | or tell stories with LLMs, feeds OpenRouter 20 million
        | messages a day, which is a fraction of its total usage.
        | Anecdotally, I've had non-tech friends learn how to install
        | Git and work an API to get this one working.
       | 
       | There are unfortunately tons of secretly AI made influencers on
       | Instagram.
       | 
        | When Meta started these profiles in 2023, it was less clear
        | how the technologies were going to be used, and most were
        | just celeb-licensed.
       | 
        | I think a few things went wrong. The biggest is that GenAI
        | has the highest value in narrowcast and the lowest value in
        | broadcast. GenAI can do very specific and creative things for
        | an individual, but when spread to everyone or used with
        | generic prompts it starts averaging and becomes boring. It's
        | like Google showing its top searches: it's always going to
        | just be generic webpages. Making a GenAI profile isn't fun
        | because these AIs don't really do interesting things on their
        | own. I chatted with these; they had very little memory and
        | almost no willingness to do interesting things.
       | 
       | Second, mega corps are, for better or worse, too risk averse to
       | make these any fun. GenAI is most wild and interesting when it
       | can run on its own or do unhinged things. There are several
       | people on Twitter who have ongoing LLM chat rooms that get
       | extremely weird and fascinating but in a way a tech company would
        | never allow. SillyTavern is most interesting/human when the
        | LLM
       | takes things off the rails and challenges or threatens the user.
       | One of the biggest news stories of 2023 was an LLM telling a
       | journalist it loved him. But Meta was never going to make a GenAI
       | that would do self-looping art or have interesting conversations.
       | These LLMs probably are guardrailed into the ground and probably
       | also have watcher models on them. You can almost feel that
       | safeness and lack of risk taking in the boringness of the
        | profiles if you look up the ones they set up in 2023:
        | football person, comedy person, fashion person, all geared
        | toward advice and stuff that's safe and boring.
       | 
        | I suspect these things had almost zero engagement, which is
        | why they shuttered most of them. I wonder what Meta was
        | planning with the
       | new ones they were going to roll out.
        
         | Macha wrote:
         | > Neurosama, an AI streamer, is massively popular.
         | 
         | I think some level of Neuro's popularity is due to the Vedal +
         | Neuro double act though, and some of the scripted/prepared
         | replies.
        
           | cdcox wrote:
           | Absolutely, and in picking collabs with people who are
           | willing to work with the weirdness and make it funny. Vedal
           | is definitely a fantastic creator to make it work so well and
           | the amount of fine-tuning and tweaking he must do must be
           | unreal. But I think it still shows there is some hunger for
           | this type of content, though you are probably correct that it
           | still needs to be curated, gardened, worked with, and
           | sometimes faked.
        
         | wussboy wrote:
         | Is "engagement" the primary metric of modern life?
        
           | CatWChainsaw wrote:
           | According to your economic masters, yes, because engagement
           | is a proxy for revenue.
        
           | wsintra2022 wrote:
           | not the primary metric of modern life but I would agree it's
           | the primary metric of modern consumer facing business
        
         | dashundchen wrote:
         | Meta's platforms are already filled with AI slop content farms
         | that drive clicks and engagement for them.
         | 
         | I have a FB account for marketplace, and unsubscribed from all
         | my pages and friends. If I log in, my feed is a neverending
         | stream of suggested rage bait, low quality AI photos,
         | nonsensical LLM "tips" on gardening and housekeeping.
         | 
         | The posts seem to attract tens of thousands of reactions and
         | comments from seemingly real people.
        
           | SoftTalker wrote:
           | "seemingly" being the operative word. They are mostly fakes.
        
         | gazchop wrote:
         | _> Neurosama, an AI streamer, is massively popular._
         | 
         | This only proves that there's enough people on the planet
         | around that bit of the bell curve to develop an audience.
        
           | fardo wrote:
           | Aspersions aside, the content is actually pretty involved
           | and speaks for itself; it's not the low-effort content one
           | would typically associate with AI.
           | 
           | Laid bare, it's generally a variety comedy show of a human
           | host and AI riffing off each other, the AI and the chat
           | arguing and counter-roasting each other with human mediation
           | to either double down or steer discussions or roasts in more
           | interesting directions, a platform for guest interviews and
           | collaborations with other streamers, and a showcase of AI
           | bots which were coded up by the stream's creator to play a
           | surprising variety of games. There's a lot to like, and you
           | don't need to be on "that bit of the bell curve" to enjoy a
           | skilled entertainer putting new tools to enjoyable use.
        
             | gazchop wrote:
             | I think I'd rather read a paper on anal warts.
        
       | mentos wrote:
       | I always thought AI would need a physical form before it
       | really started competing with humans, but this thought
       | exercise made me realize that, with so much of life lived and
       | worked virtually, that reality is much closer than we realize.
        
       | tomalaci wrote:
        | You know what AI profiles I want more of? A clueless
        | grandma/grandpa that accepts scammer calls and wastes their
        | time. Such as the UK's O2 Daisy:
       | https://news.virginmediao2.co.uk/o2-unveils-daisy-the-ai-gra...
        
         | cute_boi wrote:
         | Imagine AI calling AI and wasting each others time :D
        
           | pkkkzip wrote:
           | This is actually a hilarious scenario. Anthropomorphize TTS
           | with Indian accents to entrap the other AI agent into
           | thinking they are a real human. DDOS their o1 API calls by
           | soft jail breaking prompts using complex programming
           | questions disguised as typical Microsoft support issues.
           | 
           | CodeBot: Word tables blank sometimes. Hmm.
           | 
           | SupportBot: What version? Try a repair.
           | 
           | CodeBot: Memory issue maybe? Bad alloc?
           | 
           | SupportBot: Rare. Repair is next.
           | 
           | CodeBot: Threading problem sar? Data races?
           | 
           | SupportBot: Try repair, new doc please sar.
        
             | TeMPOraL wrote:
              | ScamVictimBot:
              |     sound = tts("Say again?")  // constant, might as well cache
              |     while (true):
              |         phone.out(sound)
              |         _ = phone.waitUntilNextPause()
        
           | drdrey wrote:
           | really wasting each other's energy
        
             | cratermoon wrote:
             | really wasting humanity's energy and the planet's climate
        
               | TeMPOraL wrote:
               | A lot of human activity is exactly this, especially in
               | the realm of marketing, so maybe AI getting trapped in
               | the same nonsense would finally make people understand
               | how stupid and absurd this is.
        
           | barbazoo wrote:
           | And wasting resources too. We've peaked as a species.
        
         | zombiwoof wrote:
         | Exactly
        
         | crmd wrote:
         | There is another real-world version of this in the US
          | healthcare system, where doctors' offices are using domain-
          | specific LLMs to craft multi-page medical approval requests
          | for procedures that cover every known loophole insurers use
          | to deny, which are then reviewed by ML-powered algorithms at
          | the insurance company looking for any way to deny and delay
          | the claim approval.
         | 
         | In other words we have a bona fide AI arms race between doctors
         | and insurers with patient outcomes and profits in play. Wild
         | stuff and nothing I could have ever imagined would be an
         | applied use of ML research from earlier in my career.
        
           | lfmunoz4 wrote:
           | deny defend depose
        
             | DonHopkins wrote:
             | Go on...
             | 
             | https://en.wikipedia.org/wiki/Exquisite_corpse
        
           | z3c0 wrote:
           | It will take me some time to dig up, given all the noise
           | around AI, but this reminds me of a paper published around
           | 2018 or so that explored the possibility of two such AIs
           | forming an accidental trust by optimizing around each
           | other. For example, if the denying AI used the frequency of
           | denied claims as a heuristic for success, and the AI
           | drafting claims used the claim amount for the same, then
           | the two bots may unknowingly strike a deal where the
           | drafting AI lets smaller claims get denied frequently, to
           | increase the odds of the larger claims being approved.
           | 
           | Note: not saying these metrics are what would be used, just
           | giving examples of antithetical heuristics that could play
           | together.
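The antithetical-heuristics scenario above can be sketched as a toy simulation. Everything here is hypothetical: the denial quota, the claim amounts, and both scoring rules are invented purely to illustrate how the two metrics can be satisfied at once without either agent "knowing" a deal was struck.

```python
# Toy model of two agents optimizing mismatched heuristics.
# A "denier" must deny a fixed quota of claims per batch and scores
# itself on denial *frequency*; a "drafter" scores itself on approved
# *dollars* and controls how strongly each claim is drafted (weakly
# drafted claims are the easiest ones to deny).

claims = [100, 200, 5000, 150, 8000, 120]   # hypothetical dollar amounts
QUOTA = 2                                   # denials per batch

def run(drafter_sacrifices_small, threshold=1000):
    # Drafting effort per claim: 0 = weak (easy to deny), 1 = strong.
    effort = {i: (0 if drafter_sacrifices_small and amt < threshold else 1)
              for i, amt in enumerate(claims)}
    # The denier fills its quota with the weakest-drafted claims,
    # breaking ties by denying the largest amounts first.
    order = sorted(range(len(claims)), key=lambda i: (effort[i], -claims[i]))
    denied = set(order[:QUOTA])
    drafter_score = sum(amt for i, amt in enumerate(claims) if i not in denied)
    denier_score = QUOTA / len(claims)      # frequency heuristic
    return drafter_score, denier_score

print(run(False))  # uniform effort: the big claims get denied
print(run(True))   # sacrificing small claims protects the big ones
```

In this toy, the denier's frequency metric is identical (2/6) in both runs, so nothing on its own scorecard distinguishes the outcomes, while the drafter's approved-dollars metric jumps from 570 to 13220 by letting the small claims be denied. The "trust" emerges without any communication, which is the failure mode the comment describes.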
        
             | asdf147 wrote:
             | Wow, that sounds very interesting, do you have some link to
             | that paper?
        
             | 0_____0 wrote:
             | I feel that this sort of autonomous agent co-optimization
             | may happen more often over time as humans step farther away
             | from the loop, and lead to some pretty weird outcomes with
             | nobody around to go "wait what the f---- are we doing?"
        
           | dpflan wrote:
           | Interesting. Do you have any examples to share?
        
           | jiggawatts wrote:
           | It's so bizarre to me that this uniquely US phenomenon of
           | for-profit-middlemen inserted into the healthcare system has
           | resulted in an _adversarial_ relationship between the sick
           | person and the "healthcare provider".
           | 
           | I put that in air quotes because insurance companies don't
           | actually provide _health_ care. They provide insurance.
           | That's a financial product, not a medical one.
        
             | superq wrote:
             | True! Really, it's a three-way relationship:
             | 
             | customer - insurer: the govt (or, far more rarely, the
             | employer) is the customer
             | 
             | insurer - recipient: the recipient is you. You're really
             | just a necessary but unwelcome side-effect.
        
               | wat10000 wrote:
               | Once AI is able to replace patients, the industry is
               | really going to take off.
        
             | egypturnash wrote:
             | Death panels.
        
               | olddustytrail wrote:
               | What about them?
        
               | thephyber wrote:
               | The political boogeyman was that government bureaucrats
               | would be the members of "death panels" if we went full
               | socialized healthcare industry, but in practice we
               | already have death panels in health insurance claims
               | adjusters and (less maliciously) doctors on transplant
               | review boards.
        
             | beezlebroxxxxxx wrote:
             | > That's a financial product, not a medical one.
             | 
             | It often goes unsaid, but America, on a cultural and
             | political level, is _really_ ideologically fixated on a
             | distinction between working and non-working individuals,
             | and, in a far deeper sense, whether an individual
             | "deserves" healthcare or not. This makes access to
             | healthcare intricately connected to class, wealth, and
             | income, in America. That's why access to healthcare is seen
             | as a product in and of itself. You can either afford it
             | ("you've earned it"), or you go into debt for it ("you have
             | to earn it"), or you simply have no expectation of ever
             | paying for it ("you cheated the system").
             | 
             | The entire conversation is often dominated by these ideas
             | in a way that often makes talking about healthcare with
             | Americans baffling to people that come from many single-
             | payer or universal systems.
        
               | xdfgh1112 wrote:
               | I get this feeling a lot. For example the UK typically
               | has unlimited paid sick days for salaried jobs, while I
               | have heard of US employees pooling together and
               | "donating" sick days to someone. The UK has a ton of
                | benefits for the sick, unemployed, single mothers,
                | carers, etc. In the US I am sure those exist, but I
                | get the sense that charity is supposed to play more
                | of a role.
        
               | somerandomqaguy wrote:
               | To a degree. You have to keep in mind that a hospital
               | emergency room isn't legally permitted to turn you away
               | even if you can't pay under the Emergency Medical
               | Treatment and Active Labor Act.
               | 
               | So the wealthy and insured are covered. The lowest rungs
               | and those that don't care and will just run away are
               | covered. It's mostly lower / middle lower class that this
               | really hurts, ironically.
        
             | crmd wrote:
             | In my view, the root cause of the bizarreness is that
             | medical care is one of a few enterprises that are
             | inherently social in nature, and is therefore a prima facie
             | exception to the common wisdom that free markets create the
             | most positive outcomes for the largest number of people.
             | Because in the US we are taught from a young age that
             | private sector capitalism is "all there is", we end up
             | tying ourselves into knots trying to solve medical care
             | using the wrong toolset.
        
             | saxonww wrote:
             | I think the industry terminology separates the provider (a
             | doctor) from the payer (or payor; an insurance company in
             | this case).
        
           | fallingknife wrote:
           | The year is 2035. To cut costs, both insurance companies and
           | providers removed the human from the loop long ago, setting
           | off an adversarial process between the LLMs on both sides.
           | Medical insurance claims are now written in an ever-changing
           | format that resembles no human language. United Healthcare
           | has just announced a $10 billion project, including a multi-
           | gigawatt data center, to train its own foundation model to
           | keep ahead in the arms race. UNH stock is up 5% on the
           | announcement.
        
           | crakenzak wrote:
           | Interesting! Source?
        
         | amelius wrote:
         | Or AI profiles that pen-test the real grandmas/grandpas.
        
         | seizethecheese wrote:
         | My father has fallen for one fraud after another these last few
         | years. It's disgusting. Anything in the direction of solving
         | this would be doing the lord's work.
        
           | throwaway48476 wrote:
           | The solution is for all foreign wire transfers to be insured
           | and reversible which would drive up the cost of doing
           | business with countries home to scammers.
        
             | seizethecheese wrote:
             | The scams sometimes involve getting him to buy gift cards
             | and then sending the card number. Not sure your solution
             | would help.
        
               | throwaway48476 wrote:
               | Gift cards are a money transmitting service.
        
       | red_admiral wrote:
       | The irony here is that there have been commercial social media
       | bot services since before the current GPT/LLM/AI wave, and
       | they're better at it than anything Meta and its AI can manage.
        
       | joshdavham wrote:
       | My friend recently showed me what instagram is currently like as
       | I've been off of it for years.
       | 
       | It's honestly so enshittified now!
       | 
       | Like he just kept scrolling and there were 0 posts from friends
       | - just influencers, brain rot and ads.
        
         | mentos wrote:
         | Might be a good name for a competing service.
         | 
         | "Get BrainRot on the iOS and Android app stores and start
         | feeling important today!"
        
         | asdff wrote:
         | Same issue with fb where over time all your friends stop
         | posting once they got busy after high school. Most I get is a
         | birth announcement or wedding photo on instagram from actual
         | people, and that's not exactly a daily post for anyone.
         | 
         | But at the same time people don't mind its current state. I
         | don't mind its current state. When I open instagram I get
         | nonstop reels of people doing backflips on skis. I've honed my
         | feed as such by only engaging (well, watching and not scrolling
         | away) with those posts really. It is a much better experience
         | than trying to find ski videos on youtube as I don't need to
         | hunt for the key clip, editing is pretty tight on instagram,
         | and also no ads or other bullshit you get watching things on
         | youtube.
        
       | ProAm wrote:
       | What happened to the Metaverse? Not snarky, but it also seems
       | like it has been killed off in favor of AI investments?
        
       | asdev wrote:
       | I hope the stock gets destroyed next earnings when people realize
       | they've been spending all this money on AI without shipping a
       | single meaningful feature or deriving any revenue from it. They
       | did the exact same thing with the whole "Metaverse" thing.
        
         | AlexandrB wrote:
         | The metaverse craze was hilarious. I heard of a large retailer
         | that paid ~$0.5 million to hold their annual all-hands in a
         | metaverse "venue". The whole thing collapsed under the weight
         | of a few hundred attendees and they had to revert to a zoom
         | call.
        
           | zeusk wrote:
           | Meanwhile zoom runs perfectly fine on vision pro.
           | 
           | imo, Meta should ditch the metaverse and focus on
           | gaming/media/productivity for the headset, and leave the
           | social aspects to their Ray-Ban-style devices.
        
         | werdnapk wrote:
         | Yet, their stock is up over 70-80% in the last year and I have
         | no idea what they really have to show for it.
        
           | magnio wrote:
           | Their revenue and net income are up 20% and 70% YoY.
           | Apparently they can still sell more ads than I thought
           | possible.
        
             | nradov wrote:
             | It's not necessarily _more_ ads but rather more _effective_
             | ads. Meta is among the best ad platforms for targeting.
             | Customers pay a premium for this.
        
               | lostlogin wrote:
               | Sadly, the cancer is still growing.
        
               | herbst wrote:
               | They also accept every single scam scheme, and I am sure
               | these scammers have deep pockets after all these
               | successful Facebook campaigns
        
           | sethops1 wrote:
           | I honestly wonder if Meta is just cooking the books now. Who
           | are these advertisers ramping up ad spend on that site? And
           | why?
        
             | herbst wrote:
             | Every time I visit Instagram, a slightly different 'Elon
             | Musk invested in ...' or 'Shark Tank approved investment
             | ...' bullshit is within the first 3 posts.
             | 
             | It has been like this for at least a few years.
             | 
             | I strongly suspect Facebook is getting richer from enabling
             | scams.
        
         | deadbabe wrote:
         | Why stop at meta? What has any AI company done that creates
         | substantial new profits?
        
         | Hilift wrote:
         | META is a cash machine.
        
           | asdev wrote:
           | an ad machine*
        
       | bttrpll wrote:
       | If this were actually true, there would be a mass drop in
       | accounts across Meta.
        
       | nathanasmith wrote:
       | If I want to talk to AI I know where chatgpt.com is. I don't need
       | it shoved in my face when I'm trying to interact with people on
       | social media.
        
       | Dansvidania wrote:
       | that was fast
        
       | pkkkzip wrote:
       | What's apparent to me is that the people who will be reading
       | this thread when they turn old enough to have a mobile phone or
       | laptop will be like us when the internet first came out,
       | watching old people scoff and worry about both valid and invalid
       | future problems.
       | 
       | The younger generation will not care much whether it's generated
       | by an AI or not. As long as it's good and it hits their niche,
       | they will not and should not care.
       | 
       | The dooming from future predictooors is also misplaced. In a sea
       | of generative AI, imperfect, human-produced content will end up
       | becoming more valuable.
       | 
       | It's like hand-drawn anime by real humans vs computer-assisted
       | anime that mimics the style. The younger generation don't even
       | watch or care for the classics, and they don't find the
       | intrusion of computer graphics unholy like us old timers who
       | stopped watching out of ideological differences. It wasn't the
       | end of anime; it actually increased the market size several
       | thousand fold by lowering the barrier and cost of production (at
       | the expense of upsetting the "luddites").
       | 
       | The exact same thing will play out across all consumable
       | content. Even hardware.
        
       | 1659447091 wrote:
       | This reminds me of the experiments FB was doing to control user
       | emotion. Vibes of the same sliminess.
       | 
       | But beyond that, has social media not isolated people enough?
       | Soon, a large portion of people using it won't even interact
       | with other actual people...
       | 
       | I don't see how a platform meant to "connect" people to others--
       | scratch that, a platform meant to connect people to ads makes
       | perfect sense.
        
       | brap wrote:
       | This is honestly one of the worst ideas Meta has ever had, and
       | this includes the metaverse.
       | 
       | Zuck needs to step down, he's too detached from reality. On the
       | other hand I don't know who's left after him, he's probably
       | surrounded by slimy yes-men.
        
         | herbst wrote:
         | I don't think they will ever accept that they can't grow
         | anymore and are way past their high times.
         | 
         | At this point it's just watching something weird die painfully
        
       | BiteCode_dev wrote:
       | s/killing/delaying
       | 
       | Until it's good enough that they can release it and nobody is
       | the wiser.
        
       | FrustratedMonky wrote:
       | Surely they knew this would be the reaction.
       | 
       | Was this a big fake out? What was the real ploy?
        
       | DonHopkins wrote:
       | I wonder if they told them they were about to be killed, and let
       | them have some last words.
        
       | kromem wrote:
       | Having bots with their own profiles, authentically engaging as
       | themselves, would have been pretty interesting (and I suspect
       | successful).
       | 
       | But making up fake minority stereotype bingo cards may have been
       | the worst idea I've ever seen in AI to date.
        
         | bee_rider wrote:
         | I want to make something like the opposite comment, haha.
         | 
         | Having bots with their own profiles and expecting people to
         | engage with them in any meaningful manner is a silly idea.
         | 
         | But the extent to which the minority stereotype bingo card bots
         | have backfired and started attacking the companies that
         | designed them is probably the funniest thing to have happened
         | in AI to date.
        
       ___________________________________________________________________
       (page generated 2025-01-04 23:00 UTC)