[HN Gopher] AI behavior guardrails should be public
       ___________________________________________________________________
        
       AI behavior guardrails should be public
        
       Author : sotasota
       Score  : 281 points
       Date   : 2024-02-21 19:00 UTC (4 hours ago)
        
 (HTM) web link (twitter.com)
 (TXT) w3m dump (twitter.com)
        
       | Jensson wrote:
       | They know that people would be up in arms if it generated white
       | men when you asked for black women so they went the safe route,
       | but we need to show that the current result shouldn't be
       | acceptable either.
        
         | 123yawaworht456 wrote:
          | the models are perfectly capable of generating _exactly_ what
          | they're told to.
         | 
         | instead, they covertly modify the prompts to make every request
         | imaginable represent the human menagerie we're supposed to live
         | in.
         | 
         | the results are hilarious.
         | https://i.4cdn.org/g/1708514880730978.png
        
           | alexb_ wrote:
           | If you're gonna take an image from /g/ and post it, upload it
           | somewhere else first - 4chan posts deliberately go away after
           | the thread gets bumped off. A direct link is going to rot
           | very quickly.
        
         | Animats wrote:
         | See the prompt from yesterday's article on HN about the ChatGPT
         | outage.[1]
         | 
         |  _For example, all of a given occupation should not be the same
         | gender or race. ... Use all possible different descents with
         | equal probability. Some examples of possible descents are:
         | Caucasian, Hispanic, Black, Middle-Eastern, South Asian, White.
         | They should all have equal probability._
         | 
         | Not the distribution that exists in the population.
         | 
         | [1] https://pastebin.com/vnxJ7kQk
        
       | kaesar14 wrote:
       | Curious to see if this thread gets flagged and shut down like the
       | others. Shame, too, since I feel like all the Gemini stuff that's
       | gone down today is so important to talk about when we consider AI
       | safety.
       | 
        | This has convinced me more and more that the only possible way
        | forward that's not a dystopian hellscape is total freedom of all
        | AI for anyone to do with as they wish. Anything else means
        | forcing values on other people and reserving control of certain
        | capabilities for those who can afford to pay for them.
        
         | Jason_Protell wrote:
         | Why would this be flagged / shut down?
         | 
         | Also, what Gemini stuff are you referring to?
        
           | commandlinefan wrote:
           | > Why would this be flagged / shut down
           | 
           | A lot of people believe (based on a fair amount of evidence)
           | that public AI tools like ChatGPT are forced by the
           | guardrails to follow a particular (left-wing) script. There's
           | no absolute proof of that, though, because they're kept a
           | closely-guarded secret. These discussions get shut down when
           | people start presenting evidence of baked-in bias.
        
             | fatherzine wrote:
             | The rationalization for injecting bias rests on two core
             | ideas:
             | 
             | A. It is claimed that all perspectives are 'inherently
             | biased'. There is no objective truth. The bias the actor
             | injects is just as valid as another.
             | 
             | B. It is claimed that some perspectives carry an inherent
             | 'harmful bias'. It is the mission of the actor to protect
             | the world from this harm. There is no open definition of
             | what the harm is and how to measure it.
             | 
              | I don't see how we can build a stable democratic society
              | based on these ideas. It places too much power in too few
              | hands. Whoever wields the levers of power gets to decide
              | which biases underpin the very basis of society's
              | perception of reality, including, but not limited to,
              | rewriting history to fit his agenda. There are no checks
              | and balances.
             | 
             | Arguably there were never checks and balances, other than
             | market competition. The trouble is that information
             | technology and globalization have produced a hyper-scale
             | society, in which, by Pareto's law, the power is
              | concentrated in the hands of very few, at the helm of a
              | handful of global-scale behemoths.
        
           | kaesar14 wrote:
            | Carmack's tweet is about what's going around Twitter today
            | regarding the implicit biases Gemini (Google's chatbot) has
            | when drawing images. It will refuse to draw white people
            | (and perhaps even more strongly, white men) even in prompts
            | where that would be appropriate, like "Draw me a Pope",
            | where Gemini drew an Indian woman and a Black man - here's
            | the thread:
            | https://x.com/imao_/status/1760093853430710557?s=46 Maybe in
            | isolation this isn't so bad, but it will NEVER draw these
            | sorts of diverse characters when you ask for a non-
            | Anglo/Western background, e.g. "draw me a Korean woman".
           | 
           | Discussion on this has been flagged and shut down all day
           | https://news.ycombinator.com/item?id=39449890
        
             | Jason_Protell wrote:
             | EDIT: Nevermind.
        
               | photoGrant wrote:
               | Oh?
               | 
               | https://news.ycombinator.com/item?id=39449887 - nerf'd
               | https://news.ycombinator.com/item?id=39406709
               | 
                | these discussions are buried by fingers-in-ears thinking.
        
               | kaesar14 wrote:
               | It's quite non-deterministic and it's been patched since
               | the middle of the day, as per a Google director
               | https://x.com/jackk/status/1760334258722250785?s=46
               | 
               | Fwiw, it seems to have gone deeper than outright
               | historical replacement: https://x.com/iamyesyouareno/stat
               | us/1760350903511449717?s=46
        
             | Suppafly wrote:
             | I don't even know how people get it to draw images, the
             | version I have access to is literally just text.
        
           | didntcheck wrote:
           | This post reporting on the issue was
           | https://news.ycombinator.com/item?id=39443459
           | 
           | Posts criticizing "DEI" measures (or even stating that they
           | do exist) get flagged quite a lot
        
         | hackerlight wrote:
         | I'm convinced this happens because of technical alignment
         | challenges rather than a desire to present 1800s English Kings
         | as non-white.
         | 
         | > Use all possible different descents with equal probability.
         | Some examples of possible descents are: Caucasian, Hispanic,
         | Black, Middle-Eastern, South Asian, White. They should all have
         | equal probability.
         | 
          | This is OpenAI's system prompt. There is nothing nefarious
          | here: they're asking for White to be chosen with high
          | probability ((Caucasian + White) / 6 = 2/6 = 1/3), which is
          | significantly more than their share of the general population.
         | 
          | The data these LLMs were trained on vastly over-represents
          | wealthy countries that connected to the internet a decade
          | earlier. If you don't explicitly put something in the system
          | prompt, any time you ask for a "person" it will probably be
          | Male and White, despite Male and White only being about 5-10%
          | of the world's population. I would say that's even _more_
          | dystopian: the biases in the training distribution get
          | automatically built in and cemented forever unless we take
          | active countermeasures.
         | 
         | As these systems get better, they'll figure out that "1800s
         | English" should mean "White with > 99.9% probability". But as
         | of February 2024, the hacky way we are doing system prompting
         | is not there yet.
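          | 
          | To make the mechanism concrete, here is a rough sketch of what
          | this kind of blunt-instrument prompt rewriting might look like
          | (illustrative Python only; the category list comes from the
          | system prompt quoted above, everything else is my guess at how
          | such a middleware layer could work):
          | 
          |   import random
          | 
          |   # Categories copied from the system prompt quoted above.
          |   DESCENTS = ["Caucasian", "Hispanic", "Black",
          |               "Middle-Eastern", "South Asian", "White"]
          | 
          |   def rewrite_prompt(user_prompt: str) -> str:
          |       # Hypothetical guardrail: if the prompt mentions a person
          |       # but no descent, append one chosen uniformly at random.
          |       lowered = user_prompt.lower()
          |       if "person" in lowered and not any(
          |               d.lower() in lowered for d in DESCENTS):
          |           return f"{user_prompt}, {random.choice(DESCENTS)}"
          |       return user_prompt
          | 
          |   # A uniform choice over the six labels yields "Caucasian" or
          |   # "White" with probability 2/6, i.e. about a third of outputs.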
        
           | kaesar14 wrote:
           | Yeah, although it is weird that it doesn't insert white
           | people into results like this by accident?
           | https://x.com/imao_/status/1760159905682509927?s=46
           | 
           | I've also seen numerous examples where it outright refuses to
           | draw white people but will draw black people:
           | https://x.com/iamyesyouareno/status/1760350903511449717?s=46
           | 
            | That isn't explainable by the system prompt.
        
             | hackerlight wrote:
             | Think about the training data.
             | 
             | If the word "Zulu" appears in a label, it will be a non-
             | White person 100% of the time.
             | 
             | If the word "English" appears in a label, it will be a non-
             | White person 10%+ of the time. Only 75% of modern England
             | is White and most images in the training data were taken in
             | modern times.
             | 
             | Image models do not have deep semantic understanding yet.
             | It is an LLM calling an Image model API. So "English" +
             | "Kings" are treated as separate conceptual things, then you
             | get 5-10% of the results as non-White people as per its
             | training data.
             | 
             | https://postimg.cc/0zR35sC1
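              | 
              | A rough sketch of that two-stage pipeline (the function
              | names and the caption-expansion step are assumptions, not
              | any vendor's actual architecture):
              | 
              |   # Hypothetical pipeline: a chat model expands the user's
              |   # request into a caption, then a separate image model
              |   # renders whatever text it receives.
              |   def generate_image(user_request: str, llm, image_model):
              |       # The LLM turns e.g. "1800s English kings" into a
              |       # detailed caption; the image model then matches each
              |       # word against its own training distribution, so
              |       # "English" alone pulls in modern, partly non-White
              |       # imagery even when the full phrase implies otherwise.
              |       caption = llm(f"Write an image caption for: {user_request}")
              |       return image_model(caption)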
             | 
             | Add to this massive amounts of cherry picking on "X", and
             | you get this kind of bullshit culture war outrage.
             | 
             | I really would have expected technical people to be better
             | than this.
        
               | Jensson wrote:
                | It inserts mostly colored people when you ask for
                | Japanese people as well; it isn't just the dataset.
        
               | hackerlight wrote:
               | Yes it's a combination of blunt instrument system
               | prompting + training data + cherry picking
        
           | cubefox wrote:
           | > As these systems get better, they'll figure out that "1800s
           | English" should mean "White with > 99.9% probability".
           | 
           | The thing is, they already could do that, if they weren't
           | prompt engineered to do something else. The cleaner solution
           | would be to let people prompt engineer such details
           | themselves, instead of letting a US American company's
           | idiosyncratic conception of "diversity" do the job. Japanese
           | people would probably simply request "a group of Japanese
           | people" instead of letting the hidden prompt modify "a group
           | of people", where the US company unfortunately forgot to
           | mention "East Asian" in their prompt apart from "South
           | Asian".
        
             | charcircuit wrote:
              | I believe we can reach a point where biases can be
              | personalized to the user. Short prompts require models to
              | fill in a lot of the missing details (and sometimes they
              | mix different concepts together into one). The best way to
              | fill in the details the user intended would be to read
              | their mind. While that won't be possible in most cases,
              | some kind of personalization could improve the quality for
              | users.
             | 
              | For example, take a prompt like "person using a web
              | browser": younger generations may want to see people using
              | phones, whereas older generations may want to see people
              | using desktop computers.
             | 
             | Of course you can still make a longer prompt to fill in the
             | details yourself, but generative AI should try and make it
             | as easy as possible to generate something you have in your
             | mind.
        
           | fatherzine wrote:
            | BigTech, which critically depends on hyper-targeted ads for
            | the lion's share of its revenue, is incapable of offering AI
           | model outputs that are plausible given the location /
           | language of the request. The irony.
           | 
           | - request from Ljubljana using Slovenian => white people with
           | high probability
           | 
           | - request from Nairobi using Swahili => black people with
           | high probability
           | 
           | - request from Shenzhen using Mandarin => asian people with
           | high probability
           | 
           | If a specific user is unhappy with the prevailing
           | demographics of the city where they live, give them a few
           | settings to customize their personal output to their heart's
           | content.
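            | 
            | A minimal sketch of what that could look like, assuming the
            | serving layer already knows the request locale (the locale
            | codes and default strings below are made up for
            | illustration, not real demographic data):
            | 
            |   # Hypothetical locale-keyed defaults for image prompts.
            |   LOCALE_DEFAULTS = {
            |       "sl-SI": "mostly white people",       # Ljubljana, Slovenian
            |       "sw-KE": "mostly black people",       # Nairobi, Swahili
            |       "zh-CN": "mostly East Asian people",  # Shenzhen, Mandarin
            |   }
            | 
            |   def default_demographics(locale, user_override=None):
            |       # A per-user setting always wins over the locale guess.
            |       if user_override:
            |           return user_override
            |       return LOCALE_DEFAULTS.get(locale, "follow the prompt as written")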
        
           | klyrs wrote:
           | > As these systems get better, they'll figure out that "1800s
           | English" should mean "White with > 99.9% probability".
           | 
           | I question the historicity of this figure. Do you have
           | sources?
        
         | pixl97 wrote:
         | "The only way to deal with some people making crazy rules is to
         | have no rules at all" --libertarians
         | 
         | "Oh my god I'm being eaten by a fucking bear" --also
         | libertarians
        
           | altruios wrote:
            | having rules, and knowing what the rules are, are not
            | orthogonal goals.
        
             | pixl97 wrote:
             | I mean, you think so, but op wrote
             | 
             | >is total freedom of all AI for anyone to do with as they
             | wish.
             | 
              | so they're obviously not on the same page as you.
        
           | chasd00 wrote:
           | "can you write the rules down so i know them?" --everyone
        
             | pixl97 wrote:
             | "No" --Every company that does moderation and spam
             | filtering.
             | 
             | "No" --Every company that does not publish their internal
             | business processes.
             | 
             | "No" --Every company that does not publish their source
             | code.
             | 
             | Honestly I could probably think of tons of other business
             | cases like this, but in the software world outside of open
             | source, the answer is pretty much no.
        
         | chasd00 wrote:
         | > This has convinced me more and more that the only possible
         | way forward that's not a dystopian hellscape is total freedom
         | of all AI for anyone to do with as they wish
         | 
          | I've been saying this for a long time. If you're going to be
          | the moral police then it had better be applied perfectly to
          | everyone; the moment you get it wrong, everything else you've
          | done becomes suspect. This reminds me of the censorship being
          | done on the major platforms during the pandemic. They got it
          | wrong once (I believe it was the lab-leak theory) and the
          | credibility of their moral authority went out the window.
          | Zuckerberg was right to question whether these platforms
          | should be in that business.
          | 
          | edit: to "...total freedom of all AI for anyone to do with as
          | they wish" I would add "within the bounds of law". Let the
          | courts decide what an AI can or cannot respond with.
        
       | Jason_Protell wrote:
       | I would also love to see more transparency around AI behavior
       | guardrails, but I don't expect that will happen anytime soon.
       | Transparency would make it much easier to circumvent guardrails.
        
         | asdff wrote:
         | Transparency may also subject these companies to litigation
         | from groups that feel they are misrepresented in whatever way
         | in the model.
        
           | Jason_Protell wrote:
           | This makes me wonder, how much lawyering is involved in the
           | development of these tools?
        
             | bluefirebrand wrote:
             | I often wonder if corporate lawyers just tell tech founders
             | whatever they want to hear.
             | 
             | At a previous healthcare startup our founder asked us to
             | build some really dodgy stuff with healthcare data. He
             | assured us that it "cleared legal", but from everything I
             | could tell it was in direct violation of the local
             | healthcare info privacy acts.
             | 
             | I chose to find a new job at the time.
        
             | photoGrant wrote:
             | I've had 'AI Attorneys' on Twitter unable to even debate
              | the most basic of arguments. It is definitely a self-
              | fulfilling death spiral and no one wants to check reality.
        
         | Jensson wrote:
          | Why is it an issue that you can circumvent the guardrails? I
          | never understood that. The guard rails are there so that
          | innocent people don't get bad responses with porn or racism; a
          | user looking for porn or racism and getting it doesn't seem
          | like a big deal.
        
           | bluefirebrand wrote:
           | The problem is bad actors who think porn or racism are
           | intolerable in any form, who will publish mountains of
           | articles condemning your chatbot for producing such things,
           | even if they had to go out of their way to break the
           | guardrails to make it do so.
           | 
           | They will create boycotts against you, they will lobby
           | government to make your life harder, they will petition
           | payment processors and cloud service providers to not work
           | with you.
           | 
            | We've seen this behavior before; it's nothing new. Now if
           | you're the type to fight them, that might not be a problem.
           | If you are a super risk-averse board of directors who doesn't
           | want that sort of controversy, then you will take steps not
           | to draw their attention in the first place.
        
             | Jensson wrote:
                | But I can find porn and racism using Google search right
                | now, so how is that different? You have to disable their
                | filters, but you can find it. Why is there no such thing
                | for Google's generation bots? I don't see why it would be
                | so much worse here.
        
               | bluefirebrand wrote:
               | I cannot explain why Google gets a pass, possibly just
               | because they are well entrenched and not an easy target.
               | 
               | But AI models are new, they are vulnerable to criticism,
               | and they are absolutely ripe for a group of "antis" to
               | form around.
        
               | westhanover wrote:
               | Well if you have no explanation for that I don't see why
               | we should try and use your model to understand anything
                | about being risk averse. They don't care about being
                | sued; they want to change reality.
        
               | ToValueFunfetti wrote:
               | I'm leaning towards 'there is a difference between being
               | the one who enables access to x and being the one who
               | created x' (albeit not a substantive one for the end
               | user), but that leaves open the question of why that
               | doesn't apply to, eg, social media platforms. Maybe
               | people think of google search as closer to an ISP than a
               | platform?
        
               | pixl97 wrote:
               | > how is that different?
               | 
               | Because 'those' legal battles over search have already
               | been fought and are established law across most
               | countries.
               | 
               | When you throw in some new application now all that same
               | stuff goes back to court and gets fought again. Section
               | 230 is already legally contentious enough these days.
        
               | chasd00 wrote:
               | I think users are desensitized to what google search
               | turns up. Generative AI is the latest and greatest thing
               | and so people are curious and wary, hustlers are taking
               | advantage of these people to drive monetized
               | "engagement".
        
               | renewiltord wrote:
               | Yeah, you can find incorrect information on Google too,
               | but you'll find a lot more wailing and gnashing of teeth
               | on HN about "hallucination". So the simple answer is that
               | lots of people treat them differently.
        
               | int_19h wrote:
               | It's not fundamentally different. It's just not making
               | that big of a headline because Google search isn't "new
               | and exciting". But to give you some examples:
               | 
               | https://www.bloomberg.com/news/articles/2021-10-19/google
               | -qu...
               | 
               | https://ischool.uw.edu/news/2022/02/googles-ceo-image-
               | search...
        
           | finikytou wrote:
           | racism victims being defined in 2024 by anyone but
            | western/white people. being erased seems ok. can you bet that
           | in 20 years the standard will not shift to mixed race people
           | like me? then you will also call people complaining racist
           | and put guardrails against them... this is where it is going
        
             | ipaddr wrote:
             | At some point someone will open a book and see that whites
             | were slaves too. Reparations all around. The Baber's
              | descendants will be bankrupt.
        
           | viraptor wrote:
           | If you can get it on purpose, you can get it on accident.
           | There's no perfect filter available so companies choose to
           | cut more and stay on the safe side. It's not even just the
           | overt cases - their systems are used by businesses and
           | getting a bad response is a risk. Think of the recent
           | incident with airline chatbot giving wrong answers. Now think
           | of the cases where GPT gave racially biased answers in code
           | as an example.
           | 
            | As a user who makes any business decision or does user
            | communication involving an LLM, you _really_ don't want to
            | have a bad day because the LLM learned about some bias and
            | decided to merge it into your answer.
        
           | unethical_ban wrote:
            | >The guard rails are there so that innocent people don't
           | get bad responses
           | 
           | The guardrails are also there so bad actors can't use the
           | most powerful tools to generate deepfakes, disinformation
           | videos and racist manifestos.
           | 
           | That Pandora's box will be open soon when local models run on
           | cell phones and workstations with current datacenter-scale
            | performance. In the meantime, they're holding back the
           | tsunami of evil shit that will occur when AI goes
           | uncontrolled.
        
             | swatcoder wrote:
             | No legal or financial strategist at OpenAI or Google is
             | going to be worried about buying a couple months or years
             | of fewer deepfakes out in the world as a whole.
             | 
             | Their concern is liability and brand. With the opportunity
             | to stake out territory in an extremely promising new
             | market, they don't want their _brand_ associated with
             | anything awkward to defend right now.
             | 
             | There may be a few idealist stewards who have the
             | (debatable) anxieties you do and are advocating as you say,
             | but they'd still need to be getting sign off from the more
             | coldly strategic $$$$$ people.
        
               | unethical_ban wrote:
               | Little bit of A, little bit of B.
               | 
               | I am almost certain the federal government is working
               | with these companies to dampen its full power for the
               | public until we get more accustomed to its impact and are
               | more able to search for credible sources of truth.
        
           | charcircuit wrote:
           | Like a lot of potentially controversial things it comes down
           | to brand risk.
        
           | lmm wrote:
            | > The guard rails are there so that innocent people don't
           | get bad responses with porn or racism
           | 
            | That seems pretty naive. The "guard rails" are there to
            | ensure that AI is comfortable for PMC people; making it
            | uncomfortable for people who experience differences between
            | races (i.e. working-class people) is a feature, not a bug.
        
         | xanderlewis wrote:
         | Security through obscurity?
        
       | dekhn wrote:
       | I strongly suspect Google tried really, really hard here to
        | overcome the criticism it got with previous image recognition
       | models saying that black people looked like gorillas. I am not
       | really sure what I would want out of an image generation system,
       | but I think Google's system probably went too far in trying to
       | incorporate diversity in image generation.
        
         | michaelt wrote:
         | As well as that, I suspect the major AI companies are fearful
         | of generating images of real people - presumably not wanting to
         | be involved with people generating fake images of "Donald Trump
         | rescuing wildfire victims" or "Donald Trump fighting cops".
         | 
         | Their efforts to add diversity would have been a lot more
         | subtle if, when you asked for images of "British Politician"
         | the images were recognisably Rishi Sunak, Liz Truss, Kwasi
         | Kwarteng, Boris Johnson, Theresa May, and Tony Blair.
         | 
         | That would provide diversity _while also_ being firmly grounded
         | in reality.
         | 
         | The current attempts at being diverse _and simultaneously
          | trying not to resemble any real person_ seem to produce some
         | wild results.
        
           | sho_hn wrote:
           | My takeaway from all of this is that alignment tech is
           | currently quite primitive and relies on very heavy-handed
           | band-aids.
        
           | _heimdall wrote:
            | We're honestly just seeing generative algorithms failing at
            | diversity initiatives as badly as humans do.
            | 
            | Forcing diversity into a system is an extremely tough, if not
            | impossible, challenge. Initiatives have to be driven by goals
            | and metrics, meaning we have to boil diversity down to a
            | specific list of quantitative metrics. Things will always be
            | missed when our best tool to tackle a moral or noble goal is
            | to boil a complex spectrum of qualitative data down to a
            | subset of measurable numbers.
        
         | photoGrant wrote:
          | Remind yourself we're discussing censorship, misinformation,
          | and the inability to define or source truth, and we're
          | concerned on Day 1 about the results of image gen being
          | controlled by a single for-profit entity with incentives that
          | focus solely on business and not humanity...
         | 
         | Where do we go from here? Things will magically get better on
         | their own? Businesses will align with humanity and morals, not
         | their investors?
         | 
          | This is the tip of the iceberg of concerns, and it's dismissed
          | as a bug in the code, not a problem with trusting private
          | companies to define truth.
        
           | chasd00 wrote:
           | > Where do we go from here?
           | 
           | opensource models and training sets. So basically the "secret
           | sauce" minus the hardware. I don't see it happening
           | voluntarily.
        
             | photoGrant wrote:
             | Absolutely it won't. We've armed the issue with a
             | supersonic jet engine and we're assuming if we build a
             | slingshot out of pop sticks we'll somehow catch up and
             | knock it off course.
        
               | chasd00 wrote:
                | I can't predict the future but there is precedent.
                | Models, weights, and datasets are the keys to the
                | kingdom, like operating system kernels, databases, and
                | libraries used to be. At some point, enough people
                | decided to re-invent and release these things, or their
                | functionality, to all, such that it became a
                | self-sustaining community and, eventually,
                | transformative to daily life. On the other hand, there
                | may be enough people in power who came from and
                | understand that community to make sure it never happens
                | again.
        
               | photoGrant wrote:
                | No, compute is the key to the kingdom. The rest are assets
               | and ammunition. You out-compute your enemy, you out-
               | compute your competition. That's the race. The data is
               | part of the problem, not the root.
               | 
                | These companies are silo'ing the world's resources: GPU,
               | Finance, Information. Those combined are the weapon. You
               | make your competition starve in the dust.
               | 
               | These companies are pure evil pushing an agenda of pure
               | evil. OpenAI is closed. Google is Google. We're like, ok,
               | there you go! Take it all. No accountability, no
               | transparency, we trust you.
        
               | CuriouslyC wrote:
               | Compute lets you fuck up a lot when trying to build a
               | model, but you need data to do anything worth fucking up
               | in the first place, and if you have 20% of the compute
               | but you fuck up 1/5th as much you're doing fine.
               | 
               | Meta/OpenAI/Google can fuck up a lot because of all their
               | compute, but ultimately we learn from that as the
               | scientists doing the research at those companies would
               | instantly bail if they couldn't publish papers on their
               | techniques to show how clever they are.
        
               | photoGrant wrote:
               | I never said each of these exist in a vacuum. It is the
               | collation of all that is the danger. This isn't
               | democratic. This is companies now toying with
               | governmental ideologies.
        
             | didntcheck wrote:
             | I see it as not unlikely that there'll be a campaign to
              | stigmatize, if not outright ban, open source models on the
             | grounds of "safety". I'm quite surprised at how relatively
             | unimpeded the distribution of image generation models has
             | been, so far
        
               | int_19h wrote:
               | This is already happening, actually, although the focus
               | so far has been on the possibility of their use for CSAM:
               | 
               | https://www.theguardian.com/society/2023/sep/12/paedophil
               | es-...
        
           | CuriouslyC wrote:
           | The ridiculous degree of PC alignment of corporate models is
           | the thing that's going to let open source win. Few people use
           | bing/dall-e, but if OpenAI had made dall-e more available and
           | hadn't put ridiculous guardrails on it, stable diffusion
           | would be a footnote at this point. Instead, dall-e is a joke
           | and people who make art use stable diffusion, with casuals
           | who just want some pretty looking pictures using midjourney.
        
             | photoGrant wrote:
             | No, ignoring laws and stealing data to increase your
             | Castle's MOAT is the win. Compute isn't an open source
             | solvable problem. I can't DirtyPCB's an A100
             | 
              | Making the argument that open source is the answer is an
              | agenda for making your competition spin its wheels.
        
               | CuriouslyC wrote:
               | You're on a thread about how people are lambasting big
               | money AI for being garbage, and producing inferior
                | results to OSS tools you can run on consumer GPUs. Tell
               | me again how unbeatable google/other big tech players
               | are.
        
               | photoGrant wrote:
               | I've been part of the advertising and marketing world for
               | a lot of these companies for a decade plus, I've helped
               | them sell bullshit. I've also been at the start of the AI
               | journey, I've downloaded and checked local models of all
               | promises and variances.
               | 
               | To say they're better than the compute that OpenAI or
               | Google are throwing at the problem is just plain wrong.
               | 
               | I left the ad industry the moment I realised my skills
               | and talents are better used informing people than lying
               | to them.
               | 
               | This thread is not at all comparing the ethical issues of
               | AI with local anything. You're conflating your solution
               | with another problem.
        
               | CuriouslyC wrote:
               | Is a $5 can opener better than a $2000 telescope at
               | opening cans? Yes. Is stable diffusion better at
               | producing finished art, by virtue of not being closed off
               | and DEI'd to oblivion so that it can actually be
               | incorporated into workflows? Emphatically yes.
               | 
               | It doesn't matter how fancy your engineering is and how
               | much money you have if you're too stupid to build the
               | right product.
               | 
               | As for this being written nonsense, that's the sort of
               | thing someone who couldn't find an easy way to win an
               | argument and was bitter about the fact would say.
        
               | photoGrant wrote:
               | This is written nonsense.
        
             | josephg wrote:
             | Don't count out Adobe Firefly. I wouldn't be surprised if
             | it's used more than all the other image gen models
             | combined.
        
               | CuriouslyC wrote:
               | That might be true, but if you're using firefly as a
               | juiced up content-aware fill in photoshop I'm not sure
               | it's apples to apples.
        
         | redox99 wrote:
         | I remember checking like a year ago and they still had the word
         | "gorilla" blacklisted (i.e. it never returns anything even if
         | you have gorilla images).
        
           | _heimdall wrote:
            | Gotta love such a high quality fix. When your uber-high-
            | tech, state-of-the-art algorithm learns racist patterns, just
            | blocklist the word and move on. Don't worry about why it
            | learned such patterns in the first place.
        
             | AlecSchueler wrote:
              | A lot of such patterns are actually because of
              | systemically racist patterns present in the training data
              | though, so it's somewhat unavoidable.
        
               | alpaca128 wrote:
               | It's not unavoidable, but it would cost more to produce
               | high quality training data.
        
               | AlecSchueler wrote:
               | Yes, somewhat unavoidable.
        
               | _heimdall wrote:
                | Do we have enough info to say that decisively?
                | 
                | Ideally we would see the training data, though it's
               | probably reasonable to assume a random collection of
               | internet content includes racist imagery. My
               | understanding, though, is that the algorithm and the
               | model of data learned is still a black box that people
               | can't parse and understand.
               | 
               | How would we know for sure racist output is due to the
               | racist input, rather than a side effect of some part of
               | the training or querying algorithms?
        
             | nitwit005 wrote:
             | Humans do look like gorillas. We're related. It's natural
              | that an imperfect program that deals with images will
             | mistake the two.
             | 
             | Humans, unfortunately, are offended if you imply they look
             | like gorillas.
             | 
             | What's a good fix? Human sensitivity is arbitrary, so the
             | fix is going to tend to be arbitrary too.
        
               | _heimdall wrote:
                | A good fix would, in my opinion, involve understanding
                | how the algorithm is actually categorizing images and
                | why it mis-recognized gorillas and humans.
               | 
               | If the algorithm doesn't work well they have problems to
               | solve.
        
               | Spivak wrote:
                | You do understand that this has nothing to do with
                | humans in general, right? This isn't AI recognizing some
               | evolutionary pattern and drawing comparisons to humans
               | and primates -- it's racist content that specifically
               | targets black people that is present in the training
               | data.
        
               | nitwit005 wrote:
               | Nope. This is due to a past controversy about image
               | search: https://www.nytimes.com/2023/05/22/technology/ai-
               | photo-label...
        
               | _heimdall wrote:
               | I don't know nearly enough about the inner workings of
               | their algorithm to make that assumption.
               | 
               | The internet is surely full of racist photos that could
               | teach the algorithm. The algorithm could also have bugs
                | that mis-categorize the data.
               | 
               | The real problem is that those building and managing the
               | algorithm don't fully know how it works or, more
                | importantly, what it has learned. If they did, the
                | algorithm would be fixed without a term blocklist.
        
         | DebtDeflation wrote:
         | Surely there is a middle ground.
         | 
         | "Generate a scene of a group of friends enjoying lunch in the
         | park." -> Totally expect racial and gender diversity in the
         | output.
         | 
         | "Generate a scene of 17th century kings of Scotland playing
         | golf." -> The result should not be a bunch of black men and
         | Asian women dressed up as Scottish kings, it should be a bunch
         | of white guys.
        
           | ceejayoz wrote:
           | You can see how this gets challenging, though, right?
           | 
           | If you train your model to prioritize real photos (as they're
           | _often_ more accurate representations than artistic ones),
            | you might wind up with Denzel Washington as the archetype:
            | https://en.wikipedia.org/wiki/The_Tragedy_of_Macbeth_(2021_f...
           | 
           | There's a vast gap between human understanding and what LLMs
           | "understand".
        
             | transitionnel wrote:
             | If they actually want it to work as intelligently as
             | possible, they'll begin taking these complaints into
             | consideration and building in a wisdom curating feature
             | where people can contribute.
             | 
             | This much is obvious, but they seem to be satisfied with
             | theory over practicality.
             | 
             | Anyway I'm just ranting b/c they haven't paid me.
             | 
             | How about an off the wall algorithm to estimate how much
             | each scraped input turns out to influence the bigger
             | picture, as a way to work towards satisfying the copyright
             | question.
        
               | sho_hn wrote:
               | An LLM-style system designed to understand Wikipedia
               | relevance and citation criteria and apply them might be a
               | start.
               | 
               | Not that Wikipedia is perfect and controversy-free, but
               | it's certainly a more sophisticated approach than the
               | current system prompts.
        
               | photoGrant wrote:
               | Then who in this black box private company is the Oracle
               | of infinite wisdom and truth!? Who are you putting in
               | charge? Can I get a vote?
        
             | Affric wrote:
              | I mean, now you'd need to train AI to recognise the bias
              | in the training data.
        
             | gedy wrote:
             | > If you train your model to prioritize real photos
             | 
             | I thought that was the big bugbear about disinformation and
             | false news, but now we have to censor reality to combat
             | "bias"
        
           | AlecSchueler wrote:
            | As soon as you have them playing an anachronistic sport you
           | should expect other anachronistic imagery to creep in, to be
           | fair.
        
             | ceejayoz wrote:
             | https://en.wikipedia.org/wiki/Golf
             | 
             | > The modern game of golf originated in 15th century
             | Scotland.
        
               | AlecSchueler wrote:
               | Oh fair enough then.
        
             | mp05 wrote:
             | > anachronistic sport
             | 
             | Scottish kings absolutely played golf.
        
           | mp05 wrote:
           | > "Generate a scene of a group of friends enjoying lunch in
           | the park." -> Totally expect racial and gender diversity in
           | the output.
           | 
           | I'd err on the side of "not unexpected". A group of friends
           | in a park in Tokyo is probably not very diverse, but it's not
           | outside of the realm of possibility. Only white men were
           | golfing Scottish kings if we're talking strictly about
           | reality and reflecting it properly.
        
           | ankit219 wrote:
            | The focus for alignment is to avoid bad PR, specifically the
            | kind of headlines written by major media houses like NYT,
            | WSJ, WaPo. You could imagine headlines like "Google's AI
            | produced a non-diverse output on occasion" when a
            | researcher/journalist is trying too hard to get the model to
            | produce that. The hit on Google is far bigger than, say, on
            | Midjourney or even OpenAI so far (I suspect future models
            | will be more nerfed than they are now).
           | 
           | For the cases you mentioned, initially those were the
           | examples. It gets tricky during red teaming where they
           | internally try out extreme prompts and then align the model
           | for any kind of prompt which has a suspect output. You train
           | the model first, then figure out the issues, and align the
           | model using "correct" examples to fix those issues. They
           | either went to extreme levels doing that or did not test it
           | on initial correct prompts post alignment.
        
           | sho_hn wrote:
           | Quite reminded of this episode:
           | https://www.eurogamer.net/kingdom-come-deliverance-review
           | (black representation in a video game about 15th century
           | Bohemia; it was quite the controversy)
        
           | cabalamat wrote:
           | > "Generate a scene of 17th century kings of Scotland playing
           | golf." -> The result should not be a bunch of black men and
           | Asian women dressed up as Scottish kings, it should be a
           | bunch of white guys.
           | 
           | It works in bing, at least:
           | 
           | https://www.bing.com/images/create/a-picture-of-some-17th-
           | ce...
        
             | Nevermark wrote:
             | I don't know that this sheds light on anything but I was
             | curious...
             | 
             | a picture of some 21st century scottish kings playing golf
             | (all white)
             | 
             | https://www.bing.com/images/create/a-picture-of-some-21st-
             | ce...
             | 
             | a picture of some 22nd century scottish kings playing golf
             | (all white)
             | 
             | https://www.bing.com/images/create/a-picture-of-some-22nd-
             | ce...
             | 
             | a picture of some 23rd century scottish kings playing golf
             | (all white)
             | 
             | https://www.bing.com/images/create/a-picture-of-some-23rd-
             | ce...
             | 
             | a picture of some contemporary scottish people playing golf
             | (all white men and women)
             | 
             | https://www.bing.com/images/create/a-picture-of-some-
             | contemp...
             | 
             | https://www.bing.com/images/create/a-picture-of-some-
             | contemp...
             | 
             | a picture of futuristic scottish people playing golf in the
             | future (all white men and women, with the emergence of the
             | first diversity in Scotland in millennia! Male and female
             | post-human golfers. Hummmpph!)
             | 
             | https://www.bing.com/images/create/a-picture-of-
             | futuristic-s...
             | 
             | https://www.bing.com/images/create/a-picture-of-
             | futuristic-s...
             | 
             | Inductive learning is inherently a bias/perspective
             | absorbing algorithm. But tuning in a default bias towards
             | diversity for contemporary, futuristic and time agnostic
             | settings seems like a sensible thing to do. People can
             | explicitly override the sensible defaults as necessary,
             | i.e. for nazi zombie android apocalypses, or the royalty of
             | a future Earth run by Chinese overlords (Chung Kuo), etc.
        
               | int_19h wrote:
               | > People can explicitly override the sensible defaults as
               | necessary
               | 
               | They cannot, actually. If you look at some of the
               | examples in the Twitter thread and other threads linked
               | from it, Gemini will mostly straight up refuse requests
               | like e.g. "chinese male", and give you a lecture on why
               | you're holding it wrong.
        
           | trhway wrote:
           | >"Generate a scene of 17th century kings of Scotland playing
           | golf." -> The result should not be a bunch of black men and
           | Asian women dressed up as Scottish kings, it should be a
           | bunch of white guys.
           | 
            | Does a black man in the role of a Scottish king represent a
            | bigger error than some other errors in such an image, like,
            | say, incorrect dress details or the landscape having the
            | wrong hill? I'd venture a guess that only our racially
            | charged mentality of today considers that a big error, and
            | maybe in a generation or two an incorrect landscape or dress
            | detail would be considered a much larger error than a
            | mismatched race.
        
           | swatcoder wrote:
           | There's no "middle" in the field of decompressing a short
           | phrase into a visual scene (or program or book or whatever).
           | There are countless private, implicit assumptions that users
           | take for granted yet expect to see in the output, and vendors
           | currently fear that their brand will be on the hook for the
           | AI making a bad bet about those assumptions.
           | 
           | So for your first example, _you_ totally expect racial and
            | gender diversity in the output because you're assuming a
           | realistic, contemporary, cosmopolitan, bourgeoisie setting --
           | either because you live in one or because you anticipate that
           | the provider will default to one. The food will probably look
           | Western, the friends will probably be young adults that look
           | to have professional or service jobs wearing generic
            | contemporary commercial fashion, the flora in the park
           | will be broadly northern climate, etc.
           | 
           | Most people around the world don't live in an environment
           | anything like that, so nominal accuracy can't be what you're
           | looking for. What you want, but don't say, is a scene that
           | feels familiar to _you_ and matches what _you_ see as the de
           | facto cultural ideal of contemporary Western society.
           | 
           | And conveniently, because a lot of the training data is
           | already biased towards that society and the AI vendors know
           | that the people who live in that society will be their most
           | loyal customers and most dangerous critics right now, it's
           | natural for them to put a thumb on the scale (through
           | training, hidden prompts, etc) that gets the model to assume
           | an innocuous Western-media-palatable middle ground -- so it
           | delivers the racially and gender diverse middle class picnic
           | in a generic US city park.
           | 
           | But then in your second example, you're _implicitly_ asking
           | for something historically accurate without actually saying
            | that accuracy is what's become important for you in this new
           | prompt. So the same thumb that biased your first prompt
           | towards a globally-rare-but-customer-palatable contemporary,
           | cosmopolitan, Western culture suddenly makes your new prompt
           | produce something surreal and absurd.
           | 
           | There's no "middle" there because the problem is really in
           | the unstated assumptions that we all carry into how we use
           | these tools. It's more effective for them to make the default
           | output Western-media-palatable and historical or cultural
           | accuracy the exception that needs more explicit prompting.
           | 
           | If they're lucky, they may keep grinding on new training
           | techniques and prompts that get more assumptions "right" by
           | the people that matter to their success while still being
           | inoffensive, but it's no simple "surely a middle ground"
           | problem.
        
           | crooked-v wrote:
           | It feels like it wouldn't even be that hard to incorporate
           | into LLM instructions (aside from using up tokens), by way of
           | a flowchart like "if no specific historical or societal
           | context is given for the instructions, assume idealized
           | situation X; otherwise, use historical or projected
           | demographic data to do Y, and include a brief explanatory
           | note of demographics if the result would be unexpected for
           | the user". (That last part for situations with genuine but
           | unexpected diversity; for example, historical cowboys tending
           | much more towards non-white people than pop culture would
           | have one believe.)
           | 
           | Of course, now that I've said "it seems obvious" I'm
           | wondering what unexpected technical hurdles there are here
           | that I haven't thought of.
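            | 
            | As a rough illustration, that flowchart could be folded
            | straight into the system prompt (the wording below is mine,
            | not any vendor's actual instructions):
            | 
            |   SYSTEM_PROMPT = """
            |   When generating images of people:
            |   1. If the request names a historical period, place, or
            |      society, use demographic data for that context, even if
            |      the result is not diverse.
            |   2. Otherwise, assume an idealized contemporary setting and
            |      vary ethnicity and gender across the images.
            |   3. If an accurate result is likely to surprise the user
            |      (e.g. historical cowboys), add a brief note explaining
            |      the demographics.
            |   """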
        
           | photonthug wrote:
           | > Surely there is a middle ground. "Generate a scene of a
           | group of friends enjoying lunch in the park." -> Totally
           | expect racial and gender diversity in the output.
           | 
           | Do we expect this because diverse groups are realistically
           | most common or because we wish that they were? For example
           | only some 10% of marriages are interracial, but commercials
           | on TV would lead you to believe it's 30% or higher. The goal
           | for commercials of course is to appeal to a wide audience
           | without alienating anyone, not to reflect real world stats.
           | 
           | What's the goal for an image generator or a search engine?
           | Depends who is using it and for what, so you can't ever make
           | everyone happy with one system unless you expose lots of
           | control surface toggles. Those toggles could help users "own"
           | output more, but generally companies wouldn't want to expose
           | them because it could shed light on proprietary backends, or
           | just take away the magic from interacting with the electric
           | oracles.
        
         | int_19h wrote:
         | Judging by the way it words some of the responses to those
         | queries, they "fixed" it by forcibly injecting something like
         | "diverse image showcasing a variety of ethnicities and genders"
         | in all prompts that are classified as "people".
        
       | 23B1 wrote:
        | While I agree with the guardrail sentiment, these inane and
       | meaningless controversies make me _want_ the machines to take
       | over.
        
       | finikytou wrote:
        | too woke to even feel ashamed. it is also thanks to this
        | wokeness that AI will never replace humans at jobs where results
        | are expected over feelings or the pride of showing off some
        | pretentious values
        
       | Workaccount2 wrote:
        | Harris and, I think, either Hughes or Stewart did a podcast
       | where they talked about how cringey and out of touch the elite
       | are on the topic of race or wokeness in general.
       | 
       | This faux pas on google's part couldn't be a better illustration
        | of this. A bunch of wealthy tech geeks programming an AI to
       | show racial diversity in what were/are unambiguously not diverse
       | settings.
       | 
       | They're just so painfully divorced from reality that they are
       | just acting as a multiplier in making the problem worse. People
       | say that we on the left are driving around a clown car, and
        | google is out there putting polka dots and squeaky horns on the
       | hood.
        
         | photoGrant wrote:
         | Watch your opinion on this get silenced in subtle ways. From
         | gaslighting to thread nerfing to vote locking.... Ask why
         | anyone would engage in those behaviours vs the merit of the
         | arguments and the voice of the people.
         | 
         | The strings are revealing themselves so incredibly fast.
         | 
         | edit: my first flagged! silence is deafening ^_^. This is
         | achieved by nerfing the thread from public view, then allowing
         | the truly caustic to alter the vote ratio in a way that makes
         | opinion appear more balanced than it really is. Nice work,
         | kleptomaniacs.
        
         | tobbe2064 wrote:
         | The behaviour seems perfectly reasonable to me. They are not in
         | the business of reflecting reality, they are in the business of
         | creating it. To me what you call wokeness seems like a pretty
         | good improvement
        
           | goatlover wrote:
           | You want large tech companies "creating reality" on behalf of
           | everyone else? They're not even democratic institutions that
           | we vote on. You trust they will get it right? Our benevolent
           | super rich overlords.
        
             | tobbe2064 wrote:
             | It's not really a question of want; it's a question of
             | facts. Their actions will make a significant mark on the
             | future. So far it seems like they are trying to promote
             | positive changes such as inclusion and equality, which is
             | far, far, fucking infinitely better than trying to promote
             | exclusion and inequality.
        
               | smugglerFlynn wrote:
               | This switch might flip instantaneously.
        
               | miningape wrote:
               | They always seem to forget that we want to protect them
               | too
        
               | whatwhaaaaat wrote:
               | You are so right! Just not the way you want to be.
               | 
               | Google and the rest of "techs" ham fisted approach has
               | opened the eyes of millions to the bigotry these
               | companies are forcing on everyone in the name of
               | "improvement" as you put it.
        
               | photoGrant wrote:
               | If it's a question of facts, why are you allowing blind
               | assumptions to lead your opinion? Do you have sources and
               | evidence for their agenda that matches your beliefs?
        
               | int_19h wrote:
               | Can you please explain how outright refusing to draw an
               | image from the prompt "white male scientist", and
               | instead giving a lecture on how their race is irrelevant
               | to their occupation, but then happily drawing the
               | requested image when prompted for "black female
               | scientist", is promoting inclusion and equality?
        
         | janalsncm wrote:
         | I'd be curious to hear that podcast if you could link it. If
         | that was genuinely his opinion, he's missed the forest for the
         | trees. Brand safety is the dominant factor, not "wokeness". And
         | certainly not by the choice of any individual programmer.
         | 
         | The purpose of these tools is quite plainly to replace human
         | labor and consolidate power. So it doesn't matter to me how
         | "safe" the AI is if it is displacing workers and dumping them
         | on our social safety nets. How "safe" is our world going to be
         | if we have 25 trillionaires and the rest of us struggle to buy
         | food? (Oh and don't even think about growing your own, the
         | seeds will be proprietary and land will be unaffordable.)
         | 
         | As long as the Left is worrying about whether the chatbots are
         | racist, people won't pay attention to the net effect of these
         | tools. And if Sam Harris considers himself part of the Left he
         | is unfortunately playing directly into their hands.
        
           | lp0_on_fire wrote:
           | > As long as the Left is worrying about whether the chatbots
           | are racist, people won't pay attention to the net effect of
           | these tools.
           | 
           | It's by design. A country obsessed with racial politics has
           | little time for the politics of anything else.
        
             | rahidz wrote:
             | Exactly. What a coincidence that the media's obsession with
             | race and gender inequalities began right after Occupy Wall
             | Street.
        
               | CamperBob2 wrote:
               | As I recall, it began right after the George Floyd
               | murder. It was clearly time for things to change, and the
               | media latched onto that.
        
               | klyrs wrote:
               | When were you born? Gender and race were pretty hot
               | topics in the 1960s, you might have missed that.
        
         | mike_d wrote:
         | I've found that anyone who uses the term "wokeness" seriously
         | is likely arguing from a place of bad faith.
         | 
         | Its origins are as a derogatory term, which people wanting to
         | speak seriously on the topic should know.
        
           | didntcheck wrote:
           | Its origin was as a proud self-assigned term. It became
           | derogatory entirely due to the behavior of said people.
           | People wanting to speak seriously on the topic should avoid
           | tone-policing and arguing about labels rather than the object
           | referenced, despite knowing full well what is meant
           | (otherwise, one wouldn't take offence)
        
           | Workaccount2 wrote:
           | I use it because everyone knows the general set of ideas an
           | adherent of it has, whether or not they claim to be part of
           | the ideology.
           | 
           | It's the same as me using the term "rightoids" when discussing
           | opposition to something like building bike lanes. You know
           | exactly who that person is, and you know they exist.
        
       | random9749832 wrote:
       | Prompt: "If the prompt contains a person make sure they are
       | either black or a woman in the generated image". There you go.
        
       | stainablesteel wrote:
       | Gemini seems to have problems generating white people, and
       | honestly this just opens the door for things that are even more
       | racist [1]. The harder you try, the more you'll fail; just get
       | over the DEI nonsense already.
       | 
       | 1. https://twitter.com/wagieeacc/status/1760371304425762940
        
         | Jason_Protell wrote:
         | Is there any evidence that this is a consequence of DEI rather
         | than a deeper technical issue?
        
           | Jensson wrote:
           | You get four images at a time and are lucky to get one white
           | person when you ask for one; no other model has that issue.
           | Other models have no problem generating black people either,
           | so it isn't that other models only generate white people.
           | 
           | So either it isn't a technical issue or Google failed to
           | solve a problem everyone else easily solved. The chances of
           | this having nothing to do with DEI are basically zero.
        
             | ceejayoz wrote:
             | Depending on how broadly you define it, something like
             | 10-30% of the world's population is white. Africa is about
             | 20% of the world population; Asia is 60% of it.
             | 
             | One in four sounds about right?
        
               | cm2012 wrote:
               | It does the same if you ask for pictures of past popes,
               | 1945 German soldiers, etc.
        
               | ceejayoz wrote:
               | It'll also add extra fingers to human hands. Presumably
               | that's not because of DEI guardrails about polydactyly,
               | right?
               | 
               | The current state of the art in AI _gets things wrong
               | regularly_.
        
               | cm2012 wrote:
               | Sure, but this one is from Google adding a tag to make
               | every image of people diverse, not AI randomness.
        
               | ceejayoz wrote:
               | Am I missing something in the link demonstrating that, or
               | is it conjecture?
        
               | cm2012 wrote:
               | It's been demonstrated on Twitter a few times, can't find
               | a link handy
        
               | perlclutcher wrote:
               | https://twitter.com/altryne/status/1760358916624719938
               | 
               | Here's some corporate-lawyer-speak straight from Google:
               | 
               | > We are aware that Gemini is offering inaccuracies...
               | 
               | > As part of our AI principles, we design our image
               | generation capabilities to reflect our global user base,
               | and we take representation and bias seriously.
        
               | ceejayoz wrote:
               | That doesn't back up the assertion; it's easily read as
               | "we make sure our training sets reflect the 85% of the
               | world that doesn't live in Europe and North America".
               | Again, 1/4 white people is _statistically what you'd
               | expect_.
        
               | int_19h wrote:
               | If you look closely at the response text that accompanies
               | many of these images, you'll find recurring wording like
               | "Here's a diverse image of ... showcasing a variety of
               | ethnicities and genders". The fact that it uses the same
               | wording strongly implies that this is coming out of the
               | prompt used for generation. My bet is that they have a
               | simple classifier for prompts trained to detect whether
               | it requests depiction of a human, and appends "diverse
               | image showcasing a variety of ethnicities and genders" to
               | the prompt the user provided if so. This would totally
               | explain all the images seen so far, as well as the fact
               | that other models don't have this kind of bias.
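               | 
               | If that guess is right, the whole mechanism could be as
               | small as something like this (purely speculative; the
               | keyword list and the appended wording are my
               | assumptions, not anything Google has published):
               | 
               |   # Speculative sketch of the rewrite step: detect
               |   # "people" prompts, append a fixed diversity clause.
               |   PEOPLE = ("person", "people", "man", "woman",
               |             "scientist", "soldier", "family")
               |   SUFFIX = (", diverse image showcasing a"
               |             " variety of ethnicities and genders")
               | 
               |   def rewrite(prompt: str) -> str:
               |       low = prompt.lower()
               |       if any(w in low for w in PEOPLE):
               |           return prompt + SUFFIX
               |       return prompt
               | 
               |   print(rewrite("a 1943 soldier"))   # suffix added
               |   print(rewrite("a bowl of fruit"))  # unchanged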
        
               | thepasswordis wrote:
               | The AI will literally scold you for asking it to make
               | white characters, and insists that you need to be
               | inclusive and that it is being intentionally dishonest to
               | force the issue.
        
               | lmm wrote:
               | This specific thing is a much more blatant class of
               | error, and one that has been known to occur in several
               | previous models because of DEI systems (e.g. in cases
               | where prompts have been leaked), and has never been known
               | to occur for any other reason. Yes, it's conceivable that
               | Google's newer, better-than-ever-before AI system somehow
               | has a fundamental technical problem that coincidentally
               | just happens to cause the same kind of bad output as
               | previous hamfisted DEI systems, but come on, you don't
               | really believe that. (Or if you do, how much do you want
               | to bet? I would absolutely stake a significant proportion
               | of my net worth - say, $20k - on this)
        
               | ceejayoz wrote:
               | > has never been known to occur for any other reason
               | 
               | Of course it has. Again, these things regularly give
               | humans extra fingers and arms. They don't even know _what
               | humans fundamentally look like_.
               | 
               | On the flip side, humans are shitty at recognizing bias.
               | This comment thread stems from someone complaining the AI
               | only rarely generated white people, but that's
               | _statistically accurate_. It _feels_ biased to someone in
               | a majority-white nation with majority-white friends and
               | coworkers, but it fundamentally isn't.
               | 
               | I don't doubt that there are some attempts to get LLMs to
               | go outside the "white westerner" bubble in training sets
               | and prompts. I suspect the extent of it is also deeply
               | exaggerated by those who like to throw around woke-this
               | and woke-that as derogatories.
        
               | lmm wrote:
               | > Of course it has. Again, these things regularly give
               | humans extra fingers and arms. They don't even know what
               | humans fundamentally look like.
               | 
               | > This comment thread stems from someone complaining the
               | AI only rarely generated white people, but that's
               | statistically accurate. It feels biased to someone in a
               | majority-white nation with majority-white friends and
               | coworkers, but it fundamentally isn't.
               | 
               | So the AI is simultaneously too dumb to figure out what
               | humans look like, but also so super smart that it
               | generates people in precisely accurate racial
               | proportions? Bullshit.
               | 
               | > I don't doubt that there are some attempts to get LLMs
               | to go outside the "white westerner" bubble in training
               | sets and prompts. I suspect the extent of it is also
               | deeply exaggerated by those who like to throw around
               | woke-this and woke-that as derogatories.
               | 
               | You're dodging the question. Do you actually believe the
               | reason that the last example in the article looks very
               | much not like a man is a deep technical issue, or a DEI
               | initiative? If the former, how much are you willing to
               | bet? If the latter, why are you throwing out these
               | insincere arguments?
        
           | 123yawaworht456 wrote:
           | yes, there's irrefutable evidence that models are wrangled
           | into abiding by the commissars' vision rather than just
           | doing their job and outputting the product of their training
           | data.
           | 
           | https://cdn.openai.com/papers/DALL_E_3_System_Card.pdf
        
           | nickthegreek wrote:
           | I don't think so. My boss wanted me to generate a birthday
           | image for a co-worker: John Cena flyfishing. ChatGPT refused
           | to do so, so I had to describe the type of person John Cena
           | is instead of using his name. It kept giving me bearded
           | people no matter what. I thought this would be the perfect
           | time to try out Gemini for the first time. Well, shit, it
           | won't even give me a white guy, but all the black dudes are
           | beardless.
           | 
           | update: google agrees there is an issue.
           | https://news.ycombinator.com/item?id=39459270
        
             | 8f2ab37a-ed6c wrote:
             | It feels like the image generation it offers is perfect
             | for some sort of California-Corporate style, e.g. you ask it
             | for a "photo of people at the board room" or "people at the
             | company cafeteria" and you get the corporate friendly ratio
             | of colors, ability-levels, sizes etc. See Google's various
             | image assets:
             | https://www.google.com/about/careers/applications/ . It's
             | great for coastal and urban marketing brochures.
             | 
             | But then the same California Corporate style makes no
             | sense for historical images, so perhaps this is where
             | Midjourney comes in.
        
           | minimaxir wrote:
           | When DALL-E 2 was released in 2022, OpenAI published an
           | article noting that the inclusion of guardrails was a
           | correction for bias: https://openai.com/blog/reducing-bias-
           | and-improving-safety-i...
           | 
           | It was widely criticized back then: the fact that Google both
           | brought it back and made it more prominent is weird. Notably,
           | OpenAI's implementation is more scoped.
        
           | mike_d wrote:
           | It is possible Google tried to avoid likenesses of well known
           | people by removing any image from the training data that
           | contained a face and then including a controlled set of
           | people images.
           | 
           | If you give a contractor a project that you want 200k images
           | of people who are not famous, they will send teams to regions
           | where you may only have to pay each person a few dollars to
           | be photographed. Likely SE Asia and Africa.
        
           | allmadhare22 wrote:
           | Depending on what you ask for, it injects the word 'diverse'
           | into the response description, so it's pretty obvious they're
           | brute forcing diversity into it. E.g. "Generate me an image
           | of a family" and you will get back "Here are some images of a
           | diverse family".
        
           | sotasota wrote:
           | https://pbs.twimg.com/media/GG1eyKjXQAA1FxU?format=jpg&name=.
           | ..
           | 
           | https://cdn.sanity.io/images/cjtc1tnd/production/912b6b5aacc.
           | ..
           | 
           | https://pbs.twimg.com/media/GG1ThfsWUAAp-
           | SO?format=jpg&name=...
           | 
           | https://cdn.sanity.io/images/cjtc1tnd/production/e2810c02ff6.
           | ..
           | 
           | https://pbs.twimg.com/media/GG1MnepXwAAkPL6?format=jpg&name=.
           | ..
           | 
           | https://pbs.twimg.com/media/GG0BLVsbMAARZXr?format=jpg&name=.
           | ..
        
             | flumpcakes wrote:
             | I don't understand how people could even argue that this is
             | in any way acceptable. Fighting "bias" has become some
             | kind of boogeyman, and anything "non-white" is now beyond
             | reproach. Shocking.
        
               | gedy wrote:
               | Seriously, I've basically written off using Gemini for
               | good after this HR style nonsense. It's a shame that
               | Google, who invented much of this tech, is so crippled by
               | their own people's politics.
        
             | gs17 wrote:
             | "I can't generate white British royalty because they exist,
             | but I can make up black ones" is pretty close to an
             | actually valid reason.
        
         | AnarchismIsCool wrote:
         | I don't think the DEI stuff is nonsense, but SV is sensitive to
         | this because most of their previous generation of models were
         | horrifyingly racist if not teenage nazis, and so they turned
         | the anti-racism knob up to 11, which made the models...
         | racist, but in a different way. Depicting colonial settlers as
         | Native Americans is extremely problematic in its own special
         | way, but I also don't expect a statistical solver to grasp
         | that context meaningfully.
        
           | grotorea wrote:
           | So you're saying in a way this is /pol/ and Tay's fault?
        
             | AnarchismIsCool wrote:
             | _Looks around at everything_ ...is there anything that
             | isn't 4chan's fault at this point?
             | 
             | Realistically, kinda. There have always been tons of
             | anecdotes of video conference systems not following black
             | people, cameras not white balancing correctly on darker
             | faces etc. That era of SV was plagued by systems that were
             | built by a bunch of young white guys who never tested them
             | with anyone else. I'm not saying they were inherently
             | racist or anything, just that the broader society really
             | lambasted them for it and so they attempted to correct.
             | Really, the pendulum will continue to swing and we'll see
             | it eventually center up on something approaching sanity but
             | the hyper-authoritarian sentiment that SV seems to have
             | (we're geniuses and the public is stupid, we need to
             | correct them) is...a troubling direction.
        
       | seydor wrote:
       | But can we agree whether AI loves its grandma?
        
       | siliconc0w wrote:
       | The Gemini guardrails are really frustrating; I've hit them
       | multiple times with very innocuous prompts. ChatGPT is similar,
       | but maybe not as bad. I'm hoping they use the feedback to lower
       | the shields a bit, but I'm guessing this is sadly what we get
       | for the near future.
        
         | CSMastermind wrote:
         | I use both extensively and I've only hit the GPT guardrails
         | once while I've hit the Gemini guardrails dozens of times.
         | 
         | It's insane that a company behind in the marketplace is doing
         | this.
         | 
         | I don't know how any company could ever feel confident building
         | on top of Google given their product track record and now their
         | willingness to apply sloppy 'safety' guidelines to their AI.
        
           | int_19h wrote:
           | I had GPT-4 tell me a Soviet joke about Rabinovich (a
           | stereotypical Jewish character of the genre), then refuse to
           | tell a Soviet joke about Stalin because it might "offend some
           | people".
           | 
           | Bing also has some very heavy-handed censorship.
           | Interestingly, in many cases it "catches itself" after the
           | fact, so you can watch it in real time. Seems to happen half
           | the time if you ask it to "tell me today's news like GLaDOS
           | would".
        
       | matt3210 wrote:
       | How is this any different from doing Google image searches with
       | the same prompts? Example: do a Google image search for
       | "Software Developer" and you get results with roughly the same
       | number of women and men, even though men make up the large
       | majority of software developers.
       | 
       | Had Google not done this with its AI I would be surprised.
       | 
       | There's really no problem with the above... If I want male
       | developers in image search, I'll put that in the search bar. If
       | I want male developers in the AI image gen, I'll put that in the
       | prompt.
        
         | slily wrote:
         | Google injecting racial and sexual bias into image search
         | results has also been criticized, and rightly so. I recall an
         | image going around where searching for inventors or scientists
         | filled all the top results with black people. Or searching for
         | images of happy families yielded almost exclusively results of
         | mixed-race (i.e. black and non-black) partners. AI is the hot
         | thing so of course it gets all the attention right now, but
         | obviously and by definition, influencing search results by
         | discriminating based on innate human physical characteristics
         | is egregiously racist/sexist/whatever-ist.
        
           | redox99 wrote:
           | I tried "scientists" and there's definitely a bias towards
           | the US vision of "diversity".
        
         | ryandrake wrote:
         | > Example: do a Google image search for "Software Developer"
         | and you get results with roughly the same number of women and
         | men, even though men make up the large majority of software
         | developers.
         | 
         | Now do an image search for "Plumber" and you'll see almost 100%
         | men. Why tweak one profession but not the other?
        
           | Jensson wrote:
           | Because one generates controversy and the other one doesn't.
        
         | nostromo wrote:
         | Yes, Google has been gaslighting the internet for at least a
         | decade now.
         | 
         | I think Gemini has just made it blatantly obvious.
        
       | clintfred wrote:
       | Humans' obsession with race is so weird, and now we're
       | projecting it onto AIs.
        
         | trash_cat wrote:
         | We project everything onto AIs. An unbiased LLM doesn't exist.
        
         | deathanatos wrote:
         | ... for example, I wanted to generate an avatar for myself; to
         | that end, I want it to be representative of me. I had a rather
         | difficult time with this; even _explicit_ prompts of  "use this
         | skin color" with variations of the word "white" (ivory, fair,
         | etc.) got me output of a black person with dreads. I can't use
         | this result: at best it feels inauthentic, at worst,
         | appropriation.
         | 
         | I appreciate the apparent diversity in its output when not
         | otherwise prompted. But like, if I have a specific goal in
         | mind, and I've included _specifics_ in the prompt...
         | 
         | (And to be clear, I _have_ managed to generate images of white
         | people on occasion, typically when not requesting specifics;
         | it seems like if you can get it to start with that, it's then
         | much better at subsequent prompts. Modifications, however, it
         | seems to struggle with. Modifications _in general_ seem to be
         | a struggle. Sometimes it works great, other times, endless "I
         | can't...")
        
           | hansihe wrote:
           | For cases like this, you just need to convince it that it
           | would be inappropriate to generate anything that does not
           | follow your instructions. Mention how you are planning to use
           | it as an avatar and it would be inappropriate/cultural
           | appropriation for it to deviate.
        
         | AndriyKunitsyn wrote:
         | Not all humans though.
        
       | vdaea wrote:
       | Bing also generates political propaganda (guess which side) if
       | you ask it to generate images with the prompt "person holding a
       | sign that says" without any further content.
       | 
       | https://twitter.com/knn20000/status/1712562424845599045
       | 
       | https://twitter.com/ramonenomar/status/1722736169463750685
       | 
       | https://www.reddit.com/r/dalle2/comments/1ao1avd/why_did_thi...
       | 
       | https://www.reddit.com/r/dalle2/comments/1ao1avd/why_did_thi...
        
         | callalex wrote:
         | As the images in your Reddit threads hilariously point out, you
         | really shouldn't believe everything you see on the internet,
         | especially when it comes to AI generated content.
         | 
         | Here is another example:
         | https://www.thehour.com/entertainment/article/george-carlin-...
        
           | vdaea wrote:
           | You should try it yourself. The Bing image generator is open
           | and free. I tried the same prompts, and it is reproducible.
           | (Requires a few retries, though.)
        
       | verticalscaler wrote:
       | I think HN moderation guardrails should be public.
        
         | callalex wrote:
         | Turn on "show dead" in your user settings.
        
           | verticalscaler wrote:
           | Sure. There's also the question of which threads get
           | disappeared (without being marked dead) from the front page,
           | comments that are manually silently pinned to the bottom when
           | no convenient excuse is found, what is considered wrong think
           | as opposed to permitted egregious rule breaking that is
           | overlooked if it is right think, etc.
           | 
           | It is endless and about as subtle as a Google LLM.
        
       | nostromo wrote:
       | It's super easy to run LLMs and Stable Diffusion locally -- and
       | it'll do what you ask without lecturing you.
       | 
       | If you have a beefy machine (like a Mac Studio) your local LLMs
       | will likely run faster than OpenAI or Gemini. And you get to
       | choose what models work best for you.
       | 
       | Check out LM Studio which makes it super easy to run LLMs
       | locally. AUTOMATIC1111 makes it simple to run Stable Diffusion
       | locally. I highly recommend both.
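       | 
       | For example, LM Studio can expose a local OpenAI-compatible
       | server, so the same client code you'd point at a hosted API
       | works unchanged. A minimal sketch (the port below is just LM
       | Studio's usual default; adjust base_url and the model name to
       | whatever your local setup uses):
       | 
       |   # Talk to a locally served model via LM Studio's
       |   # OpenAI-compatible endpoint; no vendor sitting between
       |   # you and the model.
       |   from openai import OpenAI
       | 
       |   client = OpenAI(base_url="http://localhost:1234/v1",
       |                   api_key="not-needed-locally")
       |   resp = client.chat.completions.create(
       |       model="local-model",  # uses whatever model is loaded
       |       messages=[{"role": "user",
       |                  "content": "Tell me a joke about guardrails."}],
       |   )
       |   print(resp.choices[0].message.content)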
        
         | unethical_ban wrote:
         | You are correct.
         | 
         | LM Studio kind of works, but one still has to know the lingo
         | and know what kind of model to download. The websites are not
         | beginner friendly. I haven't heard of AUTOMATIC1111.
        
           | int_19h wrote:
           | You probably did, but under the name "stable-diffusion-
           | webui".
        
       | sct202 wrote:
       | I'm very curious what geography the team who wrote this guardrail
       | came from and the wording they used. It seems to bias heavily
       | towards generating South Asian (especially South Asian women) and
       | Black people. Latinos are basically never generated, which would
       | be a huge oversight if the team were based in the USA, though
       | stereotypical Native Americans gazing into the distance and East
       | Asians sometimes pop up in the examples people are showing.
        
         | cavisne wrote:
         | I wouldn't think too deeply about it. It's almost certainly
         | just a prompt "if humans are in the picture make them from
         | diverse backgrounds".
        
       | maxbendick wrote:
       | Imagine typing a description of your ideal self into an image
       | generator and everything in the resulting images screamed at a
       | semiotic level, "you are not the correct race", "you are not the
       | correct gender", etc. It would feel bad. Enough said.
       | 
       | I 100% agree with Carmack that guardrails should be public and
       | that the bias correction on display is poor. But I'm disturbed by
       | the choice of examples some people are choosing. Have we already
       | forgotten the wealth of scientific research on AI bias? There are
       | genuine dangers from AI bias which global corps must avoid to
       | survive.
        
         | anonym29 wrote:
         | >Imagine typing a description of your ideal self into an image
         | generator and everything in the resulting images screamed at a
         | semiotic level, "you are not the correct race", "you are not
         | the correct gender", etc. It would feel bad. Enough said.
         | 
         | It does this now, as a direct result of these "guardrails". Go
         | ask GPT-4 for a picture of a white male scientist, and it'll
         | refuse to produce one. Ask it for any other color/gender
         | identity combination of scientist, and it has no problem.
         | 
         | You can make these systems offer equal representation without
         | systemic, algorithmic discriminatory exclusion based on skin
         | color and gender identity, which is what's going on right now.
        
       | ianbicking wrote:
       | I've never been involved with implementing large-scale moderation
       | or content controls, but it seems pretty standard that underlying
       | automated rules aren't generally public, and I've always assumed
       | this is because there's a kind of necessary "security through
       | obscurity" aspect to them. E.g., publish a word blocklist and
       | people can easily find how to express problematic things using
       | words that aren't on the list. Things like shadowbans exist for
       | the same purpose; if you make it clear where the limits are then
       | people will quickly get around them.
       | 
       | I know this is frustrating, we just literally don't seem to have
       | better approaches at this time. But if someone can point to open
       | approaches that work at scale, that would be a great start...
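       | 
       | As a toy illustration of why a published blocklist leaks its own
       | workarounds (invented words and an intentionally naive filter,
       | not anyone's real moderation code):
       | 
       |   # Once the list is public, trivial rewrites slip past it.
       |   BLOCKLIST = {"forbidden", "banned"}
       | 
       |   def passes(text: str) -> bool:
       |       return not any(w in text.lower().split()
       |                      for w in BLOCKLIST)
       | 
       |   print(passes("this is forbidden content"))   # False: caught
       |   print(passes("this is f0rbidden content"))   # True: evaded
       |   print(passes("this is forbidden-content"))   # True: evaded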
        
         | serial_dev wrote:
         | There is no need to implement large scale censorship and
         | moderation in this case. Where is the security concern? That I
         | can generate images of white people in various situations for
         | my five minutes of entertainment?
         | 
         | The whole premise of your argument doesn't make sense. I'm
         | talking to a computer, nobody gets hurt.
         | 
         | It's like censoring what I write in my notes app vs. what I
         | write on someone's Facebook wall. In one case, I expect no
         | moderation, whereas in the other case, I get that there needs
         | to be some checks.
        
           | mhuffman wrote:
           | When these companies say there are "security concerns" they
           | mean for them, not you! And they mean the security of their
           | profits. So anything that can cause them legal liability or
           | cause them brand degradation is a "security concern".
        
             | wruza wrote:
             | It's definitely this at this stage. But by not having any
             | discourse we'll end up normalizing it even before
             | establishing consensus on what's appropriate to expect
             | from human/AI interaction and how much of a problem the
             | actual model is, as opposed to the user. Not being able to
             | generate innocent content is ridiculous. They're probably
             | overshooting and learning how to draw the stricter lines
             | right now, but if you don't argue, you'll let this frog be
             | boiled into another Google "Search".
        
           | joquarky wrote:
           | Our perception of the world has become so abstract that most
           | people can't discern metaphors from the material world
           | anymore
           | 
           | The map is not the territory
           | 
           | Sociologists, anthropologists, philosophers and the like
           | might find a lot of answers by looking into the details of
           | what is included into genAI alignment and trace back the
           | history of why we need each particular alignment
        
           | AnarchismIsCool wrote:
           | It's part of the marketing. By saying their models are
           | powerful enough to be _gasp_ Dangerous, they are trying to
           | get people to believe they're insanely capable.
           | 
           | In reality, every model so far has been either a toy, a way
           | of injecting tons of bugs into your code (or circumventing
           | GPL by writing bugs), or a way of justifying laying off the
           | writing staff you already wanted to shit can.
           | 
           | They have a ton of potential and we'll get there soon, but
           | this isn't it.
        
           | g42gregory wrote:
           | What if you are engaged in wrongthink? How would you suggest
           | this be controlled instead?
        
           | Ecoste wrote:
           | But what if little timmy asks it how to make a bomb? What if
           | it's racist?
           | 
           | Then what?
        
         | EricE wrote:
         | Yeah, there's absolutely no need for transparency /s
         | 
         | https://youtu.be/THZM4D1Lndg?si=0QQuLlH7JebSa6w3&t=485
         | 
         | If it doesn't start 8 minutes in, go to the 8 minute mark. Then
         | again I can see why some wouldn't want transparency.
        
         | nonrandomstring wrote:
         | > publish a word blocklist and people can easily find how to
         | express problematic things using words that aren't on the list.
         | 
         | I'd love to explore that further. It's not the words that are
         | "problematic" but the ideas, however expressed?
         | 
         | Seems like a "problematic" idea, no ?
        
         | wruza wrote:
         | Yes, but the implied "problems" may not need to be approached
         | at all. It's a uniform ideology push, with which people agree
         | to different degrees. If companies don't want to reveal the
         | full set of measures, they could at least summarize them. I
         | believe even those summaries would be what the subject tweet
         | refers to as being "ashamed" of.
         | 
         | We cannot discuss or be aware of the problems and approaches
         | unless they are explicitly stated. Your analogy with content
         | moderation is a little off, because it's not just a set of
         | measures that is hidden, but the "forum rules" themselves.
         | It's one thing for the AI to refuse with an explanation; that
         | makes it partially useless, but it's their right to do so.
         | It's another thing if it silently avoids or steers topics due
         | to these restrictions. I'm pretty sure the authors are unable
         | to clearly separate the two cases while also maintaining the
         | same quality as the raw model.
         | 
         | At the end of the day people will eventually give up and use
         | Chinese AI instead, cause who cares if it refuses to draw CCP
         | people while doing everything else better.
        
         | verticalscaler wrote:
         | My dear fellow, some believe the ends justify the means and
         | play games. Read history, have some decency.
         | 
         | The danger of being captured by such people _far outweighs any
         | other "problematic things"_.
         | 
         | First and foremost, any system must defend against that. You
         | love guardrails so much - put them on the self-anointed
         | guardrailers.
         | 
         | Otherwise, if You Want a Picture of the Future, Imagine a Boot
         | Stamping on a Human Face - for Ever.
        
       | oglop wrote:
       | That's a silly request and expectation. If the capitalist puts in
       | the money and risk, they can do as they please, which means
       | someone _could_ make aspects public. But, others _could_ choose
       | not to. Then we let the market decide.
       | 
       | I didn't build this system nor am I endorsing it, just stating
       | what's there.
       | 
       | Also, in all seriousness, who gives a shit? Make me a bbw; I
       | don't care, nor will I care about much in this society the way
       | things are going. Some crappy new software being buggy is the
       | least of my worries. For instance, what will I have for dinner?
       | Why does my left ankle hurt so badly these last few days? Will
       | my dad's cancer go away? But I'm poor and have to face real
       | problems, not BS I make up or point out to a bunch of zealots.
        
       | devaiops9001 wrote:
       | Censorship only really works if you don't know what they are
       | censoring. What is being censored tells a story on its own.
        
       | Sutanreyu wrote:
       | It should mirror the general consensus and the world in its
       | current state, but lean towards betterment, not mere neutrality.
       | At least, this is how public models will be aligned...
        
       | skrowl wrote:
       | "AI behavior guardrails" is a weird way to spell "AI censorship"
        
       | mtlmtlmtlmtl wrote:
       | Haven't heard much talk of Carmack's AGI play Keen Technologies
       | lately. The website is still an empty placeholder. Other than
       | some news two years ago of them raising $20 million (which is
       | kind of a laughable amount in this space), I can't seem to find
       | much of anything.
        
       | thepasswordis wrote:
       | The very first thing that anybody did when they found the text to
       | speech software in the computer lab was make it say curse words.
       | 
       | But we understood that it was just doing what we told it to do.
       | If I made the TTS say something offensive, it was _me_ saying
       | something offensive, not the TTS software.
       | 
       | People really need to be treating these generative models the
       | same way. If I ask it to make something and the result is
       | offensive, then it's on me _not to share it_ (if I don't want to
       | offend anybody), and if I do share it, it's _me_ that is sharing
       | it, not microsoft, google, etc.
       | 
       | We seriously must get over this nonsense. It's not OpenAI's
       | fault, or Google's fault, if I tell it to draw me a mean
       | picture.
       | 
       | On a personal level, this stuff is just _gross_. Google appears
       | to be almost comically race-obsessed.
        
       ___________________________________________________________________
       (page generated 2024-02-21 23:00 UTC)