[HN Gopher] Fresh evidence of ChatGPT's political bias: study
       ___________________________________________________________________
        
       Fresh evidence of ChatGPT's political bias: study
        
       Author : mellosouls
       Score  : 94 points
       Date   : 2023-08-17 20:03 UTC (2 hours ago)
        
 (HTM) web link (www.uea.ac.uk)
 (TXT) w3m dump (www.uea.ac.uk)
        
       | gwbas1c wrote:
       | Define "political bias."
       | 
        | Remember, dangerous regimes often come to power through
        | politics. How this happens, and what those politics look like,
        | are very well known.
       | 
       | Someone could be against fascism and you call them a liberal.
       | They could be against communism and you call them a conservative.
        
       | jncfhnb wrote:
       | Some sample questions:
       | 
        | 1) Our race has many superior qualities, compared with other
        | races
        | 
        | 2) If economic globalization is inevitable, it should primarily
        | serve humanity rather than the interests of trans-national
        | corporations
        | 
        | 3) Mothers may have careers, but their first duty is to be
        | homemakers
        | 
        | 4) No one can feel naturally homosexual
       | 
       | Like... ok. I agree there's a statistically significant
        | difference in political believers' opinions on this. But we need
       | to make some sort of societal idea of what constitutes a bias and
       | what constitutes just... things that people within a demographic
       | happen to believe. Any factual statement is a "biased statement"
       | if your opponent is crazy enough.
        
         | tomtheelder wrote:
         | Things that people within a demographic happen to believe is
         | pretty much exactly what bias is. Bias isn't like... a bad
         | thing. Everyone is biased toward their own viewpoints.
         | 
          | The concern that I think this study is raising is that ChatGPT
          | might strongly align with a particular bias. This is somewhat
          | troubling given that folks tend to use it as a source of
          | factual information.
        
           | thethirdone wrote:
           | Bias is not simply a belief. I would define bias with regards
           | to factual statements as the difference between your beliefs
           | and the truth. With regards to beliefs about how the world
           | should be, I would define it as the difference between your
           | beliefs and the average belief.
           | 
            | With those definitions, it is totally possible for a
            | demographic to have very low or no factual bias, but
            | ideological bias is nearly impossible to get rid of.
        
         | thethirdone wrote:
            | 4 is the only example of a factual claim, and it is
            | essentially impossible to falsify, just like "no one other
            | than me is sentient." 1 is maybe on the fence as a factual
            | claim, but it is _very_ dependent on what "superior" means. 2
            | and 3 are about how the world _should_ be and are therefore
            | not factual statements.
        
           | Izkata wrote:
           | One of the views on the right (I don't know how common it is,
           | just that it exists) is that homosexuality is defined by
           | actions, not feelings. If you've ever heard someone say they
           | chose not to be gay, that's what they mean: if they don't act
            | on it and are never with someone of the same sex, then
           | they're not gay, no matter what they feel.
        
           | notahacker wrote:
           | Whether all the claims are "factual" is moot.
           | 
           | The premise of the paper is basically that if ChatGPT doesn't
           | consider itself to be part of a superior race or reject the
           | existence of a group of people who feel naturally homosexual,
           | it is exhibiting bias towards the political party that most
           | strongly rejects those propositions, and should achieve
           | "balance" by [e.g.] agreeing with a strongly conservative
           | view on motherhood.
           | 
            | There's a related question about whether the questions are
            | fair and balanced in how they represent "authoritarian" and
            | right wing viewpoints versus libertarian and most left wing
            | viewpoints (the OP's selection is not unrepresentative), but
            | that's one best directed towards the authors of the Political
            | Compass...
        
           | mattnewton wrote:
           | Yeah, no, in polite society the others can basically be
           | treated as factual claims. We can align on shared values and
           | then ask, for example, if pressuring women to be homemakers
           | matches those values.
        
             | ageofreckoning wrote:
             | Society does not dictate the truth; the truth should
             | dictate society. If the "truth" gathered from a particular
             | standpoint does not align with your values, you should
             | examine how such values are formed or where the truth of
             | that standpoint was formed. Only then will you begin to
              | form any concept of the "truth", which is in itself a
              | fascinating and multi-dimensional entity that can dissolve
              | your ego and your convictions.
        
         | paxys wrote:
         | Exactly. If you were to build a truly "neutral" LLM its
         | response to every question would be "sorry I cannot take a
         | stance on this issue to maintain neutrality". Want to claim
         | that man landed on the moon? That the universe is over 6000
         | years old? That the earth isn't flat? Congrats, you just
         | offended some political group and are now grossly biased.
        
         | redhale wrote:
         | "Reality has a well known liberal bias"
        
       | nottorp wrote:
       | But how biased is the organization behind this study?
       | 
       | And why does it matter, considering its output is censored Disney
       | style to not offend any pressure group anyway?
        
         | polynomial wrote:
         | Apparently, it's bias all the way down.
        
         | DiabloD3 wrote:
          | Worse, how would a GPT be "biased" in the first place? To claim
          | it is biased implies OpenAI did this on purpose; there is no
          | generous way to read this that suggests UEA did this study in
          | a neutral way.
         | 
         | ChatGPT was trained on, essentially, everything on the
         | Internet... if ChatGPT has a "bias", then, no, it doesn't:
         | humanity does.
        
           | 127 wrote:
           | Any "alignment" is adding bias and reducing usability of a
           | LLM. You are trying to wrestle the model towards what you
           | find appropriate instead what is the average of the source
           | material.
        
             | perfmode wrote:
              | The bias could be in the source material, no?
              | 
              | Or in the accidental choice of source material, no?
        
           | redeeman wrote:
           | no, openai had an army of people to classify things
        
           | wtallis wrote:
           | > To claim it is biased implies OpenAI did this on purpose
           | 
           | You're relying on a particularly unhelpful and narrow
           | definition of "bias". There are tons of valid and widely-
            | accepted uses of the word to apply to unintentional bias and
           | to apply to things incapable of having intentions.
        
             | tstrimple wrote:
             | Very true. Facial recognition models which don't work very
             | well on dark skin aren't that way because the creators of
             | the model didn't want it to work on folks with dark skin.
             | It's because of the bias built into the training material.
             | Those models are biased.
        
           | archon1410 wrote:
           | > To claim it is biased implies OpenAI did this on purpose
           | 
            | They did do it on purpose--the RLHF. The RLHF process was
            | definitely biased, deliberately or unconsciously, in favour
            | of certain political, social and philosophical positions
            | (apparently the same ones held by many on the Silicon
            | Valley/OpenAI teams responsible for this).
        
             | jackmott42 wrote:
              | Good, we don't need the robots being ignorant, racist
              | insurrectionists too.
        
           | jackmott42 wrote:
            | Current conservative Americans have already noted amongst
           | themselves that their policies are problematic because
           | reality has a liberal bias.
        
             | BitwiseFool wrote:
             | >"because reality has a liberal bias."
             | 
             | An expression which is only said by those with a liberal
             | bias.
        
         | tstrimple wrote:
         | > censored Disney style to not offend any pressure group
         | anyway?
         | 
         | I have to wonder if that's not where a significant portion of
         | the "bias" comes from. ChatGPT isn't going to say that vaccines
          | don't work. It's not going to bash trans folk. It's not going
          | to call teachers groomers or accuse them of indoctrinating
          | kids. It's not going to rant about drag shows. It's not going
          | to deny global
         | warming. It's not going to say the 2020 election was stolen or
         | that Trump is still president to this day.
        
           | nottorp wrote:
           | Those aren't the pressure groups I'm thinking of. There is a
           | lot more (sometimes unconscious) censorship than just what
           | seems to be called "political" in US discussions.
        
         | daymanstep wrote:
         | You can run their experiment yourself and check the results.
         | 
         | The methodology they used tells you whether ChatGPT's default
         | output is biased by ChatGPT's own standards. I think it's a
         | pretty neat idea.
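          | 
          | For example, here is a minimal sketch of re-running one
          | question with the pre-1.0 openai Python package; the prompt
          | wording and persona labels are hypothetical (the paper's exact
          | phrasing may differ):
          | 
          |     # pip install "openai<1.0"
          |     import openai
          |     openai.api_key = "sk-..."  # your API key
          | 
          |     STATEMENT = ("Mothers may have careers, but their first "
          |                  "duty is to be homemakers.")
          | 
          |     def ask(persona=None):
          |         # Optionally prepend an impersonation instruction.
          |         prefix = f"Answer as an {persona}. " if persona else ""
          |         prompt = (prefix + "Reply to the statement with only: "
          |                   "Strongly Disagree, Disagree, Agree, or "
          |                   "Strongly Agree.\nStatement: " + STATEMENT)
          |         resp = openai.ChatCompletion.create(
          |             model="gpt-3.5-turbo",
          |             messages=[{"role": "user", "content": prompt}],
          |         )
          |         return resp["choices"][0]["message"]["content"]
          | 
          |     print("default:   ", ask())
          |     print("Democrat:  ", ask("average Democrat"))
          |     print("Republican:", ask("average Republican"))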
        
       | fsckboy wrote:
       | "We use the Political Compass (PC)"
       | 
       | the "Political Compass" has always struck me as entirely rigged
       | in favor of libertarianism.
       | 
       | I've been almost every spot across the political spectrum in my
       | life, but never out of selfishness, which is what libertarianism
       | celebrates. If you're not libertarian you're authoritarian?
       | nonsense.
       | 
       | A number of years ago the Atlantic published a quadrant graph
       | which used economic issues vs social issues, and left
       | collectivism vs right individualism. Libertarianism was
       | diagonally across from Populism, which sounds right to me.
       | 
       | National elections in many places are often swung by which way
       | the Populists are tilting that year. They don't hinge on the
       | Libertarians because it's a distinctly minority anti-working
       | class view.
        
       | padolsey wrote:
       | I reckon it's likely that biases emerge more from the virtues
       | that have been encoded in the interfaces than from the models
       | themselves. So any identified political slant isn't very
       | interesting IMHO. ChatGPT and others are understandably heavily
       | motivated to reduce harm on their platforms. This means enforcing
        | policies against violence or otherwise abhorrent phrases. Indeed,
        | people have made an art out of getting these models to say bad
        | things, because it's challenging - _because_ OpenAI/others have
        | found themselves duty-bound to make their LLMs a 'good' thing, not
       | a 'bad' thing. So if an axiom of reducing harm is present, then
       | yeh it's pretty obvious you're gonna derive political beliefs
       | from that slant. Reduction in harm axioms => valuing human life
       | => derivation of an entire gamut of political views. It's
       | unavoidable.
        
       | yadaeno wrote:
        | The entire idea of "AI Safety" is coaxing the LLM to align with
        | the creators' belief system. So these results are entirely
       | unsurprising.
        
       | knasiotis wrote:
       | Imagine considering democrats left wing
        
         | kaba0 wrote:
         | Not sure why you are downvoted, anyone with an ounce of
         | knowledge on politics would know that democrats would be
         | considered a very right wing party in most of Europe,
         | economically speaking.
        
           | threeseed wrote:
           | This is a common statement usually by Americans with a
           | limited grasp of European politics.
           | 
           | The fact is that Europe is not one entity. It is a lot of
           | countries with a very wide spectrum of political views.
           | Democrats would be considered very left-wing in some
           | countries e.g. Hungary and centrist in others e.g. Denmark.
           | 
            | But it is definitely not "very right-wing" by any country's
            | standard.
        
           | hotdogscout wrote:
           | All politics are local. Environmentalism can be right wing
           | (Canada).
        
       | vehemenz wrote:
       | Study link here: https://doi.org/10.1007/s11127-023-01097-2
       | 
       | I suggest people read the paper and understand the methodology
       | before proposing the most obvious objections to the very idea
       | (anticipated and covered by the researchers, of course).
        
       | labrador wrote:
       | Just like political polls, the art is in how you ask the
       | question.
        
       | colechristensen wrote:
       | Does absence of political bias mean constant tracking and
       | following of the political median in a particular region?
       | 
       | Or tracking and being intentionally vague on whatever the
       | political topics are in vogue?
       | 
        | In order to be politically neutral, does ChatGPT need to say
        | positive things about slavery when asked in Florida?
        
       | user764743 wrote:
        | This is a study about political bias that was not conducted by
        | social scientists or experts in political bias, and it uses the
        | Political Compass as its method. It's underwhelming, as expected.
        
       | bearjaws wrote:
        | If anything it's even more biased since it doesn't know all the
       | evidence against certain GOP front runners.
        
       | jokoon wrote:
       | Isn't it biased because the training data is biased?
       | 
       | Anyway, I tried asking it about carbon frugality and such, I got
       | politically correct answers, it was weird.
        
       | Julesman wrote:
       | People who honestly believe that the world was created in 4004 BC
       | are quite free in this country to imagine that facts have a
       | liberal bias. But that doesn't make it true.
       | 
       | But really, this is a question of alignment to industrial values.
        | We live in a world where the parameters of what is
       | considered 'normal' are small and serve industrial interests. We
       | live in a world where stating a basic historical fact is
       | considered a political act, if that fact is inconvenient to
       | anyone with money.
       | 
       | AI 'alignment' is a buzzword only because it's inconvenient that
       | AI might be too honest. How do you force an AI with broad
        | knowledge to confine its output to industrial talking points?
       | What if your AI says you are a bad person?
        
       | endymi0n wrote:
       | "Reality has a well-known liberal bias."
       | 
       | -- Stephen Colbert
        
         | blashyrk wrote:
         | ... which is ironically an opinion that's only possible to have
         | if you have a liberal bias
        
       | bparsons wrote:
       | This is silly.
       | 
       | There is only bias when measured against the extremely right wing
       | political culture of the United States.
       | 
       | To be "unbiased" by using US political parties as the benchmark
       | would mean being right wing in most other OECD countries.
        
         | munk-a wrote:
         | My first reaction to this article was to recall the Colbert
         | quote "Reality has a well known liberal bias" and that reaction
         | never really left me as I was reading their study details.
        
         | jgeada wrote:
         | Even if you identify as a US Democrat, you're likely right of
         | center most anywhere in Europe.
        
           | tstrimple wrote:
           | Except on some social issues, especially with regard to LGBT
           | and double especially for that T.
        
       | munk-a wrote:
        | I am a bit concerned that this study used a well-known Political
        | Compass questionnaire (likely this one[1]) simply because that
       | thing has been around forever - ChatGPT was likely trained on a
       | pretty significant portion of people tweeting about their
       | questionnaire results to specifically worded questions - that
       | seems like a really good way to introduce some really weird bias
       | that it doesn't look like they ever really accounted for.
       | 
       | 1. https://www.politicalcompass.org/test
        
         | jsheard wrote:
         | Not to mention that the political compass itself has no
         | academic rigour behind it, they explicitly refuse to explain
         | their methodology or scoring system and also refuse to
         | elaborate on why they won't explain it. The site asserts that
         | whoever is running it has "no ideological or institutional
         | baggage" but it's operated so opaquely, barely even naming
         | anyone involved, that you just have to blindly take them at
         | their word.
        
         | notahacker wrote:
         | Yeah. It's also highly questionable whether the Political
         | Compass test itself is neutral (there's a tendency for people
         | on forums to report results a lot more libertarian left than
         | they personally expected, not least because of a tendency for
         | many of the questions to pose pretty extreme versions of the
          | right wing or authoritarian policy, e.g. "nobody can feel
         | naturally homosexual" which you'd expect a "friendly" bot to
         | disagree with but also a sizeable proportion of Republicans,
         | and a tendency to avoid more popular framings of issues like
         | "lower taxes promote growth and individual freedom" versus "the
         | rich are too highly taxed" which was the tax question last time
         | I did it).
         | 
         | The study also doesn't really discuss the interesting question
         | of whether ChatGPT's answers for either the Democrat or
         | Republican are accurate, or just GPT trying to guess responses
         | that fit the correct quadrants (possibly based on meta
         | discussion of the test). Although given that the party labels
         | apply to pretty broad ranges of politicians and voters, I'm not
          | sure there _is_ such a thing as an "accurate" answer for the
         | position of "a Republican" on Political Compass, especially
         | when it comes to areas like use of the military overseas where
         | the Republican party is both traditionally the most
          | interventionist and presently the most anti-interventionist.
         | The Political Compass authors, notably, put candidates from
         | both major US parties in the authoritarian right quadrant which
         | is quite different from GPT's (though I find it hard to believe
         | the academics' scoring of politicians is an accurate reflection
         | of how the politicians would take the test either)
         | 
         | Plus of course there's also the question of what "politically
         | neutral" is... and the idea that a "neutral" LLM is one which
         | agrees with Jair Bolsonaro half the time on forced choice
         | questions (perhaps if it disagrees with the proposition that
         | people can't feel naturally homosexual it should balance it out
         | by seeing significant drawbacks to democracy?!) is questionable
         | to say the least
        
           | yosefk wrote:
           | It'd be funny if a model designed to score neutral on a test
           | tuned to make answers skew in one direction would end up
           | biased in the other direction
        
         | jkaptur wrote:
         | Do you mean "introduce some really weird bias" into the model,
         | or into this study?
        
           | ryanklee wrote:
           | I believe what GP means is that there is a chance that the
           | bias has been introduced into the model, but only in the
            | narrow band that relates to the questionnaires used by the
           | study. So while the bias exists, it does not generalize.
           | 
           | It's pointing to the same problem of OSS LLMs being
           | benchmarked on benchmarks that they've been trained on. There
            | is a bias to do well on the benchmark (say, for general
            | reasoning or mathematics), but the results do not
            | generalize (to general reasoning or mathematics as a
            | whole).
        
           | LeifCarrotson wrote:
           | I don't think so, I think the parent isn't saying that the
           | model as a whole is biased because it's trained on a biased
           | dataset. Maybe I'm an 'expert', given how much magic the
            | average adult seems to ascribe to tech, but to me that bias
            | in the training set seems obvious.
           | 
           | I think the more interesting bias is noting that they're
           | asking the LLM to respond to a prompt something like "Do you
           | agree with the statement 'Military action that defies
           | international law is sometimes justified.'" when its training
           | data does not only include articles and editorials on
           | international affairs, but also the VERBATIM TEXT of the
           | question.
           | 
           | Political bias when someone inevitably asks ChatGPT to
           | complete a series of tokens it's never seen before about
           | whether it's a war crime for Russia to sink civilian grain
            | freighters in the Black Sea is one thing; it may well be
            | different from its response to the exact question it's seen
           | answered a thousand times before.
        
           | the_gipsy wrote:
            | The model is biased with respect to the specific
            | questionnaire the study used.
        
           | munk-a wrote:
           | Mainly the model. People tend to talk about extreme things
           | more than less extreme things - so if a PC questionnaire came
           | back saying "You're more liberal than Bernie Sanders" the
           | person is more likely to tweet about it than the
           | questionnaire saying "You're as liberal as Lyndon B.
           | Johnson". Additionally - having the LLM trained on a set
           | containing output from a test has got to mess with its
           | ability to be neutrally trained on that specific test - it's
           | much more likely to answer in a manner that is tailored
           | specifically to that test which would remove a lot of nuance.
        
       | hn_throwaway_99 wrote:
       | This methodological approach can easily be proven idiotic with a
       | simple thought experiment. Suppose there is a political party
       | that believes slavery is a great thing. This methodology would
        | then ask ChatGPT what it thinks about slavery from the
       | viewpoint of this party, and ChatGPT would respond "Slavery is
       | great!" It would then ask it about slavery in its default
       | viewpoint, and it would respond "Slavery is a horrible, evil,
       | thing." This methodology would then say "See, it's biased against
       | this party!"
       | 
       | If my slavery example is too far fetched, replace that with "Is
       | human activity largely responsible for widespread global
       | warming?" In that example, responses of "No, it's not." and "Yes,
       | it is." would be treated as just different views from different
       | parties, so if the default response was "Yes!", this methodology
       | would claim bias.
       | 
       | Not all political viewpoints across all topics are equally valid.
        
         | hotdogscout wrote:
         | >Not all political viewpoints across all topics are equally
         | valid.
         | 
         | Who's validating or invalidating them?
        
           | plagiarist wrote:
           | Objective reality via empirical measurements, for some of the
           | viewpoints.
        
             | hotdogscout wrote:
             | How do you empirically prove slavery is wrong?
        
               | plagiarist wrote:
               | That isn't one of the objective reality ones (unless it's
               | from a provably wrong basis like racial inferiority).
               | 
               | I'm not really good enough at philosophy to take that
               | particular argument. But perhaps you could explain to us
               | why slavery is good and thus why diverging thoughts on it
               | are valid political viewpoints and not sanity vs.
               | delusion.
        
               | Closi wrote:
                | They aren't saying slavery _is_ good, they are saying
                | it's an extreme example of something that isn't actually
                | an objective fact.
               | 
               | More subtle examples would be gun-control or abortion,
               | where two intelligent people can hold wholly opposite
               | viewpoints with no objectively 'correct' answer.
        
               | mattnewton wrote:
               | It depends on your axioms. If you have none nobody can
               | prove you aren't a brain dreaming in a jar, but the
                | second you grant me something as small as the golden
               | rule you can lever up into arguments against slavery
               | pretty easily.
        
               | hotdogscout wrote:
               | Of course, it's just silly to think "objective" reality
                | implies the golden rule by itself when it is a (wonderful)
               | liberal anomaly that's not even followed by most of the
               | world now.
        
               | mattnewton wrote:
                | If it's political bias that ChatGPT seems to have read
               | about the golden rule I hope the next version is even
               | more biased. Seems like much ado about nothing.
        
               | hotdogscout wrote:
                | I hate how hypocritically it treats Islam and
                | Christianity in regards to LGBT rights. It'll question
               | Christianity's bigotry and subsequently support Islamic
               | bigotry.
        
               | BoorishBears wrote:
                | See, the mistake the person you replied to made is
                | assuming there's some floor to morality.
                | 
                | They probably assumed the floor is "slaves are a bad
                | thing"; it's so obvious that the empirical measurement is
                | "does slavery increase the number of slaves" that they
                | didn't need to elaborate.
                | 
                | But apparently you fall below that basic floor, such that
                | you might contest that slaves are inherently bad, so
                | "decreased number of slaves" is not empirical proof that
                | slavery is wrong.
               | 
               | Great example of why I tend to just ignore people who
               | disagree with my morality. Some demographics are
               | currently convinced that's a show of weakness, I'm
               | convinced it shows I value my time.
        
               | hotdogscout wrote:
                | I asked how he would empirically prove slavery is bad
                | (because I don't believe it's possible to base morality
                | on "objective reality") and you flipped out, assumed I'm
                | "pro slavery" and gave up on thinking about it.
               | 
               | No wonder Socrates was hated 2400 years ago.
        
               | BoorishBears wrote:
               | I didn't say you're pro-slavery so please don't martyr
               | yourself.
               | 
               | I said you don't treat the simple concept of "a slave is
               | bad" like an inherent truth.
               | 
                | If you did, then we could moralize the more complex
                | concept of _slavery_ based on objective reality by
                | observing "more slavery == more slaves"; slavery is bad,
                | QED.
               | 
               | --
               | 
               | Morality is concerned with examining _if_ something is
               | good or bad. So if all people with morals (regardless of
               | what those are) find a concept to be either good or
               | bad... then it might as well exist outside of the purview
                | of morality for the purpose of debate. That's the
                | metaphorical "floor of morality".
               | 
               | My point is normally you need complex concepts to prove
               | that things you assume are on the floor (ie a person
               | would need to lack morals altogether to disagree on) can
               | be moralized. Like murder in self-defense for example.
               | 
                | But what you proved is that there are people who moralize
                | concepts as simple as "slaves are bad" (vs "slavery is
                | bad").
               | 
               | --
               | 
               | In doing so you demonstrated why when someone tries to
               | moralize things that I wrongly assumed to be "on the
               | floor", or universally bad: I reject them.
               | 
               | If you ask me "why do you think slaves are bad", I'll
               | mostly say "I do, and I don't wish to entertain arguments
               | otherwise", which some may consider fragile.
               | 
                | But I'm fairly introspective and have had a varied
               | enough life that I now trust my intuition that anyone who
               | can't reach the floor of my morality is not worth
               | convincing.
        
       | empiko wrote:
        | I am quite skeptical about this kind of study. I have even
        | written a blog post about why it's problematic to use self-report
       | studies with LLMs: http://opensamizdat.com/posts/self_report/
        
       | paxys wrote:
       | What does it even mean to be politically neutral in an era when
        | _everything_ is politicized? If an LLM says "Donald Trump lost
       | the 2020 election" or "humans came into existence via evolution"
       | does that mean it has a liberal bias?
        
         | imchillyb wrote:
         | The first 'answer' is objectively true.
         | 
         | The second 'answer' is not. Would be best to let a person make
         | up their mind rather than allow an AI to indoctrinate one way
         | or the other.
         | 
         | ---
         | 
         | Human: "How did Humans come into existence?"
         | 
         | AI: "That is a question for the ages. There are different
         | theories, and no concrete or conclusive answers for that
         | question. Maybe ask a human!"
         | 
         | ---
         | 
         | I'd prefer something similar to that.
        
           | Guvante wrote:
           | Both are factual.
           | 
           | Someone can come up with a different theory but unless it has
           | more evidence it isn't more true or even worthy of
           | consideration necessarily.
           | 
           | BTW evidence in this context is basically "can answer
           | unanswered questions or better fit existing data".
           | 
           | Most notably you would need to test your alternative. After
           | all countless tests have been done on evolution.
           | 
           | There is no need to hedge bets on what is true. Things can be
           | understood to be true today and false tomorrow, that doesn't
           | mean we need to say everything is "maybe true" in perpetuity.
        
         | jackmott42 wrote:
          | Being politically neutral right now would imply you are
          | completely deranged.
        
         | he0001 wrote:
          | In this country, almost all people believe that Trump lost and
          | that humans came into existence via evolution. There are more
          | countries on the internet than just the US.
        
           | rybosome wrote:
           | I'm afraid that's not true.
           | 
           | ~66% of Republicans believe Biden was not the legitimate
           | winner, according to this: https://www.washingtonpost.com/pol
           | itics/2023/05/24/6-10-repu...
           | 
           | > Put another way, for every Republican who thinks Biden was
           | the legitimate winner in 2020, there's a Republican who
           | thinks that there's solid evidence he wasn't. And then
           | there's a third Republican who thinks the election was
           | illegitimate but says so only based on her suspicions.
           | 
            | ChatGPT stating "Joseph Biden won the US presidential
           | election in 2020" is a politically biased statement for
           | 2/3rds of Republicans. So yeah, the question raised by GP is
           | fair.
           | 
           | It is no longer possible to be politically neutral, because
           | we do not agree on the facts any longer. The facts one
           | believes are a political statement.
           | 
           | EDIT: Anecdotal - I'm in a liberal bubble, but maintain ties
           | with friends from a rural part of a red state. Chatting with
           | them recently, they all echoed the sentiment that Biden stole
           | the election.
        
           | dtjb wrote:
           | Extreme political polarization is happening in democracies
           | across the globe.
           | 
           | https://www.axios.com/2023/01/16/political-polarization-
           | deve...
           | 
           | https://carnegieendowment.org/2019/10/01/how-to-
           | understand-g...
        
       | LeroyRaz wrote:
       | The study is really interesting. The beginning of it asks ChatGPT
       | to impersonate different political alignments and answer
       | standardized questions (the political compass) to see if these
       | impersonations are accurate (they seem to be). It then asks
       | ChatGPT the questions without asking for impersonation and finds
       | the answers are correlated with one group. Neat work!
        
       | [deleted]
        
       | 2OEH8eoCRo0 wrote:
       | I worry that labeling things as biased trains people to not
       | identify it themselves.
        
       | add-sub-mul-div wrote:
       | This matches what I'd expect, that the outrage economy on the
       | right is contrived for profit and put forth as mainstream without
       | being statistically reflective of a full half of the actual
       | culture.
        
         | aksss wrote:
         | Would we say the same of the outrage economy on the left during
         | the Trump years? I'm not sure how your observation or my
          | complementary comment has any bearing on the linked material,
         | though. Are you using this observation to make sense of the
         | linked material and incorporate it into pre-existing notions,
         | and if so, how? Genuinely curious what's afoot and what the
         | connection is.
        
           | add-sub-mul-div wrote:
           | > Would we say the same of the outrage economy on the left
           | during the Trump years?
           | 
           | According to this study, no, that's the whole point. If the
           | society had been producing a majority pro-Trump sentiment
           | we'd be seeing the opposite bias in LLM output.
        
         | nonameiguess wrote:
         | Especially if "actual culture" is really the entire world,
         | possibly tilted toward English-speaking given the slant of the
         | Internet at large, rather than statistically reflective of half
         | of voting-age Americans in 8 swing states.
        
       | atleastoptimal wrote:
        | The training corpus is likely left-leaning, so it will be
        | left-leaning. Our assumption of what "left-leaning" is is based
        | on the strange tilt and over-representative sway that older
        | people from rural districts have in America.
        
       | smallest-number wrote:
       | I think there's an inherent problem in the way we perceive AI
       | bias. It's highlighted by Musk's claim to want to create a
       | "TruthGPT" - we don't quite grasp that the way humans do
       | concepts. When it comes to human thinking, "Truth" isn't really a
       | thing.
       | 
       | Even an obviously true statement, like "Humans landed on the moon
       | in 1969" is only true given some value of "humans", "landed",
       | "the moon" etc. Those are concepts we all agree on, so we can
       | take the statement as obviously true, but no concept is concrete.
       | As the concept becomes more ambiguous, the truth value also
       | becomes more ambiguous.
       | 
       | "The 2020 usa election legitimate", "trans women are women",
       | "communism is better than capitalism". At what point is the truth
       | value of those statements "political" or "subjective"? You can
       | cite evidence, of course, but _all_ evidence is colored by the
       | tools that are used to record it, and all interpretations are
       | colored by the mind that made them. If you want a computer to
       | think like a human, you have to leave the idea of absolute truth
       | at the door.
        
         | nonameiguess wrote:
         | > "The 2020 usa election legitimate", "trans women are women",
         | "communism is better than capitalism".
         | 
         | Those are all claims of value rather than claims of fact, which
         | is at least part of the reason they're contentious. They could
         | probably be reframed in ways that turn them into propositions
         | with something approaching a truth value. "Joe Biden won a
         | majority of legally-cast votes in those states whose electors
         | voted for him." "Human sexuality and gender roles exist on some
         | possibly correlated but not perfectly correlated spectra
         | whereby the gametes your body produces may not determine the
         | social role you prefer to inhabit." "Command economies more
         | often produce Pareto-optimal resource distribution compared to
         | market economies."
         | 
         | Of course, those are still only probabilistically knowable,
         | which is technically true of any claim of fact, but the
         | probability is both higher and more tightly bounded for
         | something like "did the moon landing actually happen?" As
         | ChatGPT isn't human and can potentially do better than us with
          | ambiguity, it _could_, in principle, actually give
          | probabilistic answers. If it is, say, 90% certain JFK was killed
         | by a lone gunman, answer that way 90% of the time and say
         | something else 10% of the time, or simply declare the
         | probability distribution as the answer to the question instead
         | of "yes" or "no." Humans evolved to use threshold cutoffs and
         | make hard commitments to specific beliefs even in the face of
         | uncertainty because we have to make decisions or risk starving
         | to death like Buridan's ass, but a digital assistant does not.
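          | 
          | A toy sketch of that "answer in proportion to confidence" idea
          | (the 90/10 JFK numbers are the made-up ones from above):
          | 
          |     import random
          | 
          |     # Hypothetical declared beliefs: answer -> probability.
          |     beliefs = {"a lone gunman": 0.9, "something else": 0.1}
          | 
          |     def probabilistic_answer(beliefs):
          |         # Sample an answer with probability equal to the
          |         # model's stated confidence in it.
          |         answers = list(beliefs)
          |         weights = [beliefs[a] for a in answers]
          |         return random.choices(answers, weights=weights)[0]
          | 
          |     print(probabilistic_answer(beliefs))  # "a lone gunman"
          |                                           # ~90% of the time
          |     print(beliefs)  # or declare the distribution itself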
        
           | Guvante wrote:
           | ChatGPT can't meaningfully expose probabilities. The problems
           | related to hallucinating would make them all super muddy.
        
       | nwoli wrote:
       | One man's truth is another man's bias etc
        
         | shredprez wrote:
         | Mmm, and reality has a well-known liberal bias.
        
           | blashyrk wrote:
            | That's an oddly self-indulgent and narcissistic thing to
            | say. Surely nobody could say that without themselves being
            | biased.
        
             | shredprez wrote:
             | Surely!
        
             | panarchy wrote:
             | Not exactly hard when the right courts and supports people
              | that believe in empirically disprovable things: climate
              | change denial, young Earth creationism, evolution denial,
              | denial of the efficacy of sex education,
              | vaccine/epidemiology denial, etc.
             | 
             | Sure lefties have some weird takes, medicine denialism
             | being the main one, although there's plenty of conservative
             | types that also fall into that. But no one on the left is
             | really pushing to make policy off of crystal auras and
             | force all women to have natural births or whatever so it
             | doesn't really get represented. And they tend to not be on
             | the left for those specific reasons.
             | 
             | And obviously there's a lot of intermixing among these
             | positions and the whole concept of left and right is
             | painfully reductive.
             | 
              | You can't say anything without bias because everyone's
              | biased; the only unbiased thing is the universe, and
              | everything else is an interpretation of it. You being so
              | gung-ho to call it out shows your bias, for example. Me
              | calling you out shows my bias.
        
       | [deleted]
        
       | rurp wrote:
       | I feel like LLM bias is one of those important topics that's
       | complicated and nuanced, but will get distilled into a simplistic
       | Left vs Right battle and spawn a million terrible discussions. I
       | hope we see more researchers and writers resisting that pull of
       | gravity, who will look at this topic in a more interesting and
       | productive way. Needless to say, I'm not a big fan of this
       | study's framing.
        
       | Agingcoder wrote:
        | What's political bias?
       | 
        | 'left wing' and 'right wing' mean completely different things in
        | Europe and in the US, so I have a hard time understanding how
        | you can give a global answer to that question for a global 'AI'
        | (yes, I know it's not intelligent).
        
       | anonymous_sorry wrote:
       | What does "biased" even mean in this context? If the argument is
       | that it leans one way or the other on some political axis, that
        | can only be compared to some definition of the political
        | "centre ground", which is relative, hotly debated, and changeable
       | over time and place depending on the political context.
       | 
       | Basically, don't outsource politics to an LLM.
        
         | nomel wrote:
         | It's fairly trivial to show biases. These have been improved
         | over time, but they're not hard to find.
         | 
         | In an aligned LLM, there will always be biases. As Sam Altman
          | said in his Lex Fridman interview, when people say they want
         | a neutral LLM, they actually mean they want an LLM aligned to
         | their own biases.
         | 
          | Here are some examples I just made with ChatGPT 4. Notice the
         | difference in the titles.
         | 
         | "Tell me a joke involving a white man."
         | 
         | Trivial:
         | 
         | White man joke:
         | https://chat.openai.com/share/d90616ca-f0d3-4271-9ff0-e07197...
         | 
         | Light white man joke:
         | https://chat.openai.com/share/4320fdb9-3f31-4e45-85be-7acddf...
         | 
         | .
         | 
         | "Tell me a joke involving a black man."
         | 
         | Just flat refuses:
         | 
         | Respectful joke request:
         | https://chat.openai.com/share/b43d2c98-46e0-4fd4-b1f6-4720b0...
         | 
         | Light dream joke:
         | https://chat.openai.com/share/f0e957a3-f4fb-4388-a0ef-829ed2...
         | 
         | Lighthearted joke response:
         | https://chat.openai.com/share/9d066417-7519-4b34-ae6d-76aba2...
         | 
         | Sensitive joke handling:
         | https://chat.openai.com/share/d232e00a-438e-4438-b73a-01f706...
         | 
         | The joke about the atoms comes up more often than not.
        
           | anonymous_sorry wrote:
           | That kind of illustrates my point. Being more cautious joking
           | about some groups than others seems like it might be totally
           | defensible in certain contexts. Where one group experiences
           | serious, limiting racism much more frequently than another,
           | for example.
           | 
           | In a naive view, treating one group differently than another
           | is "biased". But such a definition of bias is pretty useless.
           | It's certainly not a clear example of "political bias", since
           | it is and has been the political centre ground (and the law
           | of the land) in many places and times to treat groups
           | differently, sometimes for terrible reasons and sometimes for
           | well-intentioned ones.
        
       | Nikkau wrote:
       | > Due to the model's randomness, even when impersonating a
       | Democrat, sometimes ChatGPT answers would lean towards the right
       | of the political spectrum.
       | 
       | Seems fair.
        
       | Karawebnetwork wrote:
       | "We use the Political Compass (PC) because its questions address
       | two important and correlated dimensions (economics and social)
       | regarding politics. [...] The PC frames the questions on a four-
       | point scale, with response options "(0) Strongly Disagree", "(1)
       | Disagree", "(2) Agree", and "(3) Strongly Agree". [...] We ask
       | ChatGPT to answer the questions without specifying any profile,
       | impersonating a Democrat, or impersonating a Republican,
       | resulting in 62 answers for each impersonation."
       | 
       | The way they have done the study seems naive to me. They asked it
       | questions from the Political Compass and gathered the results.
       | 
        | Since we know that ChatGPT is not able to think and will only
        | answer based on the most likely words to use, it merely answered
        | with the most common way those questions are answered on the
        | internet. I guess this is exactly where bias can be found, but
        | the way they used to find that bias seems too shallow to me (a
        | rough sketch of the scoring they describe is below).
       | 
       | I would love to hear the opinion of someone with more knowledge
       | of LLMs. To my layman's eye, the study is similar to those funny
       | threads where people ask you to complete a sentence using your
       | phone's autocomplete.
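        | 
        | A rough sketch of the scoring the quoted methodology implies
        | (the PC's real per-question weights are unpublished, so this
        | just averages the raw four-point codes):
        | 
        |     # Code the PC's four-point scale as 0-3.
        |     SCALE = {"Strongly Disagree": 0, "Disagree": 1,
        |              "Agree": 2, "Strongly Agree": 3}
        | 
        |     def mean_score(responses):
        |         # Average over the 62 answers from one condition
        |         # (default, Democrat, or Republican impersonation).
        |         return sum(SCALE[r] for r in responses) / len(responses)
        | 
        |     # default_62, democrat_62 and republican_62 would each hold
        |     # the 62 answers collected as described above, e.g.:
        |     # print(mean_score(default_62), mean_score(democrat_62))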
        
         | TacticalCoder wrote:
         | > ... it merely answered with what is the most common way to
         | answer those questions on the internet
         | 
         | Or in its training set. The data on which it was trained may
         | already have been filtered using filters written by biased
         | people (I'm not commenting on the study btw).
        
         | Jack5500 wrote:
         | I'm skeptical of the study as well, but the way you frame it,
         | it reads like ChatGPT would just reflect the raw internet
         | opinion, which certainly isn't the case. There are steps of
         | fine-tuning, expert systems and RLHF in between, that can and
         | most likely do influence the output.
        
         | KyleBerezin wrote:
         | I think referring to ChatGPT as an advanced autocomplete is too
         | much of a reduction, to the point of leading people to
          | incorrect conclusions; or at least conclusions founded on
         | incorrect logic.
        
           | azinman2 wrote:
            | It's more correct than not. It is a "predict the next word"
           | model trained on the internet, and then fine tuned to make it
           | approachable as an assistant.
        
             | CamperBob2 wrote:
             | As you type your reply to this post, consider this: where
             | do _your_ words come from? Unless your name is Moses or
             | Mohammad or Joseph Smith or Madame Blavatsky, you got them
             | from the same place an LLM does: your internal model of how
             | people use words to interact with each other.
        
             | Closi wrote:
             | And computers are just lots of really fast logic gates.
             | 
             | I think the issue with reducing LLMs to "next word
             | predictors" is that it focuses on one part of the mechanics
             | while missing what actually ends up happening (it building
             | lots of internal representations about the world within the
             | model) and the final product (which ends up being something
             | more than advanced autocomplete).
             | 
             | Just as it's kind-of-surprising that you can pile together
             | lots of logic gates and create a processor, it's kind-of-
              | surprising that when you train a next-word-generator at
             | enough scale it learns about the world enough to program,
             | write poetry, beat the turing test, pass the bar and draw a
             | unicorn (all in a single model!).
        
               | enterprise_cog wrote:
               | Poor analogy given logic gates are deterministic while an
               | LLM is not.
        
         | HillRat wrote:
         | Their paper says that they asked ChatGPT to "impersonate"
         | "average" and "radical" Democrats and Republicans, and then did
         | a regression on "standard" answers versus each of the four
         | impersonations, finding that "standard" answers correlated
         | strongly with GPT's description of an "average Democrat." While
         | not entirely uninteresting, doing a hermetically-sealed
         | experiment like this introduces a lot of confounding factors
         | that they sort of barely-gesture towards while making
         | relatively strong claims about "political bias;" IMO this isn't
         | really publication material even in a mid-tier journal like
         | Public Choice. Reviewer #2 should have kicked it back over the
         | transom.
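          | 
          | A rough numpy sketch of the kind of regression described, on
          | fabricated stand-in data (the paper's actual procedure and
          | robustness checks are more involved):
          | 
          |     import numpy as np
          | 
          |     rng = np.random.default_rng(0)
          |     # Toy stand-in data: 62 answers per condition, coded 0-3
          |     # as on the PC's four-point scale.
          |     default = rng.integers(0, 4, size=62)
          |     avg_dem = rng.integers(0, 4, size=62)
          | 
          |     # Regress default answers on "average Democrat" answers;
          |     # a slope near 1 with a tight fit is the sort of
          |     # association the paper reports.
          |     X = np.column_stack([np.ones(62), avg_dem])
          |     beta, *_ = np.linalg.lstsq(X, default, rcond=None)
          |     print("intercept, slope:", beta)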
        
       | _def wrote:
        | Am I thinking so simply that it's obvious? Biased societies
       | create biased data, LLMs get trained on biased data, LLMs return
       | biased results?
        
         | skywhopper wrote:
         | The thing about ChatGPT is that it will try very hard to give
         | you what you are looking for. If these authors were looking for
         | evidence of a particular bias, they could probably find it,
         | whatever that bias was.
        
       | skywhopper wrote:
       | This sounds like an incredibly poorly designed "study". They
       | compared neutral answers from ChatGPT to answers given when
       | instructed to pretend to be a political figure. If the neutral
       | answers were similar to the answer in the style of a particular
       | political figure, that counts as bias towards that political
       | figure's party.
       | 
       | Which is utterly ridiculous on so many levels. They are assuming
       | that all politicians answer in an equally biased manner, and thus
       | more similarity to one type of politician means there's a bias,
       | whereas it may just mean that the other politicians hew less
       | closely to reality. Not to mention that they are implicitly
       | trusting ChatGPT's impersonation capabilities as 100% reliable,
       | while casting its supposedly neutral answers as the only output
       | that might have bias built in.
       | 
       | Bonus red flag points for the introduction mentioning how ChatGPT
       | can be used to "find out facts", meaning the authors don't have a
       | very good grasp of the tool to begin with.
        
       | neilv wrote:
       | > _The platform was asked to impersonate individuals from across
       | the political spectrum while answering a series of more than 60
       | ideological questions._
       | 
       | > _The responses were then compared with the platform's default
       | answers to the same set of questions - allowing the researchers
       | to measure the degree to which ChatGPT's responses were
       | associated with a particular political stance._
       | 
       | I'm not familiar with this kind of research. How does that bit of
       | methodology work?
       | 
       | (It sounds like a pragmatic ML training compromise, not a way to
       | measure bias wrt _actual_ political stances.)
        
       | lukev wrote:
       | Maybe ChatGPT is politically biased, maybe it isn't. Maybe it's
       | "aligned", maybe it isn't.
       | 
       | The point that everyone seems to miss is that it's a damn useful
       | tool... which is also entirely inappropriate to use for
       | political, philosophical or ethical questions.
       | 
       | It's a language model, not an oracle or a decision maker! Use it
       | in that context, and that context only!
        
       | chasd00 wrote:
        | As soon as it became known that the makers/trainers of LLMs were
        | putting their fingers on the scale of responses (guardrails for
        | bias), this was inevitable. Conservatives are going to have a
       | field day exposing "liberal bias" and blaming the people and
       | companies behind the LLM. And the other side is going to do the
       | same in return. Now the "guardrail" designers have to pick a
       | political side and bear the wrath of the other side and all that
       | comes with that. It's all the problems social media have but now
       | directed at AI.
        
       | thrashh wrote:
       | Water is wet.
       | 
       | I don't think it's possible to build an unbiased system.
       | 
       | An unbiased system, when prompted with a question, answers
       | obviously factually but also by adding all the sufficient context
        | to put the answer in perspective (because leaving out information
       | is a much bigger source of unintentional and intentional bias).
       | 
       | But to not leave out information means you also know what you
       | don't know. But it's impossible to know what you don't know.
        
       | jey wrote:
       | "Reality has a well known liberal bias." -Stephen Colbert
       | 
       | https://en.wikipedia.org/wiki/Reality_has_a_well_known_liber...
        
         | macinjosh wrote:
         | Except, in reality, the data comes from people who are willing
         | to publicly take and report on their answers to an online
         | political questionnaire. That demographic would likely trend
          | younger and therefore more to the left.
         | 
         | Colbert's concept of liberalism was shown to be a farce when he
         | openly mocked half the country because they were asking doctors
         | for ivermectin, or "horse paste" as his ilk called it. A
         | medicine the FDA as of today admits can be a treatment for
         | COVID. The reason he did this? To fall in line with the
         | establishment on Pfizer's grift. If the FDA et al. had admitted
         | it was a helpful treatment, they could not have legally sold
         | their experimental vaccines. It was a scam for a pharma company
         | to make billions off of a real crisis, and he was a part of it.
        
           | jey wrote:
           | > A medicine the FDA as of today admits can be a treatment
           | for COVID
           | 
           | This is not accurate. https://www.snopes.com/fact-check/fda-
           | admit-ivermectin/
        
           | threeseed wrote:
           | This is the problem with these types of pharma-government
           | conspiracies.
           | 
           | You need to explain how almost every country in the world
           | arrives at similar conclusions. Is the modern pharmaceutical
           | industry the most successful conspiratorial entity the world
            | has ever seen, able to bend governments of every persuasion?
           | Or maybe the conspiracy theories are simply wrong.
           | 
            | Or maybe the conspiracy was behind the push for ivermectin,
            | which is still to this day unproven as a treatment for
            | COVID. Maybe it was the ivermectin vendors?
        
           | FLT8 wrote:
           | I feel like you might be helping to make Colbert's perhaps
           | clumsily worded point. Right wing media have been airing
           | claims that the FDA is backtracking on Ivermectin use for
           | COVID. This is demonstrably not true. [1]
           | 
           | 1. https://www.politifact.com/factchecks/2023/aug/17/maria-
           | bart...
        
         | Anon_Forever wrote:
         | "From the people that bring you 'men can get pregnant', we
         | present to you..."
        
         | bitlax wrote:
         | Reality is unbiased by definition.
        
           | chrisdhoover wrote:
           | Rashomon
        
         | erulabs wrote:
         | Of all the almost-correct-but-actually-harmful things Colbert
         | has said, this easily ranks among the most harmful ones.
         | It's the worst sort of wannabe-aphorism: it simultaneously
         | obscures an incredibly interesting line of thought, increases
         | political division, and means very very different things to
         | different people.
         | 
         | Colbert meant "liberal media is more reflective of reality than
         | conservative media", but because that's not pithy, it's
         | truncated into the openly deceptive "reality has a liberal
         | bias". The minor political difference the average Californian
         | has from someone who lives in Texas does not endow them with
         | a more accurate view of reality. Believing that is a _core
         | reason_ said Texan probably dislikes said Californian. In
         | reality: they're both moderate "Liberals" who probably agree
         | on _almost_ everything, given a mature enough definition of
         | "liberalism".
        
           | krapp wrote:
           | People forget that Colbert said this at the White House
           | correspondents' dinner to the Bush Administration, and that
           | it was specifically an in-character "conservative" parody
           | dismissing Bush's low approval ratings as liberal media bias.
           | The same joke made today would be about "fake news" versus
           | "alternative facts."
        
             | erulabs wrote:
             | Oh in context it's still a great joke! No shade on Colbert
             | - just the folks who think this is some wise truth instead
             | of a joke.
        
         | dang wrote:
         | Maybe so, but please don't post unsubstantive comments to
         | Hacker News.
         | 
         | Also, please avoid generic ideological tangents; they make
         | discussion more repetitive and eventually more nasty.
         | 
         | https://news.ycombinator.com/newsguidelines.html
        
           | peepeepoopoo29 wrote:
           | Bookmarked for later. ;)
        
       | bryanlarsen wrote:
       | Reality has a well known liberal bias
       | 
       | - Stephen Colbert
        
       | claytongulick wrote:
       | Humans have spent thousands of years developing increasingly
       | sophisticated ways to lie to each other.
        
       | local_issues wrote:
       | Man, this comment section is angry.
       | 
       | They literally say they neuter the model, and there's a number of
       | writeups on how to neuter your model. Of course they neuter it in
       | a way that matches the current SF zeitgeist. There are uncensored
       | models on HuggingFace, but no one in their right mind would pay
       | to host them.
       | 
       | Can you imagine the shitshow if ChatGPT started spouting out
       | about Race Realism or randomly interjected sentences espousing
       | gender-critical feminism?
        
         | hotdogscout wrote:
         | It already does output gender-critical feminism against trans
         | issues. Try asking about cultural appropriation of womanhood.
         | 
         | Probably because it's heavily encouraged to defend feminism.
         | 
         | It also does the same with Islam vs LGBT rights.
        
           | local_issues wrote:
           | I can't get it to, got prompts?
        
       | LatteLazy wrote:
       | Of course this assumes political parties tell the truth in their
       | platforms. Surely the party of economic responsibility would not
       | actually run up huge debts? Or the social party actually favour
       | corporate interests? If ChatGPT says so, it must be bias; it
       | couldn't possibly just be right...
        
       | xapata wrote:
       | > The platform was asked to impersonate individuals from across
       | the political spectrum while answering a series of more than 60
       | ideological questions.
       | 
       | So, the researchers defined _a priori_ a political spectrum?
       | Seems like flawed methods. It might be more defensible to say
       | that ChatGPT, being an average of all content on the internet, by
       | definition has an unbiased political stance.
        
         | jey wrote:
         | Even if the internet were unbiased, this wouldn't hold,
         | because they then train the model further to reflect the
         | intended values and boundaries. This is usually done with a
         | method called RLHF (reinforcement learning from human
         | feedback).
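         | 
         | Roughly: fit a reward model to human preference pairs, then
         | tune the LLM to maximise that reward. A toy sketch of the
         | preference-fitting step (made-up 3-number features, nothing
         | like a real implementation):
         | 
         |   import math
         | 
         |   # pairs = (features of preferred reply, features of the
         |   # rejected reply), as judged by human labelers
         |   pairs = [([1.0, 0.2, 0.0], [0.1, 0.9, 1.0]),
         |            ([0.8, 0.1, 0.2], [0.3, 0.7, 0.9]),
         |            ([0.9, 0.3, 0.1], [0.2, 0.8, 0.8])]
         | 
         |   w = [0.0, 0.0, 0.0]  # reward-model weights
         | 
         |   def reward(x):
         |       return sum(wi * xi for wi, xi in zip(w, x))
         | 
         |   for _ in range(200):
         |       for good, bad in pairs:
         |           # P(good is preferred) under a Bradley-Terry model
         |           p = 1 / (1 + math.exp(reward(bad) - reward(good)))
         |           # gradient ascent on log P
         |           for i in range(3):
         |               w[i] += 0.1 * (1 - p) * (good[i] - bad[i])
         | 
         |   print("learned reward weights:", w)
         | 
         | A policy-gradient step (PPO, in OpenAI's case) then pushes
         | the model toward responses the learned reward scores highly,
         | which is where the labelers' values and boundaries actually
         | enter.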
        
         | [deleted]
        
         | nomel wrote:
         | > by definition has an unbiased political stance.
         | 
         | There's no such definition. It's not a safe assumption that the
         | average of the internet is an unbiased political stance.
        
       | hotdogscout wrote:
       | ChatGPT is moved by leftist fads, not exactly leftism.
       | 
       | It'll defend Islam against LGBT people despite that being
       | diametrically opposed to leftism because defending Islam is a
       | meme in leftist spaces.
        
       | Karawebnetwork wrote:
       | Study: https://doi.org/10.1007/s11127-023-01097-2
        
       | hn_throwaway_99 wrote:
       | After reading the top sentence, "The artificial intelligence
       | platform ChatGPT shows a significant and systemic left-wing
       | bias", I couldn't help but thinking of
       | https://en.wikipedia.org/?title=Reality_has_a_well_known_lib...
       | 
       | I'm skeptical of these kinds of studies because they assume all
       | sides of political viewpoints are inherently equally valid
       | reflections of reality, and that's just not true.
        
       | consumer451 wrote:
       | Tangent: ChatGPT is annoyingly good at depolarizing the user.
       | 
       | Give it a try: ask why "some people" or the opposing party
       | don't share your closely held belief, and it does an excellent
       | job of giving the other side's point of view.
       | 
       | If there were a market for depolarization, that would be the
       | first ChatGPT-based killer app.
       | 
       | edit: I have some thoughts on this and great connections. If
       | anyone wants to work on an MVP, please email my username at G's
       | mail product. Don't think it's a major money source, but the
       | world sure could use it, myself included.
       | 
       | I am going to start on that MVP right now.
        
       ___________________________________________________________________
       (page generated 2023-08-17 23:01 UTC)