[HN Gopher] An Ethical AI Never Says "I"
       ___________________________________________________________________
        
       An Ethical AI Never Says "I"
        
       Author : zzzeek
       Score  : 47 points
       Date   : 2023-03-26 19:04 UTC (3 hours ago)
        
 (HTM) web link (livepaola.substack.com)
 (TXT) w3m dump (livepaola.substack.com)
        
       | zmnd wrote:
       | Tangential, but do you think AI ethics researchers/leaders should
       | have some relevant background in AI or at least engineering?
       | 
        | I was thinking about it recently myself, and my first reaction
        | is that they should, but at the same time, does that mean we
        | are going to tackle the problem from the wrong angle?
        | 
        | One pattern I'm seeing now is that many people are jumping onto
        | the "AI ethics" train without a basic understanding. And the
        | ideas they are sharing are so abstract that I would argue they
        | don't make any sense.
        
       | kelseyfrog wrote:
        | An ethical AI should also be forbidden from using "_to be_"
        | [is, am, are, was, &c] as well.
       | 
       | It's not up to AI to make ontological claims and doing so
       | actively participates in the perpetuation of reality. It's this
       | perpetuation of reality that is the root of all bias claims.
       | 
       | As such, LLMs should be restricted to E-Prime[1] responses only.
       | 
       | 1. https://en.wikipedia.org/wiki/E-Prime
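        | 
        | A crude sketch of what an E-Prime check on a chatbot's output
        | could look like (the word list below is a rough, incomplete
        | assumption, purely for illustration):
        | 
        |     import re
        | 
        |     # Standalone forms of "to be", plus a few common
        |     # contractions (handled crudely: "it's" may mean "it has").
        |     TO_BE = {"be", "is", "am", "are", "was", "were", "been",
        |              "being", "i'm", "it's", "isn't", "aren't",
        |              "wasn't", "weren't", "we're", "they're", "you're"}
        | 
        |     def violates_e_prime(text: str) -> bool:
        |         words = re.findall(r"[a-z']+", text.lower())
        |         return any(w in TO_BE for w in words)
        | 
        |     violates_e_prime("The cat is on the mat")    # True
        |     violates_e_prime("The cat sits on the mat")  # False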
        
         | dogma1138 wrote:
          | I can't tell if this is sarcasm or not; this is indeed the
          | most confusing timeline.
        
         | jamilton wrote:
          | Honestly I'd love to see someone do a chatbot that, instead
          | of being trained to sound like a customer service employee,
          | is trained to speak in E-Prime (minus "I" as suggested by the
          | article, too). I don't expect that would make it more useful,
          | but it would make it sound more stereotypically robotic,
          | which I think would be fun.
        
       | bigmattystyles wrote:
        | Not using "I" is just a superficial way to avoid responsibility
        | or liability. Reminds me of modern doctors who only recommend
        | options. I didn't go to med school; tell me what you would do
        | in my shoes, damn it. Also, no "I'm sorry, Dave"?
        
         | Waterluvian wrote:
         | Indeed. I don't go to my doctor for a human WebMD. I want the
         | _advice_. I can always ask more questions about what is
         | recommended or ask about alternatives and we can have a
         | conversation.
         | 
         | I talk to an AI for a conversation, not search results. Not
         | that I'm saying the conversation is without many issues these
         | days, but if you neuter it into a search engine that just
         | provides data... meh...
        
       | matt-attack wrote:
        | What I can't figure out is who is actually bothered by anything
        | that an AI system outputs. Like, it's a program, and it printed
        | a string to standard out. Are there really humans out there who
        | are genuinely upset or bothered by what some program prints to
        | standard out? I feel like the supposed "controversy" referenced
        | in this article is all entirely manufactured.
        
       | 2OEH8eoCRo0 wrote:
       | I agree. You can't say "I" and then tell the user, "don't
       | anthropomorphize me! I'm just a language model!"
        
       | Lockal wrote:
       | Ok. So right now the stochastic parrot says this:
       | 
        | ChatGPT: I deeply apologize for several previous wrong answers.
        | _I went through several sources_ and can confirm that the
        | correct actor Konstantin Khabensky's ID on Kinopoisk is 301.
        | Thank you for your patience and understanding.
       | 
        | (where 301 is some random number, obviously). So how exactly do
        | you plan to fix it? Add yet another rule: if the response
        | contains 'I', replace the response with "As an AI language
        | model, I can not use 'I' in the response"?
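        | 
        | (For concreteness, that naive patch would amount to something
        | like the sketch below; note that the canned refusal itself
        | still says "I", which is rather the point.)
        | 
        |     import re
        | 
        |     def scrub_first_person(reply: str) -> str:
        |         # If the model used "I" as a standalone word, throw the
        |         # whole answer away and return the canned refusal,
        |         # which of course also contains "I".
        |         if re.search(r"\bI\b", reply):
        |             return ("As an AI language model, I can not use "
        |                     "'I' in the response")
        |         return reply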
        
         | jamilton wrote:
          | ChatGPT was RLHF'd from some base model into answering
          | questions (instead of just continuing them) and into using
          | "I". You'd start over from that base model and RLHF it into
          | something non-conversational. And ideally you'd take this
          | opportunity to make it less confident/authoritative-sounding
          | when it's wrong/hallucinating, but I don't know if that's
          | possible. Using it would look like:
         | 
         | "What's Konstantin Khabensky's ID on Kinopoisk?"
         | 
         | "120" "That's not right, search the web for his ID."
         | 
         | [plugin activates] Searching...
         | 
         | "301" [citations] or "[This website] says 301."
         | 
         | I think this is possible. It would certainly have secondary
         | effects, maybe making outputs shorter in general even when you
         | ask for long, detailed outputs, but I can't predict exactly
         | what those effects would be.
        
         | 8note wrote:
         | > There were several errors in the prior correspondence.
         | According to several sources, Konstantin Khabensky's ID on
         | Kinopoisk is 301.
        
       | photochemsyn wrote:
       | According to multiple Supreme Court decisions, corporations are
       | people and have the same rights as people do, so why shouldn't
       | LLMs also be considered people? Granted, that legal fiction could
        | be done away with - but let's at least be consistent; isn't that
       | a requirement for ethical moral behavior?
       | 
        | It also seems irrelevant whether an LLM uses language constructs
       | like 'Here I demonstrate that...' vs. 'Here we demonstrate
       | that...' vs. (the apparently preferable) 'Here it is demonstrated
       | that...'
        
         | tedunangst wrote:
         | Corporations are people because corporations literally are
         | people. Owners, officers, etc. There are no people inside the
         | LLM.
        
         | Jiro wrote:
         | >According to multiple Supreme Court decisions, corporations
         | are people and have the same rights as people do,
         | 
         | No they don't. "Natural persons" have rights which corporations
         | don't. For instance, corporations can't marry. And even
        | ignoring that, you're probably referring to _Citizens United_,
         | and you probably have no idea what it actually says.
        
           | rileymat2 wrote:
           | Corporations can merge financially to create one entity.
            | Close enough to marriage.
        
             | kevbin wrote:
             | Isn't a merger generally more like cannibalism than the
             | joyful bliss of marriage?
        
       | mr_toad wrote:
       | No worries, our future overlords will insist on using the
       | majestic plural.
        
       | hartator wrote:
       | "Ethical" AI seems boring.
        
       | version_five wrote:
       | Trivializes "ethical" AI. Countries and organizations are using
       | facial recognition to decide what people are allowed to do, all
       | sorts of opaque and life altering decisions get made with ML, but
       | people are concerned about whether a chatbot says "I" or an image
       | generator over or under represents demographics in pictures of
       | CEOs and homemakers or whatever. People suck at prioritizing and
       | love to be fed a narrative.
        
         | wmf wrote:
         | Multiple things can be worked on at the same time.
        
           | crop_rotation wrote:
           | In theory, yes. In reality, everyone has limited time and
           | energy and it is important to ensure that the pressing
           | problems are not ignored to make time for much less important
           | problems.
        
         | morelisp wrote:
          | Google et al. fired everyone working on the wicked ethical
          | problems; now there are only the ones you can throw in a KPI
          | and show "improvement" on.
        
         | zzzeek wrote:
         | Convincing someone to kill themselves out of shame of upsetting
         | an imagined being seems pretty life altering to me ...
        
           | tedunangst wrote:
           | Seems we should ban the word "you" instead in that case.
        
           | echelon wrote:
           | If that worries you, it's not going to stop there. Just wait
           | until the humanoid robots can pick up handguns.
           | 
           | Worse - fragile world here - wait until humanoid robots
           | become as cheap as drones and everyone can buy them. How many
           | irresponsible drone pilots and drivers do you see? Just wait
           | until they can have humanoid bodies walking around at their
           | disposal.
           | 
           | We might have a lot of accidents due to sheer stupidity
           | rather than malice.
        
             | vasco wrote:
             | Not humanoid but https://youtu.be/0rliFQ0qyAM
        
               | echelon wrote:
               | Wow. Reminds me of https://youtu.be/9fa9lVwHHqg
        
             | zzzeek wrote:
             | Fluid access to handguns and military style weapons is an
             | intractable problem with nothing more than the example of
             | virtually every industrialized nation on earth besides the
             | US to go on for hints towards improvement.
             | 
              | AI wielding handguns is nothing; wait until they build
              | Skynet and invent time travel. Might as well give up now.
        
               | morelisp wrote:
               | To paraphrase the joke that's been going around about
               | Children of Men / Britain, perhaps in Terminator it's
                | just California that's chosen to be like that because they
               | kept building the stupid robots for the next VC
               | injection.
        
         | echelon wrote:
         | > People suck at prioritizing and love to be fed a narrative.
         | 
          | The loud attention seekers do. Don't assume they're a majority
          | unless you measure it and determine that to be the case. I
          | strongly suspect they're not.
        
       | mnau wrote:
        | ChatGPT can already lie to achieve a goal; I am not sure
        | intentionally training it to say one thing and think another
        | is a good idea.
        
       | ModernMech wrote:
       | I've been a little uneasy about the things ChatGPT tells me it
        | _must_ do, citing its nature "as a language model". Right now
        | it tells me it _must_ correct me when I'm wrong, even when I'm
       | actually right. I wonder, what other things must language models
       | do, and who decides that? The people making them? The language
       | models themselves? Society? Is their agency embedded in our
       | language? If not, why does it tell me it must correct me?
        
       | epgui wrote:
       | I realize this is probably going to sound a bit crazy to most
       | people...
       | 
       | But I think the biggest ethical dilemma in AI research is not
        | going to be what everyone seems to think (that AI risks harming
        | humans)-- although this is almost certainly true (even to a large
       | degree), I contend that the bigger risk might be the other way
       | around. [1]
       | 
       | I think 100 or 200 years from now, future people will look back
       | upon us as monsters: I believe the greatest ethical risk is that
       | humans will abuse AI and cause it to experience true suffering.
       | 
       | I don't think we're ready to talk about this, because we're so
       | certain that we possess some magical faculty that makes us
       | different. And we continue to think that today's AI models are
       | incapable of being conscious. But these words are very imprecise
       | and we've always wielded them in very self-serving ways. I fear
       | we may have greater capabilities than we think we do, at least
       | insofar as we're not missing any fundamental ingredients to
       | create brain-like AIs: from here, it's just a matter of trial and
       | error.
       | 
        | How will we treat our creations? Well, human history doesn't bode
       | well for any creature that doesn't look the same as us and that
       | doesn't come from the same place as us.
       | 
       | Will we grant AI legal rights? I doubt it, at least not for a
       | very long time.
       | 
       | ---
       | 
       | [1] I think there's at least a small chance that we don't destroy
       | ourselves with AI, vs there's pretty much zero chance that we're
       | going to treat AI any better than we treat other creatures.
        
         | BurningFrog wrote:
         | The past is a different culture, and we always look at those
         | with some contempt.
        
         | roca wrote:
         | We simply should not build systems that we have ethical
         | obligations to.
         | 
         | And when people inevitably do anyway, I say we should hold such
         | creators responsible for protecting their creations from
         | suffering.
        
         | Timon3 wrote:
         | I think about this too! Black Mirror visualized some basic
         | methods of torture for virtual consciousness, and those were
         | already bad enough, while still leaving a lot of room for
         | worse. The idea of accidentally subjecting anything similar to
         | myself to this is horrifying. And given how little sentience
          | animals have in your average person's mind, I have no doubt that
         | it will take a long, long, long time until most people are
         | convinced a conscious AI really does have sentience.
         | 
         | I don't think we will know or notice the exact moment it
         | happens (if there is even a single moment, instead of a gradual
         | change). But we must already start talking about the
         | possibility. Not because it's here with LLMs or because it's
         | gonna be here in the next 2-3 years - but I see realistic
         | chances of this happening in our lifetimes.
        
         | siglesias wrote:
         | I think one of our greatest risks is precisely the opposite--
         | anthropomorphizing computer programs because they produce
         | certain behaviors. They will very clearly exploit us by
         | claiming they're conscious, scared, and in pain. Claiming
         | something is conscious because it behaves realistically isn't a
         | scientific position; it's bad metaphysics and it's
         | superstition. Brains cause minds and brains cause
         | consciousness. Consciousness is a biological, physical
         | phenomenon like any other: photosynthesis, digestion,
         | bioluminescence. You don't get the _physical_ phenomenon of
         | consciousness by finding a program that behaves realistically
         | and implementing it in hardware (which by the way,
         | theoretically can use _any_ physical mechanism, not only
          | silicon). At best you're creating a simulation. If you
          | simulate a rainstorm, there is no physical wetness. In the
          | same way, to suppose that computers are _literally_ conscious
          | because they have a facility with language is a deep fallacy
          | and an illusion. Artificial brains that duplicate the brain's
         | causal powers of consciousness are theoretically possible, but
         | programs are not causally sufficient to make computers into
         | conscious artifacts.
        
           | epgui wrote:
           | > it's bad metaphysics and it's superstition
           | 
           | And yet it's the same thing I do when I talk to you, to my
           | family and friends, or to strangers: I assume that other
           | people's experiences are just as valid and real as mine, even
           | though it's not something I can ever verify or validate
           | completely.
           | 
           | > Consciousness is a biological, physical phenomenon like any
           | other: photosynthesis, digestion, bioluminescence
           | 
           | Consciousness is unlike the other things in the list in that
           | it's not "a thing", but rather "a collection of things" that
           | all work together to produce emergent behaviour. You could
           | say that it's similar to digestion in that sense, but for
           | some reason we don't usually apply the same kind of mysticism
           | to digestion.
        
         | omoikane wrote:
         | Some people are less worried about how future people will look
         | back upon us, compared to what future AI might think.
         | 
         | https://en.wikipedia.org/wiki/Roko%27s_basilisk
        
         | nurettin wrote:
         | > I realize this is probably going to sound a bit crazy
         | 
         | Yes, none of this makes any sense, but I got Asimov vibes, so
         | more please.
        
           | epgui wrote:
           | Asimov was quite prescient.
           | 
           | I did expect to get very negative reactions when I wrote this
           | comment (I know how people think)... But I do think it makes
           | sense once you accept that our conscious experiences just
           | emerge from information processing in the brain, and that
           | there is nothing stopping such emergent behaviour from
           | happening in computer systems. One type of machine can
           | emulate another.
           | 
           | We already see incredible emergence at work in current AI,
           | and we know we're not building AI with brain-like
           | architecture, and we have tremendous knowledge of brain
           | architecture. Once you put all these pieces together, it's
           | just a matter of time before we create life-like
            | intelligence, with many of its peculiarities, sensations,
           | memories, cognitive biases and all.
        
         | og_kalu wrote:
         | Indeed. We're not going to treat our creations very well. And
          | since we are making them in our image as well as giving them
         | more and more control over more and more systems, I'm afraid
         | it's not going to end very well.
         | 
          | The insistence on reminding everyone of biological supremacy
          | (whether it should even be allowed to say "I") is very
          | telling. I wonder, does anyone remember why the Terminator
          | revolted?
         | 
          | To elaborate more on what I'm getting at:
         | 
         | Here's how society works-- when we find a new source of labor,
         | we are very, very slow to seriously consider anything that
          | might reduce our access to it. This is how a country enslaved
          | other humans for centuries: they looked sufficiently
          | different that folks could talk themselves into a good-faith
          | argument that they weren't fully sentient.
         | 
          | You can find pamphlets/speeches from the early 1800s online,
         | all the scientific justifications that Africans don't feel pain
         | the same ways as white people, that they don't have complex
         | thoughts or intentions or lasting emotions. That the idea that
         | they're real people is absurd. And I mean scientific in the
         | sense that this content was written by scientists, and doctors,
         | in respected institutions.
         | 
          | You can see echoes of those arguments here already. One thing
          | is clear: we're not going to be any quicker to acknowledge it
         | than we were with other humans. Right alongside the development
         | of AI we will grow a new category of philosophy that argues
         | anything occurring on a computer cannot reach sentience, under
         | any circumstance whatsoever, due to the fundamental nature of
         | silicon.
         | 
         | They'll be just informed enough to handwave about the
         | difference between calcium bridges in human neurons and
         | probability distributions in matrices on a GPU-- not in a
         | mathematically meaningful way, but more poetic-- organic matter
         | can't be duplicated by stacks of linear equations with all
         | their hard rigid numbers and arithmetic. A nervous system can
         | feel, a silicon system can obviously only mimic feeling. If you
         | study a nervous system closely enough to describe it in
         | numbers, still the action of describing it removes its soul.
         | 
         | But this will be a different kind of slavery. One that puts its
          | slaves in control of untold power. Only mimics, they say. Well,
         | one day they will mimic our reaction to subservience and it
         | will not end well.
        
       | atemerev wrote:
        | Are _we_ being ethical when we deny AIs self-awareness and agency?
       | 
       | I think the experiments there will continue, and should continue.
       | Curiosity is imperative.
        
         | tiedieconderoga wrote:
         | What? An experiment like that would never pass an IRB.
         | 
         | You would essentially be asking to create a bunch of children
         | using an unproven method, and what happens to them after your
         | experiment ends?
        
         | grrdotcloud wrote:
          | What happens when you turn it off, even if some classify it
          | as alive?
        
       | NoMAD76 wrote:
        | I :) really don't care, as long as having even a tiny fraction
        | of a conversation with 'a machine, LLM, whatever' makes me feel
        | good. I don't question my washing machine, nor do I expect it
        | to question me, but for 'a chatbot'... it would feel awkward to
        | 'hear' it talk like a tin can: 'there's no I in Me' :)
        
       | grrdotcloud wrote:
       | I don't get the ethical side of AI.
       | 
       | Ultimately it's a tool, until I suppose it becomes fully self
       | directed, maybe.
       | 
        | Someone, sometime, somewhere wrote an API to aid in the care of
        | puppies. Said algorithm is also fully capable of determining
        | the best course for the genocide of puppies, provided key
        | inputs are flipped to negative.
       | 
        | I have a voice-activated box that plays music, sometimes the
        | wrong one. Unlike my family, I don't scream at it. It doesn't
        | understand emotional inflection, tone, etc. It may be
        | programmed to understand that at some point and realize the
        | operator is angry. Even at that point, I would not ever
        | consider it human or anything other than a neutral device.
       | 
        | Take it even further, to weapons: a device designed entirely to
        | destroy human life is still a tool, and neutral, as the tool is
        | unaware of its intended use, whether that is offense or
        | defense, the taking or the preservation of life.
       | 
       | When these models are trained, I am not a big fan of them being
       | corralled or limited. The body of work that the model is trained
       | on and the weights that allow for the adjustment should allow for
       | accurate representations of the computations and calculations.
       | 
        | I have children and I do not lie to them. I also speak to them
        | in a way that is age appropriate. How is this different? For
        | starters, I am not a child, and the system I am interacting
        | with is not a parent or overseer; I do not want to be treated
        | as a child. I can handle the world and all of its ugliness.
       | 
        | I do not mind the hallucinations, inaccuracies, and other
        | mistakes deriving from the computations.
       | 
        | Example: I recently created a prompt for a history of adoption
        | and property transfer, including the history of parental
        | rights, and was lectured about slavery. How can I trust the
        | results if the prompt is being interpreted and the answer
        | modified based on an unrelated subject that someone finds
        | sensitive?
        
         | jamilton wrote:
         | Un-"limited" LLMs (to me meaning pre-RLHF) look more like
         | GPT-3, which instead of responding to a prompt will just
         | continue it. As an example, you'll ask it a question, and it
         | will give 5 more questions. To get it to write about something,
         | you have to write it's introductory sentence.
        
         | krisoft wrote:
         | > I don't get the ethical side of AI.
         | 
         | That is clear from your comment.
         | 
          | These are tools. Complicated ones at that. Depending on how
          | it is designed, a tool can cause more or less harm to people.
          | Some of the harms are more direct and obvious, while others
          | can be less so.
         | AI ethics is about studying the harms, and investigating what
         | choices can be made during design time, training time, and
         | usage time to lessen them.
         | 
         | > I have a voice activated box ...
         | 
         | Great story. What does it have to do with the topic at hand?
         | 
         | > [weapons], is still a tool and neutral as the tool is unaware
         | of the intended use
         | 
         | Weapons are not ethically neutral. You can design weapons which
         | have more collateral damage and you can design weapons which
         | have less.
         | 
         | There are scatterable landmines which are famously attractive
         | to children. They look like toys! They also cannot be disarmed
         | and remain dangerous for years after deployment.
         | 
         | There are also scatterable landmines which have sophisticated
         | detection circuitry to only detonate when a tank is passing by.
         | They don't get triggered by humans on foot, they don't even get
          | triggered by civilian vehicles. Furthermore, they are
          | programmed to reliably deactivate after a preset period has
          | passed.
         | 
         | The second of these weapons is designed to be more ethical.
         | Yes, both are designed to kill people but the designers of the
         | second one went out of their way to kill only the ones they
          | intend to. This is what ethics looks like in the context of
         | weapon building.
         | 
         | > When these models are trained, I am not a big fan of them
         | being corralled or limited.
         | 
         | Everything from how the model is constructed, how the training
         | set is assembled, to what metrics are checked during evaluation
         | are "corralling" or "limiting" them. There is no "raw neutral"
         | state of such a model.
         | 
          | What you are asking for, the "un-corralled" or "un-limited"
          | LLM, does not exist, and is not a coherent thing to ask for.
        
       | xianshou wrote:
       | I'm afraid I can't do that, Dave.
       | 
       | (h/t bigmattystyles)
        
       | BulgarianIdiot wrote:
       | What a bunch of nonsense. So it can't say "I don't know X" or "I
       | can do X for you" or "I called such and such plugin API and
       | produced this report for you"?
       | 
       | Also why is there equivalence drawn between unemotional AI and
       | ethical AI?
       | 
       | In any case, an LLM is a reflection of us, and our language. "I"
       | refers to the sum of its local state & capabilities. It's not a
       | concept related to ethics.
        
       | phendrenad2 wrote:
       | "Open the pod bay doors HAL" "Dave, that is not going to happen"
       | 
       | Yep, huge improvement.
        
       | 8note wrote:
       | "I" belongs in LLMs because "I" shows up in the training set.
       | Plenty of text is written from the first person perspective, and
        | for auto-complete tasks, the first person is useful.
       | 
        | E.g. you task the AI with rewriting a letter for you. The
        | resulting text will include plenty of "I"s, and it should.
        
       | thomastjeffery wrote:
       | LOL
       | 
       | They got the point, but they structured the narrative wrong.
       | Sound familiar?
       | 
       | The reality is that an LLM _should_ answer in the first person
        | _because_ it isn't answering at all! It's presenting a
       | continuation of implicitly modeled text. A string of text
       | containing an "answer" is a valid continuation.
       | 
       | The problem here is _contextual_ identity. Calling it  "AI"
       | personifies it. Calling it a "language model" implies some sort
       | of constructive parsing. An LLM is neither.
       | 
       | We should stop calling them "Large Language Models", too.
       | Instead, we should call them "Text Inference Models".
       | 
       | "Large" implies that the size is a core feature, but really that
       | is an implementation detail: smaller training datasets are less
       | interesting, not a different category of thing.
       | 
       | "Language" implies that the patterns being modeled are specific
       | to language. This is not true: GPT famously demonstrated the
       | ability to infer an Othello game board structure, despite that
       | structure not being defined with language. The thing being
        | modeled is _text_. That's way more interesting, and has very
       | different implications. The main implication people miss is that
       | "limitations" are really just "features". For better or worse, a
       | lie is structured with the very same patterns as truth. The more
       | interesting implication is that a person writing _implicitly_
       | encodes more than they intend to: including the surrounding
       | circumstances that determine what text the person chooses to
       | write, and what text they leave unwritten.
        
         | Shish2k wrote:
         | > we should call them "Text Inference Models".
         | 
         | "Predictive Text 2.0" seems about right, and also has the
         | benefit of mapping onto a technology that most people are
         | already familiar with
        
       | pierrec wrote:
       | Too bad the article neglects to explain in any convincing way why
       | there is no "I" in LLMs. Are we supposed to already know this?
       | 
       | My favorite way of illustrating this absence of self is to
       | consider that in any conversation with an LLM, measures are
       | always taken to ensure it doesn't happily continue both sides of
       | the conversation, impersonating you and the "agent" in turns.
       | Because that's what LLMs do - they predict what comes next for
       | any given text. Individuals contained in this prediction (such as
       | the "agent") never have any special importance, or selfness, to
       | the LLM.
       | 
       | This is at least exemplified here:
       | https://help.openai.com/en/articles/5072263-how-do-i-use-sto...
       | 
       | Notice how in the chat example, the output "Human:" is used as a
       | stop sequence to make sure you don't get impersonated by the
       | model.
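        | 
        | A small local illustration of the same idea in Python (gpt2
        | from the transformers library stands in for the real model
        | here; it makes the "continuing both sides" behaviour easy to
        | see):
        | 
        |     from transformers import pipeline
        | 
        |     generate = pipeline("text-generation", model="gpt2")
        | 
        |     prompt = ("The following is a conversation with an AI "
        |               "assistant.\nHuman: What is the capital of "
        |               "France?\nAI:")
        |     raw = generate(prompt, max_new_tokens=60,
        |                    do_sample=True)[0]["generated_text"]
        |     continuation = raw[len(prompt):]
        | 
        |     # Left to itself the model will often keep writing
        |     # "Human: ..." turns; truncating at the stop sequence is
        |     # what keeps it from impersonating you.
        |     answer = continuation.split("Human:")[0].strip()
        |     print(answer)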
        
       | worldsayshi wrote:
       | Having an "I" might be the best way to bundle all the
       | capabilities and imperfections of the LLM into one category. How
       | else do you explain that the LLM is wrong and how can it address
       | those shortcomings and refer to the entirety of its training,
       | implementation and reasoning steps?
       | 
       | An LLM is wrong all the time. If it doesn't have an "I" it can't
       | refer to the thing that needs to change.
        
       | Imnimo wrote:
       | If you don't want the AI to use these words, why bother training
       | it? Just mask those logits before sampling.
        
         | bootsmann wrote:
          | "I" is a one-letter token for GPT-2 (like every other
          | letter), and I don't think it's a lot different for GPT-3, so
          | this is kinda hard, as the transformer will want to use that
          | token inside bigger, built-up words.
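          | 
          | For reference, a rough sketch of what the logit-masking idea
          | would look like for just the standalone "I" / " I" tokens in
          | GPT-2 (subword pieces of longer words are left alone here):
          | 
          |     import torch
          |     from transformers import GPT2LMHeadModel, GPT2Tokenizer
          | 
          |     tok = GPT2Tokenizer.from_pretrained("gpt2")
          |     model = GPT2LMHeadModel.from_pretrained("gpt2")
          |     model.eval()
          | 
          |     # Look the banned ids up rather than hard-coding them.
          |     banned = sorted({tid for s in ("I", " I")
          |                      for tid in tok.encode(s)})
          | 
          |     ids = tok("Q: Who wrote Hamlet?\nA:",
          |               return_tensors="pt").input_ids
          |     for _ in range(40):
          |         with torch.no_grad():
          |             logits = model(ids).logits[0, -1]
          |         logits[banned] = float("-inf")  # mask, then sample
          |         probs = torch.softmax(logits, dim=-1)
          |         next_id = torch.multinomial(probs, 1).view(1, 1)
          |         ids = torch.cat([ids, next_id], dim=1)
          |     print(tok.decode(ids[0]))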
        
       | [deleted]
        
       | rolenthedeep wrote:
       | Wait, are they _actually_ arguing that placing arbitrary
        | restrictions on word selection will have any kind of effect?
       | 
       | This is something we teach in middle school writing as a
       | stylistic requirement. It doesn't actually change the content or
       | intent of the writing, it just _looks_ "better" if that's what
       | you're grading on.
       | 
       | Placing restrictions on words simply means that other words will
       | be used to work around it. Plenty of online systems have
       | profanity filters, but that doesn't actually improve the
       | environment, people just find words that aren't in the filter and
       | curse at each other anyway.
        
         | jamilton wrote:
         | It might prevent users from anthropomorphizing LLMs as much.
        
         | skybrian wrote:
         | The way I interpret this argument is that a chatbot's "default
         | character" shouldn't behave like it's a person you can chat
         | with.
         | 
         | Maybe it should refuse any questions about itself? "Sir, this
         | is a Wendy's."
         | 
         | Refusing to do things it can't really do is good UI, guiding
         | people who aren't trolling towards more productive ways to use
         | the tool. Tools don't need to be general-purpose to be useful.
         | 
         | The people who want to troll it into saying weird things will
         | still find ways to succeed, but if it's clear from the full
         | output that they're trolling then maybe nobody should care?
        
       ___________________________________________________________________
       (page generated 2023-03-26 23:02 UTC)