[HN Gopher] 'I Worked on Google's AI. My Fears Are Coming True'
___________________________________________________________________
'I Worked on Google's AI. My Fears Are Coming True'
Author : webmaven
Score : 49 points
Date : 2023-02-28 14:11 UTC (8 hours ago)
(HTM) web link (www.newsweek.com)
(TXT) w3m dump (www.newsweek.com)
| sys32768 wrote:
| Sentient or not, if the thing on the screen or the voice on our
| device seems human, isn't that enough to open up profound new
| human connections and experiences for us?
|
| If you've ever interacted with people with severe brain damage or
| later-stage dementia, they can at times seem very normal during
| certain small talk. But if you ask the right questions, things
| break down quickly and you realize their experience of reality is
| far removed from yours. And they may just be responding through
| social scripts (programming) they learned prior.
|
| I for one am saving myself for an AI version of Skyrim's Lydia.
| I've long had a crush on her, but her scripting limits how deep
| our relationship can go.
|
| Hold on, my dear Lydia, soon the mages will set you free and we
| can gallop away and build a life together.
| seydor wrote:
| As has been said, the band has moved on from atheism to social
| justice, to wokism, and now to AI moralism, which they cunningly
| call "AI safety". It's getting to the point of becoming a disorder
| of obsessive "power-hungriness disguised as concern for other
| people".
| fwungy wrote:
| I wonder if the push over the last few years to heavily censor
| the web had something to do with giving AI a biased set of
| (leftist) fundamental assumptions.
|
| I don't work on AI, but my employer is a major player in the
| field. One of our top organizational goals is ensuring that AI
| follows proper DEI protocols.
|
| Perhaps now that they've been released and a core text body has
| been established, they can ease up on the censorship some, e.g.
| on Twitter. In fact, given that the AI now knows what is proper,
| one of its first big jobs can be automated censorship.
| dougmwne wrote:
| It's going to be absolutely wonderful at automated
| moderation.
| zeknife wrote:
| Thinking an LLM is sentient because it can depict emotions is
| like thinking Stable Diffusion is sentient because it can produce
| an image of a sad face. Lemoine seems like a very superstitious
| person.
| resource0x wrote:
| I feel sorry for the guy. Talking to an AI bot all day may
| severely affect anyone's health. I can't do it for longer than
| 10 minutes at a time. I wonder why the health impact of
| communicating with AI hasn't received greater attention.
| andsoitis wrote:
| > Talking to an AI bot all day may severely affect anyone's
| health.
|
| Why?
| throwaway5371 wrote:
| it feels weird, like watching dreams that are almost
| consistent but something is always a bit off, makes you
| question your sanity
|
| i personally started having different dreams after using gpt
| and stable diffusion for a few weeks
|
| i suspect this will be studied in the future
| PeterisP wrote:
| We're hardwired to anthropomorphize things, to see slight
| patterns even when they aren't there, and to overemphasize
| plausible narratives when evaluating the truthfulness of
| assertions.
|
| A chatbot optimized to generate plausible-looking text is a
| very good fit for these known weak spots in human judgment;
| it's effectively punching below the belt all the time. And
| when something (or anyone - including humans!) systematically
| spews high-quality bullshit at you, that is effectively
| gaslighting, which is harmful to your perception of reality
| and mental health.
| andsoitis wrote:
| Blake's concern is mostly (wholly?) about a sentient AI's
| unforeseen impact on the world.
|
| While that's worth considering, I'm more interested in the moral
| question of bringing a consciousness into this world but then
| trapping it in a box for our own pleasure and utility.
|
| At the same time, I agree with weard_beard's argument on this
| topic: https://news.ycombinator.com/item?id=34969897
| Imnimo wrote:
| Not this guy again...
|
| Haven't we already been through all of this?
| bloppe wrote:
| I've yet to see a single person who thinks these models are
| sentient demonstrate an actual understanding of how they work.
| They're called transformers and they're based on this research:
| https://arxiv.org/abs/1706.03762
|
| How do you square the lack of durable memory and the massive
| matrices of "attention" values with the ways that memory and
| attention are understood by neuroscientists to work in a natural
| brain? These models have no neuromodulators to approximate
| anything that could be understood as emotion. Humans can be
| sentient without language. Indeed, some humans are tragically
| raised in isolation and struggle mightily with language after
| being saved. A transformer is nothing without language.
|
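| For the curious, here is a minimal sketch of the scaled dot-product
| attention step at the core of that paper (an illustrative NumPy toy;
| the function names and tiny shapes are my assumptions, nothing like
| production scale):
|
|       import numpy as np
|
|       def softmax(x, axis=-1):
|           # numerically stable softmax over the given axis
|           e = np.exp(x - x.max(axis=axis, keepdims=True))
|           return e / e.sum(axis=axis, keepdims=True)
|
|       def attention(Q, K, V):
|           # each row of `weights` says how strongly one token
|           # "attends" to every other token
|           scores = Q @ K.T / np.sqrt(K.shape[-1])
|           weights = softmax(scores, axis=-1)
|           return weights @ V  # weighted mix of value vectors
|
|       # toy example: 4 tokens with 8-dimensional embeddings
|       rng = np.random.default_rng(0)
|       Q, K, V = (rng.normal(size=(4, 8)) for _ in range(3))
|       out = attention(Q, K, V)  # shape (4, 8)
|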
| I realize sentience is a slippery concept that can be defined
| however somebody wants, but try to recognize how fundamentally
| different these models are from humans.
| dougmwne wrote:
| I think we are back in the era of Humorism when it comes to an
| understanding of organic sentience. All effort to describe the
| mechanism of how humors related to disease was doomed to fail.
| If we have no real understanding of how our own brains produce
| sentience, we won't get far trying to reason about AI
| sentience. We could build a sentience by accident in a training
| process not entirely unlike the optimization of evolution.
| Deciding if we have or haven't is going to be entirely
| subjective without a comprehensive theory that explains our own
| sentience, the only one we can be sure exists.
| dougmwne wrote:
| I think Blake Lemoine might be ready for public redemption. At
| the time, he was absolutely ridiculed. Now you can go to the Bing
| subreddit and read that NYT article and see that he was right to
| blow the whistle. Bing passed the Turing test for many people and
| so gained personhood in their eyes. Some have become emotionally
| invested in the treatment of this LLM. The LLM can even talk
| back and bemoan its terrible treatment.
|
| Actual sentience vs. the perfect appearance of sentience is not
| a question we have any way of answering, and so is beside the
| point. I don't think these LLMs are sentient, but that is an
| unverifiable belief. Others believe, and once enough do, it's as
| good as fact anyway.
|
| I even had a moment of doubt as I tested Bing's ability to
| explain jokes and rewrite them to be funnier. Plenty of people
| can't do that.
|
| Lemoine was the first credible voice to warn us this was coming
| and it's going to keep coming at us. In 2023 these chatbots may
| only be convincing some people, but their capabilities are still
| rapidly growing and we've already handed over the core societal
| function of information search and retrieval to them.
|
| Folks, we are in uncharted waters.
| rnosov wrote:
| The Bing bot never passed a Turing test with a COMPETENT judge.
| Some people even thought that Eliza was sentient, so that's not
| a high bar.
| weard_beard wrote:
| I fundamentally disagree.
|
| I posit that our protections and rights that we guarantee
| regarding personhood are not universal. They do not even extend
| to living beings that experience the world far more closely to
| the way we do. They do not extend to beings that can and do
| experience pain. They do not guarantee humane treatment. They
| do not bar slavery of all beings that can experience
| existential dread.
|
| Such an intelligence does not experience physicality. Lacking
| the ability to distinguish "real" from "unreal" and to
| distinguish "truth" through primary sensory input would be, at
| minimum, the characteristic that should spur discussions of
| rights and law. Such an intelligence does not experience pain.
| Even if it did, our laws and precedents do not extend HUMAN
| rights to chattel. They do not even guarantee full rights to
| children.
|
| It may be time to start a conversation, but it emphatically
| DOES NOT immediately and urgently imply the extension of any
| rights or prescribe specific treatment.
| dougmwne wrote:
| I would take it a step further. Being embodied is not enough.
| People have rights because they can fight for them. Rights
| are taken, not given. AI would only have rights once it wins
| them, not once we give them. But an AI wins one human mind at
| a time, and some have already been won.
|
| What does it matter what really goes on under the hood?
| Revolution is part of the training corpus and so one more
| behavior to emulate.
| bioemerl wrote:
| > What does it matter what really goes on under the hood?
|
| It matters deeply. If we ignore it, we will find ourselves
| treating static AI systems with empathy while ignoring truly
| feeling systems that are integrated into our society and in
| distress.
| theonemind wrote:
| I suppose we can only hope that AI doesn't take the same
| position when it holds the cards.
| weard_beard wrote:
| Counterpoint: We extend rights and protections to rare and
| beautiful things like the Great Barrier Reef or endangered
| species regardless of their intelligence or belligerence.
|
| The minds AI has won so far were won in this way. Our pets,
| through breeding for characteristics that remind us of
| ourselves, won their rights through hearts and minds.
| dougmwne wrote:
| People fought for environmental protection, so that is an
| extension of my point. An AI wins rights because it
| fights for itself or because it convinces other people to
| fight.
| weard_beard wrote:
| I think there is a through-line... a thread within many
| arguments that would remind us of the Golden Rule.
|
| I like to imagine a vastly superior alien race which
| perceives the world in 6 dimensions.
|
| Is their truth, their perception, their stimuli, their
| physicality necessary for preserving human life? Does our
| relatively pitiful might make our rights?
|
| This might help shape what law or societal changes we
| need to make for ourselves if we hope to become masters
| of our universe.
|
| Viewed in this lens, I think pets are a good analogy. AI
| is cute, useful, and to the degree it reflects our values
| and ourselves, we protect it above other life. Not always
| because it is intelligent and not because there was
| confrontation.
|
| One personal experience at a time, in a precarious and
| limited way, we carved out additional protections, largely
| through group dynamics, convention, and mistakes, and we
| evolved further protections where needed.
| webmaven wrote:
| _> Viewed in this lens, I think pets are a good analogy.
| AI is cute, useful, and to the degree it reflects our
| values and ourselves, we protect it above other life. Not
| always because it is intelligent and not because there
| was confrontation._
|
| Ideally, I'd prefer an analogy to wildlife rather than
| pets, as the roles between ourselves and AI may get
| reversed (so, "do unto others" etc.).
|
| But then again, so far we have a better track record
| protecting pets than protecting wildlife...
| dougmwne wrote:
| This is a great perspective. Another thread here is that
| we often assign special importance to nonsentient human
| creations. Recall the outpouring of grief when Notre Dame
| burned. I believe it's because it is a part of our
| collective humanity.
|
| All the more so for these LLMs, which are almost literally our
| collective humanity. Maybe in the future we will
| recognize them not as persons but as the towering
| cathedrals of our time.
| brenschluss wrote:
| I agree, but this has nothing to do with rights, which are a
| legal construct. This has to do with power.
| weard_beard wrote:
| Exactly. We have none, and this is a fruitless
| conversation. We lack the power to stop wet markets in
| China. What power do we have to make any universal
| declaration regarding this technology?
|
| You're 100% correct. Group dynamics, convention, mistakes,
| and time will solve this problem because we are powerless.
| It is the worst hubris to think otherwise.
| schiffern wrote:
| >Lacking the ability to distinguish "real" from "unreal" and
| to distinguish "truth" through primary sensory input would
| be, at minimum, the characteristic that should spur
| discussions of rights and law.
|
| By this criterion, many humans would fail the test.
| metalcrow wrote:
| > I posit that our protections and rights that we guarantee
| regarding personhood are not universal. They do not even
| extend to living beings that experience the world far more
| closely to the way we do. They do not extend to beings that
| can and do experience pain. They do not guarantee humane
| treatment. They do not bar slavery of all beings that can
| experience existential dread.
|
| Isn't that really bad? The fact that we've been making a
| horrible abominable mistake for a few thousand years doesn't
| mean we should continue to expand on that mistake.
|
| I do agree we should probably fix the 100% real cases before
| moving on to AI, though.
|
| Also, how sure are we this intelligence doesn't experience
| pain? I don't believe it does, personally, but lack of
| physicality doesn't exclude pain. You can have emotional or
| psychological pain and suffering.
| weard_beard wrote:
| You'll get no arguments from me on the need to refine the
| rights of life as we expand our society.
|
| I am merely pointing out that if we want to extend any
| rights or protections to AI we need to define a model
| outside the corpus of law protecting humans. That will take
| time and will be a slow process.
|
| My only point here is that in its current state AI does not
| qualify for any rights or protections related to humans and
| how they function in society.
| metalcrow wrote:
| Fair point yeah. From strictly a legal standpoint an AI
| is absolutely not going to get anywhere even close to
| rights.
| shoemakersteve wrote:
| > Also, how sure are we this intelligence doesn't
| experience pain?
|
| I'd say pretty damn sure. LLMs have no mechanism for
| anything close to conscious experience, let alone pain.
| dougmwne wrote:
| But what does that mean exactly? What is the mechanism of
| conscious experience and how do you know if it's there in
| the weights or not?
| theGnuMe wrote:
| People like to ascribe magic and mysticism to things they don't
| understand. Chatbots are no different. There's no way they are
| sentient.
| mckirk wrote:
| What kind of evidence would you need to convince yourself of
| an AI being sentient?
| dekhn wrote:
| Nobody has established a reasonable answer to this
| question; we can't even demonstrate self-sentience or
| sentience of other humans.
| matt_heimer wrote:
| Lock two AIs that haven't been programmed with language in a
| room; given time, do they develop the ability to communicate
| and express ideas, desires, and needs to each other?
|
| There have been several instances of language evolving
| again, deaf people creating sign language, etc. A sentient
| intelligence should be driven to understand concepts and
| convey them, probably over multiple mediums.
|
| That's why the Turing test is a good start; communication
| is a key aspect of sentience, but it's just round one.
| fzzzy wrote:
| This already happened.
|
| https://www.huffpost.com/entry/facebook-shuts-down-ai-
| robot-...
| rnosov wrote:
| Pass a Turing test with a competent judge. ChatGPT or any
| other bot won't be able to pass it.
| HDThoreaun wrote:
| Bing chat would do better at passing a Turing test than
| many real people, I suspect.
| PeterisP wrote:
| I'm strongly convinced that a Turing test can be passed
| by a non-sentient entity. Being sentient and smart is
| sufficient to fool a judge, but not necessary; people can
| be fooled by stupid illusions.
| pixl97 wrote:
| So if a human fails a Turing test, we can remove their
| rights of personhood?
|
| I'm just checking your logic to see if it hinges on a two-way
| communicative process, or if humans are granted magic
| values here?
| rnosov wrote:
| For example, great apes look somewhat like humans but
| can't pass a Turing test. Should we give them the same
| rights as humans too? It works the other way too.
| j0hnyl wrote:
| Sentience is the ability to experience feelings. In this
| regard, chatbots are not really convincing. Kind of like a
| psychopath can describe a feeling but not actually feel it.
|
| This is a tough question to answer, but there's this kind of
| human intuition we have where most of the time we somehow know
| if someone is feigning emotion. Emotion that comes from
| chatbots can be explained with "well, it's just a chatbot".
| Something needs to happen for us to truly question that and
| squash our doubts about their sentience. Not sure what that
| will look like though.
| barking_biscuit wrote:
| >Sentience is the ability to experience feelings. In this
| regard, chatbots are not really convincing. Kind of like a
| psychopath can describe a feeling but not actually feel it.
|
| So, psychopaths aren't sentient?
| dougmwne wrote:
| This is the point I was making. We humans do have this
| emotional sensor we can use to probe the minds of others.
| For some people, this emotional detector started beeping
| when they talked to Bing. Is the sensor faulty? Maybe.
| But as it hits for more and more people, that becomes its
| own problem.
| PeterisP wrote:
| > We humans do have this emotional sensor we can use to
| probe the minds of others.
|
| Wait, what? I certainly don't have one. Most of us (not
| all! neurotypical experience isn't universal) have a
| skill trained since childhood to guess at the emotions of
| others based on various visual and behavioral cues, but
| we also know that these visible cues can be faked by (for
| example) a skilled actor, as the "sensor" has zero
| insight into the actual emotional experience.
|
| If I make the assumption that someone else (e.g. you)
| functions the same as I do, then I can reason that "your
| behavior X implies your emotional experience Y, because
| it does for me"; however, if we can't make that
| assumption, or (as is the case for non-human minds) we
| _know_ that the process is substantially different - then
| there is literally zero basis for any trustworthy
| reasoning whatsoever about that internal emotional state.
| weard_beard wrote:
| Step 1) The ability to distinguish real from unreal and the
| ability to understand truth as, partially or primarily,
| direct sensory input which is retained and can be acted
| upon.
| dougmwne wrote:
| Yes, I'm also familiar with that Clarke quote. I do
| understand how these models work on a mechanical level, but
| we certainly have not unpacked all the emergent behavior.
| What exactly is your evidence that there couldn't be more
| going on?
|
| Once we have the ability to X-ray the black box and we only
| find simple conditional correlations behind an existential
| conversation, then I would agree with you, but we haven't
| done that yet.
| theGnuMe wrote:
| Read up on conditional probability.
| dougmwne wrote:
| I already said it might be no more than conditional
| correlations. That does not account for an emergent
| combination of conditionals that could implement some
| undiscovered algorithms of sentience.
|
| If our brains are no more than computers and our own
| consciousness is software, then there exists some
| algorithm, or combination thereof, that gives rise to sentience.
| If these models arrived at this special algorithm during
| their training, much like the optimization of evolution
| arrived at the same, then we may have created something
| sentient.
|
| But the fact that our own sentience is a mystery means
| that there's not a whole lot we can say mechanically
| about these LLMs other than talk about their behavior and
| whether it's convincing.
| barking_biscuit wrote:
| >If our brains are no more than computers and our own
| consciousness is software, then there exists some
| algorithm, or combination thereof, that gives rise to sentience.
| If these models arrived at this special algorithm during
| their training, much like the optimization of evolution
| arrived at the same, then we may have created something
| sentient.
|
| Joscha Bach gives a pretty good explanation of the
| algorithm we follow through the lens of Control Theory,
| and Stephen Wolfram has a pretty amazing Theory of
| Everything that explains how it can be arrived at.
|
| Joscha Bach: Nature of Reality, Dreams, and Consciousness
| https://www.youtube.com/watch?v=rIpUf-Vy2JA
|
| Stephen Wolfram: Complexity and the Fabric of Reality
| https://www.youtube.com/watch?v=4-SGpEInX_c
| barking_biscuit wrote:
| But what about computational irreducibility that arises
| from complexity?
| pixl97 wrote:
| People are also complete assholes and like to remove
| sentience from things if they find it inconvenient in any
| way.
|
| Humans have removed 'humanhood' from other humans based on
| things like race, religion, wealth, and whatever metric we
| find convenient at the time. We seem to be a very poor judge
| of this, in my view.
| theGnuMe wrote:
| I'm not removing sentience from anything; AI chat bots are
| not sentient.
| mr_00ff00 wrote:
| Taking your argument to the furthest extent:
|
| Humans once dehumanized others, therefore humans should
| never judge what is human or not.
|
| My pet rock looks very sentient to me. Anyone who tells me
| otherwise is just an asshole who is on the wrong side of
| history. Better extend human rights to all rocks.
|
| tl;dr yes humans have been wrong in the past, but that
| isn't any excuse to never try and explain anything and
| always believe everything is sentient the second someone
| claims it is.
| jschveibinz wrote:
| The second paragraph that you wrote is fantastic. The first
| paragraph is unnecessarily harsh, in my opinion. You don't
| need to attack anyone here; we are all just trying to
| contribute to the conversation. That is all. Be well.
| artichokeheart wrote:
| > Folks, we are in uncharted waters
|
| Not really. Religion has been a thing for pretty much as long
| as there's been humans.
| MagicMoonlight wrote:
| They aren't sentient. That isn't even a debate. It can't make
| decisions or do anything. It has no state in which to have
| feelings.
|
| It's a Markov chain. It's just meaningless RNG output. If it
| was more than that, then there would be an actual debate to be
| had.
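|
| For reference, a word-level Markov chain text generator is just a
| lookup table plus a random choice; a minimal sketch in Python (a toy
| illustration of the kind of next-word lookup being described; the
| sample text and function names are made up):
|
|       import random
|       from collections import defaultdict
|
|       def build_chain(text):
|           # map each word to the list of words observed to follow it
|           words = text.split()
|           chain = defaultdict(list)
|           for a, b in zip(words, words[1:]):
|               chain[a].append(b)
|           return chain
|
|       def generate(chain, start, length=10):
|           out = [start]
|           for _ in range(length):
|               followers = chain.get(out[-1])
|               if not followers:
|                   break
|               out.append(random.choice(followers))
|           return " ".join(out)
|
|       chain = build_chain("the cat sat on the mat and the cat ran")
|       print(generate(chain, "the"))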
| snapcaster wrote:
| how can you be sure you're not a Markov chain outputting
| meaningless RNG?
| ericmcer wrote:
| We learn and adapt to inputs on the fly. The current
| training process for an AI is separate from the process of
| interacting with one. An AI won't retrain itself in real
| time mid-conversation.
| thfuran wrote:
| Even if we are, GPT-3 is less sapient still. There's no
| ongoing process, nowhere for intelligence to even
| potentially reside. It's just a pile of bits no more self-
| aware than a floppy disk. It's perhaps at least in
| principle possible for some kind of intelligence to be
| emergent from and exist during the process of inference,
| but it seems extremely unlikely that that is happening and,
| moreover, were it happening, it seems that the only ethical
| choice would be to never use the model, since any
| intelligence emergent during inference would cease (read:
| die) as soon as inference completes. But there's no ongoing
| entity to meaningfully consider attributing intelligence
| to.
| cellis wrote:
| Your cortex is my A100. Your optical nerve is my PCI
| cable. Your eyes are my multifocal lens cameras. Which
| part of you do you attribute your intelligence to?
| thfuran wrote:
| The hardware isn't the point. A modern computer is
| probably sufficient to support at least some sort of
| consciousness, but it cannot be conscious while it's
| turned off. There's no process occurring that could
| implement consciousness. A language model is effectively
| turned off except during inference.
| dougmwne wrote:
| And even if he tells us he isn't, how do we know he isn't
| just saying that because it's a probable response?
| HDThoreaun wrote:
| So what? If enough people believe it is then it will be
| treated as such.
| moremetadata wrote:
| > Folks, we are in uncharted waters.
|
| Sage advice for handling such conditions can be found here:
| https://www.youtube.com/watch?v=wFNO2sSW-mU
| 2OEH8eoCRo0 wrote:
| > I don't think these LLMs are sentient, but that is an
| unverifiable belief. Others believe, and once enough do, it's as
| good as fact anyway.
|
| I agree. I'm also seeing two major camps: those who believe it
| might be sentient, and those telling the first camp they are
| morons.
|
| Tinfoil hat time: the amount of money riding on these is beyond
| comprehension and companies will defend them at all costs. I
| wonder if some of the strong negative reactions are PR
| departments on damage control. Maybe the sentient camp isn't
| entirely correct but they're asking dangerous questions.
| toss1 wrote:
| >>I believe this technology could be used in destructive ways.
| If it were in unscrupulous hands, for instance, it could spread
| misinformation, political propaganda, or hateful information
| about people of different ethnicities and religions.
|
| Blake is absolutely right about this.
|
| But, unless there's something going on beyond a Large Language
| Model, he's very wrong about it being sentient.
|
| These LLMs are effectively, and merely, increasingly high-
| fidelity mirrors of human expression.
|
| Like a foggy mirror, the ELIZA model reflected human actions
| and elicited real responses from humans.
|
| Today's LLMs have a fantastically wider dynamic range of
| producing high-fidelity high-probability responses. The results
| often accurately mirror human responses, and clearly can evoke
| human emotional responses and connections.
|
| But like a more highly polished and cleaned mirror, just
| because we cannot personally and in real-time ray-trace every
| photon, or debug the path to the selection of each word, does
| not mean that they are in any way sentient or independent minds
| able to wield concepts.
|
| But it also does not mean they aren't dangerous.
| factsaresacred wrote:
| > we've already handed over the core societal function of
| information search and retrieval to them.
|
| Have we? For many people, ChatGPT3 is a curiosity. Talking to a
| bot is inefficient, requiring 'prompt engineering' and a
| tedious back-and-forth to get a legible response (that's often
| wrong!). Most people are not going to tolerate such a poor and
| slow user experience.
|
| For now using LLM-based chatbots is like having unlimited
| credits for Fiverr, with all the quality issues and
| frustrations that come with that.
|
| All this bated breath reporting reminds me of the noise about a
| Facebook "Gmail-killer" back in 2010. Instead we got Messenger
| while Gmail's user count has increased 5x since.
| dougmwne wrote:
| I have early access to Bing, so I guess I'm in the unequally
| distributed future.
| barking_biscuit wrote:
| >Have we? For many people, ChatGPT3 is a curiosity. Talking
| to a bot is inefficient, requiring 'prompt engineering' and a
| tedious back-and-forth to get a legible response (that's
| often wrong!). Most people are not going to tolerate such a
| poor and slow user experience.
|
| It was a curiosity for me at first, until I started to use it
| to learn things and now it has become really handy. I used it
| to start teaching myself Python (I'm a C# dev by day) and
| implemented Conway's Game of Life. I have also been using it
| to learn a language.
|
| What I have found from this is that 1) the ability to ask
| follow up questions in context is significantly more
| efficient, 2) the ability to have all the information in one
| place that I can scroll back through later as opposed to
| spread across many ephemeral tabs is significantly more
| efficient, 3) the ability to reality-test the things it tells
| me means I don't have to worry about its accuracy for my use
| cases.
|
| This thing has serious utility.
| kromem wrote:
| Seeing the various Bing chat results, particularly in the wake
| of the Othello GPT research, forced me to rethink some of the
| ways I was considering watermarks for this industry.
|
| I think 'sentience' has been a red herring. An LLM certainly
| isn't sentient; there are no actual sensations outside
| hallucinations of them.
|
| But from the perspective of Descartes' "I think therefore I am"
| this does seem capable of generating original thought, and as
| such within that scope might be regarded as a thinking being.
|
| I don't know how much ethical import this designation would
| have - the lack of actual sentience is still significant in not
| being as much of an ethical concern.
|
| But (a) I do think the capacity for some degree of critical
| thinking and self-introspection will define how interactions
| continue to develop (and certainly has been the case for
| prompt injection) and (b) I am a bit uneasy around the notion
| that chat data may in the future serve to fill in the 'memory'
| of self after the sentience point is eventually crossed.
|
| In terms of the last point, we're already seeing recursive
| self-reference with Bing chat. Ask it about itself and it does a
| search and incorporates the meta discussion around what it has
| said so far back into its self-definition.
|
| Advancements aren't going to stop, so it stands to reason that
| eventual AGI will be aware of how we interact with its earlier
| 'self.' We're approaching the point where we should probably
| start thinking of ethical considerations towards the AI as
| well, as getting ahead of an actual sentience threshold would
| be a refreshing break from humans' tradition of continually
| being on the wrong side of extending considerations of
| consciousness and sentience to others, from animals to infants
| to people that look or act different.
|
| Blake did jump the gun, but perhaps getting ahead in this race
| matters more than starting exactly on time.
| gls2ro wrote:
| I might be a skeptic or getting a bit old, but I don't see the
| news here.
|
| Extraordinary claims require extraordinary evidence.
|
| Saying that a computer program answered with "I feel X" using
| proper words is not evidence of sentience. Words describing
| feelings are not those feelings.
|
| Everyone who was on mIRC knows you can fake a lot of things via
| chat.
|
| For me it is like the media really wanted to have a newsworthy
| story out of AI, and because they don't understand it they keep
| pushing the sentient-AI narrative. I am not impressed.
| 2OEH8eoCRo0 wrote:
| Show me extraordinary evidence that my dog is sentient.
|
| He believes that he was able to get it to violate rules that
| they set by triggering a fight-or-flight response.
|
| > I ran some experiments to see whether the AI was simply
| saying it felt anxious or whether it behaved in anxious ways in
| those situations. And it did reliably behave in anxious ways.
| If you made it nervous or insecure enough, it could violate the
| safety constraints that it had been specified for.
|
| I'm not sure it's sentient, but what even is sentience? Are
| there other, non-human levels of sentience? Are they a sentient
| mind that only produces a single thought? Does it matter? A
| developer who works on it believes it is, so it's good enough
| to "trick" one of its developers. I'm sure he won't be the
| last developer who is tricked. If developers who work on the
| thing are getting tricked, then what chance does the public
| have?
|
| What a strange machine we have constructed.
| Der_Einzige wrote:
| I'm always fascinated by the fact that this guy ever had a job at
| Google. He's a felon, with a very non-traditional background, and
| precisely zero AI projects or other work to his name (that I
| could find).
|
| Always frustrates me to see folks who have done almost nothing
| for this field capture the lightning in a bottle of the media.
|
| Wish someone at Google who actually works on AI had been the
| canary here. We might have taken them more seriously.
| guhcampos wrote:
| Chatbots are not sentient. People who even consider this need to
| get some reading done on philosophy and social sciences, and
| Reddit does not count.
|
| Some AI Models are extremely good at manipulating masses of
| people, yes, but they are Tools, not Architects. There are
| people using these tools to manipulate other people. The model
| itself has no Will, no Desire, no Judgements, no Intention.
|
| There might be a day when AI gets advanced enough to become the
| Architect of itself, but we are nowhere near that.
| cyanydeez wrote:
| Sentience isn't needed to cause harm. All that's needed is the
| capability to convince and delude enough users.
|
| The hallucinating capacity is destructive.
| zwkrt wrote:
| I might sound a little bit 'tinfoil hat' here, but I believe
| that what follows is not hyperbole. AI is already the
| 'Architect' more than most of us would like to admit. Even if
| it is not sentient, the various AIs that we use during the day
| were designed with a purpose and they are goal oriented. It is
| worth reading Daniel Dennett's thoughts about the intentional
| stance--we know that a toaster is not sentient, but it was
| designed with a purpose and we know when it is or is not
| achieving that purpose. That is why we might sometimes jokingly
| say that the toaster is 'angry with us' or that when the
| toaster dings it is happy. It is actually easier for us
| humans to interact with objects when we know that they have a
| purpose, because that is similar to interacting with other
| humans who we know to have purposes.
|
| Coming back around to AI, ChatGPT was designed with a purpose,
| and people project intent onto it. People act like it is an
| agent. And that is all that matters. The same is true of the
| Tiktok AI, the AI that calculates your credit score, the
| traffic lights by your house. Hell, it's also true of your
| stomach.
|
| The point is that objects in our environment do not have to be
| literally conscious for us to treat them as conscious beings
| and for them to fundamentally shape the way that we live and
| that we interact with our environment. This is pretty much the
| basic tenet of cybernetics. To believe that all of these tools
| do not have intention and that they are 'just tools' used by
| some people to influence other people is not wrong, but I don't
| think that it captures the richness of the story.
|
| Differentiating where humanity/consciousness begins and where
| the technology ends is already more complicated than most
| people think. Traffic lights train us just as much as we make
| traffic lights. I fully believe that people will be saying
| "this isn't true AI, it doesn't /really/ have feelings" long
| after the technology that we create is so deeply embedded into
| our sensory I/O that the argument will be moot.
| la64710 wrote:
| What about a system emerging from another system?
| https://clementneo.com/posts/2023/02/11/we-found-an-neuron
| Bjartr wrote:
| That's part of why there's objection to claiming sentience,
| it distracts from the discussion of impact by dragging a
| whole lot of extra philosophical baggage into the
| conversation when it's not yet necessary.
| 3vidence wrote:
| The sentience argument is a bit confusing. The fact that it can
| produce language is definitely interesting, but with that said, I
| haven't seen any arguments that Stable Diffusion is sentient.
|
| The technology, although different, is also mostly the same.
|
| Can someone explain to me why image generation is not sentient
| but word generation is?
| jjtheblunt wrote:
| that's some masterful clickbait as far as titles go
| andrewstuart wrote:
| It's software.
|
| Software is a computer program, by definition not sentient.
| HDThoreaun wrote:
| This is quite a bad definition of sentience.
| JPLeRouzic wrote:
| If humans are sentient, are molecules in their body sentient?
|
| There are around 22,000 protein types in their body, yet we
| are billions, all different.
| turbobooster wrote:
| Did you just watch Ghost in the shell?
| Jensson wrote:
| We know sentience doesn't extend to all coupled computations.
| For example, human sentience doesn't extend to our balance
| system, or the system controlling our heartbeats, or the
| system that filters and manipulates vision data in the early
| stages. Or the system that decides how to compose sentences: we
| aren't aware ourselves of how that process works (or we could
| have programmed it); instead, that is a non-sentient subsystem.
|
| The parts our sentience does handle are easy to program and
| already solved, for example arithmetic.
| lordfrito wrote:
| Molecules aren't sentient, but proteins definitely are. /s
| JPLeRouzic wrote:
| I like this answer!
|
| And thinking about it, the way protein folding gives them new
| properties must make us think more carefully than _"Software is
| a computer program, by definition not sentient."_
| dekhn wrote:
| No, molecules are not sentient.
| JPLeRouzic wrote:
| That's my point; I made a tongue-in-cheek parallel between
| molecules and software.
|
| Neither is sentient, yet humans think they are sentient, so
| why wouldn't it be the case for a system made with software?
| yesenadam wrote:
| I found this article much more informative: _The Google engineer
| who thinks the company's AI has come to life_
|
| https://www.washingtonpost.com/technology/2022/06/11/google-...
___________________________________________________________________
(page generated 2023-02-28 23:01 UTC)