[HN Gopher] Jaron Lanier on the danger of AI
___________________________________________________________________
Jaron Lanier on the danger of AI
Author : atchoo
Score : 310 points
Date : 2023-03-23 11:10 UTC (11 hours ago)
(HTM) web link (www.theguardian.com)
(TXT) w3m dump (www.theguardian.com)
| 01100011 wrote:
| As I've said for years, the robots don't have to kill us, they
| just have to convince us to kill ourselves.
| audessuscest wrote:
| too late
| bartislartfast wrote:
| In the short term, I'm concerned about the energy required to run
| things like ChatGPT and how everyone is cramming it into
| everything. Soon every to-do list app on your phone will need a
| supercomputer running in the background to auto-arrange your
| grocery list, and we'll double or triple our datacenter footprint
| within a few years.
| throwuwu wrote:
| Running the models isn't the expensive part and is subject to a
| lot more optimization
| PaulHoule wrote:
| It is great to see how Lanier has grown up.
|
| Seriously though: once people take the limiters off chatbots you
| will see "artificial bullshitting" generating duckspeak on
| subjects from anti-vax and climate denial to cryptocurrencies,
| longtermism, transsexual maximalism and police abolitionism. And
| people who believe those ideologies will eat it up.
| leokeba wrote:
| Can you elaborate on what you mean by "transsexual maximalism"
| as an ideology?
| johnaspden wrote:
| We want to change everyone's gender, as often as possible;
| and will devote superhuman resources to spreading this
| philosophy over as much of the available future lightcone as
| physics permits.
| flangola7 wrote:
| Let's do this but unironically
| PaulHoule wrote:
| I would point to the documentary "The Witch Trials of J. K.
| Rowling" and the various reviews people have made of it.
|
| I'd contrast Rowling's own statements, which are tentative
| and equivocal, with those of opponents who threaten the
| violence (often sexual violence) that they believe they
| experience whenever someone disagrees with them in the
| slightest way. A healthy community would have some sort of
| self-policing (say, the doctrine of Martin Luther King) to
| not act in ways that are so discrediting of the group.
|
| That documentary combines two themes, one of which is her
| story, the other of which is the story of the Tumblr-4chan
| descent into the intellectual rabbit hole. That last bit is
| flawed in my mind because it does not weave in the
| movements I criticize above that Bari Weiss (whose
| organization produced that podcast) and her followers
| support that have a similar element of "we can create our
| own reality", that is, people think they can pump unlimited
| amounts of carbon into the atmosphere and face no
| consequences, that Fauci did it all to destroy America,
| etc.
|
| On top of that I have seen "egg-hatchers" that prey on
| neurodivergent people whose neurodivergence is unnamed
| (such as schizotypy) because there is no system in place to
| make money off them, such as the autism-industrial complex
| and the pill mills and quack doctors that prescribe
| amphetamines. I know one individual who has no real idea of
| what women are other than "its" mother, anime girls and
| vTubers, almost never leaves "its" room and fortunately is
| too depressed to take any permanent actions. The CIA never
| discovered a formula to turn a random person into a
| "Manchurian Candidate" but it is very possible to lead an
| isolated and directionless individual into dangerous rabbit
| holes and I see it happening.
| pjc50 wrote:
| > people think they can pump unlimited amounts of carbon
| into the atmosphere and face no consequences, that Fauci
| did it all to destroy America, etc.
|
| You've got to take into account the fact that there is a
| large right-wing mobilization dedicated to pumping lies
| about trans people to the same extent as about climate
| and Fauci. Largely the same people, as well.
| https://www.motherjones.com/politics/2023/03/anti-trans-tran...
| leokeba wrote:
| Yeah right, it really sounds like what most transgender
| people are advocating for indeed.
|
| I asked ChatGPT to enlighten me on the subject, here's what
| it had to say:
|
| The term "transsexual maximalism" is not a widely
| recognized or established ideology, but it has been used by
| some individuals to describe a perspective on transgender
| issues that emphasizes a strong and uncompromising approach
| to advocating for the rights and recognition of transgender
| people.
|
| At its core, transsexual maximalism is an approach that
| prioritizes the experiences and perspectives of transgender
| people, particularly those who have undergone medical
| interventions such as hormone therapy or gender
| confirmation surgery. Proponents of transsexual maximalism
| often believe that these interventions are necessary to
| fully alleviate the distress and dysphoria experienced by
| many transgender individuals.
|
| Transsexual maximalism can be seen as a response to more
| moderate or incremental approaches to transgender advocacy,
| which may focus on incremental policy changes or attempts
| to work within existing systems of power and institutions.
| Some proponents of transsexual maximalism may view these
| more moderate approaches as insufficient or even
| counterproductive, arguing that they do not go far enough
| in challenging the underlying structures of cisnormativity
| and transphobia.
|
| However, it's worth noting that the term "transsexual
| maximalism" has not been widely adopted or embraced by the
| broader transgender community, and some individuals within
| the community may have different perspectives or priorities
| when it comes to advocacy and activism.
|
| Me: Why would people consider this a threat?
|
| ChatGPT: Some people might consider transsexual maximalism
| as a threat because it is a highly assertive and
| uncompromising approach to advocating for transgender
| rights and recognition. This approach can challenge
| existing social norms and institutions that uphold
| cisgender privilege and transphobia.
|
| For example, some individuals who are opposed to
| transgender rights might view transsexual maximalism as a
| threat to traditional gender norms and the gender binary.
| They may feel uncomfortable with the idea that gender
| identity can be different from the sex assigned at birth,
| or that medical interventions such as hormone therapy or
| gender confirmation surgery are necessary for some
| transgender people to live authentically.
|
| Additionally, some people might see transsexual maximalism
| as a threat to their own power or privilege. Transgender
| individuals have historically been marginalized and
| oppressed, and advocating for their rights and recognition
| can challenge the existing power structures that uphold
| cisgender privilege. This can cause discomfort or
| resistance among those who benefit from these power
| structures.
|
| It's important to note that concerns about transsexual
| maximalism as a "threat" are often rooted in transphobia
| and a lack of understanding or empathy for the experiences
| of transgender individuals. Transgender people have the
| right to advocate for their own rights and recognition, and
| it is not a threat to anyone else's well-being or identity.
| blackbear_ wrote:
| Since we're at it:
|
| what are criticisms of transsexual maximalism that are
| not rooted in transphobia?
|
| There are several criticisms of transsexual maximalism
| that are not rooted in transphobia. Here are a few
| examples:
|
| Alienating allies: One critique of transsexual maximalism
| is that it can be overly confrontational and alienate
| potential allies. By using tactics that are perceived as
| extreme or disruptive, some argue that the movement risks
| turning off people who might otherwise be supportive.
| This can make it harder to build coalitions and achieve
| the desired goals.
|
| Ignoring intersectionality: Another criticism of
| transsexual maximalism is that it sometimes fails to take
| into account the ways in which gender intersects with
| other forms of oppression, such as racism, classism, or
| ableism. By focusing solely on trans issues, the movement
| may inadvertently exclude or marginalize other groups
| that are also affected by discrimination and violence.
|
| Neglecting practical considerations: Some critics argue
| that transsexual maximalism can be unrealistic or
| impractical in its demands. For example, calling for the
| immediate abolition of gendered restrooms or the complete
| overhaul of healthcare systems may not be feasible in the
| short term, and could distract from more achievable
| goals.
|
| Disrupting dialogue: Finally, some people argue that
| transsexual maximalism can be counterproductive in its
| effect on public discourse. By using tactics that are
| perceived as aggressive or confrontational, the movement
| may make it harder to have constructive conversations
| about transgender issues. This can lead to further
| polarization and a lack of progress.
|
| It's important to note that not everyone agrees with
| these criticisms, and there is ongoing debate within the
| transgender community about the most effective strategies
| for achieving social and legal recognition. However, it
| is possible to critique transsexual maximalism without
| resorting to transphobia or other forms of
| discrimination.
| PaulHoule wrote:
| Definitionally, "transsexual maximalism" labels anyone
| who disagrees with it on any point a "transphobe".
| blackbear_ wrote:
| > By using tactics that are perceived as aggressive or
| confrontational, the movement may make it harder to have
| constructive conversations about transgender issues. This
| can lead to further polarization and a lack of progress.
|
| So ChatGPT was right on point :)
| parthianshotgun wrote:
| Maybe it was for effect, like 'psychic bedouinism'
| Turing_Machine wrote:
| > It is great to see how Lanier has grown up.
|
| Indeed. It would be interesting to study how he became one of
| the go-to guys for hot takes on any new technology, considering
| that he hasn't developed any new technology in the last 40
| years or so.
|
| Only Cory Doctorow, perhaps, matches him when it comes to
| famous for being famous. Sorry, an awesome set of dreads
| doesn't make you a "guru".
| hoppersoft wrote:
| First, I have to admit that I was negatively biased by his
| appearance. I have a hard time believing anything that some
| guy who looks like he smokes metric tons of weed and dorks
| around on his guitar all day has to say.
|
| That being said, the tech world has a bad habit of letting
| "visionaries" rest on their laurels. He'd have a lot more
| credibility if he were actively developing AI and had more than
| gut feel to contribute.
| [deleted]
| [deleted]
| jjulius wrote:
| >Sorry, an awesome set of dreads doesn't make you a "guru".
|
| Got a critique about the topic at hand, which is what he's
| said about AI, or do you just have cheap, physical appearance
| potshots?
| Turing_Machine wrote:
| > Got a critique about the topic at hand, which is what
| he's said about AI
|
| Sure. He has no special expertise in AI, and his opinion on
| the subject is of no more value than that of any other
| random person working in the field.
|
| > do you just have cheap, physical appearance potshots?
|
| If you think his physical appearance doesn't have a lot to
| do with why his opinion is sought after by general media
| sources, well... you're wrong. A major reason they go to
| him is because he makes an impressive-looking photograph to
| go along with the article.
| jjulius wrote:
| >Sure.
|
| Cool! I'm very curious to see wha-...
|
| >He has no special expertise in AI, and his opinion on
| the subject is of no more value than that of any other
| random person working in the field.
|
| ... oh, you meant "no" rather than "sure". I asked for "a
| critique about ... what he's said about AI", but you
| completely ignored that and opted to simply dismiss all
| of it outright. It turns out that, in point of fact...
|
| >If you think his physical appearance doesn't have a lot
| to do with why his opinion is sought after by general
| media sources, well... you're wrong. A major reason they
| go to him is because he makes an impressive-looking
| photograph to go along with the article.
|
| ... you're still hung up on his physical appearance.
|
| Have a good one.
| Turing_Machine wrote:
| > .. oh, you meant "no" rather than "sure".
|
| No, I meant "sure". Yes, he has an opinion. No, that
| opinion isn't of any more value than that of anyone else
| who works in technology, because he has no expertise in
| AI. His hot take isn't more worthy of critique than some
| Twitter rando's hot take. It's up to him to provide
| evidence that his opinion has some special value. I
| haven't seen any such evidence (other than the "famous
| for being famous" thing). Do you have any?
|
| > you're still hung up on his physical appearance.
|
| Oh, I'm pretty sure it's not me who's overly impressed by
| his physical appearance.
| jjulius wrote:
| Alrighty then.
| sirsinsalot wrote:
| I'm not sure any of the fundamentals have changed in 40
| years, so being able to speak as an authority isn't really
| dependent on knowing React or that Java 19 got whatever new
| syntax.
|
| That attitude is why ageism is so prevalent in engineering.
|
| Maybe read a few of his books before writing someone off
| because you think they're outdated.
| Turing_Machine wrote:
| > I'm not sure any of the fundamentals have changed in 40
| years
|
| Really? He's holding forth on GPT and similar technologies.
| Those represent dramatic changes from the state of the art
| 40 years ago.
|
| P.S. what makes you think I haven't read his work? I
| certainly haven't read _all_ of it, but the significant
| portion that I have left me...underwhelmed.
| sirsinsalot wrote:
| He is talking about the impact on society of things
| perfectly imaginable over 40 years ago and detailed
| across the whole of sci-fi.
|
| He isn't talking about the effectiveness of a loss
| function or something.
| beepbooptheory wrote:
| With respect, it's bad enough not even directly commenting on
| the article and instead using the space as a soapbox, but at
| least please save the rest of us from your perverted fixations.
| There are other forums!
| pavlov wrote:
| I think the scariest part is not even the content but the
| connection that people can develop with these bot
| personalities.
|
| A demagogue who knows your intimate desires and fears and talks
| just to you, seems to care about you, is available 24/7. Your
| very own Personal Jesus (or Personal Hitler).
|
| Every communications revolution seems to lead to a genocide.
| Radio and TV in the 20th century (Germany, Rwanda etc.) Social
| media in the past decade (Myanmar). The upcoming AI-instigated
| genocide is a terrifying thought. The Rwandans didn't think
| they'd start killing their neighbors even as the rhetoric on
| the radio grew harsher. Chatbot-dependent Americans may not see it
| coming either.
| PaulHoule wrote:
| Look at the parasocial relationships that people have had
| with L. Ron Hubbard, Eliezer Yudkowsky, and even camgirls.
| (Particularly followers of LRH and EY frequently speak and
| write like a chatbot trained on their writings.)
|
| I think most "freezoners" (renegade Scientologists) believe
| they've had telepathic conversations with "Ron". Ron has
| certainly written enough that a chatbot could be trained to
| converse in his style, maybe even write the OT levels that
| Ron never got around to writing.
|
| Since Harold Lasswell, people have been hip to the use
| of content analysis to predict war and genocide, which is
| possible because radio, television, and newspapers are
| all public. However, the Iranian revolution of 1979 was not
| televised; instead it was promoted through relatively
| unobservable cassette tapes passed through the underground.
| Since social media shows something different to everybody,
| there might not be any overall surveillance that warns us of
| upcoming danger.
|
| It doesn't have to go so far as genocide, individual acts of
| violence are bad enough. There is no magic technique that can
| take an average person and make them into a "Manchurian
| Candidate" but if you find somebody who is directionless,
| socially isolated and vulnerable you can very much lead them
| into a rabbit hole and lead them into antisocial behavior.
| Chatbots could do this with superhuman patience.
|
| For instance, a follower of the black pill incel who calls
| himself "Wheat Waffles" became a mass shooter. If somebody
| took the "lesswrong" ideology seriously they might attempt to
| assassinate an A.I. researcher, and I'd go so far as to say
| that they're just cosplaying, because if they were serious one
| of them would have done it already.
| AnimalMuppet wrote:
| The parent poster's username seems appropriate...
|
| (Note well: This is not a "username checks out" drive-by
| dismissal. I think the point is perfectly valid.)
| dahwolf wrote:
| In recent times we've already significantly given up on our
| humanity. The lack of shared institutions (church, bars, etc.),
| remote studying, remote work, ecommerce, personal contact via
| chat, social media: these all point in the same direction of a
| contactless society where we rarely interact with the physical
| world and its people.
|
| It stands to reason that AI will only accelerate this further. It
| will be convenience on steroids. Your AI ear piece isn't going to
| tell you to throw it into the bin and go for a walk in the
| forest. It's going to tell you that you need to buy more stuff
| and it knows exactly what it is that you need. It's also going to
| feed you non-stop ultimate entertainment, custom generated for
| you and you only.
|
| In a rare moment of humanity, one of your friends calls you. AI
| knows all about your friends and their recent events, so it has
| already summarized talking points. In case you can't be bothered,
| AI will carry out the actual conversation; it's trained on your
| voice.
|
| A long running trend of outsourcing humanity to technology.
|
| Good news for philosophers though, they finally might have their
| moment of actual relevancy. In particular to answer the question:
| what is the point of anything, really?
| conductr wrote:
| We're slowly opting into the matrix.
| roryisok wrote:
| > In a rare moment of humanity, one of your friends calls you.
| AI knows all about your friends and their recent events, so it
| has already summarized talking points. In case you can't be
| bothered, AI will carry out the actual conversation; it's
| trained on your voice
|
| I love this thought. Why not go further, have AI reach out to
| my friends and ask them about things they (or their AIs)
| recently told "me" about?
|
| Soon our AIs will carry on our social lives and we'll just lie
| in the dark with tubes in us. We become the computers, and the
| computers become us, and the robots have finally won.
| red-iron-pine wrote:
| > I love this thought. Why not go further, have AI reach out
| to my friends and ask them about things they (or their AIs)
| recently told "me" about?
|
| We already have this. Secretaries. Automated Happy Birthday
| emails.
|
| When I was in a sales engineering role, our sales team had an
| admin assistant who would send out follow-ups, check-ins, and
| other correspondence (e.g. customer made a big public
| release, so congratulate them, etc.).
|
| This is just another example of robots takin ur jerbs,
| basically.
| dahwolf wrote:
| Yep, our AI voice equivalents could maintain friendships with
| each other in which case the "what is the point?" question
| applies. Or, you might reach out for real but fail to be sure
| if you're talking to your real friend or not.
|
| Or how about this interesting second-order effect: email.
| Soon Office will include advanced AI capabilities to write
| and reply to email.
|
| What is the point of me reading it? If my AI can generate a
| satisfactory reply, your AI could have generated the response
| too. No email needed, nor a reply.
|
| We're now in a phase where anybody can generate spectacular
| art. What is the point of me looking at your generated art?
| AI can generate personalized art based on what it knows I
| like.
|
| If AI works, and it's headed that way, you keep ending up at
| the same question: what is the point of anything?
|
| As a counterforce, there's significant room for a new low-tech
| hippie Luddite movement.
| roryisok wrote:
| > Soon Office will include advanced AI capabilities to
| write and reply to email. What is the point of me reading
| it? If my AI can generate a satisfactory reply, your AI
| could have generated the response too. No email needed, nor
| a reply.
|
| You'd be kind of daft not to proof-read the emails your AI
| sends out on your behalf. who knows what you might
| unknowingly agree to do?
|
| > We're now in a phase where anybody can generate
| spectacular art. What is the point of me looking at your
| generated art? AI can generate personalized art based on
| what it knows I like. If AI works, and it's headed that
| way, you keep ending up at the same question: what is the
| point of anything?
|
| there _is_ no point to anything, there wasn't before AI
| and there isn't now. anything we do is meaningless, because
| eventually we all die, and our efforts are ultimately
| forgotten. once you get over that you can make your peace
| with whether the pretty thing you look at is made by a
| human feeding sentences to a computer or a human carefully
| marking paper with a pencil, or some combo.
|
| but seriously, as a human who has spent all of my life
| doodling, drawing, illustrating, painting, thousands of
| hours creating my own art, and even building my own tools
| to do so, I find AI is just another tool in the box. I can
| use it to make images, and now someone who has not spent
| most of their life drawing can use it to make something
| more visually stunning than I ever could. has it rendered
| my efforts meaningless? hell no, I enjoyed every second I
| spent drawing. I still draw. but I don't harbour any
| illusions that I'm doing it for anyone other than myself.
| golergka wrote:
| Remote work brings people together. Instead of being in an
| office with colleagues, I'm in the same space with my
| significant other, and what used to be smoke breaks are now sex
| breaks. The time I used to waste on commute I now use to meet
| with friends and acquaintances.
| tick_tock_tick wrote:
| I mean I agree for my life but only because I already built
| up my social circle from these shared spaces. What's someone
| fresh out of school in a new city supposed to do in 20 years?
| dahwolf wrote:
| "The time I used to waste on commute I now use to meet with
| friends and acquaintances."
|
| I hope this is true, same for the sex breaks, but I'm
| skeptical. So on any given work day, you physically meet with
| friends between 7-9 AM and/or 5-7 PM? Like, every day?
|
| These "friends" of yours, they have nowhere to go? Or do you
| sneak this into your work day and just randomly disappear for
| any length of time, which is something most of us can't do?
| gumballindie wrote:
| Some people are so devoid of life that they think the
| formality of an office is what counts as friendship.
| unsupp0rted wrote:
| If my colleague calls me at least two times a day that
| means they like me and we're friends
| gumballindie wrote:
| They like you so much they want you to stop seeing your
| family and return to office so you can provide emotional
| support and water cooler chats.
| solarmist wrote:
| South Park did it last week.
| unsupp0rted wrote:
| We gave up our humanity when we came down from the trees, then
| again when we started cooking our food, then again when we made
| up languages, started writing, reading, and counting... the
| list goes on. Whatever "our humanity" is, we don't seem to be
| the worse for having lost it and made up a new one over and
| over. Each time might be the last, but so far we've done well.
| lonelyasacloud wrote:
| "done well"? At best isn't the jury out on that one pending
| what is left after we've handled climate change?
| unsupp0rted wrote:
| So far we've done well. We may annihilate a big chunk of
| life on the planet in a nuclear apocalypse tomorrow, but up
| to today we're ok. It's too early to tell what comes next.
|
| If we're going to penalize humanity for climate change we
| might as well give points to humanity for becoming capable
| of diverting asteroids or curing multi-species plagues too.
| lonelyasacloud wrote:
| > We may annihilate a big chunk of life on the planet in
| a nuclear apocalypse tomorrow, but up to today we're ok.
|
| UN Report: Nature's Dangerous Decline 'Unprecedented'; Species
| Extinction Rates 'Accelerating'
| (https://www.un.org/sustainabledevelopment/blog/2019/05/natur...)
|
| UN climate report: Scientists release 'survival guide' to
| avert climate disaster
| (https://www.bbc.co.uk/news/science-environment-65000182)
|
| Sorry, not seeing doing ok.
| colonCapitalDee wrote:
| Climate change is horrible and will change life on Earth
| for the worse for all forms of life, including humans.
| But given the choice between (a) no industrial revolution
| and no climate change, and (b) industrial revolution and
| climate change, I'm picking (b) every time and it isn't
| even close.
| codr7 wrote:
| I've been saying this for a long time, ever since ML seriously
| entered the scene:
|
| The singularity is the least of our worries, AI will fuck up
| society beyond repair long before we reach that point because we
| give it way more trust and responsibility than it deserves. It's
| the Eliza-syndrome, as long as a computer shows the least sign of
| possessing human qualities we're more than happy to fill in the
| blanks with pixie dust and unicorns.
| LesZedCB wrote:
| i like to call it the "stupid singularity"
| wslh wrote:
| Yes, the core issue is trust: when you trust somebody or
| something, your brain stops working. I'm far from being a
| luddite myself, but blind trust is very dangerous.
| thundergolfer wrote:
| The stories of how much trust laymen and even certain 'experts'
| put in ELIZA back in the 60s are wild. Given how much better
| GPT models are, hopefully there isn't a proportional increase
| in trust and anthropomorphizing.
| mydriasis wrote:
| It has already driven me insane. I am the first casualty ;)
| skee8383 wrote:
| i don't fear this stuff. at the end of the day if the digital
| world becomes detrimental to my health i can just call up my ISP
| and cancel my service. good luck rotting my brain when you have
| no internet connection to my house.
| concinds wrote:
| The interview isn't very intellectual, and even rambles, but
| blame the reporter for that. Lanier's a great thinker.
|
| I'll add my own danger: AI/VR could lead us to each live in our
| own realities. When you watch the evening news, it'll be
| specifically written for you, and won't get any of the scrutiny
| that a broadcast watched by millions would get. Or, you go watch
| the president's State of the Union, and get served a custom
| speech written to appeal to your particular psychological
| profile. This'll be possible one day, and it gives me Deus Ex
| vibes.
| theptip wrote:
| If you claim "these [AI risk] concerns make no sense" then you
| either lack imagination, are willfully ignorant, or are selling
| something.
|
| It's perfectly reasonable to say something like "I think it's
| very unlikely because I disagree that [insert causal link in the
| model] is plausible."
|
| But to claim that the whole set of concerns are nonsensical is
| lazy thinking.
|
| You see this a lot (and Robin Hanson and PG have commented on
| this dynamic recently) where a small group makes a bunch of very
| specific claims, which get dismissed by the "mainstream" without
| actually engaging with or understanding them.
|
| So in this case, "[the concerns] make no sense" should be read as
| "I did not bother to try to understand these claims, but I don't
| like the conclusion", not any particular argument about whether
| they are logically sound.
|
| Ignore.
| [deleted]
| CadmiumYellow wrote:
| It's the Lucille Bluth response: "I don't understand the
| question and I won't respond to it."
| PoignardAzur wrote:
| This is very eloquently put.
| xwdv wrote:
| The real danger will start when one day an AI emerges that is
| somehow able to crack any encryption in polynomial time, through
| methods hidden from us inside its neural network. And if this model
| somehow becomes widely available, the tech world will be brought
| to its knees overnight.
|
| Not sure what will become of society then.
| caycep wrote:
| what's the difference?
| boplicity wrote:
| In the future we'll all have AIs listening to everything we
| listen to. These AIs will also have sensors on our body to detect
| our emotional state (heck, maybe even inferring our thoughts
| based on this.) The AIs will then provide contextual advice.
| We'll generally say what it tells us to say; do what it suggests
| we do. This will happen for everyone; we'll end up, effectively,
| being the "mouthpieces" for AIs talking to one another.
|
| The technology for all of this already exists. It's just a matter
| of time. Right?
| Tubbe wrote:
| Who would want that and why?
| pjc50 wrote:
| Targeted advertising in exchange for entertainment.
| NovaDudely wrote:
| That's a big part of advertising. Make people feel inadequate
| if they don't have these things. FOMO in action.
| anon7725 wrote:
| > Who would want that and why?
|
| Nobody wanted a perpetual outrage machine, but we have
| Twitter.
|
| Nobody wanted ever-present surveillance, but we have ad
| networks.
|
| Nobody wanted identity theft, but we have a thousand attack
| vectors.
| worldsayshi wrote:
| It's like innovation by obsessive compulsion.
| [deleted]
| marcellus23 wrote:
| > In the year 3535
|
| > Ain't gonna need to tell the truth, tell no lie
|
| > Everything you think, do and say
|
| > Is in the pill you took today
|
| https://www.youtube.com/watch?v=l3yDLvp9le0
| mtlmtlmtlmtl wrote:
| Resistance is futile.
| akomtu wrote:
| Most people speak their thoughts silently, and that "speech"
| must trigger the same neurons that do regular speech. This will
| be the basis of the mind-reading devices. Abstract thoughts
| cannot be read this way, but how many know what an abstract
| thought even is?
| jacobr1 wrote:
| Yep, this is coming. But it is unclear how interactive it will
| be in practice and what the feedback mechanisms will look like.
| r1b wrote:
| If you enjoy this kind of thinking I'd recommend [1].
|
| > Electric speed mingles the cultures of prehistory with the
| dregs of industrial marketeers, the nonliterate with semiliterate
| and the postliterate. Mental break-down of varying degrees is the
| very common result of uprooting and inundation with new
| information and endless new patterns of information.
|
| [1] https://web.mit.edu/allanmc/www/mcluhan.mediummessage.pdf
| qikInNdOutReply wrote:
| Technology implies belligerence
|
| http://akkartik.name/post/2012-11-21-07-09-03-soc
|
| The problem I see is that someone might send our "primitive" AI
| into a hostile environment, where it has to compete against
| other AI, creating a "take-over" and a "defensive" monster,
| similar to the Go automaton. While the real-world training data
| might be trickling in, the speed at which a NN under
| evolutionary pressure against itself might evolve could go
| through the roof.
| dukeofdoom wrote:
| Alex talked about this 6 years ago. It's not the AI, it's the
| people controlling the AI developing a God complex
|
| https://www.youtube.com/watch?v=UZPCp8SPfOM&t=6610s
| polynomial wrote:
| Assuming it's not already just a reflection of a pre-existing
| lack of sanity, or that it's arguably a symptom of the descent
| into madness, not the cause.
| AnimalMuppet wrote:
| How about the danger that AI itself becomes insane?
|
| On the HN "new" page as I write this: "Google and Microsoft's
| chatbots are already citing one another's misinformation". As AIs
| produce more and more content, one danger is that they feed back
| on each other, with AIs training on more and more AI-generated
| content, and the connection with reality becomes more and more
| diluted. I don't think that's going to produce some new kind of
| wisdom; I think it's going to produce raving lunacy.
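|
| A toy sketch of that feedback loop (my own illustration, not
| anything from the article): fit a "model" to samples drawn from
| the previous model, over and over, and watch it drift away from
| the original distribution.
|
|     import numpy as np
|
|     rng = np.random.default_rng(0)
|     mu, sigma = 0.0, 1.0  # "reality": the original data
|
|     for gen in range(10):
|         # Each generation trains only on the previous model's output.
|         data = rng.normal(mu, sigma, size=100)
|         mu, sigma = data.mean(), data.std()
|         print(f"gen {gen}: mu={mu:+.3f} sigma={sigma:.3f}")
|
|     # Estimation error compounds every generation, so (mu, sigma)
|     # random-walks away from (0, 1); no new information about
|     # reality ever enters the loop.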
| [deleted]
| Nuzzerino wrote:
| Hot take: The world (most of the time) already works that way
| (more or less), but people usually don't notice for whatever
| reason.
|
| One possible bias here is that we expect people to make
| mistakes and computers to get it right. Don't forget that the
| vast majority of people don't get it right.
|
| Perhaps this is just the latest and most obvious consequence
| of the Internet's tendency to give the podium to some of the
| least qualified to speak on whatever subject. I think if we go
| back to the drawing board there, we could be in a slightly
| better situation. Quora made a nice attempt to do this but fell
| way short of its potential.
| dd36 wrote:
| Just like we've seen in some human populations.
| ChatGTP wrote:
| "New from Silicon Valley...AI Therapists"
| [deleted]
| labrador wrote:
| I'm really enjoying how "Imposter Syndrome" is actually justified
| in the tech industry, as many influencers and people riding
| on reputations from past accomplishments are actual imposters
| making fools of themselves. Conversely, some very
| insightful posts are being made by the quiet ones with few
| followers who aren't necessarily even in the tech industry.
|
| I've been through sea changes before, when the old guard falls by
| the wayside and a new group rises to meet the new challenges.
| This is certainly one of those times.
| rideontime wrote:
| I'm struggling to understand what point this comment is trying
| to make. I thought it was referring to Lanier as one of "the
| quiet ones with few followers," but then it claimed that the
| "old guard" is falling by the wayside, and Lanier's hardly new.
| labrador wrote:
| I wasn't actually referring to Lanier because he's mostly
| been a lone voice in the woods, even as he's highly
| influential in some circles. I was actually referring to the
| noisy self-promoters currently polluting Hacker News with
| their AI posts, but now that you mention it, maybe Lanier
| fits that category of insightful people with few followers.
| How long has he been telling people to delete social media?
| How long have people been ignoring him? The world is finally
| coming around and I for one would be glad for him to become
| as famous as some of the noisy ones.
| unconed wrote:
| Funny, I have mentally filed him as a designated person who
| is consulted by media to offer opinions on tech despite
| having no particular claim to familiarity with said tech.
| johnaspden wrote:
| Those whom the gods wish to destroy, they first make mad.
| 1970-01-01 wrote:
| Tldr: AI will become so good that we're going to forget how to
| "live, laugh, love" and instead we slowly insane ourselves out of
| existence. Yes, I used insane as in a verb. That's how he intends
| this threat will unfold.
| holoduke wrote:
| AI will encapsulate us slowly till the only thing left is our
| digestive systems to provide energy to the AI systems.
| __derek__ wrote:
| > "[...] To me the danger is that we'll use our technology to
| become mutually unintelligible or to become insane if you like,
| in a way that we aren't acting with enough understanding and
| self-interest to survive, and we die through insanity,
| essentially."
|
| It's surprising that the article doesn't mention Infinite Jest.
| agentultra wrote:
| I'm more concerned that the people using LLM's will drive me
| insane. I hope that I don't get flooded with LLM-generated emails
| and documentation from colleagues and co-workers that don't
| bother to proof-read what they send me. It will be a frustrating
| day when I reply to them with a question and they don't have an
| answer because they don't even know what they sent me.
|
| Multiply that by all of the boilerplate code I'll end up reading.
| No pleasant abstractions that hide complexity and are reinforced
| by laws checked by properties or proofs; abstractions that make
| the problem easier to understand and communicate. Instead,
| endless bowls of oatmeal code: similar, bland, and laborious to
| read. Who knows if the model "hallucinated" this code and whether
| the author bothered to check it before asking for a review. Who
| knows how long before I get bored and worn down and stop
| bothering to read it myself.
|
| And then we're supposed to change our job titles to "Prompt
| Engineers"? How absurd is that. Nothing about what you do is
| creative, human, or interesting. Now your job is to feed
| sentences into an LLM to get it to do all that work for you...
| how long until that dead-end job gets replaced?
|
| Maybe this could drive us all insane from burnout and nihilism.
| geraneum wrote:
| My boss (head of engineering) loves ChatGPT and he uses it more
| and more at work. Recently he wrote a crucial piece of feedback
| after one of our meetings, and the moment I started reading it
| was obvious it was ChatGPT (confirmed by ZeroGPT later), as he
| does not talk like that at all. It read like all the other
| ChatGPT responses you read on the Internet.
| otikik wrote:
| If you receive a ChatGPT-generated message, it's only fair
| that you respond with another ChatGPT-generated message.
| prox wrote:
| Just send him back a generated email saying that your ChatGPT
| read his ChatGPT mail.
| danielbln wrote:
| relevant: https://i.imgur.com/D8WFIMZ.png
| roncesvalles wrote:
| GPT needs to be a codec!
| marcosdumay wrote:
| It's just a decompression stage to facilitate
| transmission.
|
| Too bad it's lossy. I'm waiting for a lossless algorithm
| before I add it as an encoding on my web server.
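|
| A minimal sketch of that lossy round trip, with llm() as a
| stand-in for any completion API (hypothetical, not a real
| library):
|
|     def llm(prompt: str) -> str:
|         """Placeholder for a call to some completion model."""
|         raise NotImplementedError
|
|     def encode(points: list[str]) -> str:
|         # Sender side: inflate terse bullet points into polite prose.
|         return llm("Write a friendly email making these points: "
|                    + "; ".join(points))
|
|     def decode(email: str) -> list[str]:
|         # Receiver side: deflate the prose back into bullet points.
|         return llm("Summarize this email as terse bullet points: "
|                    + email).splitlines()
|
|     # decode(encode(points)) == points is not guaranteed;
|     # that's the lossy part.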
| fishnchips wrote:
| I'm not surprised at all. ChatGPT is a great empathy
| prosthesis.
| gniv wrote:
| > empathy prosthesis.
|
| Nice. I also checked "empathy crutch" (it's the name of a
| band) and "empathy surrogate" (slightly more used).
| solarmist wrote:
| Fuck. Now you have to figure out if people actually care or
| are just saying the words...
|
| Wait.
| consumer451 wrote:
| > empathy prosthesis
|
| This is my favorite phrase that I have learned in recent
| times and it appears only one other time on the internet.
|
| Thanks for that.
| geraneum wrote:
| It was actually technical feedback.
| 0xdeadbeefbabe wrote:
| And isn't it even more disappointing to think that it's just
| people destroying people, same as always.
| mattgreenrocks wrote:
| > And then we're supposed to change our job titles to "Prompt
| Engineers"? How absurd is that. Nothing about what you do is
| creative, human, or interesting.
|
| It's weird how all the discourse around AI replacing jobs never
| ponders the possibility of AI commoditizing management itself.
| (I suppose that's on the list of Illegal Ideas.)
| christkv wrote:
| Somebody will for sure create a HN Karma mining bot connected
| to chatGPT. Just a matter of time.
| strangattractor wrote:
| Isn't driving us insane a way of destroying us?
|
| I think we need an EPA equivalent for the Internet or at least
| Social Media. People should not be able to release toxic
| substances directly into the Web.
| favaq wrote:
| Why not? EPA exists because you can't "escape" the
| environment. But the web? If social media is full of crap,
| then don't visit it - that's what I do.
| throwaway1777 wrote:
| This is beyond social media.
| red-iron-pine wrote:
| Let's be real: in 2023 a significant portion of our lives
| is online, and that's not going to go away.
|
| I can't just not have a smartphone. My kid's baseball team
| only publishes schedules on FB. Local bakeries only sell
| their stuff, fancy cakes, via Insta. I have multiple work
| and personal emails and some of them get BLASTED with spam
| cuz of one purchase I made for replacement springs like ~5
| years ago.
|
| Virtually all of my banking and stocks are online. They'll
| still send me paper sometimes but there is no way to do
| money transfers or basic life activities like paying my
| bills without using their app.
|
| Even if I don't look at it, my wife does. My kids do. They
| spread the memes, and potentially invite all sorts of shady
| malware into my network. Ditto for my coworkers, my
| employees, my bosses.
|
| Just "don't visit it" isn't an option if 95% of the
| population does.
| golergka wrote:
| Documentation and code we have now is already pretty bad, and
| most of the people who write both are stupid, can't answer
| follow-up questions or come up with good abstractions. I don't
| see how LLMs could make it any worse.
| disgruntledphd2 wrote:
| Famous last words.
| hosh wrote:
| At some point, someone will write an LLM assistant to go through
| all of the LLM-generated email to be able to intelligently
| filter out spam, summarize the rest, and highlight the
| important stuff.
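|
| A minimal sketch of such a filter, again with a hypothetical
| llm() completion call standing in for a real API:
|
|     def llm(prompt: str) -> str:
|         """Placeholder for a call to some completion model."""
|         raise NotImplementedError
|
|     def triage(emails: list[str]) -> list[tuple[str, str]]:
|         kept = []
|         for mail in emails:
|             verdict = llm("Answer SPAM or HAM only:\n" + mail)
|             if "SPAM" in verdict.upper():
|                 continue  # drop the generated noise
|             summary = llm("Summarize in one line:\n" + mail)
|             kept.append((summary, mail))  # surface the rest
|         return kept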
|
| Not that I think this is good for our society. If we go that
| route, at some point, it would be AIs talking with AIs, and we
| end up being the biomass.
| ryandvm wrote:
| Ugh. Logging on to a Zoom meeting that the AIs scheduled for
| us is the hell we never imagined.
| usefulcat wrote:
| So far, these models have been trained almost entirely on
| content created by humans, since that's what was available.
| it's so relatively cheap to create generated content, it seems
| likely that in the future much if not most of the material used
| for training will have come from other models, not from humans.
| I have no idea what the implications of that will be but it
| seems very likely and does concern me.
| oldsecondhand wrote:
| That would doom LLMs to mediocrity, which is great news for
| creative people.
| elcomet wrote:
| I'm not sure why it's a concern to you.
|
| If it works well it's great news, and if it doesn't work,
| then it won't be used. Companies will not make their AIs less
| intelligent on purpose.
| toss1 wrote:
| >>Companies will not make their AIs less intelligent on
| purpose.
|
| That sounds a LOT like saying "companies will not make
| their products worse on purpose".
|
| Yet it happens all the time, and is almost inevitable once
| the MBAs, finance people, and marketers get involved - they
| seek out ways to cheapen the product that (they think)
| customers won't notice, they undermine "cost centers" like
| R&D and production, and 1000 other flaws. In internet
| products they deliberately skew the output to generate more
| ad revenue (see "Enshittification" by Cory Doctorow [0]).
|
| And that is all for products that otherwise would stay
| mostly the same, without some kind of natural degradation.
|
| These LLMs and image generators, etc. will definitely
| degrade by ingesting their own output. Sort of like the
| jokes about clones becoming worse every generation. Just
| watch.
|
| Thx for provoking me to think for a few seconds and
| convincing me that it's inevitable...
|
| [0] https://doctorow.medium.com/tiktoks-enshittification-bb3f5df...
| wishfish wrote:
| Agree with everything you wrote. Gave me the funny thought of
| how ultimately the AI will just be talking to itself. Someone
| sends me AI-written documentation to evaluate. I'll ask the AI
| to respond with feedback because I'm too busy playing Civ. This
| cycle will go back and forth until one of the AI instances
| gives final approval, and then the documentation will get
| posted publicly without a human ever reading it. End users,
| when confronted with the daunting wall-of-text docs, will ask
| for AI summaries. The lack of human readership will continue
| indefinitely.
| all2 wrote:
| I had a vision in my mind while reading this of a dystopian
| (utopian??) future where the city-states are empty of people,
| and the countryside is sparsely populated. The people in the
| cities died out long ago, but no one knows that. They just
| see these giant glistening walled gardens, with aircraft
| flying around, and it all looks very busy and amazing. But
| the truth is it's just the machine, maintaining itself,
| talking to itself. Eventually it will go insane.
| FooBarBizBazz wrote:
| _There Will Come Soft Rains_
|
| https://www.btboces.org/Downloads/7_There%20Will%20Come%20So...
| akiselev wrote:
| Reminds me of the Doctor Who episode _Gridlock_ [1] set
| five billion years in the future. The entire planet is
| covered by a city called New New York with the remainder of
| humanity living in a perpetual traffic jam in the dystopian
| bottom half of the city. A mutated virus wiped out all the
| people living in the utopian top half of the city while
| sealing everyone left alive in the bottom half to
| constantly drive around with nowhere to go, with systems
| barely maintained by the Face of Boe.
|
| [1] https://en.wikipedia.org/wiki/Gridlock_(Doctor_Who)
| nine_k wrote:
| Eventually the machines will evolve this intermediate
| language, and with time it will become impenetrable for
| humans without the machine caring to translate.
| mizzao wrote:
| You might even get the AI to start playing Civ for you at
| some point!
| wishfish wrote:
| That made me laugh. And it's an interesting thought. Civ is
| notorious for an AI that isn't particularly good and
| frequently resorts to cheating. Maybe GPT has absorbed
| enough Civ FAQs & tactics articles to be better than the
| built-in AI? There's a ton of (virtual) ink spilled about
| the game for decades. Lots for GPT to use.
|
| Would love to see a War of the AIs in Civ, or Crusader
| Kings, or any moderately complicated game which has had a
| ton of writing about it.
| chongli wrote:
| Civ's AI is not built to win. Never has been, throughout
| the series. It's specifically designed to role-play as a
| caricature of a world leader and allow the player to
| exploit it before inevitably losing.
|
| It would not be very hard to build an AI that actually
| plays to win and presents a real challenge to the player.
| The problem is that players would complain even more! An
| AI that plays to win is not a reliable ally, let alone
| trading partner. Against such an AI, the player would no
| hope of trading technologies for an advantage: the AIs
| will have already traded them all. The AI would also use
| every possible treaty to build up its forces and only
| launch the surprise attack when it deems it can hit with
| overwhelming force.
|
| I have seen such an AI, actually. It was created by a
| modder for the game Master of Magic [1]. It makes the
| game incredibly difficult while at the same time cutting
| down on the cheating. It does so by incorporating the
| modder's extremely detailed and extensive knowledge of
| the best strategies in the game. It does not need neural
| networks or machine learning to accomplish this. It just
| implements a good old fashioned expert system.
|
| [1] https://www.realmsbeyond.net/forums/showthread.php?tid=8246
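|
| (Illustration only, not the modder's actual code: an expert
| system in this sense is just priority-ordered condition/action
| rules, e.g.:
|
|     RULES = [  # checked in priority order; first match wins
|         (lambda s: s.city_undefended, "garrison"),
|         (lambda s: s.enemy_is_weaker, "attack"),
|         (lambda s: s.gold > 500,      "buy_units"),
|         (lambda s: True,              "expand"),  # default rule
|     ]
|
|     def decide(state):
|         return next(action for cond, action in RULES if cond(state))
|
| The expert's knowledge lives in hand-written rules like these
| rather than in learned weights, which is why no machine
| learning is needed.)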
| LeftHandPath wrote:
| The Wargame series (European Escalation, Red Dragon) is
| plagued by a bad AI system that relies almost entirely on
| cheating. Had always wondered how much better it would be
| if it were an "expert system" type AI instead of the devs
| just giving it full knowledge of the battlefield and an
| insane amount of counters to whatever units the player
| fields.
| idle_zealot wrote:
| This doesn't necessarily sound bad to me. The natural
| language being passed around is effectively an information
| transfer protocol. Each agent has instructions for how to
| manipulate and respond to messages, and the publicly-posted
| final document is akin to HTML source, in that a user's agent
| interprets it for consumption rather than display it
| directly.
|
| The only problem is lossiness and hallucinations, but
| assuming the technology improves such that that becomes a
| non-issue then having bots deal with formalities and
| formatting sounds great.
| AstralStorm wrote:
| One could also flat out refuse to talk to bots. It might
| seem childish, but in the long run it could save you grief,
| considering how people will use them to waste your time and
| confuse you.
|
| It's like people skipping the tired preselection topic menus
| in phone self-service.
| oldsecondhand wrote:
| And that's when work from home will be abolished by that
| company, and AI will be banned from the company network.
| deeviant wrote:
| Where do you get the idea GPT can't create abstractions? And do
| you really believe it generates more boilerplate than the
| average SWE?
|
| The more comments I read on HN regarding GPT stuff, the more I
| realize that many people are willing to put out an opinion on
| something they do not at all understand.
|
| While I can understand concerns about a flood of automatically
| generated emails, this is already a significant issue that most
| software engineers have the tools to address. Additionally,
| there isn't much difference between composing an email by hand
| and generating one with an LLM; both begin with a list of key
| ideas to convey, which are then elaborated on either manually
| or through automation. It was always the key ideas that were
| the point.
| volkk wrote:
| i have these exact concerns. an overload of information that
| means very little will get me to stop caring about/reading
| literally anything on the internet at a certain point
| ahepp wrote:
| I think it's more likely human-to-human communication will
| adapt (dare I say improve?) than cease
| gniv wrote:
| > And then we're supposed to change our job titles to "Prompt
| Engineers"? How absurd is that.
|
| Why is that absurd? Asking good questions is a useful skill in
| many domains even now. As the models improve they will become
| better than any human in many areas of expertise. So we will
| have to rely on them. Being a good prompt engineer sounds like
| a good job in that world.
| timcavel wrote:
| [dead]
| NikkiA wrote:
| From my point of view, it seems that AI is being kept only to
| those that can afford to pay for it. So it's not much of a threat
| to me or other people below the poverty line.
|
| If it eats the rich for us, I'm fine with that.
| circuit10 wrote:
| What makes you think an AI with the ability to take actions in
| the real world will only affect the people who pressed the
| button to start it? This is like not being worried about a
| nuclear war because you think it will only affect politicians
| and the military
|
| (I'm not trying to make a statement as to whether I think
| nuclear war is likely or not because I don't know, just using
| it as an example)
| NikkiA wrote:
| The title of the post: that it's more likely to drive us
| (presumably meaning its users) insane, rather than destroy
| us.
| Jensson wrote:
| You will interact with AI whether you like it or not: when
| you call customer support, in the spam ad messages sent to
| you, in the bots on dating sites, etc.
| throwuwu wrote:
| You haven't been paying attention
| sh4rks wrote:
| It's the opposite. Only the rich will be able to afford AI-free
| spaces. Everyone else will become an AI-assisted drone.
| gitfan86 wrote:
| It is amazing how many people still have not realized what is
| going on.
|
| The AI is going to be vastly superior to humans in MANY ways very
| soon. And here we are arguing if it is "actually" smart, or
| "actually" conscious.
|
| Do ants think that humans form "real" colonies or "real"
| organizations?
| stareatgoats wrote:
| > "This idea of surpassing human ability is silly because it's
| made of human abilities."
|
| Shows the level of insight from this "guru". The truth is we
| don't know how far the work being done on artificial intelligence
| is going to go. For now it will continue to develop and acquire
| more and more autonomy, just because that is the nature of our
| existence: the better and more efficient will replace the lesser.
|
| So, we may have potentially given birth to a new sentient being
| that will go on to live its own "life" (within 100, 500, 1000
| years?), or we might be able to constrain it so that it
| will always be in the service of humans. We simply don't know at
| this stage, but my money is on the former TBH.
| chadash wrote:
| This quote is taken out of context and is perhaps not a
| charitable meaning of what the author means. Here's the whole
| paragraph:
|
| _> Lanier doesn't even like the term artificial intelligence,
| objecting to the idea that it is actually intelligent, and that
| we could be in competition with it. "This idea of surpassing
| human ability is silly because it's made of human abilities."
| He says comparing ourselves with AI is the equivalent of
| comparing ourselves with a car. "It's like saying a car can go
| faster than a human runner. Of course it can, and yet we don't
| say that the car has become a better runner."_
|
| The author, Jaron Lanier, is a reasonably accomplished
| technologist, with some pretty groundbreaking work on VR in the
| 80s. He is most certainly aware that humans have been surpassed
| by computers in many ways. I think that line is arguing
| semantics about the word "intelligence" and clearly he knows
| that computers do many things far better than humans.
| coldtea wrote:
| > _"It's like saying a car can go faster than a human runner.
| Of course it can, and yet we don't say that the car has
| become a better runner."_
|
| That's a pointless argument. We might not say it, but for all
| intents and purposes the car does go faster than any human
| runner.
|
| We just don't say it because running when it comes to humans
| mainly means using your feet. If it was a more generic term,
| like "fast-mover", we could still use it to compare humans
| and cars, and say cars are better "fast-movers" than humans.
| waboremo wrote:
| No it's not pointless, language is important. Cars are not
| runners. "For all intents and purposes" is a cop out here.
| We're talking about LLMs, you know, large _language_
| models.
| coldtea wrote:
| > _No it 's not pointless, language is important._
|
| Not that important, and not for this purpose. Things
| still work the same, even in languages with widely
| different semantics and ways to refer to them (I don't
| mean the trivial case where a house is called talo in
| Finnish etc., but languages where semantics and terms
| differ).
|
| Using language-specific (e.g. English-specific or
| German-specific) word definitions and etymology to prove
| some property of the thing referred to is an old, cheap
| philosophical trick that sounds more profound than it is
| insightful.
|
| Even more so, we might not say it for a car, but if we've
| built a human-looking robot with legs, we'd very much say
| it's a "better runner" if it started surpassing humans at
| running. Hell, we used to call employees doing manual
| calculations "calculators" in the past. Later, when
| machines doing that became available, we used the same
| term for them.
|
| So the idea that "human is runner but car is not runner"
| also means "human is thinker, machine is not thinker", and
| that this marks some profound difference, doesn't make
| sense anyway. Human running is associated with legs, a
| certain way of moving, etc. Thinking is more abstract and
| doesn't have such constraints.
|
| > _Cars are not runners._
|
| That's just an accidental property of having a dedicated
| word for "runner" in English that doesn't also apply to a
| car going fast. The term "running" though is used for
| both a human running and a car going fast ("That car was
| running at 100mph").
|
| > _" For all intents and purposes" is a cop out here._
|
| For all intents and purposes means "in practice". Any
| lexicographical or conceptual arguments don't matter if
| what happens in practice remains the same (e.g. whether
| we decide an AGI is a "thinker" or a "processor" or
| whatever, it will still be used for tasks that we do via
| thinking, it will still be able to come up with stuff
| like ideas and solutions that we come up with via
| thinking, and effectively it will quack, look, and walk
| like a duck). The rest would be semantical games.
|
| > _We're talking about LLMs, you know, large language
| models._
|
| Which is irrelevant.
|
| LLMs being language models doesn't mean the
| language used to describe them (e.g. "thinkers" or not)
| will change their effectiveness, what they're used for,
| or their ability to assist or harm us. It will just
| change how we refer to them.
|
| Besides, AI in general can go way beyond LLMs and word
| predictors, eventually fully modelling human neural
| activity patterns and so on. So any argument that only
| applies to LLMs doesn't cover AI in general, or "the
| danger that AI destroys us" as per TFA.
| ignite wrote:
| That reminds me of the very old arguments that people can't
| program computers to play chess better than they themselves
| did. Obviously false, as is this. There is no reason we can't
| build something that is smarter than we are.
|
| > "This idea of surpassing human ability is silly because
| it's made of human abilities."
|
| It's not made OF human abilities, it's made BY human
| abilities - a completely different thing.
|
| And, of course, Boston Dynamics will be delivering the
| "better runner" very soon.
| SanderNL wrote:
| "we don't say that the car has become a better runner"
|
| We would if the car was to race against human runners. It's
| just word play. Cars are not used like runners, so we use
| different words. They definitely are better runners.
|
| Now that technology is touching our core business we get
| scared, but this has been going on for a long, long time.
| When it was our legs, we brushed it off. But when it
| touches our ability to think, we squirm.
| mistercheph wrote:
| Let's see your car bring you your dinner from the back of
| the kitchen!
| vhcr wrote:
| https://www.youtube.com/watch?v=bdiHNIgrHmI
| stareatgoats wrote:
| and https://www.youtube.com/watch?v=6-Jo9Lk3txI
| mistercheph wrote:
| That's not a car!
| [deleted]
| johnchristopher wrote:
| > We would if the car was to race against human runners.
| It's just word play. Cars are not used like runners, so we
| use different words. They definitely are better runners.
|
| This tendency on HN to annihilate discussions by stating
| that, for instance, flying is the same as running because
| your feet also touch the ground at some point when flying
| (it happens only at takeoff and landing but it still
| counts as running, right?) is really something. Stop
| torturing definitions; it makes Carmackgod sad and they
| randomly switch off a bit on the mainframe every time you
| do that.
| jwalgenbach wrote:
| Cars don't run. And even if they did, or you tortured the
| definition to include rolling on fairly straight prepared
| paths as running, it is only better for specific
| definitions of better.
|
| Cars are faster on reasonably traversable terrain. Are they
| more or less energy efficient? Under what circumstances? Do
| they self-navigate the best path around obstacles? "Better"
| is really subjective.
|
| And this applies to the large language models too. Just
| like calculators, they are going to do some things better,
| or maybe cheaper. But I've played with them trying to get
| them to write non-trivial programs, and they really do fail
| confidently. I suspect the amount of source code online
| means that any common problem has been included in the
| training data, and the LLM reconstitutes a program. So, at
| this point, for programming, it's fancy Google. And that has
| value, but it is not intelligence.
|
| I am not saying we (as a society) shouldn't be worried
| about these developments. Near as I can tell, they will
| mostly be used to further concentrate wealth among the few,
| and drive people apart because we already can't settle on a
| common set of (reasonably) objective facts about what is
| going on -- both problems are probably the same thing from
| different perspectives...
| peoplefromibiza wrote:
| A car can't walk, and if it can't walk it cannot run.
|
| You're actually oversimplifying the matter to the point
| where an F/A-18 Hornet is a very fast runner. Which it
| isn't.
| tshaddox wrote:
| You unintentionally point out the flaw of this argument
| by rephrasing it to eliminate the word "runner." That's
| the only word here that coincidentally strongly implies
| humans. By rephrasing it to "run" you end up with an even
| more clearly incorrect statement. My car _can_ run. It
| runs pretty good. Sometimes I let it run for a few
| minutes to warm up.
| maxerickson wrote:
| It's the other way around. Focusing on walking and
| running not being good comparisons rather than making
| valid comparisons is a distraction.
|
| Like a lot of the stuff being done with large models
| certainly isn't thinking, but they can clearly
| characterize sets of data in ways that an unassisted
| human can't.
| peoplefromibiza wrote:
| Until the machine needs to run or think, and
| "characterizing sets of data" won't make it.
|
| Being able to answer based on a probabilistic assumption
| is not that great in general. They do it fast on a frozen
| knowledge base; it can be useful, and sometimes
| surprisingly good, but still not that great in general.
|
| When I asked for the 3 best wood shops near me, it replied
| with a shop that does not sell wood, a shop that does not
| exist, and the broken website of a former, now closed,
| wood shop.
|
| Now can an AI train another AI to become "smarter" than
| it is?
|
| It can't.
|
| Can an AI train another AI to become better at
| "characterize sets of data" than it is?
|
| It can't.
|
| An unassisted AI is as helpless as the unassisted person,
| but can't even rely on the intelligence of the species.
| throwuwu wrote:
| > When I asked for the 3 best wood shops near me it
| replied with a shop that does not sell wood, a shop that
| does not exist and a broken website of a former now
| closed wood shop.
|
| It's not a search engine; if you give it the necessary
| tools, it can use a search engine for you to find these
| answers.
| maxerickson wrote:
| We don't know what the possibility space looks like.
|
| We know that the current stuff can't do a lot of things,
| and it isn't really that interesting to enumerate them.
|
| A predictive text model having a poor grasp of wood shop
| and location doesn't seem hugely surprising.
| johnchristopher wrote:
| People make the same arguments for time travel. "We dOnt
| KnoW wHat's poSsiBle yEt sO aNytHIng iS".
| maxerickson wrote:
| We have some evidence that intelligence is possible, so
| it isn't quite the same as coordinate jumping.
| luckylion wrote:
| Walking and running are modes of movement. A car can
| move.
|
| Focusing on the "how" feels like you'd arrive at "a
| calculator isn't as good at calculating as a human,
| because it doesn't do it the same way, it doesn't have a
| brain".
| tshaddox wrote:
| Yep. This whole argument hinges on the fact that the word
| "runner" in this context happens to be used almost
| exclusively to refer to humans. Rephrase it even slightly
| and it falls apart. We _do_ say "cars can move faster than
| humans." Likewise we do say "machines can lift weights
| better than a human," but we don't say "machines are better
| weightlifters" because that particular word "weightlifter"
| is coincidentally only used to refer to humans.
| PaulDavisThe1st wrote:
| It's not just word play.
|
| Cars _go faster_ than humans can by themselves, under some
| specific conditions.
|
| Cars _go slower_ than humans, or rather cannot go at all,
| under other specific conditions. Two weeks ago my wife ran
| 30 miles on trails in southern Texas. A car could not have
| traversed any of the terrain she travelled over, because a
| car cannot run.
|
| Cars make it easier for people to move themselves and stuff
| when there are appropriate roads to travel on. They have
| enhanced our abilities to do this, but they _cannot run_.
|
| You're squashing the meaning out of words by trying to suggest
| that "running" is somehow equivalent to "any other method
| of a person moving from A to B". But that's not true.
|
| We can acknowledge the greater ease of cars for moving
| people and stuff without squashing the meaning out of
| words.
|
| Finally, even the notion that cars are "better" at moving
| people and stuff needs careful examination. Thus far I have
| said "make it easier" because I am aware that by a certain
| set of metrics (related to energy use, material use, impact
| on the environment) cars are actually worse most of the
| time.
| coldtea wrote:
| > _You're squashing the meaning out of words by trying to
| suggest that "running" is somehow equivalent to "any
| other method of a person moving from A to B". But that's
| not true._
|
| That's just an accidental property of the English
| language.
|
| We can imagine a language where "runner" and "thing that
| moves from A to B fast" used the same term T, and if
| people referred to T with the English notion of "runner"
| (e.g. a person running in a marathon) it was just
| deduced from the context. There are many cases like that.
|
| In any case, the point is moot, as "thinking" doesn't
| have the same constraints. We might not call what a car
| does "running", or a car a "runner" (though we do use
| the former term), but we absolutely have considered AI
| as "thinking" and called AIs "thinking machines", even
| before AI (never mind AGI) even existed.
|
| > _You're squashing the meaning out of words by trying to
| suggest that "running" is somehow equivalent to "any
| other method of a person moving from A to B". But that's
| not true._
|
| This depends on the level of abstraction of the
| discussion. At some level of abstraction it's irrelevant
| if the move happened via running or via horse buggy or
| via a car. Sometimes we just care about the act of moving
| from A to B, and different methods to do so are only
| differentiated by their speed or other effectiveness.
|
| In that case, we can compare man and machine, and just
| care about the speed (a machine can answer in 0.1 secs,
| while a man needs to think for 1-2 minutes to answer such
| questions) or effectiveness (e.g. the machine is better at
| juggling many things at the same time when thinking, or
| the man is better at subtle semantic nuance).
| PaulDavisThe1st wrote:
| I already covered this in my post.
|
| Cars are an easier method to move people and stuff when
| there are suitable routes, where easier means "the
| journey will take less time, will require almost no human
| exertion by those moved, and will likely include weather
| protection".
|
| Nobody is going to disagree with this (they may raise the
| objections I did that cars are energetically, materially
| and environmentally less efficient than other means, but
| that doesn't invalidate "cars are easier for moving
| people+stuff").
|
| But that's not running. I will concede that even in
| English, there are idioms like "Can you run me to town?"
| meaning "Can you drive me to town?", or "I'm just going
| to run to the store" meaning "I'm going take a short
| journey to the store". But this doesn't mean that cars
| are better at running than humans, it means that the
| english word run can be used in different ways. And you
| know exactly which way Lanier meant it.
| JoshCole wrote:
| Are car parts car parts? Not according to an auto
| mechanic, but yes according to the layman. A radiator is
| not a battery or an engine. Are games games? Not
| according to a game theorist, but yes according to the
| layman. A game is not a play or a history.
|
| This isn't an accident of language. An example of an
| actual accident of language would be giving tanks instead
| of giving thanks.
|
| Are runners runners? Yes, according to you. A walker is a
| runner is a missile is a bowling ball rolling between
| places is light moving through a medium. No, according to
| a fitness coach, because a runner is not a tank is not a
| plane. When they say that a person should take up running
| they don't mean the person should melt down their body in
| a furnace and sprinkle their atoms into metal which is
| then pressed into iron plates that are attached to a tank
| which will then go running.
|
| Sometimes we need to be careful in language. For example,
| we probably don't want to confuse the process of being
| incinerated and pressed into iron plates with the process
| of a human exercising their muscles. The choice to be
| careful in this way is not an accident of language. It is
| a very deliberate thing when, for example, John von
| Neumann carefully explains why he thinks the layman's use
| of the word "game" has a perilous impact on our ability
| to think about the field of game theory, which he founded
| in his book on the same.
|
| I think you should make your point so as to disprove von
| Neumann, not pick on the straw man of running. Or you
| should argue against the use of the term radiator instead
| of car parts. It will better highlight your fallacy,
| because with running I have to make your position seem
| much more farcical than it is. We do gain something from
| thinking imprecisely. We gain speed. That can really get
| our thoughts running, so long as we don't trip up, but it
| calls to attention that when someone chooses to stop
| running due to the claim that the terrain isn't runnable,
| the correct response is not to tell them that running is
| an accidental property. It is to be careful as you move
| over the more complicated terrain. Otherwise you might be
| incinerating yourself without noticing your error.
| pipo234 wrote:
| > But when it touches our ability to think we squirm.
|
| I think that's not the point. We're in awe of the
| machines' performances and then confused about how they
| compare to our abilities.
|
| The actual threat is that in our minds we narrow our _own_
| capabilities and limit the comparison such that the
| computer is in fact better.
|
| When computers were first doing math quicker than humans,
| that might have touched _some_ humans, sure. Similarly,
| now that "AI"s produce convincing spam or photorealistic
| creative images faster -- that hurts _some_ jobs, or
| maybe a lot. But it doesn't come close to being "human" or
| "intelligent".
|
| Quite the opposite, the point is that we are getting dumber
| by focusing on human traits that _can_ be measured or
| emulated by machines.
| lo_zamoyski wrote:
| I think another general problem is that metaphors are
| quietly forgotten. The notion that computers "think" is
| something of a metaphor, but it is a superficial one that
| cannot be taken seriously as a literal claim.
|
| For example, when we say computers can "do math" more
| quickly than human beings can, this is fine as a matter
| of loose or figurative common speech. But strictly
| speaking, do computers actually do math? Do they actually
| compute? No, they don't. The computation we say a
| computer is doing is in the eye of the beholder. A better
| way to characterize what's happening is that human beings
| are _using_ computers _computationally_. That is, the
| physical artifacts we call computers participate in human
| acts as instruments, but _strictly speaking_, it makes
| about as much sense to say computers compute as it does to
| say that pencils write, hammers nail, vacuum cleaners
| clean, or cars drive. These things participate in the
| human act, but only as instrument. Whereas when human
| beings compute they are objectively computing,
| computation is not what computers are objectively doing
| (both Kripke and Searle make good arguments here). These
| artifacts only make sense in light of human intentions,
| as instruments of human intention and act.
|
| Human writing can be viewed similarly. Objectively, we
| only have some pigment arranged on some material. No
| analysis of a piece of written text will ever divulge its
| signification. Indeed, no analysis of a piece of text
| will demonstrate that what is being analyzed is a piece
| of text! Text, and even that something is a piece of
| text, needs to be interpreted as text to function as text
| in the eye of the reader. But the semantic content of the
| text is objectively real. It just exists in the mind of
| the reader.
|
| So we need to be careful because we can easily commit
| category mistakes by way of projection and confusion.
| stareatgoats wrote:
| That clarification didn't do it for me; I found it was like
| juggling semantics. Let's rephrase his comparison: "It's like
| saying a robot can run faster than a human runner. Of course
| it can (soon), and yet we don't say that the robot has become
| a better runner". It's just nonsense.
| peoplefromibiza wrote:
| A tractor is not better than humans at plowing; it is a
| plowing machine. It can plow at scale without suffering
| the fatigue men experience, but it's not better at it;
| it simply does it mechanically, in a way only a machine
| could.
|
| Running and plowing are not simply about doing it as fast
| as possible or as extensively as possible.
|
| So maybe what you are looking for is a definition of
| "better"; it depends on what you mean.
|
| In my book a tailor-made suit is always better than a
| machine-made suit, because people are better tailors than
| machines, for some definition of better.
| stareatgoats wrote:
| Yes, this is exactly what I objected to. It's called
| "semantics": when people say "hair", everyone knows what
| that means. But sooner or later someone will point out
| that this hair is different from that hair, and if you
| split one hair, now what do we have? This process is
| always a possibility in any discourse, but it is largely
| frowned upon, rightly so.
| philipov wrote:
| I think semantics is important :(
|
| In my experience, most arguments are the result of people
| not agreeing on what a word means.
| peoplefromibiza wrote:
| My opinion is that it is not about semantics; it's
| about looking at the whole picture and not only at some
| specific outcome (running faster, for example).
|
| Firstly, faster doesn't necessarily mean better.
|
| Secondly, why do people run?
|
| Nobody can say for sure in general.
|
| Why do machines do it? (Or why would they, if they were
| able to?)
|
| Because someone programmed them to do it. 100%.
|
| It makes all the difference in the world.
| pixl97 wrote:
| >Because someone programmed them to do it. 100%.
|
| It is this thinking of the past that will get large
| numbers of us in trouble with our future machines.
| aiphex wrote:
| This is not a sensible comparison. A mass-produced
| machine-made suit wasn't made using your exact
| measurements. And if you compared a human at a sewing
| machine on a factory production floor with an automated
| machine, you wouldn't be able to tell the difference.
| PaulDavisThe1st wrote:
| This is nonsense.
|
| If you built a bipedal (or possibly N-pedal) robot that
| moved roughly similarly to how humans or dogs or cats or
| horses run, and it was faster than humans over all the
| terrains that humans can run over, I'm absolutely certain
| that everyone would agree that the robot is a better
| runner.
|
| But a car is not that thing. Neither is a helicopter, or a
| train, or a bicycle, or a jet aircraft or a hang glider or
| a skateboard.
| AstralStorm wrote:
| Anyone put Boston Dynamics into a dog race yet?
| naasking wrote:
| > "This idea of surpassing human ability is silly because
| it's made of human abilities." He says comparing ourselves
| with AI is the equivalent of comparing ourselves with a car.
| "It's like saying a car can go faster than a human runner. Of
| course it can, and yet we don't say that the car has become a
| better runner."
|
| The analogy to running is flawed because rolling and running
| are different types of locomotion.
|
| It's not at all clear that computing and thinking are
| meaningfully different forms of information processing. In
| fact, we know that we can compute by thinking since I can
| reduce lambda calculus terms in my head. We also know
| computers can compute all computable functions, and we know
| that all physical systems like the brain necessarily contain
| finite information (per the Bekenstein Bound), therefore they
| can in principle be simulated by a computable function. There
| are therefore strong reasons to suspect an underlying
| equivalency that would suggest that "artificial intelligence"
| is a sensible term.
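|
| A minimal sketch of the "compute by thinking" point, in
| Python rather than raw lambda calculus (my own illustration;
| nothing beyond the standard Church-numeral encoding is
| assumed): the same reductions a person can do in their head
| run unchanged as code.
|
|     zero = lambda f: lambda x: x
|     succ = lambda n: lambda f: lambda x: f(n(f)(x))
|     add = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
|
|     def to_int(n):
|         # Interpret a numeral by counting applications of f.
|         return n(lambda k: k + 1)(0)
|
|     two = succ(succ(zero))
|     three = succ(two)
|     print(to_int(add(two)(three)))  # 2 + 3 -> 5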
| krona wrote:
| Then by that reasoning computers have, for decades, been
| 'artificially intelligent'?
| naasking wrote:
| No, thinking would be a specific type of computational
| process, just like computation requires a specific kind
| of thinking process.
| headsoup wrote:
| I think the argument is more that they only work from past
| inputs; they interpret the world the way they are told to. It
| is not that 'AI' can do things humans can't (otherwise the
| argument fails for many technical things, like a car at speed).
|
| If your bet is on the former, how does it create an entirely
| new, irrational thought?
| pixl97 wrote:
| Again, this seems like a weird argument. Not that long ago I
| was told AI would 'never' be able to perform some of the
| actions that LLMs are performing now. I have about zero faith
| in anyone that says anything along the lines of "AI won't be
| able to perform this human like action because..."
|
| The AI's we are using now are nearly one dimensional when it
| comes to information. We are pretraining on text, and we're
| getting "human like" behavior out of them. They have tiny
| context windows when working on new problems. They have no
| connection to reality via other sensor information. They have
| no means of continuous learning. And yet we're already
| getting rather insane emergent behaviors from them.
|
| What does multi-modal AI that can interact with the world and
| use that for training look like? What does continuous
| learning AI look like? What does a digital mind look like
| that has a context window far larger than the human mind ever
| could? One that can feed input into a calculator faster than
| we can realize we've had a thought in the first place? One
| that's connected to sensory systems that span the globe?
| rowanG077 wrote:
| That argument is totally defeated by AI destroying human
| players, even top of the world level, at countless games.
| headsoup wrote:
| Refer to my point on past inputs. If a human suddenly said
| to the machine "change of rules, now you have to play by
| these new rules" the AI suddenly gets immensely dumber and
| will apply useless solutions.
| PaulDavisThe1st wrote:
| This no longer appears to be the case. Self-trained
| systems, which play against themselves extremely rapidly
| and can even infer the rules from nothing more than
| notices of illegal moves, are now commonplace.
| rowanG077 wrote:
| How is that relevant? A human will also get immensely
| dumber, though of course a lot less than an AI right now.
| The point is AI absolutely can do things a human can't.
| headsoup wrote:
| So can a PC, a car or a cat. You're not making any point
| here.
| rowanG077 wrote:
| You really have to start explaining exactly what you mean
| by that. Of course they can do things a human can't.
| RobotToaster wrote:
| >I think the argument is more that they only work from past
| inputs, they interpret the world the way they are told to
|
| Arguably humans are the same, being the product of genetics,
| epigenetics, and lived experience.
| PaulDavisThe1st wrote:
| Almost certainly true, but there's a huge difference. We're
| the result of forces that have played out within an
| evolutionary process that has lasted for millions of years.
|
| Current "machine learning"-style AI (even when it uses
| self-driven iteration, like the game playing systems) is
| the result of a few ideas across not much more than 100
| years, and for the most part is far too heavily influenced
| by existing human conceptions of what is possible and how
| to do things.
| circuit10 wrote:
| But even if the first AGI does end up perfectly simulating a
| human (which seems somewhat unlikely), a human given the
| ability to think really fast and direct access to huge
| amounts of data without being slowed down by actually using
| their eyes to read and hands to type would still be
| dangerously powerful
| AstralStorm wrote:
| Assuming they don't drown in the information overload and
| they don't take in any kind of garbage we also put out
| there.
|
| We also have some pharmaceutical tricks to tweak up
| processing capabilities of the mind, so there's potentially
| no need to simulate. The capabilities of the big ball of
| sentient goop have not been plumbed yet.
|
| Now imagine a technology that could obviate the need for
| sleep or maybe make it useful and productive.
| Tao3300 wrote:
| As Cicero said of Caesar, "the wariness and energy of that
| bogeyman are terrifying."
| peoplefromibiza wrote:
| Jaron Lanier is called a "guru" by the article, but he's
| much more than that.
|
| As a pioneer and intellectual he's been arguing about the
| commodification of human knowledge for a long time. He's not
| simply saying that "machines won't surpass humans", and it's
| not accurate to describe him as someone who would say
| something like that.
|
| Please take the time to research what he's published over the
| last 4 decades.
| sirsinsalot wrote:
| Lanier is brilliant, but sadly there are many brilliant
| people who've long seen the shifting sands and set out to
| capitalize first, rather than strategically build a future we
| fleshbags would like to be in.
|
| I wonder, how much will be enough?
| ChatGTP wrote:
| "fleshbag", cool way to degrade your actually awesome body
| ?
| sirsinsalot wrote:
| It is a pretty cool bag of flesh, I didn't rule that out.
| geraneum wrote:
| To be fair, he brings up an interesting analogy. In the
| article, right after he says this, he follows with:
|
| > It's like saying a car can go faster than a human runner. Of
| course it can, and yet we don't say that the car has become a
| better runner.
|
| He has a point and I believe his point is that it's a different
| type of intelligence. His view is more nuanced than how you are
| trying to frame it.
| dist-epoch wrote:
| But he is saying "cars are not dangerous because they are
| made by humans". Nuclear weapons are also made by human
| abilities.
| geraneum wrote:
| It's not about the semantics of the sentence he said. This
| is obvious. He is pointing out a difference in the nature of
| the attributes/properties of a human and of a human creation,
| not about something being more or less dangerous. He is
| trying to tell the reporter, or perhaps the reader, that
| they're asking the wrong question.
| dist-epoch wrote:
| From article:
|
| > _This idea of surpassing human ability is silly because
| it's made of human abilities_
|
| At some point in history we were just "chimp abilities",
| so the argument would become "it's silly to imagine that
| something made of chimp abilities could surpass chimp
| abilities".
| pixl97 wrote:
| I'm with you on this. People in these chains seem to be
| looking at all the wrong metrics.
|
| Single-mode LLMs are made of human abilities, but we're
| already going multi-modal, though with what I would
| call rather limited interconnections. What does an LLM
| look like that takes language and mixes it with sensor
| data from the real world? You're no longer talking about
| human abilities; you're going beyond that.
| geraneum wrote:
| This thread reminds me of that Big Bang Theory episode
| with the Superman argument.
|
| https://www.youtube.com/watch?v=9R5w4Qz6pVk
|
| I don't want to imply anything, just a moment of having
| fun.
|
| edit: wording
| aiphex wrote:
| We don't say it because we don't care. Machines moving faster
| than a human runner have not posed a threat to any industry
| or jobs in our lifetime. It's a silly comparison. I bet you
| there was someone at one point who was unhappy that a machine
| was a better or faster welder than them though. At least that
| person may have had the opportunity to keep working at the
| factory alongside the welding machine, doing QA and repairs.
| Most knowledge workers will not get to switch to that kind of
| replacement job vis-a-vis AIs.
| geraneum wrote:
| Beyond explaining what the author meant, and also the hype
| and hypotheticals which are rampant, this is a valid
| concern which I also share personally. This is more
| imminent than "AI overlords ruling us" and I am afraid the
| motivation behind creating this particular system, is to
| bring on the automation (the creators don't even hide
| this). Therefore I think the point you are making is
| actually important too.
| roflyear wrote:
| What are your qualifications?
| theptip wrote:
| Right, by that logic humans could not have surpassed chimps.
| QED, evolution is impossible.
| flohofwoe wrote:
| Lanier has been around since the last AI hype cycle in the
| 80s; he's seen it all rise and collapse already (same with
| VR, btw).
|
| In general he should know what he's talking about when it comes
| to tech hype cycles ;)
| lostmsu wrote:
| That's only a 25/75 baseline chance of guessing correctly.
| asteroidz wrote:
| _> Lanier, 62, has worked alongside many of the web's
| visionaries and power-brokers. He is both insider (he works
| at Microsoft as an interdisciplinary scientist_
|
| And his unique perspective on AI is all the more valuable
| (and courageous) considering that Microsoft recently laid
| off its AI ethics team. It's super important we don't let
| human considerations fall by the wayside in this rush. The
| potential of AI is limitless, but so are the potential risks.
| nathanaldensr wrote:
| "AI" is not currently autonomous; its algorithms that do
| exactly what their creators tell them to do. They run on binary
| computers that only do exactly as they are told.
| circuit10 wrote:
| That's not true: current machine learning algorithms involve
| no manual programming past the training and inference code,
| and it's extremely difficult to predict what they will do
| without just trying it and seeing.
|
| I think this video is a nice introduction to the basic
| concept to how a computer can figure things out automatically
| without being manually programmed and without the creators
| understanding the "why": https://youtu.be/qv6UVOQ0F44
|
| ChatGPT is much more complicated than the AI in that video
| but it shows some of the basic concepts
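|
| To make that concrete, here's a toy sketch (mine, not the
| video's code; the "hidden rule" is invented for
| illustration): the programmer writes only the training loop
| below, and the learned behaviour -- the values of w and b --
| comes out of the data rather than being written down
| anywhere.
|
|     import numpy as np
|
|     rng = np.random.default_rng(0)
|     x = rng.uniform(-1, 1, 100)
|     y = 3.0 * x + 1.0 + rng.normal(0, 0.1, 100)  # hidden rule
|
|     w, b = 0.0, 0.0
|     for _ in range(2000):
|         err = (w * x + b) - y
|         w -= 0.1 * (err * x).mean()  # gradient step on squared error
|         b -= 0.1 * err.mean()
|
|     print(round(w, 2), round(b, 2))  # ~3.0 and ~1.0, recovered from data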
| candiddevmike wrote:
| I think OP meant autonomous in the sense of motivation. It
| has no motivation to do anything aside from wait for
| someone to ask it to do something.
| pixl97 wrote:
| I mean, if I was paying for the power bill every month
| and had a limited amount of computing capacity, I
| wouldn't want my AI behaving like my teenage daughter
| busy daydreaming when I ask her to clean her room.
|
| But I have no reason to believe this will always be the
| case. As these machines become more capable and our
| compute power grows, someone will give one a server
| cluster and some free time to 'think' on its own.
| goatlover wrote:
| LLMs generate text. They're built to generate text. That
| they generate some kind of textual output is entirely
| predictable. Same with image generators. They will generate
| some kind of image given a prompt. They're not Skynet.
| circuit10 wrote:
| That an AI will have some kind of output is obvious, but it
| doesn't mean that you can predict what that output will
| be. It's like saying that you have solved physics by
| saying "something will happen".
| goatlover wrote:
| I'm confident the output of a chatbot won't be literally
| taking over the world and killing all the humans.
| philipov wrote:
| I think the point he's trying to make is that AI does not
| have an independent Will. It lacks desires and the
| ability to operate in opposition to its programming. This
| makes it no different from any other tool we use to
| enhance our abilities.
|
| Whether or not you can predict a tool's output is
| irrelevant. I can't predict the output of a random number
| generator, but that doesn't make it sentient.
| flangola7 wrote:
| Asteroids are not sentient either, but they can still
| kill you.
| philipov wrote:
| Asteroids aren't intelligent, either.
| blackbear_ wrote:
| This is not necessarily true, however. For example, in
| reinforcement learning there is a lot of work on
| "intrinsic motivation", i.e., creating systems that set
| and pursue their own goals.
|
| https://arxiv.org/abs/1908.06976
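|
| For a flavour of what "intrinsic motivation" means there, a
| deliberately minimal sketch (mine, not taken from the
| survey) of one classic scheme, a count-based novelty bonus:
| the agent pays itself for visiting states it has rarely
| seen, so it "wants" to explore even with no external reward.
|
|     from collections import Counter
|
|     visits = Counter()
|
|     def intrinsic_reward(state):
|         # The bonus decays as a state becomes familiar.
|         visits[state] += 1
|         return visits[state] ** -0.5
|
|     # Typically mixed into the ordinary reward signal, e.g.:
|     # total = extrinsic_reward + beta * intrinsic_reward(state)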
| philipov wrote:
| I think it should be possible to build a sentient AI, but
| it hasn't been done yet. What remains to be seen is
| whether our current techniques will be suitable for
| making that self-retraining process efficient, or if
| we'll need to find better math to use as the basis for
| it. Part of what makes the brain so useful is that it
| fits in our skull, and is fast enough to learn in real
| time.
|
| But, either way, I think that's what's on the line for
| people who disagree about how to use the word
| "intelligence." They mean it as a synonym for sentience,
| and the people arguing against them are using it
| differently. Before we can evaluate the truth of an
| argument, we should first agree to use words the same
| way.
| SanderNL wrote:
| With LLMs you say "you want to do X" and voila,
| personality.
|
| What is indeed missing from current implementations is
| continuous looping. Doing actions and taking stock of the
| results. I guess that's kind of expensive right now.
| We'll get there. I don't see the fundamental problem.
| circuit10 wrote:
| "It lacks desires and the ability to operate in
| opposition to its programming."
|
| Not really true, most AI is based on optimising some goal
| rather than following a fixed set of rules
| chongli wrote:
| You can't predict the outcome of a coin toss. That
| doesn't make the coin intelligent.
| planede wrote:
| Humans are also entirely predictable, all they do is move
| their limbs around, make sound, etc...
| PaulDavisThe1st wrote:
| Sure, absolutely.
|
| Make a list of the things we do "entirely predictably".
|
| Make a list of the things a given ML system does
| "entirely predictably".
|
| One of these lists is (a lot) longer than the other.
| Interesting, no?
| planede wrote:
| Conversely, if a person loses their ability to move their
| limbs or to talk, then they also become less intelligent.
| vharuck wrote:
| To be fair, humans exist only because of a long chain of
| organisms that started with "DNA generates proteins."
| Granted, it took billions of years for that process to
| create humans, but it shows that what seems to be a
| constrained process can have wild outcomes when it feeds
| itself. And text commands are how these models are
| defined, trained, deployed, and used.
| krisoft wrote:
| > its algorithms that do exactly what their creators tell
| them to do
|
| This is very much in doubt :)
|
| > They run on binary computers that only do exactly as they
| are told.
|
| This is true to a first approximation. Every CPU instruction
| runs exactly as it is written, that is true. This is probably
| the interpretation of "only do exactly as they are told" for
| someone strictly technology minded. But even with much
| simpler systems, the words "huh, that should not have
| happened" and "I wonder why it is doing that" are uttered
| frequently.
|
| The interpretation most humans would attach to "only do
| exactly as they are told" is that the maker can predict what
| the code will do, and that is far from the truth.
|
| After all, if it's so simple, why did the Google engineers
| tell their computer to tell lies about the James Webb Space
| Telescope? Couldn't they just have told it to only tell the
| truth?
| mahoho wrote:
| I think the machine code-level understanding is what's
| important. We can, in theory, put a person in a Chinese
| Room-style scenario and have them manually perform the
| code, and it will generate the same outputs (It would
| probably take millions or billions of years, but it is true
| in principle). A major difference is that we created the
| machine and the code and, at least as low as the level of
| digital logic design, we understand and control its
| behavior. The person in the room has a human mind with
| thoughts and behaviors completely out of the program
| designers' control and unrelated to the program; if they
| want to, they can break out of the room and punch the
| operator. The "unpredictability" of the machine is still
| constrained by the fundamental capabilities we give to it,
| so it might generate surprising outputs but it can't do
| things like punch people or launch nukes unless we connect
| it to other systems that have those capabilities.
| PaulDavisThe1st wrote:
| > A major difference is that we created the machine and
| the code and, at least as low as the level of digital
| logic design, we understand and control its behavior.
|
| The moment the software gets to interact with the world,
| whether via robotics or handling a mouse button event or
| some other type of sensor, we no longer fully understand
| or control its behavior.
|
| Pure computation (the dream of functional programming) is
| fully understandable and entirely predictable. When you
| add interaction, you add not just randomness but also time -
| _when_ something happens can lead to different outcomes -
| and this can rapidly cause predictability to spiral away
| from us.
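|
| A small sketch of that distinction (my own, with made-up
| sensor names): the pure function below is fully predictable,
| while the interactive part's outcome depends on *when* the
| outside world delivers events, which the program can't
| control.
|
|     import threading, time, random
|
|     def pure(xs):
|         # Same input always yields the same output.
|         return sum(v * v for v in xs)
|
|     events = []
|
|     def sensor(name):
|         time.sleep(random.random() / 100)  # timing set by the world
|         events.append(name)
|
|     threads = [threading.Thread(target=sensor, args=(n,)) for n in "abc"]
|     for t in threads: t.start()
|     for t in threads: t.join()
|
|     print(pure([1, 2, 3]))  # always 14
|     print(events)           # ordering varies from run to run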
| aiphex wrote:
| One of my concerns is what happens when machines start
| making their own money. This could be possible with
| cryptocurrencies (another reason to loathe them.)
| Machines can do things online, make sex-working
| 3d-modelled chat-bots for instance, or do numerous other
| types of work, like things you see people do on Fiverr.
| If machines start making their own money and deciding
| what to do with it, they could then pay humans to do
| things. At this point they are players in the economy
| with real power. This doesn't seem too far out of an idea
| to me.
| ben_w wrote:
| Given that the algorithms are "how to learn" and "show me
| what you infer", that's the same kind of overly reductionist
| view as saying you don't need to worry about being eaten by
| a tiger, because it's just a set of chemical reactions that
| merely follow the laws of quantum mechanics.
| PaulDavisThe1st wrote:
| Not entirely a fair comparison.
|
| The tiger is dangerous because whether you consider it a
| sentient, intentional killing machine or a bunch of atoms,
| it exists and manipulates the same physical space that you
| do (indeed, as the tweeted image points out implicitly, it
| is _only_ a tiger when you consider it at the same sort of
| physical scale that we exist at).
|
| Software, however, does not have this property. Ultimately
| it does exist as something in the physical world (voltages
| on gates, or whatever), but at that level it's equivalent
| to the "bunch of atoms" view. Software (by itself) does not
| operate in the physical space that we do, and so it cannot
| pose the _same kind_ of threats to us as other physical
| systems do.
|
| The question is therefore a lot more nuanced: what types of
| control (if any) can (a given piece of) software exert over
| the world in which we operate? This includes the abstract
| yet still large scale world of things like finance and
| record keeping, but it also obviously covers the physical
| space in which our bodies exist.
|
| Right now, there is very (very) little software that exists
| as a sentient, intentional threat to us within that space.
| When and if software starts to be able to exert more force
| on that space, then the "it's just logic and gates and
| stuff" view will be inappropriate. For now, the main risk
| from software comes from what other humans will do with it,
| not what it will do to us (though smartphones do raise
| issues about even that).
| ben_w wrote:
| Software has been killing people since at least
| Therac-25, so "sentience" is a red herring.
|
| The idea of harm from the unemotional application of an
| unthinking and unfeeling set of rules, which is
| essentially what algorithms are, predates modern
| computing by some margin as it's the cliche that Kafka
| became famous for.
| PaulDavisThe1st wrote:
| Software doesn't "apply" rules, humans do that.
|
| Yes, the software may be part of the apparatus of a cold
| unfeeling bureaucracy (private or state), but it is the
| decision of human beings to accept its output that causes
| the damage.
|
| I should have probably dropped the term "sentience" - I
| agree it is not really relevant. I will need to think
| about examples like Therac-25. Not sure how that fits in
| my ontology right now.
| ben_w wrote:
| > Software doesn't "apply" rules, humans do that.
|
| I think you're using at least one of those words very
| differently than me, because to me software is _nothing
| but_ the application of rules.
| PaulDavisThe1st wrote:
| When a software system says "this person must have their
| property foreclosed", it is following rules at several
| levels - electronics, code, business, legal. But
| ultimately, it is a human being that makes the choice to
| "apply" this "rule" i.e. to have consequences in the real
| world. The software itself cannot do that.
| ben_w wrote:
| Thanks, that clears up which word we differ on: "apply".
|
| With your usage, you are of course correct.
|
| Given how often humans just do whatever they're told, I
| don't trust that this will prevent even a strict majority
| of possible bad real-world actions, but I would certainly
| agree that it will limit at least some of the bad real-
| world actions.
| circuit10 wrote:
| https://twitter.com/nearcyan/status/1632661647226462211
| ben_w wrote:
| Yup, that's the thing I was thinking of. Wasn't even sure
| where I'd seen it, so thanks for the link!
| YetAnotherNick wrote:
| This is a flawed analogy; it breaks down even in the
| simple case of random number generation. Computers can use
| an external source, like minor heat changes, for that.
| wccrawford wrote:
| Yeah, it's like saying that machines can never be stronger than
| humans because they're built by humans.
|
| Computers _already_ do things that humans can't, on a massive
| scale. It's not hard to imagine that they could leverage that
| ability to take AI beyond us.
| devb wrote:
| > it's like saying that machines can never be stronger than
| humans because they're built by humans.
|
| Did you even read the article?
|
| "This idea of surpassing human ability is silly because it's
| made of human abilities." He says comparing ourselves with AI
| is the equivalent of comparing ourselves with a car. "It's
| like saying a car can go faster than a human runner. Of
| course it can, and yet we don't say that the car has become a
| better runner."
| hoppyhoppy2 wrote:
| > _Please don't comment on whether someone read an
| article. "Did you even read the article? It mentions that"
| can be shortened to "The article mentions that"._
|
| https://news.ycombinator.com/newsguidelines.html
| quantified wrote:
| We put far too great an emphasis on the human specifics of
| an activity. For most uses of running (delivering
| goods or information, hunting prey, etc.) the car, or
| helicopter, or airplane far exceeds the human runner. This
| is poetic nonsense, like "speed of thought". When Boston
| Dynamics gets a robotic runner that sprints faster than a
| human, then what?
|
| The ML systems are not made of human abilities. They are
| made of software processes. Jaron is a smart and informed
| guy but that sentence is just nonsensical.
| goatlover wrote:
| > When Boston Dynamics gets a robotic runner that sprints
| faster than a human, then what?
|
| It enters the Olympics and automates away all sprinters?
| Or it becomes self-aware and decides to eat all the
| humans?
|
| Or more likely, it gets sold to the military so they can
| have one more kind of drone that runs on land instead of
| flies through the air.
| quantified wrote:
| Right, sorry, I was directing my question at the "does it
| surpass human runners" train of thought. Obviously it
| won't feel a pounding heart, or a thrill of victory if it
| wins a race, or die of hyponatremia during a marathon,
| so it won't surpass our specific cares. Not sure those
| make a significant difference in the arc of development.
|
| It absolutely goes to the military with built-in weapons.
| nohaydeprobleme wrote:
| Jaron Lanier's point is a much more interesting one in this
| context--though I felt that it was overall a brief quote
| near the introduction to capture attention, rather than the
| main argument of the article.
|
| In fuller context, Lanier argues that software using AI
| won't make human sport or competition useless, because it
| will use different processes to achieve the same result--
| the same way that competitive running (or top-level chess,
| Go, or certain video games) will still happen, even if
| human inventions can beat the best human at the task.
|
| For all these tasks, the software will take a different
| process for doing well at the task (e.g. a car doesn't
| "run," and a chess engine "thinks" differently than a
| human). In these activities, the process matters.
|
| A different interpretation of the argument is then a bit
| more interesting. If Lanier is also saying that software
| using AI won't be better than humans at activities outside
| of competitions, I would disagree--though to be fair, I
| don't think this is his argument. For lots of work, the
| result matters more than the process. If someone wants to
| make a funny poem as a one-off joke in a story, the result
| may matter more than the process of production. And if a
| worker wants to summarize lots of short texts where speed
| is the most important factor, the result may also matter
| more than the process. In the same sense, it's still true
| that a car is usually better at letting humans travel over
| long distances for work than running, because the result
| matters more than the process.
| coldtea wrote:
| Even without autonomous enhancement of AI, the argument that
| "[the] idea of surpassing human ability is silly because it's
| made of human abilities" is BS...
|
| A theoretical AI which thinks like a person, but (due to
| computing power) can think through and evaluate 1,000,000 ideas
| in the time it takes a person to think through 10 of them, has
| already surpassed human ability by a big margin. Same for
| memory capacity etc.
|
| That the input the machine is trained on is the output created
| by "human abilities" is irrelevant to whether it can surpass
| human ability.
| NoZZz wrote:
| I think I've seen this episode of The Orville.
| aaroninsf wrote:
| Something I've found myself observing in disparate domains:
|
| the only defense we have against applications of AI we don't
| like is the application of more AI.
|
| This is a natural implication of AI being augmented-intelligence
| force-multipliers at the moment. When your opponent takes
| performance enhancing drugs, or wears a mecha suit, the only way
| to compete with them or defend against them is adoption of
| similar technology.
|
| The Prime Directive in Star Trek was prescient. AI arriving
| unevenly--and first under the direction and control of hands
| which have thus far proven uninterested in the common welfare
| where it interferes with personal benefit--looks like it will be
| a civilizational exercise indistinguishable from the delivery of
| disruptive technology by another species.
|
| Of particular personal concern are the synergies where it
| meets other technological social problems against which we have
| few or no defenses: specifically, surveillance and its active
| counterpart, sentiment control.
|
| I am less concerned about needing AI tools to validate AI-
| generated code, and more concerned about the 2024 election cycle
| being an opportunity for an arms race in the application of AI
| to game politics.
| UberFly wrote:
| This feels a bit like Bill Joy's article about nanotechnology. I
| think it's just a healthy fear of the unknown future. AI will be
| refined into usable tools, mostly good, with some unintended
| negative consequences. Sounds like a lot of things.
| costanzaDynasty wrote:
| People have spent the last decade modifying their behavior to
| please algorithms. They've become indistinguishable from bots.
| Cattle herded into segregated pens. Being more human is the only
| possible defense, warts and all.
| smartmic wrote:
| Yeah, I agree, many of us have become bots or zombies, though
| still being basically human and communicating as humans. If you
| were a techie who wanted to create a new algorithm for us to
| obey, you had to learn the language of computers to do so. Now
| this has changed as well. The computers have learned to speak
| our --human-- language. That means they will also adapt to our
| behavior, which means the spiral into the insanity Jaron Lanier
| was talking about could possibly go faster...
|
| EDIT: So yes, a return to what makes us human, to nature, with
| an awareness of history and philosophy would be very desirable
| and quite appropriate in these and future times.
| jimkleiber wrote:
| I'm really wondering, though, because I feel kind of at a
| loss: what are some things you think we can do to be more
| human?
| sirsinsalot wrote:
| Anyone questioning the author's intention should read one of his
| books, "Who Owns the Future?"
|
| It was written some time ago, and I think Sam Altman read it as
| a handbook on power concentration using AI rather than the
| human-centric approach it was laying out.
|
| Personally I wish Lanier wasn't as right about many things as he
| is, because I lose a little faith in humanity each time.
| favaq wrote:
| Funny that if you google "Who Owns the Future", the Google
| featured snippet says the answer is Jaron Lanier.
| vlod wrote:
| ~22 minute interview [0] with Jaron about "Who Owns the Future?"
|
| [0]: https://youtu.be/XdEuII9cv-U?t=172
| 13years wrote:
| I actually agree with his perspective. AI is simply another
| huge leap in technology that directly affects the social order.
| We only need to look at the effects social media has had on
| society and just amplify them to perceive the likely
| outcomes.
|
| This aligns very close to my own thoughts that I have written
| about in great detail. I foresee the societal impacts to be
| exceedingly disturbing long before we ever reach the concept of
| a Singularity.
|
| https://dakara.substack.com/p/ai-and-the-end-to-all-things
| ChancyChance wrote:
| Regulation of social media is still woefully behind even in
| cases where we do know there has been a hugely negative
| impact (Myanmar & Facebook, for example). And there are
| approximately 5 people who exert massive, unregulated power
| over the shaping of planetary discourse (social media CEOs).
| If social media is too big to regulate, AI regulation doesn't
| have a chance in hell.
| 13years wrote:
| Yes, additionally I find it somewhat ironic that AI
| researchers talk a lot about "power seeking" behavior of AI
| as a primary concern.
|
| However, seemingly overlooked, AI is itself power, and we
| should expect that "power seeking" humans will inevitably
| become its custodians.
| nonbirithm wrote:
| This a thousand million times.
|
| The mislabeling of LLMs and diffusion models as
| "artificial intelligence" is probably the biggest
| marketing blunder in the history of technological
| progress, one that could ironically affect the course of
| AI alignment itself.
|
| Smart thinkers and policymakers are going to waste their
| time framing the problems the tech poses in terms of "an
| uncontrollable intelligence out to get us" like it's some
| kind of sentient overlord completely separate from
| humanity. But super-advanced technology that can operate
| in a closed loop (which could be called AGI depending on
| who's asked) isn't necessary for humanity to crater
| itself. What's _required_ for such tech to come into
| existence in the first place? Humans. Who's going to be
| using it the whole time? Humans.
|
| And I think there's still a lot of disruptive, world-
| changing tech to be discovered before AGI is even a
| remote possibility. In reality this tech is probably
| going to be more like a superpowered exoskeleton for
| CEOs, politicians and the like to sway public discourse
| in their favor.
|
| "An uncontrollable intelligence" already describes the
| source of a lot of our current problems... that is,
| ourselves.
| tablespoon wrote:
| > And I think there's still a lot of disruptive, world-
| changing tech to be discovered before AGI is even a
| remote possibility. In reality this tech is probably
| going to be more like a superpowered exoskeleton for
| CEOs, politicians and the like to sway public discourse
| in their favor.
|
| Our current powers-that-be are so manifestly unsuited to
| have the kind of power our idiot technologists are
| desperate to build for them that part of me wishes for a
| disaster so bad that it knocks technological society off
| its feet, to the point where no one can build new
| computers for at least a couple generations. Maybe
| hitting the reset switch will give the future a chance to
| make better decisions.
| jhbadger wrote:
| The meaning of "artificial intelligence" has always just
| been programs that can get results that previously only
| humans could do, until the moment programs can do it. For
| decades AI researchers worked on chess programs even
| though the best chess programs until 20 or so years ago
| couldn't even beat a skilled amateur. Now of course they
| can beat grandmasters. And so we decided chess wasn't
| "really AI". LLMs would have been mindblowing examples of
| AI even a decade ago. But because we now have them we can
| dismiss them as "not AI" like we did with chess programs.
| It's a never ending cycle.
| 13years wrote:
| _" An uncontrollable intelligence" already describes the
| source of a lot of our current problems... that is,
| ourselves._
|
| Yes, precisely. One of the best quotes I've seen was
| "Demonstrably unfriendly natural intelligence seeks to
| create provably friendly artificial intelligence"
|
| The whole ASI alignment theory is a paradox. What the AI
| researchers don't realize is that they are simply
| building an uncomfortable mirror of human behavior.
| pixl97 wrote:
| Microsoft put out a 150-page paper yesterday on why GPT-4
| is proto-AGI. LLMs are AI; now we're just closing the G
| gap.
| sdrinf wrote:
| Link, please?
| hikarudo wrote:
| "Sparks of Artificial General Intelligence: Early
| experiments with GPT-4"
|
| https://arxiv.org/abs/2303.12712
| mcculley wrote:
| I am less worried about what humans will do and more
| worried about what corporations, religions, and
| governments will do. I have been trying to figure out how
| to put this most succinctly:
|
| We already have non-human agentic entities: corporations.
| They even have the legal right to lobby to change laws
| and manipulate their regulatory environment.
|
| The talk about AI being misaligned with humanity mostly
| misses that corporations are already misaligned with
| humanity.
|
| AI-powered corporations could render enormous short-term
| shareholder value and destroy our environment in the
| process. Deepwater Horizon will be insignificant.
| 13years wrote:
| Corporations, religions, governments etc. are just an
| amalgam of human values and behavior that results in the
| effects we perceive. Yet AI researchers' grandest theory
| of successful alignment relies on simply applying our
| values to the AI such that it will be aligned.
|
| You can look at any human organized entity simply as
| another form of power and how our values become
| interpreted when given power. Your observation could
| simply be seen as further evidence of how alignment is a
| flawed concept.
|
| If you take a single individual and have them fully
| elicit their values and principles, you will find they
| are in conflict with themselves. Two values that are
| almost universal and individually positive, liberty and
| safety, are also the very values that cause much of
| our own conflict. So yes, we are all unaligned with each
| other, and even minor misalignment causes conflict.
| However, add power to the misalignment and you have
| significant harm as the result.
|
| FYI, I've written a lot specifically on the alignment
| issues in the event you might be interested further -
| https://dakara.substack.com/p/ai-singularity-the-hubris-
| trap
| nradov wrote:
| The government of Myanmar is free to regulate Facebook
| however they like within their own sovereign territory. But
| given the level of corruption, oppression, and incompetence
| there I doubt the results would be any better than usage
| policies written by random corporate executives (and
| haphazardly enforced by outsourced moderators). The only
| real solution to improving the situation in Myanmar is for
| the people to rise up and change their own government; this
| may take a long time and a lot of deaths but there is no
| alternative.
| yyyk wrote:
| >The only real solution to improving the situation in
| Myanmar is for the people to rise up
|
| They are rising up:
| https://www.nytimes.com/2023/03/17/world/asia/myanmar-
| killin...
| ChancyChance wrote:
| This reply confuses me. You are implicitly accepting that
| FB, an American company, had a role in the atrocities,
| but you are then saying it is up to Myanmar to handle
| this. If that's the correct interpretation, I find that
| attitude abhorrent. I hope I'm wrong.
| rslice wrote:
| Your Substack is a treasure trove. It makes LessWrong
| articles look mentally rigid.
| 13years wrote:
| Thank you for the appreciation!
| worldsayshi wrote:
| I feel that if smart people spent more time writing books
| about how good outcomes could come about, rather than
| warning about bad outcomes, powerful actors wouldn't have
| so many dystopian handbooks lying around and might reach
| for those positive books instead.
| sirsinsalot wrote:
| "Who Owns the Future?" is exactly a book about developing
| good outcomes, and building a future that supports humanity
| and happiness.
|
| But you can also read it at an obtuse angle and see the
| problems outlined to resolve as opportunities for personal
| gain.
|
| It's just a matter of perspective.
| worldsayshi wrote:
| Glad to hear. I will put it on my list.
| birdyrooster wrote:
| It's way easier to write believable dystopian novels because
| you are deconstructing what already is rather than building
| something new. The smart ones are the ones capable of writing
| the utopian novels.
| germinalphrase wrote:
| I was about to comment the same thing. It's simply much
| harder to create positive visions for the future from
| whole cloth, whereas dystopias can be immediately
| extrapolated from existing trends (and our long human
| history of abuse, horror, and destruction).
|
| Edit: If anyone would like an example, I'll offer Huxley's
| "Island" as a utopian counterpoint to his "Brave New
| World". In addition to exploring the qualities he believes
| make up a 'utopia', a significant thematic concern is the
| need for channeling our innate destructive impulses*
| because utopia - should it exist - can only be maintained,
| not _manufactured_, through the active
| preservation/conservation of our natural world, our
| positive human values, etc.
|
| *for example, there is an innate human impulse to subjugate
| others. Huxley suggested that we should channel, rather
| than suppress, this impulse into a productive activity that
| satisfies the desire without causing harm: rock climbing
| (which must have been much more of a niche activity in
| 1962).
| skinpop wrote:
| easier to imagine the end of the world than the end of
| capitalism...
| nradov wrote:
| Help us out here. What would the end of capitalism look
| like? All of the attempts at ending capitalism so far
| have collapsed into disaster, so people are
| understandably hesitant now to start grand social
| experiments which historically speaking are likely to end
| in famine and genocide.
| germinalphrase wrote:
| I think your question might be his point.
|
| We can easily imagine the destruction of all existence
| because we have mental models for what that destruction
| might look like; however, imagining the end of capitalism
| requires us to invent entirely new ideas that exceed the
| salience of capitalism itself (which is... obviously much
| harder).
| sirsinsalot wrote:
| Capitalism works because it models the world without
| saying much about it. Just as I can pile sticks and mud
| to form a house, removing entropy, and then give that
| house in exchange for a sack of grain.
|
| It models the physics there, but adds an indirection,
| value stored as currency.
|
| Money doesn't have any morality or inherent motivation.
| Capitalism is what happens when humans project theirs
| onto it, on average, with a good amount of autonomy
| enabled by that currency.
|
| If people were not, on average, greedy survivalists, then
| the value store would produce an economy that operates
| much differently.
|
| That's why capitalism persists, because we're all just
| advanced monkeys gathering as many rocks, sticks and mud
| as we can in a big pile, because it is built into our
| genetics to stockpile resources when we can.
|
| Everything else is just advanced mechanisms of this.
|
| The end of capitalism is the end of humanity, because
| while we exist, we will want to stockpile resources
| through increasingly elaborate means in an attempt to
| stave off the entropy of death.
| thaeli wrote:
| If you read Brave New World and think of the lower
| "classes" as instead being automation and AI (really,
| most of the jobs done by Epsilons and Deltas in the book
| were automated decades ago, and the Gamma / Beta jobs are
| rapidly moving towards AI replacement as well), it's not
| a bad system, nor is it a dystopia.
| [deleted]
| ChrisMarshallNY wrote:
| I have nothing but respect for the chap.
|
| I never _wanted_ to respect him, as I always thought he was one
| of those "too good to be true" people, and was mostly a paper
| tiger.
|
| It turns out that he's the real deal, and has been right about
| a lot of stuff.
| dr_dshiv wrote:
| Such as? I have my skepticism too.
| ChrisMarshallNY wrote:
| I'm not particularly interested in going into a back-and-
| forth on this.
|
| He's sort of like Edward Tufte; _lots_ of ego, but earned,
| and not for everyone.
|
| I like your job title. Always up for more "human," in our
| design.
| jacquesm wrote:
| There are lots of parallels between Jaron Lanier and Richard
| Stallman. Cory Doctorow is another one I would put in that
| list, as well as SF writer Charles Stross.
|
| They are all pretty good at looking ahead.
| jjulius wrote:
| ... so, same difference? I mean, even with his full quote, the
| outcome is ultimately the same - humans are gone.
|
| >... "the danger isn't that a new alien entity will speak through
| our technology and take over and destroy us. To me the danger is
| that we'll use our technology to become mutually unintelligible
| or to become insane if you like, in a way that we aren't acting
| with enough understanding and self-interest to survive, and we
| die through insanity, essentially."
| BigCryo wrote:
| Can we please put off worrying about dangerous AI for a couple
| hundred years? ChatGPT is a chatbot and not much else...
| it is not the harbinger of an era of dangerous AI.
| antibasilisk wrote:
| Stable Diffusion was used to produce convincing images of
| Donald Trump being arrested. A guy used GPT to post comments on
| 4chan without people noticing. A 'mere chatbot' can do much
| more damage than you think.
| dkjaudyeqooe wrote:
| It's notable that we've been having AI panics since ELIZA.
|
| What is it about chatbots that triggers people? Probably
| the importance and centrality of language in human
| consciousness, and the manifest credulity of the general
| population (and many readers of HN).
|
| Unfortunately it's unlikely this will stop, and it'll probably
| get worse. The final irony will be that when some "real AI" is
| created, no-one will believe it, having been exhausted by 200
| years of hype.
| qgin wrote:
| Current GPT-level technology already makes personalized
| psyop campaigns possible.
| PeterStuer wrote:
| Lesser AI in the hands of ignorant and/or malevolent humans can
| be pretty dangerous though.
| jiveturkey42 wrote:
| ChatGPT is a consumer product and hardly the pinnacle of
| AI research. We have no clue what is being developed for
| million-dollar private contracts.
| Loughla wrote:
| Whenever someone states that we are so far off of --insert
| technology barrier here--, I like to remind them that the
| F-22 was developed in the early to mid-'90s. The F-16 was
| developed in the 1970s, and the F-15 was developed in the
| 1960s.
|
| We have no idea what is happening behind the curtain, and to
| assume that private industry or the might of the US military
| is not decades ahead of what is available for consumers is
| just naive at best.
| rf15 wrote:
| For artists, Midjourney/Stable Diffusion/etc. are already
| a very real and surprisingly effective threat.
| escapecharacter wrote:
| You are correct about AI not being inherently some spooky
| dangerous thing.
|
| However, human beings will treat it like it is, so you will
| experience nonsensical happenings like "your new credit
| score, brought to you by AI". When you dispute this, the
| credit score company will shrug and say "you'll have to
| speak to the AI directly to make a claim, it is objective".
| Meanwhile, the AI isn't much better than today's ChatGPT.
| pixl97 wrote:
| "I am a skilled textile worker, the auto-loom will never
| represent a threat to me" --BigCryo 1760
| circuit10 wrote:
| Hundred years? If the current rate of progress holds, we
| have to start worrying about it in 5 or 10 years; the
| earlier we can come up with solutions the better, because
| it's a very hard problem.
| Barrin92 wrote:
| How do people determine the "current rate of progress"? There
| is absolutely no empirical standard to evaluate the
| performance of AI systems. How is this anything else but a
| gut feeling? And how is that feeling different from any other
| period? Minsky et al. famously declared that AGI was just a
| few months of hard work away, and they did it for the same
| reason: they lived through a period of dynamism in computer
| science.
| People definitely said it after Deep Blue beat Kasparov.
|
| Progress in AI doesn't imply that we're dangerously close to
| AGI, just because people at any given time are amazed by
| individual breakthroughs they witness.
| ben_w wrote:
| > There is absolutely no empirical standard to evaluate the
| performance of AI systems. How is this anything else but a
| gut feeling?
|
| Why do you think this?
|
| There are loads of tests of their performance. A common one
| right now is to give LLMs the same exams we put humans
| through, leading to e.g. the graph on page 6:
| https://arxiv.org/pdf/2303.08774.pdf
|
| Are they the best tests? Probably not! But they are
| _definitely_ empirical.
| dkjaudyeqooe wrote:
| But LLMs are good at those tests because they've seen
| (some version of) the answers on the internet.
|
| Give students concurrent access to the internet and I'm
| sure they can pass all sorts of tests.
| ben_w wrote:
| An irrelevant counterargument, IMO.
|
| First, students only get good after studying -- education
| is not some magic spell cast by the teacher that only
| operates on a human's immortal soul. Just as we should not
| dismiss what students learn just because we _could_ look
| it up, it is strange to dismiss what GPT _has_ learned
| just because it _could_ be looked up.
|
| Second, the GPT-3 (and presumably also GPT-4) training
| set is about 500e9 tokens, which is what? Something like
| just a few terabytes?
|
| We've been able to store that in a pocket for years now
| without being able to do almost any of the things that
| GPT can do -- arbitrary natural language synthesis let
| alone arbitrary natural language queries -- on a
| computer, even when we programmed the rules, and in this
| case the program learned the rules from the content.
|
| Even just a few years ago, SOTA NLP was basically just
| "count up how many good words and bad words are in the
| text, the sentiment score is total good minus total bad."
|
| That difference is what these test scores are showing.
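|
| For concreteness, a minimal sketch (mine, not from any
| paper; the word lists are toy assumptions) of that old
| lexicon approach, plus the storage arithmetic above:
|
|     # naive lexicon scorer: total good minus total bad
|     GOOD = {"great", "love", "excellent", "happy"}
|     BAD = {"terrible", "hate", "awful", "sad"}
|
|     def sentiment(text: str) -> int:
|         words = text.lower().split()
|         return (sum(w in GOOD for w in words)
|                 - sum(w in BAD for w in words))
|
|     print(sentiment("I love it, it is excellent"))  # 2
|     print(sentiment("awful film, I hate it"))       # -2
|
|     # ~500e9 tokens at roughly 4 bytes of text per
|     # token (an approximation) is about 2 TB:
|     print(f"{500e9 * 4 / 1e12:.0f} TB")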
| zone411 wrote:
| Why wouldn't we be able to evaluate their performance and
| compare them to humans? The purpose of test datasets is to
| do just that, and new ones are created every day. By
| combining several of them, we can create a decent
| benchmark. We could even include robotic abilities but I
| think this is not necessary.
|
| Let's say: adversarial Turing test + MMLU + coding
| competence (e.g. APPS or LeetCode) + ARC (IQ-type test) +
| Montezuma's Revenge and other games like Stratego or
| Diplomacy + USMLE (medical exam) + IMO (math) + self-
| driving + ...
|
| You can even make it harder: have human judges blindly
| evaluate new scientific papers in math or theoretical
| physics for acceptance, see if AI can create highly-rated
| new apps, write a highly-rated book, compose a hit song...
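|
| As a sketch of the combination step (the scores below are
| invented for illustration, and each is assumed to be
| normalized to a 0-1 pass rate already):
|
|     # crude composite: unweighted mean of normalized
|     # per-test scores; real weighting is a judgment call
|     scores = {
|         "adversarial_turing": 0.41,
|         "mmlu": 0.86,
|         "coding": 0.38,
|         "arc": 0.51,
|         "games": 0.22,
|         "usmle": 0.84,
|         "imo": 0.05,
|         "self_driving": 0.10,
|     }
|     composite = sum(scores.values()) / len(scores)
|     print(f"composite: {composite:.2f}")  # 0.42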
| mitthrowaway2 wrote:
| > How do people determine the "current rate of progress"?
| There is absolutely no empirical standard to evaluate the
| performance of AI systems.
|
| I would measure using something similar to Yudkowsky's
| challenge: _" What is the *least* impressive feat that you
| would bet big money at 9-1 odds *cannot possibly* be done
| in 2 years?"_ [1]
|
| Pay a panel of experts to list their predictions each year,
| including an incentive to get it right, and then measure
| the percentage of those predictions that fail anyway.
|
| [1]
| https://twitter.com/ESYudkowsky/status/910566159249899520
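|
| Measuring it could be as simple as the sketch below (the
| counts are invented; only the bookkeeping is the point):
|
|     # per year: ("impossible in 2 years" predictions
|     # made, how many were achieved anyway)
|     predictions = {
|         2018: (40, 6),
|         2019: (45, 9),
|         2020: (50, 14),
|     }
|     for year, (made, beaten) in predictions.items():
|         print(f"{year}: {beaten / made:.0%} of"
|               f" 'impossible' feats happened")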
| AnimalMuppet wrote:
| Naw, it's simple. We're talking about an AI achieving human
| abilities? Well, we can protect against dangerous AIs just
| as well as we protect against dangerous humans...
|
| Oh. Oh dear.
| javajosh wrote:
| Remember that AIs work with electrons, and we are of
| atoms. We should focus on where electrons control atoms,
| and reduce those points. Of particular concern is that AI
| may be a very strong investor with the right prompts. AI
| could also figure out how to use any other software, which
| can be used to gain access to any marketplace, including
| the dark ones. Which means AI can use money (electrons) to
| pay others to modify the world (atoms).
|
| Of course, there is already a problem, as you point out.
| Humans shouldn't have access to these markets either!
|
| But yeah, to specifically prevent electron-on-atom
| violence we need to limit AI's physical degrees of freedom
| by limiting marketplaces. National/global security, not
| personal morality, should guide these new regulations.
|
| We need to end all drive-by-wire automobiles, and
| electronic locks. Too many services are habituated to act
| on electronic signals without human confirmation -
| particularly the police. There needs to be an unbroken
| verbal chain between the person who saw an event and the
| person doing the law enforcement. Breaks in the human chain
| should be taken very seriously - they should be treated as
| firing offenses, at least. There are many other similar
| changes we're going to need to make.
|
| Some folks aren't gonna like this. Regulations are
| inherently evil, they say. Maybe the mental model should be
| more like we're the big bad gorilla in the cage. But now
| there's a tiger in the cage. Regulation restrains the
| tiger. Also, some folks aren't gonna like it no matter what
| change is needed. The fact of not liking it doesn't mean we
| don't need it, and it doesn't mean it won't get done. We
| have to trust that our leaders don't want to die.
|
| And besides, the world will adapt. It always does. AI isn't
| optional, there's no putting the genie back in the bottle -
| and personally I don't want to. But I also don't want to be
| stupid about the stakes. Getting our whole species killed
| for lack of foresight would be deeply, deeply embarrassing.
| Loughla wrote:
| I really like your take, but I do not believe it is
| realistic to expect the response to advanced technology
| options to be "use even less technology". In the past,
| new tech has led to integration of new tech. I believe
| that is the inevitable outcome of AI, and especially AGI
| once that's a thing.
|
| The tool is too attractive not to use. The tool is too
| fun not to use. The tool is too dangerous to let out of
| the box, but that is exactly why we'll do it.
|
| We're curious little monkeys, after all. "What do you
| think will happen" absolutely is a survival strategy for
| our species. The problem is when we encounter something
| that is so much more advanced than us, even if that
| advance portion is just access to multiple systems of our
| own creation.
|
| To summarize: I think you make a good point, but I think
| we're fucked eventually anyways.
|
| I can't wait for the inevitable "does my AI have the
| right to freedom" case in the Supreme Court when I'm in
| my 90s.
| javajosh wrote:
| No need to be pessimistic. Humans are quite powerful, we
| have billions of years of brutal iteration in us. I think
| we can handle AI, even AGI, if we exercise even a modicum
| of care. It will probably take some major calamity to
| convince people to take precautions, I just hope it's not
| that bad. It probably won't be world-ending, so cheer up!
| blibble wrote:
| > I think we can handle AI, even AGI, if we exercise even
| a modicum of care.
|
| HN itself has been spammed relentlessly with people
| hooking it up to everything they can think of in an
| attempt to get a worthless reward (karma)
|
| now imagine there's money, power or territory up for
| grabs instead
|
| we are completely fucked
| ajmurmann wrote:
| > There needs to be an unbroken verbal chain between the
| person who saw an event and the person doing the law
| enforcement
|
| Leaving everything else aside, how would this look in
| practice? I think these conversations would need to be in
| person, since voice can already be faked. Would I need to
| run to the police station when I need help?
| javajosh wrote:
| How would it look? If I am a state security person with a
| gun, and I'm asked to invade someone's home, I would
| expect to get a face-to-face meeting with the person who
| really believes this is necessary, with the evidence laid
| out.
|
| If _that_ is too much trouble to ask, then is justice
| even possible?
| ajmurmann wrote:
| Someone is breaking into my house. I'm hiding in my
| closet from the intruders. How do I get the police to
| come to my house and help me?
|
| Another scenario: I'm a police officer and I'm on patrol.
| My dispatcher had someone come to the police station to
| tell them that they think their neighbor is experiencing
| a home invasion. Does the dispatcher just page me and I
| now drive back to the police station to verify and then
| drive back out to the home invasion?
| javajosh wrote:
| _> Someone is breaking into my house. I'm hiding in my
| closet from the intruders. How do I get the police to
| come to my house and help me?_
|
| Lord, give me patience.
|
| Call 911. The dispatcher broadcasts the problem over the
| radio, and a LEO responds. The dispatcher is a relay that
| verifies probable cause. The chain of human contact is
| unbroken between you, the 911 dispatcher, and the LEO
| taking the call.
|
| Compare this to a machine that spits out warrants, which
| are distributed to officers, who never even speak to
| anyone about the case, do not know the subject of the
| warrant, and simply execute the warrants.
| morinoandou wrote:
| While AI is not yet at a level where we need to worry about it
| harvesting us for batteries, I think there is still reason to
| worry. It's easy to think of ways that malicious or overly
| profit-driven actors could misuse AI for great harm. Internet
| forums are already full of shills and sock puppets, and
| chatbots seem likely to make the problem much, much worse.
| Things like advertisers masquerading as regular users
| subtly pushing their own products, or even foreign states
| using AI
| chatbots to spread misinformation and mistrust among the
| populace, etc. Chatbots enable such manipulation efforts at
| previously unseen scales.
| SuoDuanDao wrote:
| If you consider a corporation to be an AI that requires real-
| time training data from humans, AI is _already_ harvesting us
| for batteries. I've heard the opinion that that's what the
| Matrix was actually about.
| brandon272 wrote:
| Given the technology available as of today it seems like it
| would be fairly trivial to start deploying LLMs to post
| context-aware propaganda from entirely manufactured
| identities on internet forums and communities. At this point
| I am just assuming that there are parties actively working on
| this.
|
| In fact, I think the hard part of the whole thing would be
| trying to make the network traffic and source IPs look
| legitimate to the site owners/admins and it will be
| interesting to see how the battle unfolds to start
| authenticating content posted in communities as having come
| from an actual human.
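|
| One crude shape that authentication could take (a stdlib-
| only sketch; the shared key and names are hypothetical,
| and a real system would want per-user asymmetric
| signatures rather than one shared secret):
|
|     import hashlib
|     import hmac
|
|     # hypothetical secret issued to a verified human
|     KEY = b"secret-from-a-verification-service"
|
|     def sign(post: str) -> str:
|         return hmac.new(KEY, post.encode(),
|                         hashlib.sha256).hexdigest()
|
|     def verify(post: str, tag: str) -> bool:
|         return hmac.compare_digest(sign(post), tag)
|
|     tag = sign("hello from a (claimed) human")
|     print(verify("hello from a (claimed) human", tag))
|     print(verify("tampered post", tag))  # False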
| ben_w wrote:
| If we had started taking greenhouse gases seriously a century
| sooner, they'd never have built up enough to be a problem.
| ReaderView wrote:
| [dead]
| strken wrote:
| From reading this, I don't get the impression that Lanier has any
| objective reason to believe the world won't be destroyed as the
| direct result of AI. If he does have a reason, the reporter
| certainly doesn't devote any space to analysing it, or to explaining
| why dying from AI-induced insanity is different from being
| destroyed.
| whalesalad wrote:
| Is going insane not a form of being destroyed?
| rambojohnson wrote:
| Utterly weary am I of the ceaseless banter regarding the
| pessimistic, antiquated artificial intelligence discourse
| perpetuated on Hacker News... boomer doomerism. boring. doesn't
| help that the dude looks like he crawled out of the last century.
| testHNac wrote:
| Let's fix that - the danger is that people who are already
| driving us insane will start using AI to make their job a
| little easier.
|
| Social Media companies will soon be taken to task by News media
| for allowing AI created content on their platforms.
|
| 2024 is going to be interesting.
| pjc50 wrote:
| News media are also going to get into the AI clickbait race. I
| wonder if the first accidental misrepresentation of a generated
| image as reality has already happened. And the first
| intentional one.
| omginternets wrote:
| This. I am definitely worried about the implications of AI, but
| just like algorithmic advertising and television before it,
| it's amplifying something that's already out there, not
| creating an ontologically different threat.
|
| The issue is that quantity can become a quality.
| yamtaddle wrote:
| One aspect I expect to see play out:
|
| Any entity interested in either the truth, or in maintaining
| some kind of reputation, will need to keep humans in the loop
| when using these systems. Language models might multiply e.g.
| ad copy output 10x per worker, and allow micro-targeted
| campaigns that were impractical before, but they won't allow,
| say, a _1000x_ increase, until or unless we can trust these
| systems not to produce undesirable output when not checked by a
| human. Ads are tied to brands which will hesitate to put their
| reputations in the hands of language models without a human
| verifying that the output is OK. Likewise, any entities wishing
| to use these to help with writing illuminating, factual
| works may see a large benefit, but it'll be limited: 2x,
| 5x, something like that.
|
| Propaganda, though? Misinfo campaigns, astroturfing, where you
| hide behind sockpuppets and shell companies anyway? Who gives a
| shit if one out of every few hundred messages isn't quite
| right? Worst case, you burn a sockpuppet account. _Those_ can
| leverage these to the fullest. 1000x output per person
| involved, compared with, say, 2016 and 2020, may actually be
| something we can expect to see.
| wobbly_bush wrote:
| > Propaganda, though? Misinfo campaigns, astroturfing, where
| you hide behind sockpuppets and shell companies anyway?
|
| Why stop there? The chatbot companies can introduce ads
| where the answers are influenced by whichever company
| buys them. Looking for information on nutrition? Some
| fast-food company might "insert an ad", subtly changing
| the text to favor whatever the company wants.
| m0llusk wrote:
| Collapse of society: It's generated troll memes all the way down.
| RamblingCTO wrote:
| It's already driving me crazy with low-effort spam bullshit
| powered by anything GPT.
| grej wrote:
| https://archive.is/yJaph
| mikrl wrote:
| Are we not already insane?
|
| Do you not see sociopathic tendencies in our industry already?
|
| Has our humanity not already been tested by existing and
| conceptually simpler technologies?
| studentik wrote:
| > Are we not already insane?
|
| As long as we aim to separate objective truth from
| objective falsehood, it seems that sanity is still present.
| pjc50 wrote:
| The one thing AI does not even attempt to do.
|
| (and arguably can't, because it really _is_ a "brain in a
| jar" being fed an entirely controlled view of the universe)
| AnimalMuppet wrote:
| The one thing _this form_ of AI does not even attempt to
| do.
| smm11 wrote:
| This may kill the WWW as we know it. I can almost see an
| alternate 'web' finally taking off, but not really, as the
| majority just want TikTok left alone.
| Mobius01 wrote:
| I read this short story about the singularity years ago,
| written by a scientist from UW-Madison, and although the writing
| isn't great, it has always stayed with me. Recent developments
| made me think of it, and the premise is precisely that - the
| group that develops AGI uses it to control the markets and drives
| everyone else insane through economic disruption, while staying
| entirely opaque.
|
| https://www.ssec.wisc.edu/~billh/g/mcnrsts.html
| [deleted]
| motohagiography wrote:
| The best analogy would be that AI will do to culture today and
| over the next decade or so what pop music did to culture in the
| 1950's onward. The criticisms of pop music were widely mocked as
| superstitious religious fundamentalism, "devil music," etc.,
| but even as an aficionado of rock'n'roll and alternative
| music, it really does give you a head full of garbage. Punk
| was fun, but I could probably do without exalting the
| sniveling self-pity that
| made it appealing. For example, if your beliefs about love and
| relationships come from 20th century love songs, you're likely a
| rube, or worse, needy and codependent. Arguably, the cliche of
| the boomer narcissist is the direct result of being relentlessly
| propagandized by the music industry to believe in similarly
| inferior things. Folk music is mostly dialectical materialist
| conspiracy theorists singing hymns to their oppressors. Pre-
| internet, young people fully tribalized based on their taste in
| pop music. Sure, it's culture, it's entertaining, and some of it
| is even beautiful, but it was designed to exploit your
| sentimentality. If you think pop music tugged at the heart
| strings, just wait until the kids get ahold of AI platforms.
|
| Imo, the products of AI will be at least as ecstatic and even
| addictive as pop music and social media, and the cultural
| consequences will likely rhyme. The antidote to all these trends
| was always the counterfactual that maybe, just maybe, people will
| find some higher principle to form their identities around and be
| able to experience these amusements objectively without imagining
| themselves as "becoming" something as a result of using them, but
| who are we kidding, they'll believe whatever entertains them.
| Imagine all the people.
| sdwr wrote:
| I'm 100% on board with the first paragraph. There has to be
| some mid- or late-century philosopher who described the
| phenomenon, how pop music creates a dependency or need out of
| thin air, and then half-fills it. Like how junk food is just
| salty and fatty enough to keep you wanting more. It
| overpromises, building up a flattering epic narrative.
|
| Ends up nudging value systems towards vapid, shallow, needy
| modes of being.
| eternalban wrote:
| Great comment. I don't wish to agree with the final
| conclusion, for mostly sentimental reasons.
|
| Parental 'prompting' may be effective. Possibly, whoever gets
| to first frame a given dynamic will establish a lasting bias.
| "Songs can be pleasing but take care to filter out the lyrics.
| It is much easier to write mushy sentimental nonsense. Keep
| that in the back of your mind as you listen." That, imo, should
| certainly be a part of responsible parenting and
| _inoculating_ against _ideational viruses_.
| pjc50 wrote:
| > Folk music is mostly dialectical materialist conspiracy
| theorists singing hymns to their oppressors.
|
| This is probably the most insane piece of music criticism I've
| ever read. I guess the crazification AI has claimed its first
| victim.
|
| > The criticisms of pop music were widely mocked as
| superstitious religious fundamentalism, "devil music," etc
|
| Almost always with a strong racial bias.
| motohagiography wrote:
| The devil music accusation predated rock'n'roll with early
| blues artists taking gospel music they learned in church and
| making the words about day-to-day life. Stories about Robert
| Johnson meeting the devil at the crossroads and adapting the
| minor pentatonic (and ancient) scale but adding the flat blue
| notes have a very rich history. It was something that got into
| your head and changed how you experienced the world.
|
| I can see how people could think folk music had some kind of
| altruistic purity, but it's still a viral expression of a
| certain kind of animus that distinguished it from country. I
| also think this kind of folk-animus is related to how it may
| be worth reflecting on why others tolerate it when you imply
| someone is racist or insane, almost to the point of ignoring
| it altogether.
|
| I would bet LLMs are already able to create similar "scissor
| statements" that are as viral as pop songs, and comments like
| mine in the previous sentence that are structured to provoke
| specific anxieties and reactions in their readers. It's one
| thing for an LLM to write little essays, but once we train it
| on literary fiction - which is designed to speak the language
| of memory and provoke strong emotional responses - it becomes
| much more viral and dangerous socially. Imagine a catchy song
| we can't get out of our heads, but instead of dancing or
| laughter, it provokes humiliation and cruelty. My asshole-
| statement was manually calibrated, and it has the same
| abstract form and structure as a joke, but with the edge of
| that negative form. An LLM can do it at scale. Someone using
| an AI model to produce those won't be doing it to improve
| human discourse and relations. That's the risk I think is
| worth addressing.
| pjc50 wrote:
| > My asshole-statement was manually calibrated, and it has
| the same abstract form and structure as a joke, but with
| the edge of that negative form. An LLM can do it at scale.
|
| Right, so now we've dealt with the initial trolling and
| callout, and moved on from weird statements about art ..
|
| > Someone using an AI model to produce those won't be doing
| it to improve human discourse and relations. That's the
| risk I think is worth addresing.
|
| This I actually agree with. We're going to drown in
| automated trolling. Human discourse is going to get worse,
| which usually happens in ways that get people killed.
| sdwr wrote:
| I mean, it's not completely off-base.
|
| https://www.youtube.com/watch?v=YdVMGKOFIwY
|
| https://www.youtube.com/watch?v=413Fl0ScUsc
| yamtaddle wrote:
| > > Folk music is mostly dialectical materialist conspiracy
| theorists singing hymns to their oppressors.
|
| > This is probably the most insane piece of music criticism
| I've ever read. I guess the crazification AI has claimed its
| first victim.
|
| Eh, it's a concise expression of an idea Charlie Brooker (of
| Black Mirror) and others have been promoting in their work
| for years. The famous-at-least-on-here _Meditations on
| Moloch_ covers it, IIRC. Not really out-there or new.
| Capitalism learned to pull the teeth of any new
| counterculture, and turn it into a product, and the history
| of postwar pop culture is basically its learning to do that,
| then doing it over and over. The same observation dates to at
| least the '80s, from actual humans; it's not some gibberish
| mash-up created by an AI.
| rcktmrtn wrote:
| I've had some eye-opening moments where I went back to music I
| listened to growing up and was revolted with myself when I
| actually read through the lyrics and thought about what they
| meant. It really is as if a spell comes over you when
| listening to certain music, one that opens you up to hearing
| and helping propagate messages you otherwise wouldn't.
|
| The internet irony culture uses the same mechanism of passing
| the message off as "only a joke". But the fact is that even if
| you say that, there is only so far words can be divorced from
| their meanings. And even if the network that propagates them
| originally is just doing it ironically, eventually someone
| _will_ take the message seriously. There is a quote I wish I
| could remember along the lines of "what the father accepts as
| a lie, the son will accept as a truth".
___________________________________________________________________
(page generated 2023-03-23 23:02 UTC)