Post AW6PtzXeP9w04IZEDQ by emilymbender@dair-community.social
(DIR) Post #AW5ahRLwb4NktdLiE4 by emilymbender@dair-community.social
2023-05-27T18:50:46Z
0 likes, 2 repeats
The way that the media are fawning over @geoffreyhinton's supposed "whistleblowing" is just appalling. Here's the latest from Bobby Allyn at NPR:
https://www.npr.org/2023/05/27/1178575886/-the-godfather-of-ai-warns-of-ai-possibly-outperforming-humans

& a short dissection: >>
(DIR) Post #AW5ahS3C0Gyd3laFdY by emilymbender@dair-community.social
2023-05-27T18:50:59Z
0 likes, 0 repeats
NPR hasn't provided a transcript (yet), but I've typed up some bits:

Scott Simon: Geoffrey Hinton is known as the godfather of artificial intelligence. He helped create some of the most significant tools in the field. But now he's begun to warn loudly, and passionately, that the technology may be getting out of hand. >>
(DIR) Post #AW5ahTBNnJ7eZSQJ2e by emilymbender@dair-community.social
2023-05-27T18:52:09Z
0 likes, 0 repeats
I know the "godfather of" framing can be applied to any field, but we should pay attention to the way it slips in extra anthropomorphization here. Saying, e.g., that Laurel Aitken is the godfather of ska doesn't carry the same risk of thinking that ska music is like a person.

But more to the point: How is it that media reporting on Hinton's sudden switch to "warning" so often forgets to ask who else has been talking about the dangers of this tech, and what in particular they have been warning of? >>
(DIR) Post #AW5ahU4KV14hJs8Bge by emilymbender@dair-community.social
2023-05-27T18:52:30Z
0 likes, 0 repeats
A bit further into the piece, @BobbyAllyn explains neural nets like this:

Allyn: In 2012, Hinton and two of his students at the University of Toronto built what's called a neural network. It's called that because it's a geeky computer system that kind of operates the way a brain works, like the way neurons work. >>
(DIR) Post #AW5ahUjS27y5NPN1ma by emilymbender@dair-community.social
2023-05-27T18:53:05Z
0 likes, 1 repeats
That's verging on misinformation. "Neural nets" are called that because they are built out of software components that were roughly inspired by a 1950s idea of how neurons work. Promoting the narrative that they are "like brains" insidiously promotes the idea that ChatGPT et al are "thinking".>>
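[For readers wondering what those "software components" actually are: a minimal, purely illustrative sketch of a 1950s-style artificial "neuron" (a perceptron-type unit). It is just a weighted sum and a threshold; the function and weight names here are made up for the example, and nothing about it resembles a biological neuron beyond the loose metaphor.]

```python
# One "artificial neuron" in the 1950s perceptron style:
# multiply inputs by weights, add a bias, apply a hard threshold.
def neuron(inputs, weights, bias):
    """Weighted sum plus bias, passed through a step 'activation'."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Example: weights chosen so the unit behaves like logical AND.
and_weights = [1.0, 1.0]
and_bias = -1.5
print(neuron([1, 1], and_weights, and_bias))  # fires: 1
print(neuron([1, 0], and_weights, and_bias))  # does not fire: 0
```

Modern "neural nets" are very many such units composed together, with the weights tuned automatically; the name is a historical metaphor, not a claim about brains.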
(DIR) Post #AW5al8eI527q2qh67E by fcktheworld587@social.linux.pizza
2023-05-27T18:55:25Z
0 likes, 0 repeats
@emilymbender so few people recognise this - especially publicly, in media. It's fucking insane
(DIR) Post #AW5asOCAg30yHwzzlI by emilymbender@dair-community.social
2023-05-27T18:53:21Z
0 likes, 0 repeats
Allyn: But now Hinton has left Google and is sounding the alarm.

Huh, what might he be sounding the alarm about? Is he lifting up the voices of people who have been documenting the actual harms being done by corporations in the name of AI?

Nope. Full-on doomer, instead.

Hinton: These things could get more intelligent than us, and could decide to take over. And we need to worry, now, about how we prevent that happening. >>
(DIR) Post #AW5asP1DcFqcqGslKS by fcktheworld587@social.linux.pizza
2023-05-27T18:56:42Z
0 likes, 0 repeats
@emilymbender perhaps he was given some sort of NDA super-retirement package to "sound the alarm" without any meaningful info, in order to drum up further hype for AI
(DIR) Post #AW5asQvyUiqEmaRbQu by emilymbender@dair-community.social
2023-05-27T18:53:39Z
0 likes, 0 repeats
Allyn: He came to this position recently after two things happened. First, when he was testing out a chatbot at Google and it appeared to understand a joke he told it. That unsettled him. Secondly, when he realized that AI that can out-perform humans is actually way closer than he previously thought. >>
(DIR) Post #AW5asTIjh4Ek7l6oBk by emilymbender@dair-community.social
2023-05-27T18:53:55Z
0 likes, 0 repeats
This just goes to show that knowing the math behind the algorithms isn't enough to protect folks from falling for their own ability to make sense of synthetic text (and imagine a mind in there). You also need expertise in linguistics or psychology or...

Also: Check the presupposition introduced by "realized" in Allyn's sentence there. This means that he is introducing into the common ground, as uncontroversial, that "AI that can out-perform humans" is close. >>
(DIR) Post #AW5cTIB86ti0IRfkTQ by fcktheworld587@social.linux.pizza
2023-05-27T19:14:37Z
0 likes, 0 repeats
@apophis @emilymbender those are definitely good points. But, in terms of conspiracy, all they would need is this guy and an executive, no? I can get millions upon millions in free marketing by throwing this dude a million, or something similar. Simple transaction
(DIR) Post #AW5d9vYw4WAwbe8a1I by CassandraZeroCovid@mastodon.social
2023-05-27T19:22:19Z
0 likes, 1 repeats
@fcktheworld587 @emilymbender And distract from possible regulation that would mitigate harms arising now.
(DIR) Post #AW6PtxD14uEypitim8 by emilymbender@dair-community.social
2023-05-27T18:54:16Z
0 likes, 0 repeats
Hinton: I thought for a long time that we were like 30-50 years away from that. So I call that far away from something that's got greater general intelligence than a person. Now I think we may be much closer. Maybe only 5 years away from that.

This remark actually makes no sense to me. 30 years isn't really that far off. Whatever he's worried about now wasn't worth worrying about given an extra 25 years to prepare? >>
(DIR) Post #AW6Pty1M3kVTLqRvEm by emilymbender@dair-community.social
2023-05-27T18:54:42Z
0 likes, 0 repeats
Allyn [re the "AI pause" letter]: Hinton refused to sign the letter, because it didn't make sense to him.

Hinton: The research will happen in China if it doesn't happen here. Because there's so many benefits of these things. Such huge increases in productivity.

*sigh* Cue the gratuitous xenophobia. The #AIpause letter didn't make sense, but not for that reason. Here's what we (listed authors of the Stochastic Parrots paper) said about it:
https://www.dair-institute.org/blog/letter-statement-March2023 >>
(DIR) Post #AW6Ptyod6XvDofVH2e by emilymbender@dair-community.social
2023-05-27T18:54:58Z
0 likes, 0 repeats
Allyn: Some of his warnings do sound a little bit like doomsday for mankind.

Hinton: There's a serious danger that we'll get things smarter than us fairly soon and that these things might get bad motives and take control.

🙄 >>
(DIR) Post #AW6PtzXeP9w04IZEDQ by emilymbender@dair-community.social
2023-05-27T18:55:16Z
0 likes, 0 repeats
Allyn: Hinton isn't talking about a robot invasion of the White House, but more like the ability to create and deploy sophisticated disinformation campaigns that could interfere with elections.

Uh, what? Human bad actors using synthetic media for disinformation purposes is a real concern, one which many folks have been raising for years. You don't need an "AGI" with "bad motives" to get there. It's a problem now. >>
(DIR) Post #AW6Pu0EBqzxiCETCWO by emilymbender@dair-community.social
2023-05-27T18:56:00Z
0 likes, 1 repeats
Hinton: This isn't just a science fiction problem. This is a serious problem that's probably gonna arrive fairly soon and politicians need to be thinking about what to do about it now.

Your AI doomerism is a science fiction problem, and focusing politicians on it is in fact a distraction tactic away from the harm that Google, Microsoft, OpenAI, Anthropic, MidJourney, et al. are doing NOW. And away from harmful government applications of surveillance tech and automated decision making. >>
(DIR) Post #AW6Pu0xv6yXeU3ring by emilymbender@dair-community.social
2023-05-27T18:56:26Z
0 likes, 0 repeats
To the media: Wow is it a bad look that one old white guy comes out belatedly saying "AI is bad!" and you fawn all over him, while failing to connect with the scholars (mostly not white men) who have been documenting the actual problems with "AI".>>
(DIR) Post #AW6Pu3BSraZ3MeDZBo by emilymbender@dair-community.social
2023-05-27T18:57:16Z
0 likes, 1 repeats
You could even use Hinton's "realization" as a hook and then pivot to covering the work of and talking with:

Ruha Benjamin
Safiya Noble
Cathy O'Neil
Sasha Costanza-Chock
Brandeis Marshall
Deb Raji
Abeba Birhane
Meredith Whittaker
Karla Ortiz

and of course Timnit Gebru and Meg Mitchell, who were fired by Google over our paper discussing the dangers of large language models (aka, to Hinton, "AI"). >>
(DIR) Post #AW6Pu5Ya2cF8iv33Uu by emilymbender@dair-community.social
2023-05-27T18:57:34Z
0 likes, 0 repeats
Such coverage would be so much more beneficial to the public, helping everyone understand the real issues and setting the stage for meaningful regulation.
(DIR) Post #AW7alCQWQG8u4uCdeK by PixelRefresh@masto.ai
2023-05-28T00:26:08Z
0 likes, 1 repeats
@emilymbender ChatGPT might not think, but I do.