Post AVPtWUvaLF92TSSsSW by Eiregoat@nicecrew.digital
(DIR) Post #AUFPxlKWUOo57GhKF6 by evan@cosocial.ca
2023-04-02T15:49:04Z
1 likes, 1 repeats
A major difference between LLMs and cryptocurrencies is this: for cryptocurrencies to be valuable to *me*, I need them to be valuable to *you*, too. If you don't believe in crypto, the value of my crypto goes down.

This isn't the case for LLMs. I need enough people to be interested in LLMs that ChatGPT stays available, but other than that, your disinterest in it is only a minor nuisance.

In a market, I benefit from your refusal to use it. AI-enhanced me is competing with plain ol' you.
(DIR) Post #AUFPxmOoVvpiQriGZM by evan@cosocial.ca
2023-04-02T15:51:24Z
1 likes, 1 repeats
Probably the biggest threat to me and my use of AI is that you squawk enough to get the technology restricted to only "authorized" people -- governments, big companies.

I think the horse may be out of the barn on that one.
(DIR) Post #AUFPxnT6XSrLkSjCtc by evan@cosocial.ca
2023-04-02T16:10:25Z
0 likes, 1 repeats
The closest analog to the current situation with LLMs is human cloning.

https://en.wikipedia.org/wiki/Human_cloning

In 1996, with the successful cloning of "Dolly" the sheep, there was a flurry of concern about the ethical dangers of human cloning. Many countries and international organizations moved to outlaw or "pause" cloning of humans until we as a society figured out how to use the technology ethically.

Almost 30 years later, human cloning is still not a mainstream technology.
(DIR) Post #AUFPxoWgbdJp1rPa7M by evan@cosocial.ca
2023-04-02T16:11:22Z
1 likes, 1 repeats
Do I know how we should use cloning? No, I don't.

Do I think we used this 30-year pause to carefully think over, as a society, how to use cloning technology ethically? Absolutely not. We're no more ready for it than we were in 1996.
(DIR) Post #AUFPxpZCjkvYFxb6gK by evan@cosocial.ca
2023-04-02T16:13:24Z
0 likes, 1 repeats
Probably the biggest difference between cloning and AI is that human cloning was still theoretical in 1996, without practical use. In 2023, 100 million people per month are using ChatGPT.

I don't think that many people are going to be willing to give up a useful tool for theoretical dangers that aren't reflected in their daily usage.
(DIR) Post #AUFQCF1QtL71G6flA0 by fifilamoura@eldritch.cafe
2023-04-02T16:33:41Z
0 likes, 1 repeats
@evan Well, yes, self-centeredness, immediate gratification, convenience, devaluation of culture and elevation of "productivity" are why we can't tackle climate change either. I'd say LLMs are more like fossil fuels and cars than cloning*, so it's useful to think of discussions about present and future harms, potentials and unintended consequences in that context, and not just about how convenient it is for us.

*Cloning basically ended up being a bit useless for mass production of animals for industry and in terms of cloning people, so that's why it's now relegated to cloning people's favorite pet. Cloning itself isn't banned; it's experimenting with cloning humans that's banned, for ethical reasons related to causing suffering. Most debates around medical ethics are ultimately about suffering and dignity/autonomy. CRISPR also made cloning pretty uninteresting medically, since you can now alter genes in more targeted ways in a living person.

*Most people I know playing with ChatGPT are marketers or programmers; everyone else I know seems to be far more into playing with the image-generating versions (unsurprisingly to me, since most people see writing as work and not play; undoubtedly ChatGPT will be a boon to people who hate writing or want to generate SEO texts).

*Medicine has ethics and is highly regulated because it's so easy to do harm to humans, and it has still done great harm at times. The tech industry does not have ethics built into it, is very loosely regulated at the best of times, and puts money above humans continually. This also makes cloning and LLMs entirely different things, and not very comparable.
(DIR) Post #AUFQDz5P70ScPhMSLQ by Ronkjeffries@mastodon.social
2023-04-02T16:14:44Z
0 likes, 0 repeats
@evan Interesting perspective. Are you following AI safety and AI ethics, e.g. Connor Leahy and others who fear we may have already crossed the Rubicon with AI developments?
(DIR) Post #AUFQDzquGOSSn1aOO0 by evan@cosocial.ca
2023-04-02T16:17:40Z
0 likes, 1 repeats
@Ronkjeffries Only barely. Honestly, I think there's a lot of millenarian thinking there.

The idea that we will create a virtual God who judges us, finds us irredeemable, and casts us into a lake of fire seems to say more about how we think of ourselves than about how technology works.
(DIR) Post #AUFQE17FZ77qi0ExV2 by evan@cosocial.ca
2023-04-02T16:58:16Z
0 likes, 0 repeats
@Ronkjeffries I'm also really suspicious of commercial disinformation.

OpenAI isn't the first player to build an LLM. We know Google's had them privately for years, and they're a relatively transparent company. Probably others are out there, kept secret.

OpenAI made theirs public, and that made everyone else hopping mad. That's one reason you see so many breathless news articles about how the AI wants to kill all humans, rob blind people, and so forth: black-ops PR by competitors.
(DIR) Post #AUFQKJlT8v1oHg3aKm by Jonathanglick@mstdn.social
2023-04-02T15:54:19Z
0 likes, 0 repeats
@evan Right. I think, though, that it's maybe more than LLMs 'needing enough people to be interested.' Somewhat like crypto, for AI to be widely adopted and integrated, it needs popular support to prevent heavy regulation *despite belief in serious potential risks and a lack of immediately evident social benefits.*
(DIR) Post #AUFQKKLcyTx45oySh6 by timbray@hachyderm.io
2023-04-02T15:56:57Z
0 likes, 0 repeats
@Jonathanglick @evan On another dimension, crypto & LLMs are similar in that they are both really expensive, requiring large investments in infrastructure, and with a high carbon cost. (Not what you’re talking about, I know.)
(DIR) Post #AUFQKL2sNgXwFxD06a by evan@cosocial.ca
2023-04-02T16:41:59Z
0 likes, 0 repeats
@timbray @Jonathanglick This is true!

About 2-3% of global emissions come from computer technology, and it would be a big help to get them under control. Video games and streaming video are by far the biggest emitters, but search engines, ecommerce and social networks come in pretty close, too.

I'd love to see more public providers using carbon offsets to balance out their emissions, and carbon taxes may help more. It's an important topic to bring up!
(DIR) Post #AUFQKLd2DFTC467sSu by Jonathanglick@mstdn.social
2023-04-02T16:47:46Z
0 likes, 0 repeats
@evan and I really liked your broader point — that crypto is necessarily a ‘multi-user game,’ whereas AI is not.
(DIR) Post #AUFQKMGjpdEG3EhaLo by evan@cosocial.ca
2023-04-02T16:54:28Z
0 likes, 1 repeats
@Jonathanglick It's not *strictly* true. OpenAI uses the conversations people have with ChatGPT to train the next-generation model, so more people having better conversations with ChatGPT is somewhat better for me. But it's not strictly necessary, the way it is with crypto.
(DIR) Post #AVPhazSZZEGQsNGoIi by clacke@libranet.de
2023-05-07T13:46:19Z
3 likes, 4 repeats
@evan The dangers of "AI" -- really the dangers of accelerated automation of capitalism, with increasing complexity and diminishing human control and oversight -- are neither theoretical nor future tense.

They are not reflected in the majority's daily usage, but they've been here for decades and are growing exponentially.

It's not about sentient machine overlords; it's about concentrated power for human overlords, abstracted away from the harm they're causing.
(DIR) Post #AVPtWUvaLF92TSSsSW by Eiregoat@nicecrew.digital
2023-05-07T16:07:50.342924Z
0 likes, 0 repeats
Why would you even compare these to begin with? Kinda seems like a post explaining how cars and oranges aren't the same thing.