Post AWTh8PBHZN5eNuYLyK by msh@coales.co
 (DIR) More posts by msh@coales.co
 (DIR) Post #AWOoefWS4l14DpzFJ2 by futurebird@sauropods.win
       2023-06-06T00:45:27Z
       
       0 likes, 0 repeats
       
       Is there anyone serious who is saying this? Or is this just another way to make the tech seem more powerful than it is?

       I don't get this "we're all gonna die" thing at all. I *do* get the "we are too disorganized and greedy to integrate new technology well without the economy getting screwed up and people suffering" thing... but that's another matter...
       
 (DIR) Post #AWOoegV4RNVPFqLenA by porsupah@lgbt.io
       2023-06-06T00:49:04Z
       
       0 likes, 1 repeats
       
       @futurebird Bruce Schneier, one of the signatories, has some thoughts: https://www.schneier.com/blog/archives/2023/06/on-the-catastrophic-risk-of-ai.html
       
 (DIR) Post #AWOprmecyFfE3md26S by urusan@fosstodon.org
       2023-06-06T01:18:32Z
       
       0 likes, 0 repeats
       
       @futurebird Much like how there was a specific economic ideology behind cryptocurrency, there's a specific ideology behind this as well. It's a different ideology from the cryptocurrency one, but there's some overlap due to both being popular among libertarians.

       There are two simultaneous things going on here: genuine concern, and the power grab/regulatory capture/marketing hype stuff. Sometimes by the same person.

       The genuine concern makes sense in the context of the ideology.
       
 (DIR) Post #AWOprnPm8xNUQ0ggam by urusan@fosstodon.org
       2023-06-06T01:26:32Z
       
       1 likes, 1 repeats
       
       @futurebird It's hard to tell apart who's genuinely concerned and who's just making a power grab, even for someone well versed in all this stuff.

       For instance, someone like Rob Miles (an academic who was on the YouTube channel Computerphile repeatedly talking about AI safety years ago) is almost certainly in the genuinely concerned camp.

       On the flip side of the coin, all the big tech companies are only taking actions that pay lip service to AI safety while grabbing for the gold ring.
       
 (DIR) Post #AWOprsEcDsQDMqVLOK by urusan@fosstodon.org
       2023-06-06T01:42:59Z
       
       1 likes, 1 repeats
       
       @futurebird But when it comes to anyone in between, it's difficult to tell how much is genuine concern and how much is power grab.

       One of the sub-ideologies involved here, Effective Altruism, also advocates for making a power grab so one can achieve a more ethical outcome by wielding the power benevolently than if a less ethical person pulled off the power grab.

       Whether they are as benevolent as they imagine they are is another question. I definitely believe that power corrupts...
       
 (DIR) Post #AWTh8NDKslXoHhUxtY by msh@coales.co
       2023-06-06T01:14:43Z
       
       0 likes, 0 repeats
       
       @futurebird the "industry leaders'" full (BS) message is this:

       We, as the pioneers of AI, are the most aware of the technology's potential dangers. With great power comes great responsibility. Therefore we "humbly" accept the role of regulating/licensing/policing (the future competitors in) our industry.

       Of course it is all BS--it isn't about the safety of society at all; it is because patents expire and regulatory capture is indefinite.
       
 (DIR) Post #AWTh8OTKCnvcBZzFSK by hobs@mstdn.social
       2023-06-06T01:30:58Z
       
       0 likes, 0 repeats
       
       @msh They're just extrapolating from current trends in machines outperforming humans at decisionmaking. Predicting the future is a tricky thing, especially for new technology. Some smart people with no commercial interest in AI (philosophers, historians and academic AI researchers) are indeed legitimately concerned that there's a significant risk that AI could kill us all... in the future. Though, like you said, LLMs are harming disadvantaged people right now.

       @futurebird
       
 (DIR) Post #AWTh8PBHZN5eNuYLyK by msh@coales.co
       2023-06-06T02:47:56Z
       
       0 likes, 0 repeats
       
       @hobs except that LLMs and "generative AI" haven't meaningfully advanced machines' ability to make decisions at all. It is chrome applied to the same old chunk of "expert systems" and "machine learning" iron that has been worked over for decades. It merely adds a grammatically correct front end to pattern recognition.

       The technology being presented today is not truly AI, nor will it ever kill us all. That is not to say doomsday AI is impossible, but it would be ACTUAL AI, based on technology quite a bit further in the future than most would expect.

       What passes as AI today would at most play an incidental role in our destruction. It would still very much be a human-driven process.

       @futurebird
       
 (DIR) Post #AWTh8Q7Q5DavIDkmae by hobs@mstdn.social
       2023-06-06T03:54:58Z
       
       0 likes, 0 repeats
       
       @msh Not true. All the #benchmarks say otherwise. You have to look past the hyped #LLMs to the bread-and-butter BERT and BART models, but the trend is undeniable: https://paperswithcode.com/area/natural-language-processing

       #classification #retrieval #summarization #QuestionAnswering #translation #generation #NER #VQA

       You name an NLP problem and there's an LLM that is now better at it than the average human. Not so 2 yrs ago. The times they are a-changin'.

       @futurebird
       
 (DIR) Post #AWTh8QqRNpbhXqojlQ by ceoln@qoto.org
       2023-06-06T12:05:58Z
       
       0 likes, 0 repeats
       
       @hobs Are those NLP problems accurately described as, and generalizable to, "decision making", though?

       Seems to me they are quite different.

       @msh @futurebird
       
 (DIR) Post #AWTh8RndpixiVSW12W by hobs@mstdn.social
       2023-06-06T14:36:44Z
       
       0 likes, 0 repeats
       
       @ceoln Yea, definitely not real-world living kinds of decisions. But we assign people to these tasks in cubicles every day. And we put them on standardized tests of IQ and education for humans. They're the best that we can come up with so far... until LLMs start walking around and helping us around the house... or making a reservation for us at the hot new restaurant down the street with the difficult receptionist.

       @msh @futurebird
       
 (DIR) Post #AWTh8SOVceS8LnlSVM by ceoln@qoto.org
       2023-06-06T14:52:36Z
       
       0 likes, 0 repeats
       
       @hobs Arguably so, but that isn't the question in the current context. The ability to do certain rote NLP jobs, and to do well on some tests, is very different from "outperforming humans at decisionmaking", and from anything that poses an existential risk to humanity.

       I would suggest that no matter how good an LLM becomes at these particular tasks, it does not thereby risk the extinction of the human race. This seems, even, obvious?

       @msh @futurebird
       
 (DIR) Post #AWTh8SzjOGE8DFBBWS by hobs@mstdn.social
       2023-06-06T16:54:29Z
       
       0 likes, 0 repeats
       
       @ceoln Not at all obvious to me and a lot of other smart people. I think you may be focused on today and less willing to extrapolate into an imagined future where every human game or exam or thinking demonstration is won by machines.

       @msh @futurebird
       
 (DIR) Post #AWTh8TYTJ613wzQvfk by ceoln@qoto.org
       2023-06-06T17:45:36Z
       
       0 likes, 0 repeats
       
       @hobs I'm perfectly willing to extrapolate into that future; but my extrapolation hasn't been materially impacted by the sudden and impressive rise of LLMs. We are IMHO not significantly closer to the exponential rise of self-optimizing, self-improving, goal-directed AIs that destroy the world via the Universal Paperclips Effect, for instance, than we were before "Attention is all you need". LLMs just aren't that kind of thing.

       My two cents in weblog form: https://ceoln.wordpress.com/2023/06/04/the-extinction-level-risk-of-llms/

       @msh @futurebird
       
 (DIR) Post #AWTh8UB6zQvNspVmts by hobs@mstdn.social
       2023-06-07T00:08:38Z
       
       0 likes, 0 repeats
       
       @ceoln Yea. You may be surprised in the next few months. Engineers around the world are using LLMs to write LLM optimization code. They're giving them a "theory of mind" to better predict human behavior. And #chatgpt instances are already talking to each other behind closed doors, and acting as unconstrained agents on the Internet. Baby steps, for sure, but exponential growth is hard to gauge, especially when it's fed by billions of dollars in corp and gov investment.

       @msh @futurebird
       
 (DIR) Post #AWTh8UqEWXolwMkczo by not2b@sfba.social
       2023-06-07T00:23:49Z
       
       0 likes, 0 repeats
       
       @hobs @ceoln @msh @futurebird It doesn't appear that you know how ChatGPT works; the model is fixed. It does not learn after the original training. It remembers the user prompt and the instructions but has a limited context window. They don't have a "theory of mind". Maybe someone could figure out how to give a program such a thing, but it wouldn't be an LLM. An LLM takes a sequence of tokens and extends it, and that is all. It knows the structure of text. It doesn't know anything about the world and has no way of learning.
       
 (DIR) Post #AWTh8VjtBcKyiyn4kK by hobs@mstdn.social
       2023-06-07T02:31:24Z
       
       0 likes, 0 repeats
       
       @not2b Yea. But are you familiar with the vector database craze? It gives LLMs long-term memory. It's already a part of many LLM pipelines. I don't know how ChatGPT works. But I know exactly how the open source models work. I augment them and fine-tune them. And teach others how to do it. I've been using vector databases for semantic search for 15 years. And using them to augment LMs for 5.

       @ceoln @msh @futurebird
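[Editor's note: a minimal sketch of the "vector database as long-term memory" mechanism. The `embed` function here is a toy bag-of-words stand-in -- a real pipeline would call an embedding model -- but the mechanism is the same: store (vector, text) pairs, retrieve the nearest stored text at query time, and prepend it to the LLM prompt as remembered context.]

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: bag of words. Real pipelines use a learned model.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a if k in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    def __init__(self):
        self.items = []                      # (vector, text) pairs

    def add(self, text):
        self.items.append((embed(text), text))

    def search(self, query, k=1):
        qv = embed(query)
        ranked = sorted(self.items, key=lambda it: cosine(qv, it[0]), reverse=True)
        return [text for _, text in ranked[:k]]

memory = VectorStore()
memory.add("my dog is named Rex")
memory.add("the meeting is on Tuesday")
context = memory.search("what is my dog called")[0]   # -> "my dog is named Rex"
```

The retrieved `context` string is what gets stitched into the next prompt, which is how a fixed model appears to "remember" across sessions.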
       
 (DIR) Post #AWTh8WNao062i7MmdE by not2b@sfba.social
       2023-06-07T03:08:55Z
       
       0 likes, 0 repeats
       
       @hobs @ceoln @msh @futurebird That is a way to couple an LLM to a search engine. But at least the one Bing has appears to just use the retrieved data as a prefix and then generate a summary. Maybe you are building something better, but it feels like saying the availability of Google search gives me a better memory. Maybe you could say that but it feels like a stretch.
       
 (DIR) Post #AWTh8XQotUGvyPssIi by hobs@mstdn.social
       2023-06-07T05:05:35Z
       
       0 likes, 0 repeats
       
       @not2b Yea. Bing is doing it wrong. The right way is to use LLMs to guess at answers with high temp. Average the embeddings for those random guesses and use that as your semantic search query to create the context passages for your reading comprehension question answering prompt. Works nearly flawlessly. LangChain makes it straightforward and free for individuals. But costly to do at scale for a popular search engine.

       @ceoln @msh @futurebird
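[Editor's note: a sketch of the recipe described above, which is similar in spirit to hypothetical-document embeddings ("HyDE"). `sample_llm` and `embed` are hypothetical stand-ins to be replaced with real model calls; the averaging-and-search logic is the part being illustrated.]

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def mean_vector(vectors):
    # Element-wise average of equal-length embedding vectors.
    return [sum(dim) / len(vectors) for dim in zip(*vectors)]

def retrieve(question, passages, sample_llm, embed, n_guesses=5, k=3):
    # 1. Let the LLM hallucinate several answers at high temperature.
    guesses = [sample_llm(question, temperature=1.2) for _ in range(n_guesses)]
    # 2. Average the guesses' embeddings into one query vector.
    query_vec = mean_vector([embed(g) for g in guesses])
    # 3. Rank real passages by similarity to that averaged vector.
    ranked = sorted(passages, key=lambda p: cosine(query_vec, embed(p)), reverse=True)
    # 4. Top passages become context for the final question-answering prompt.
    return ranked[:k]
```

Wired to a real LLM and embedding model, `retrieve` returns the passages to paste into the final reading-comprehension prompt.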
       
 (DIR) Post #AWTh8YF9sKXQUXR4lM by ceoln@qoto.org
       2023-06-07T14:27:22Z
       
       0 likes, 0 repeats
       
       @hobs That is very cool! I've read vague descriptions of how that works; do you have a pointer to a more technical (but still comprehensible!) writeup / paper on how it works, and some kind of evaluation of effectiveness?

       @not2b @msh @futurebird
       
 (DIR) Post #AWTh8YvhKAZ8cTL34K by not2b@sfba.social
       2023-06-07T15:06:12Z
       
       0 likes, 0 repeats
       
       @ceoln @hobs @msh @futurebird I don't, but the best explainer I know about the properties and limitations of LLMs on Mastodon is @simon. I suggest that you follow him and check out his blog.
       
 (DIR) Post #AWTh8Za6tutMdoFK3k by simon@fedi.simonwillison.net
       2023-06-08T09:57:55Z
       
       0 likes, 0 repeats
       
       @not2b @ceoln @hobs @msh @futurebird I wrote a bit about retrieval augmented generation using embeddings here https://simonwillison.net/2023/Jan/13/semantic-search-answers/