Post AUTXlJIJHHJpcwabIG by pronoiac@mefi.social
 (DIR) More posts by pronoiac@mefi.social
 (DIR) Post #AUORowB2Doxe3ti9i4 by DrewKadel@social.coop
       2023-04-06T21:43:08Z
       
       8 likes, 30 repeats
       
       My daughter, who has had a degree in computer science for 25 years, posted this observation about ChatGPT on Facebook. It's the best description I've seen:
       
 (DIR) Post #AUPcY2PBj5o8WnVdke by chromatic@kolektiva.social
       2023-04-07T11:19:40Z
       
       0 likes, 0 repeats
       
        @DrewKadel Douglas Adams also predicted this sort of Artificial "almost, but not quite, entirely unlike" Intelligence:

        "He had found a Nutri-Matic machine which had provided him with a plastic cup filled with a liquid that was almost, but not quite, entirely unlike tea. The way it functioned was very interesting. When the Drink button was pressed it made an instant but highly detailed examination of the subject's taste buds, a spectroscopic analysis of the subject's metabolism and then sent tiny experimental signals down the neural pathways to the taste centers of the subject's brain to see what was likely to go down well. However, no one knew quite why it did this because it invariably delivered a cupful of liquid that was almost, but not quite, entirely unlike tea."
       
 (DIR) Post #AUPjPNAnqEUI0mb12e by PCOWandre@jauntygoat.net
       2023-04-07T03:13:11Z
       
       2 likes, 0 repeats
       
       @DrewKadel No introspection and only waiting to generate the next line of a conversation. Sounds like quite a few humans!
       
 (DIR) Post #AUPjPNcSBQbbOXX68m by Moon@shitposter.club
       2023-04-07T16:22:15.337982Z
       
       4 likes, 2 repeats
       
       @PCOWandre @DrewKadel people who think ChatGPT thinks have failed the Reverse Turing Test.
       
 (DIR) Post #AUPjgBs57CFDskrIAK by thatguyoverthere@shitposter.club
       2023-04-07T16:25:26.301680Z
       
       1 likes, 0 repeats
       
       @Moon @PCOWandre @DrewKadel what is thinking though? I am starting to wonder if it isn't just predicting the next event. I walk down the stairs and my cat "thinks" I am going to the living room so he barrels down to beat me to the sliding doors. He doesn't realize I'm going to the kitchen so his actions are wrong, but he thought he knew what I was doing.
       
 (DIR) Post #AUPjjIK3aOhml7QLQW by thatguyoverthere@shitposter.club
       2023-04-07T16:25:59.935865Z
       
       0 likes, 0 repeats
       
        @Moon @DrewKadel @PCOWandre disclaimer: not saying ChatGPT thinks
       
 (DIR) Post #AUPk4Ilj7cSqDyNpJo by Moon@shitposter.club
       2023-04-07T16:29:44.847212Z
       
       8 likes, 3 repeats
       
        @thatguyoverthere @DrewKadel @PCOWandre LLMs more or less just string language tokens together using a mathematical model that people find pleasing. thinking is more closely related to reasoning than to consciousness. LLMs mostly don't reason, although there is some research that suggests a small amount of reasoning is an emergent property, and there are a couple of modified LLMs that can recognize and do math problems.

        Math problems actually help distinguish the two concepts. LLMs will give you an answer that sounds pleasing but may or may not be correct, because they're not actually reasoning. If the answer is correct, it's because the model consumed enough data related to that specific problem that the correct answer was statistically likely.
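
        The token-stringing idea above can be sketched as a toy bigram model (purely illustrative: a tiny made-up corpus standing in for a neural LLM's training data). The model emits whatever continuation was statistically frequent in its data, with no arithmetic or reasoning involved:

```python
from collections import Counter, defaultdict

# Toy bigram "language model" (illustrative stand-in for a neural LLM):
# count which token follows which in a tiny made-up training corpus.
corpus = "two plus two is four . two plus three is five .".split()

follower_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follower_counts[prev][nxt] += 1

def predict_next(token):
    """Return the continuation that was most frequent in training."""
    followers = follower_counts[token]
    return followers.most_common(1)[0][0] if followers else None

# "two" is most often followed by "plus" in the data, so that is what
# the model emits -- pattern continuation, not arithmetic or reasoning.
print(predict_next("two"))  # -> plus
```

        After "is" this model is equally happy to emit "four" or "five" regardless of what came earlier in the sentence: a correct-sounding answer falls out only when the training data happened to make the correct continuation the likely one.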
       
 (DIR) Post #AUPkOPj0y6Exafhlia by jeffcliff@shitposter.club
       2023-04-07T16:33:23.706473Z
       
       0 likes, 1 repeats
       
        > a small amount of reasoning is an emergent property

        https://www.lesswrong.com/posts/8QzZKw9WHRxjR4948/the-futility-of-emergence
       
 (DIR) Post #AUPkfxlndwwG6NNM6y by Moon@shitposter.club
       2023-04-07T16:36:32.085060Z
       
       0 likes, 0 repeats
       
        @jeffcliff @thatguyoverthere @DrewKadel @PCOWandre https://openreview.net/forum?id=yzkSU5zdwD

        There is some very recent criticism of the paper on here; I'm learning about it now.
       
 (DIR) Post #AUPlwnwnW66GVy79sW by kirbyV2@pwnage.nyanide.com
       2023-04-07T16:50:51.018531Z
       
       0 likes, 0 repeats
       
       @DrewKadel DUH
       
 (DIR) Post #AUPmjMTqQX1XW6j3HU by Moon@shitposter.club
       2023-04-07T16:59:34.558271Z
       
       1 likes, 0 repeats
       
        @ageha @PCOWandre @thatguyoverthere @DrewKadel i agree that empiricism comes before rationality, but once you have the actual rules it becomes systematic reasoning at that point. also agree that most of what people do isn't reasoning, but it does incorporate reasoning among other things. LLM models also don't resemble the modeling that humans do in any way, though i guess that's not really an argument against the idea that they reason.

        I am trying to think through the difference between a model that was created by a brain that evolved to be good at physical survival and a model created by an LLM that evolved to be good at sounding pleasing to a human. I feel like it is the difference between humans that survive in nature, where their efforts have to correlate with eating, and humans that survive under weird governments, where you eat by professing weird beliefs that make your leaders happy but don't directly help or harm your body.
       
 (DIR) Post #AUPoocOGtuLQDXklDU by Moon@shitposter.club
       2023-04-07T17:22:55.795540Z
       
       2 likes, 0 repeats
       
        @ageha @PCOWandre @thatguyoverthere @DrewKadel I agree that humans don't do it at all times, but we have a model that is capable of being handed another arbitrary model, reasoning about it, and making predictions by rigorously applying the rules of that model. An LLM has nothing like that; also, it *is* a model, it does not can haz a model.

        On the other hand I have met humans at parties that employ LLMs on many topics. I do it myself when I need to sound smart. It makes sense to me to make a distinction between sounding and being smart, but of course my brain would say that to me.
       
 (DIR) Post #AUPpJs0Wt2sXd7YAGu by Moon@shitposter.club
       2023-04-07T17:28:36.431141Z
       
       2 likes, 0 repeats
       
       @ageha @PCOWandre @thatguyoverthere @DrewKadel i found the paper on LLM reasoning interesting but yeah I don't think bigger models are actually gonna have emergent flexible reasoning capability. it's a cool implausible science fiction concept like teleportation or eskimos
       
 (DIR) Post #AUPqMkpI8vHmBGEUmu by thatguyoverthere@shitposter.club
       2023-04-07T17:40:21.633738Z
       
       1 likes, 0 repeats
       
        @Moon @ageha @DrewKadel @PCOWandre I think there will come a point where it's going to be very difficult to distinguish. The LLM is only one part of the whole thing; that part teaches it how to speak in a way that we can understand. But with supervised training you can teach it to act on ideas you communicate to it. I think making a distinction between a well-trained computer and a human, when it comes to whether or not one is reasoning, might be more of a subjective preference at a certain point.

        I don't think that means computers will be "conscious" or anything like that. That's an entirely different muddy puddle to stomp around in. I just think that thinking seems to be something like predicting the outcome of a series of events, sometimes adding events to the list to achieve a desired end state. Maybe that would be reasoning. I don't know. It just seems like we want to reserve these words for humans because we consider ourselves special.
       
 (DIR) Post #AUQDuABVBFT1KVWN60 by fcktheworld587@social.linux.pizza
       2023-04-07T22:04:05Z
       
       1 likes, 0 repeats
       
       @DrewKadel it's just predictive text in makeup
       
 (DIR) Post #AUQSH5voYeN69r0obI by adamasnemesis@social.adamasnemesis.com
       2023-04-08T00:43:39Z
       
       0 likes, 0 repeats
       
       @DrewKadel It's excellent.
       
 (DIR) Post #AURUdXAB8peU94SzDc by DrewKadel@social.coop
       2023-04-07T21:30:10Z
       
       1 likes, 0 repeats
       
        @ageha @PCOWandre @Moon @thatguyoverthere I do think that there are ways to enhance or incorporate ChatGPT into a more useful research assistant--parsing queries & directing some to the best research databases or something--then coming back to the LLM for what it's good at. However, the basic illusions will still be the same, even if it becomes better than Google & Wikipedia combined.
       
 (DIR) Post #AURUfCIZgcZK0wBe6a by ewdocparris@writing.exchange
       2023-04-07T15:53:39Z
       
       1 likes, 0 repeats
       
        @DrewKadel And the reason business people love the idea is they don't know what their workers do. They only know what their workers sound like. ChatGPT can sound exactly like their workers. The problem is they need their workers to actually do the stuff, not sound like it.

        Businesses leveraging this tech will all eventually fail.
       
 (DIR) Post #AUS9YRFZnJkMTmgEgC by ignaloidas@not.acu.lt
       2023-04-08T20:24:34.233Z
       
       0 likes, 0 repeats
       
        @Moon@shitposter.club @thatguyoverthere@shitposter.club @DrewKadel@social.coop @PCOWandre@jauntygoat.net tbh I don't expect trying to get reasoning out of LLMs to have any success. François Chollet made a pretty damn reasonable reasoning challenge for computers in ~2019, and the best solutions as of now can solve only about 30% of the tasks--and those algorithms contain zero neural networks. https://pgpbpadilla.github.io/chollet-arc-challenge
       
 (DIR) Post #AUSN8Ofi9oZaJNNhEO by Zerglingman@freespeechextremist.com
       2023-04-08T22:56:57.681164Z
       
       0 likes, 0 repeats
       
       @DrewKadel Haha int(random()*1000) go brrrrrrrrrr
       
 (DIR) Post #AUTXlHZxegFYJ6pfWq by pronoiac@mefi.social
       2023-04-07T04:33:43Z
       
       0 likes, 1 repeats
       
        @DrewKadel alt text:

        Something that seems fundamental to me about ChatGPT, which gets lost over and over again:

        When you enter text into it, you're asking "What would a response to this sound like?"

        If you put in a scientific question, and it comes back with a response citing a non-existent paper with a plausible title, using a real journal name and an author name who's written things related to your question, it's not being tricky or telling lies or doing anything at all surprising! This is what a response to that question would sound like! It did the thing!

        But people keep wanting the "say something that sounds like an answer" machine to be doing something else, and believing it *is* doing something else.

        It's good at generating things that sound like responses to being told it was wrong, so people think that it's engaging in introspection or looking up more information or something, but it's not, it's only, ever, saying something that sounds like the next bit of the conversation.
       
 (DIR) Post #AUTXlJIJHHJpcwabIG by pronoiac@mefi.social
       2023-04-07T04:36:51Z
       
       0 likes, 0 repeats
       
       @DrewKadel alt text in previous reply, @diazona @clacke
       
 (DIR) Post #AUThPCg6CB6CHHoWLA by roland@giersig.net
       2023-04-09T14:15:11Z
       
       1 likes, 0 repeats
       
        @Moon And how does ChatGPT differ in that kind of "mathematical" thinking from the way most people think about mathematics? 🤔 Yes, ChatGPT is built as a text predictor. That's very similar to how humans think and act. We also are association machines. We predict the future all the time. 🤷‍♂️ @PCOWandre @thatguyoverthere @DrewKadel
       
 (DIR) Post #AUTiBZ6K89R3in8nRY by thatguyoverthere@shitposter.club
       2023-04-09T14:27:32.616566Z
       
       0 likes, 1 repeats
       
        @roland @Moon @PCOWandre @DrewKadel There was a paper on the effects of GPS on the human brain that I read a while back and wish I could find again. It described the process of determining a route in our brain as heavily reliant on our ability to predict the future. According to the paper, things like traffic, construction, speed limits, etc. are taken into account to try to calculate each potential path, and each time our prediction doesn't align with the actual future, the brain recalculates to try and assess whether a change in route is required.
       
 (DIR) Post #AUUCZXbYLIUsei63rE by roland@giersig.net
       2023-04-09T19:38:08Z
       
       1 likes, 0 repeats
       
        @thatguyoverthere Well, most human sports rely heavily on our capability to predict the future. We have to mentally keep track of the spatial positions of several objects and predict where those objects will be in the next few seconds. If you want to feel that prediction engine in our heads in action, just close your eyes while walking and notice that you can go on for several seconds without the urge to open your eyes.

        @PCOWandre @Moon @DrewKadel
       
 (DIR) Post #AUUVdduminW4VulI2K by ec670@pawoo.net
       2023-04-09T23:41:39Z
       
       1 likes, 0 repeats
       
       @thatguyoverthere @roland @PCOWandre @Moon @DrewKadel INCREDIBLE!  Scientists spend millions to prove the obvious
       
 (DIR) Post #AUUWlDkK04Rxcj2qf2 by thatguyoverthere@shitposter.club
       2023-04-09T23:54:14.223968Z
       
       0 likes, 0 repeats
       
        @ec670 @roland @PCOWandre @Moon @DrewKadel yeah I'm sure it wasn't cheap. It was interesting, but the larger point of the study was that using GPS may have a negative impact on gray matter in certain regions of the brain.
       
 (DIR) Post #AUUX8kelo1vCM0fW4G by ngaylinn@tech.lgbt
       2023-04-06T21:58:26Z
       
       0 likes, 0 repeats
       
       @DrewKadel Love this. We want so badly for ChatGPT to produce answers, opinions, and art, but all it can do is make plausible simulations of those things. As a species, we've never had to deal with that before.
       
 (DIR) Post #AUUX8lQyumUClXE1DM by ec670@pawoo.net
       2023-04-09T23:58:28Z
       
       0 likes, 0 repeats
       
        @ngaylinn @DrewKadel Yes. It took like $600,000 in rented GPU cycles for them to train Midjourney (or maybe the other one)--and all it does is produce shoddy, often inferior art similar to the art it was trained on.

        That’s not creativity. That’s the exact opposite of creativity.
       
 (DIR) Post #AUUb6UleJGOAJR494q by dew_the_dew@nicecrew.digital
       2023-04-10T00:42:54.371113Z
       
       4 likes, 1 repeats
       
       LLMs can't think but neither can HR catladies or medical billing specialists or chief diversity officers
       
 (DIR) Post #AUXDKE1BZ7XuzfE7G4 by zeropolitics@noagendasocial.com
       2023-04-11T07:00:36Z
       
       0 likes, 0 repeats
       
        @DrewKadel Exactly, people don't understand this at all.
       
 (DIR) Post #AXaxoLAKBnoqZh3wzQ by onan@dobbs.town
       2023-07-11T20:01:49Z
       
       0 likes, 0 repeats
       
        @DrewKadel "It's good at generating things that sound like responses to being told it was wrong, so people think that it's engaging in introspection or looking up more information or something, but it's not, it's only, ever, saying something that sounds like the next bit of the conversation."

        Just like a real person!
       
 (DIR) Post #AxIKM9GEgNDTZy0Ihk by ftranschel@norden.social
       2023-04-07T14:17:57Z
       
       1 likes, 0 repeats
       
        @ngaylinn @DrewKadel Well, we had--conceptually this is *exactly* the same as with Eliza, just two orders of magnitude more sophisticated and two orders of magnitude more connected.

        At its core it's really just people lacking technical understanding hallucinating an anthropomorphization of a conditional probability distribution.

        With ChatGPT, the interface is the innovation, *not* the model.
       
 (DIR) Post #AxVeaLK39u9NBCJSIi by masek@infosec.exchange
       2025-08-22T08:53:52Z
       
       0 likes, 0 repeats
       
        @DrewKadel The problem is: human answers are often along these lines too.

        I see the problems coming due to AI as well. But there may be another side effect: dispelling myths about human intelligence.
       
 (DIR) Post #AxVeaMoZbu9zo9bLW4 by rysiek@mstdn.social
       2025-08-22T09:03:32Z
       
       0 likes, 0 repeats
       
        @masek the difference is that humans do have a model of the world and can, if they choose to, actually reason based on it. Spicy autocomplete can't.

        I'd be very careful comparing text extruders to some intellectually lazy humans.

        @DrewKadel
       
 (DIR) Post #AxVeaO0JBl8pUq6ERk by masek@infosec.exchange
       2025-08-22T09:20:30Z
       
       0 likes, 0 repeats
       
        @rysiek I don't say that both are identical, but speculate that both share some underlying mechanisms. I consider it eerie that in certain conditions (e.g. lack of sleep, drunkenness) I can observe mistakes that would also be typical of an AI.

        The model of the world is implemented differently in different humans and is rather absurd in some cases. I would gladly discover that there is a fundamental difference. But currently I am not convinced.

        I plan to re-read https://www.amazon.com/dp/B08BT2THMN in the context of AI. That book is an interesting read I can recommend, as it is quite sobering in terms of "human intelligence".

        @DrewKadel
       
 (DIR) Post #AxVeaOp09Hgu23oiSe by rysiek@mstdn.social
       2025-08-22T09:28:45Z
       
       0 likes, 0 repeats
       
        @masek > I would gladly discover that there is a fundamental difference.

        My problem with this statement is that it's like saying "I'd gladly discover unicorns don't exist".

        An important element of the AI hype shtick is making extraordinary claims – like "LLMs think like humans" – and then demanding that people who object to those extraordinary claims provide proof. That's not how the burden of proof works.

        Just to be clear: I am not saying you are personally pushing the hype.

        @DrewKadel
       
 (DIR) Post #AxVeaPcz9RfoX5CdN2 by masek@infosec.exchange
       2025-08-22T09:39:09Z
       
       1 likes, 0 repeats
       
        @rysiek As a "numbers person" I am deeply skeptical of the AI hype. I don't see any way the investment (money, resources) can be recouped. I am afraid of the collateral damage once that house of cards crumbles (I was in the thick of the dotCom bubble).

        I object to "AI thinks like humans" at least as much as to "AI is completely different from humans". Both statements are hypotheses not yet validated by data.

        The problem with both statements is that our understanding of "human intelligence" (or "human creativity") is rather poor (as the linked book explains), though I have some criticism of the book as well.

        We once defined the Turing test as the gold standard for testing intelligence, and that one fails us abysmally now.

        On the other hand, we see dumbness elevated to state ideology in the most powerful country (as of now) in the world.

        We live in interesting times, in the proverbial Chinese sense.

        @DrewKadel