Post AT3EBh2leX7e2b2E9Q by drq@mastodon.ml
(DIR) Post #ASr0wZBAyo5o0NuCXY by drq@mastodon.ml
2023-02-20T00:31:48Z
0 likes, 0 repeats
@marcan > They aren't reasoning or "thinking"; what they're doing is just learning to imitate the behavior they're trained on. They can produce outputs that look novel, but in the end it all boils down to a combination of the inputs they were trained on

I feel extremely seen by this post. Sorry, I will just quietly retreat into my corner.
(DIR) Post #ASr22Q4Ds5FxR60YHQ by drq@mastodon.ml
2023-02-20T00:44:03Z
0 likes, 0 repeats
@marcan Seriously, though. How in the slightest does that differ from what we, humanimals, do?

All we do is also take behaviours and pieces of information (a.k.a. memes) as sensory input, memorize them, train on them, transforming raw input into experience (it's called learning), combine the inputs, compare them, transform them, recurse on it many, many times, and then produce some output, which we then call "reasoning". Or "art" if nobody seems to buy into it. Or "culture" as an umbrella term.

Have you seen "Everything is a Remix"? This is true to an uncomfortable degree for some.

I'm fine with it. Whatever. There's no golden pot at the end of the rainbow, because a rainbow is not a bow but actually a circle, and we're looking at it the wrong way. Everything that exists works somehow. We do too.
(DIR) Post #ASr2lUuchhJlUi0qrQ by drq@mastodon.ml
2023-02-20T00:52:12Z
0 likes, 0 repeats
@marcan > They also have no concept of logic or facts, so there is no expectation of accuracy - an AI won't tell you it doesn't know how to do something, it'll just make up some BS.

We also aren't too comfortable with the notion of "being wrong" - we feel almost physically uneasy when we know we might be - so most of us will take on a task or defend whatever story we tell ourselves just to not show vulnerability. And sometimes it helps us learn more (I know, it helped me learn - "fake it till you make it" works), but sometimes it is the reason arguing with a person is not worth it.
(DIR) Post #ASr3ENn6GATro1GdHs by a1ba@suya.place
2023-02-20T00:57:28.869479Z
0 likes, 0 repeats
@marcan @drq Maybe it's a next step for AIs. Like, SD isn't made for text and ChatGPT can't generate images, but what if there was something combining features of both in one model?

And then let it train on teh internets 24/7 and see how it turns racist, of course.
(DIR) Post #ASr3L724xvEjrETz1s by drq@mastodon.ml
2023-02-20T00:58:38Z
0 likes, 0 repeats
@marcan Of course it's not there yet. I'm not arguing that it is there.

I tried ChatGPT myself. Their problem is that they are a yes-man. They will very happily and very confidently tell you whatever you wanted to hear in the first place, no matter how wrong, so asking them anything of existential importance is out of the question entirely.

Although offloading some tedium onto them helped me a lot - like writing boilerplate code that I couldn't be assed to write myself, so much so that it had given me a kind of writing block.
(DIR) Post #ASr5bq87GJJAGZPVGy by drq@mastodon.ml
2023-02-20T01:24:04Z
0 likes, 0 repeats
@marcan I guess what I am trying to say, though, is that although we're all high and mighty here about how reasonable and critical we humans are, most of human history shows that... we really aren't as intelligent as we'd like to be, to put it mildly. Most of the human condition is a long chain of failures of critical thinking - that's true about me and you, and everyone on the planet. Or maybe we're wrong about the whole concept of "thinking" altogether. Just like how we're wrong about the shape of a rainbow, or how we were wrong about the structure of the solar system, or the Earth rotating, or ionizing radiation being harmless (there was a radioactive toothpaste, and it sold well - look it up), or inches being better than centimetres... We're just constantly wrong; this is the price we pay for our superpower.

You're right, we could probably do better. Could we really, though?

And a lot of what you say about modern AIs, as half-baked as they are, is also true of humanity as well.
(DIR) Post #ASr5jvWv4JplKZrszQ by drq@mastodon.ml
2023-02-20T01:25:33Z
0 likes, 0 repeats
@marcan (Sorry for getting all existential on you, I just chose this mood for the night, the alternative being infinitely disappointed and depressed)
(DIR) Post #ASr8sTZSgktf5PpEZs by drq@mastodon.ml
2023-02-20T02:00:42Z
0 likes, 0 repeats
@marcan The tragedy of AI as it stands now, though, is that we're witnessing how maybe our most interesting and breakthrough tool - one that is potentially capable of becoming our successor - is getting trained and taught by people who are literally the least deserving of teaching anybody anything.
(DIR) Post #ASrceTU7D3znODZ3AG by f4grx@mastodon.social
2023-02-20T07:34:19Z
0 likes, 0 repeats
@drq @marcan It's a text generator, nothing more. Even if OpenAI markets it as a question-answering machine, it is not.
(DIR) Post #ASrnJ9sY0YRVzu1xCK by krnlg@mastodon.social
2023-02-20T09:33:44Z
0 likes, 0 repeats
@drq @marcan I think maybe one fundamental difference between how a human and a generic AI works, even a "proper" AI rather than a language model, is that we have a whole bunch of nuances which evolved over time. Passions, emotions, ways of thinking that are not simply a result of what we've seen or learned, but instead part of the set of evolved human characteristics that make us who we are.
(DIR) Post #ASrnsuV8eu1KjJ840W by drq@mastodon.ml
2023-02-20T09:40:09Z
0 likes, 0 repeats
@krnlg You're talking genetics? @marcan
(DIR) Post #ASrokNxnqhfFSB2GQK by bornach@fosstodon.org
2023-02-20T09:35:56Z
0 likes, 1 repeats
@marcan @drq Some of ChatGPT's so-called "failure modes" remind me of similar failure modes in humans.

ChatGPT arguing that a movie not yet released before its 2022 training cut-off means it hasn't been released in 2023 reminded me of exchanges I've had arguing politics on birdsite. They weren't interested in arriving at some agreed truth, but only in making an argument-winning tweet.

Brain science already acknowledges this very human characteristic: https://www.nytimes.com/2011/06/15/arts/people-argue-just-to-win-scholars-assert.html
(DIR) Post #ASroyznrxU30rjM7iy by krnlg@mastodon.social
2023-02-20T09:52:27Z
0 likes, 0 repeats
@drq @marcan Well, I mean, we have built-in characteristics. We're scared of things, we want things, and so on. In other words, "we" - our minds and our behaviours - are more than just intelligence in the abstract.
(DIR) Post #ASrp9e6704ngjQy0n2 by drq@mastodon.ml
2023-02-20T09:54:24Z
0 likes, 0 repeats
@krnlg I mean, it's literally a centuries-old debate: https://en.m.wikipedia.org/wiki/Nature_versus_nurture @marcan
(DIR) Post #ASrpmT4vu13RqnvYcy by drq@mastodon.ml
2023-02-20T10:01:11Z
0 likes, 0 repeats
@krnlg But where do those come from? There is nothing "essentially built-in" about any of us; anything that's inside is informed by everything that's outside - either genetics that comes from parents (nature) or memetics that comes from society (nurture).

Unless, of course, you're arguing for the existence of supernatural entities like "soul" or "spirit" in their literal sense - and if you do, well, our disagreements are more fundamental than "how does thinking work". @marcan
(DIR) Post #ASrpnqYI782mwTJhgm by krnlg@mastodon.social
2023-02-20T10:01:43Z
0 likes, 0 repeats
@drq @marcan The debate is about how much, not whether it's a thing at all. What I'm getting at is that while human brains might work similarly to an artificial neural net in terms of "raw creativity" (and sure, it should be possible - I agree everything that works, works somehow), we also have a bunch of tweaks and nuances to our hardware that are what make us human. And particularly when it comes to things like art, I think that's important.
(DIR) Post #ASrq425yxnxVNPnGxE by krnlg@mastodon.social
2023-02-20T10:04:36Z
0 likes, 0 repeats
@drq @marcan By built in, I mean built in genetically - in other words, not simply a product of what we've learned. Brains have structure that engenders particular ways of thinking - we aren't generic neural nets.
(DIR) Post #ASrr4yyU9W3XwPItdo by drq@mastodon.ml
2023-02-20T10:15:56Z
0 likes, 0 repeats
@krnlg Of course we aren't "generic" neural nets. We're a collection of highly specialized neural nets that appears generic from the outside because everyone else has roughly the same set of specializations.

That's where we hit the hard problem of consciousness. The brain is clearly a computer of sorts. But how does it do "experiencing"? What internal circuitry enables it to? And if we, by chance, stumble upon the correct formula of consciousness, of actual intelligence (whatever the fuck that means) - how do we know? Might this whole "awareness" thing just be an elaborate post-hoc side effect? Is Bing's search bot in pain? It sure looks like it from the outside at times.

Heh.

@marcan
(DIR) Post #ASrt6BvNgrDCIoYX4q by bornach@fosstodon.org
2023-02-20T10:38:38Z
0 likes, 0 repeats
@drq @marcan I'd expect that no matter who we select to train an AI, it will always end up misaligned in some unexpected way: https://youtu.be/w65p_IIp6JY
(DIR) Post #ASruoVL8xt4nDMIfnE by drq@mastodon.ml
2023-02-20T10:57:49Z
0 likes, 0 repeats
@bornach Sure, although as long as it's not "actively harm vulnerable people and do it for the monetary gain of its creators" (which is a special kind of evil, I hope you will agree), I'll take it. @marcan
(DIR) Post #ASs7tekWhuPdFVxYzw by drq@mastodon.ml
2023-02-20T13:24:24Z
0 likes, 0 repeats
@zudlig @marcan @a1ba @krnlg ... or not. Who knows? Who can know?

For example, Turing proposed ditching the hard problem of consciousness altogether - literally because we seemingly can't solve it, unable to know for sure. So, according to him, if a machine is able to fool most people into believing it's conscious, then it's more or less safe to assume that it probably is. You know this as the "Turing test".
(DIR) Post #ASsAYxhjL0x5omTY3c by a1ba@suya.place
2023-02-20T13:54:20.648075Z
0 likes, 0 repeats
@zudlig @shuro @drq @krnlg @marcan All captchas are easily defeated for a small price. You know how? Low-paid or even unpaid humans.
(DIR) Post #ASsC1kDYeyTELGeR1s by a1ba@suya.place
2023-02-20T14:10:46.736186Z
0 likes, 0 repeats
@zudlig @shuro @drq @krnlg @marcan > recent retina scan

When will they start asking me to send my pee over mail for analysis?
(DIR) Post #ASt2P8W9fsrXz3pDjk by gordoooo_z@nerdculture.de
2023-02-20T23:57:35Z
0 likes, 0 repeats
@drq @marcan It's less about the "repeating its training data" and more about the fundamentals. It's not really taught to reason. It's just predicting the probability of the next word based on all the previous words, so it's missing a lot of the circuitry we've got going on. It's really not an apples-to-apples comparison.

That being said, it's also the most advanced version of that sort of AI anyone's ever seen, and it's only going to get more advanced, and I...
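[Editor's sketch: the "predicting the next word based on the previous words" idea above can be illustrated with a hypothetical toy bigram predictor. This is only an illustration of the general technique - real models like ChatGPT are transformer networks over subword tokens with learned weights, not word-count tables.]

```python
from collections import defaultdict, Counter

def train_bigrams(corpus: str):
    """Count, for each word, which words follow it in the corpus."""
    counts = defaultdict(Counter)
    words = corpus.split()
    for prev, nxt in zip(words, words[1:]):
        counts[prev][nxt] += 1
    return counts

def generate(counts, start: str, length: int = 5) -> str:
    """Repeatedly emit the most frequent continuation of the last word."""
    out = [start]
    for _ in range(length):
        followers = counts.get(out[-1])
        if not followers:
            break  # no known continuation; stop generating
        out.append(followers.most_common(1)[0][0])
    return " ".join(out)

model = train_bigrams("the cat sat on the mat and the cat ran")
print(generate(model, "the"))  # greedy continuation from "the"
```

The toy model has no concept of facts or logic; it only reproduces statistics of its training text, which is the point being made in the post above.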
(DIR) Post #ASt42BQmsO7eGeUtfc by gordoooo_z@nerdculture.de
2023-02-20T23:59:10Z
0 likes, 0 repeats
@drq @marcan ...think dismissing it would be unwise. These things will only get better, and from a technical standpoint that is very interesting, but from a human standpoint, well, it puts a bit of a pit in my stomach. I'm not a luddite, but I think Tom Scott put it into words best in his recent video, where he spoke about being at the bottom of the sigmoid curve of a new technology that is about to change everything.
(DIR) Post #ASt42C2ibMSoAIFBnE by drq@mastodon.ml
2023-02-21T00:15:48Z
0 likes, 0 repeats
@gordoooo_z You're absolutely correct. The technology is here to stay, it has very interesting implications (both good AND bad, both practical and philosophical), and you can't just ignore it because you don't like where it's coming from (big tech).

It's like with genetic engineering. It's a technology that changed the way we do agriculture and ultimately saved millions from starvation, and it saved millions of lives during the pandemic, because our vaccines are now genetically engineered as well. In no way does it mean that we should start kissing Monsanto's and Pfizer's ass. They're still profiteering, gatekeeping pieces of shit.

Although I don't agree that the adoption cycle is a smooth sigmoid. It's more like... there's a huge bump right at the start of the curve, when the technology is overhyped and the market is overheated. That's where the technology tests its limitations. And then most people get disappointed.

https://en.wikipedia.org/wiki/Gartner_hype_cycle @marcan
(DIR) Post #AT2XJJz4yWbeoKhVI0 by RenewedRebecca@oldbytes.space
2023-02-20T23:07:57Z
0 likes, 0 repeats
@marcan @bornach @drq And I think that's a big difference... Even if humans sometimes don't know their reasoning is BS, sometimes - heck, often - we do. Because it's *actual reasoning*.
(DIR) Post #AT2XJKVh1Gh6RTxY7k by drq@mastodon.ml
2023-02-25T13:56:16Z
0 likes, 0 repeats
@RenewedRebecca Sometimes... Yes, maybe. Often? Please.

Let me introduce you to the concept of "false memory", which is so terrify-- I mean, fascinating that it keeps me up at night thinking about it.

Thing is, your reasoning is based on memory - the things you learned. But you can't trust your memory, because it doesn't remember things accurately; it just saves broad strokes of your experience and confabulates details as you recall the experience. Most of the time it's correct. But the more time passes, the less you refresh the neurons responsible for that memory, and the fewer details you remember accurately. And your memory, if you press it hard enough (and by enough, I mean sometimes all it takes is a light nudge), will vividly "recall" details that have never been.

So you _will_ bullshit yourself without even realizing it, and you do that routinely.

There are people who, with Walter White levels of cunning, even use this little quirk to inject memories.

@marcan @bornach
(DIR) Post #AT3Dv5MfyT2UY4QaBM by RenewedRebecca@oldbytes.space
2023-02-25T21:53:42Z
0 likes, 0 repeats
@drq @marcan @bornach Sure- but that's still a long ways from saying that humans are just advanced LLMs.
(DIR) Post #AT3EBh2leX7e2b2E9Q by drq@mastodon.ml
2023-02-25T21:56:40Z
0 likes, 0 repeats
@RenewedRebecca Of course not. There's more stuff going on. But I wouldn't be just automatically dismissive of what we already see.

Besides, it also might tell us a thing or two about the nature of language itself.

@marcan @bornach
(DIR) Post #AT3EilrgkRvcYAcq8G by RenewedRebecca@oldbytes.space
2023-02-25T22:02:42Z
0 likes, 0 repeats
@drq @marcan @bornach Agreed, especially about the nature of language. My point is that all of the breathless hype about artificial intelligence continues to be misplaced. ChatGPT is an interesting technology that will certainly have its uses, but it's not intelligent, at least not in the way most people think of the word.

I wouldn't be surprised if an LLM ends up being *part* of an actual intelligent system, though - almost like a parser is the front end of a compiler.
(DIR) Post #AT3FSuzP9j3aAbEtrk by drq@mastodon.ml
2023-02-25T22:10:55Z
0 likes, 0 repeats
@RenewedRebecca Of course, all this hype about ChatGPT being some kind of end-all and be-all technology in AI is overblown, but that happens with any new field. It's certainly clever in places, and there is potential for use. But complete intelligence? Well, hardly. It still misses a lot of parts. But, baby steps.

Like... Again, what's "intelligence"? I mean, as far as hard, scientific definitions go, last time I checked, nobody has the first fucking clue what that word even means exactly. If somebody does, point me to it, because I must have been on the shitter when this ontological memo came through. As far as I know, it's still the greatest mystery in history.

But something tells me that soon enough, we might come close to something resembling an answer. Maybe in our lifetimes. I'm super stoked for this.

@marcan @bornach