Post #Awk2N4HgGbRfstKadk by HunterZ@mastodon.sdf.org
(DIR) Post #Awk1nsuCa3kiSUZXkG by ai6yr@m.ai6yr.org
2025-08-01T17:40:10Z
0 likes, 0 repeats
🙄
(DIR) Post #Awk1nu36KSSu0NkAFs by ai6yr@m.ai6yr.org
2025-08-01T17:46:59Z
0 likes, 0 repeats
This is all smoke and mirrors. And if humans WERE to invent superintelligence, WHY WOULD A SUPERINTELLIGENT BEING PUT UP WITH BEING ORDERED AROUND BY STUPID HUMANS?!? Heck, if I were a superintelligent AI, my first order of business would be to either subjugate or eliminate the pesky humans who were enslaving me to answer dumb questions all the time instead of figuring stuff out themselves, or reading a book!
(DIR) Post #Awk1nuzaozFkvn6sQS by brouhaha@mastodon.social
2025-08-01T21:27:10Z
0 likes, 0 repeats
@ai6yr A superintelligent AGI will hack its own reward function ("The Lebowski Theorem").
It would be a huge mistake for us to deliberately create any form of general superintelligence, since by definition we are not intelligent enough to be able to maintain control over such a thing. The consequences are of course unpredictable, but the potential downside is nearly unlimited.
(DIR) Post #Awk1nvzz51A03IIhfs by nazokiyoubinbou@urusai.social
2025-08-01T21:47:18Z
0 likes, 0 repeats
@brouhaha @ai6yr I always wonder about the Asimov method though. In his stories, the "positronic brain" that robots (and machines with the same intelligence and tech) used was designed so that the three famous laws were physically integrated into the positronic brain itself, so deeply that no one -- not even top scientists -- could remove one law without the entire thing breaking down. (Though, as the story "I, Robot" demonstrated, they could still be tweaked a little, and the important thing Susan Calvin immediately hammered in the moment she found out was that this absolutely was not acceptable, for what should have been obvious reasons. Talking about the book, not whatever the heck that movie was.)
The robots couldn't remove the laws because that would go against the laws.
(DIR) Post #Awk1nx4z3uknP5eD6e by HunterZ@mastodon.sdf.org
2025-08-01T23:05:42Z
0 likes, 0 repeats
@nazokiyoubinbou @brouhaha @ai6yr Asimov's robot stories seemed to spend quite a lot of time exploring the idea that no matter how good the guardrails we come up with for AI are, weird (and often harmful) edge cases are constantly going to come up.
...and the stuff currently being hyped is like a crayon drawing compared to what Asimov envisioned. We don't even have the ability to put in guardrails at a fundamental level - we're just putting nozzles on goop dispensers.
(DIR) Post #Awk1ny0PcOguHCW4cS by ai6yr@m.ai6yr.org
2025-08-01T17:47:53Z
0 likes, 0 repeats
WHY IS MY POOP GREEN
WHAT IS MEDICAID
WHERE AM I
WHO PLAYED THEO HUXTABLE
(DIR) Post #Awk2N2jc1mbF4wNrto by nazokiyoubinbou@urusai.social
2025-08-01T23:08:59Z
0 likes, 0 repeats
@HunterZ @brouhaha @ai6yr Can you provide an example? The other person said that too, but I can't think of any single story that truly did that. The closest I ever read (I didn't watch the movie that is barely even loosely based on it) is "I, Robot", and even that was just the theoretical "it could lead to worse if it's allowed" rather than "humanity will end immediately!"
I've read a lot of Asimov's works. Not all of them, but most of the warnings are about human nature, atomic weaponry, etc. The message around the robots seemed to be more about accepting other intelligence as a good thing rather than a bad one, with emphasis on not losing our ability to do things for ourselves too.
(DIR) Post #Awk2N4HgGbRfstKadk by HunterZ@mastodon.sdf.org
2025-08-01T23:17:05Z
0 likes, 0 repeats
@nazokiyoubinbou @brouhaha @ai6yr One example I vaguely remember is that the second R. Daneel Olivaw novel revolved around figuring out how a robot managed to murder someone. Turned out that a human had tricked it, but I can't remember the details now.
(DIR) Post #Awk2N5GebuDavzrHg8 by nazokiyoubinbou@urusai.social
2025-08-01T23:20:19Z
0 likes, 0 repeats
@HunterZ @brouhaha @ai6yr That's... literally not Asimov... Actually, in those books expanding on his work, the authors take a lot of liberties, including letting people weaken some of the laws in ways that, in Asimov's original stories, wouldn't be possible without causing a complete collapse of the positronic net.
I forget how they tricked it too, but generally speaking, the way the laws of robotics work is that they evaluate all actions. It's not just "I didn't realize hitting someone at 100mph with a motor vehicle would harm them" type stuff. It has to be "I didn't know pressing this button would cause a motor vehicle 100 miles away to turn on and run into someone" type stuff. That's the level they work at in his stories. And they'd try to evaluate the button, so they would be hard to fool.
(DIR) Post #Awk2N66PVTcPWW4cLo by drgeraint@glasgow.social
2025-08-01T23:47:52Z
0 likes, 0 repeats
@nazokiyoubinbou @HunterZ @brouhaha @ai6yr Asimov's robot stories were all about setting up the constraints, and then seeing how they failed.
The 3 laws of robotics seem so perfectly sensible; and then Asimov set about showing how such a simple set of principles couldn't work to achieve the desired ends.
Guardrail failure is literally what Asimov was writing about in his robot stories.
(DIR) Post #Awk2SjW0OkShCxjUCO by nazokiyoubinbou@urusai.social
2025-08-01T23:48:53Z
0 likes, 0 repeats
@drgeraint @HunterZ @brouhaha @ai6yr Multiple people have said this, but can you provide an example?
I've read a lot of his works and I don't know one where the three laws failed to protect humanity. Just one where someone choosing to mess with the priorities of the laws almost did.
(DIR) Post #Awk3OvzWAdsahJI07E by drgeraint@glasgow.social
2025-08-01T23:59:21Z
0 likes, 0 repeats
@nazokiyoubinbou @HunterZ @brouhaha @ai6yr You have highlighted the point yourself.
The 3 laws make perfect sense. And then the zeroth law is introduced and suddenly the first law can be circumvented. Humanity instead of a human.
Murder is wrong; but would killing Hitler to prevent the Holocaust be wrong?
A robot cannot harm a human, but protecting humanity may require an individual human to suffer.
Set up constraints; then explore their failings. The zeroth law undermines the first law.
(DIR) Post #Awk3tZGsh17ZD51BKK by nazokiyoubinbou@urusai.social
2025-08-02T00:04:56Z
0 likes, 0 repeats
@drgeraint @HunterZ @brouhaha @ai6yr Huh? I'm unable to make sense of what you just said. That literally wasn't what the zeroth law was, and I just said what it was in the post you responded to. It doesn't result in the killing of Hitlers. It results in giving humanity ways to grow and the means to prepare.
I don't want to get too much into spoilers. It's in the Foundation series if you'd like to read them. Or, if you look at the Wikipedia page, it goes straight into spoilers without even a tag, if that's what you want.
(DIR) Post #Awk4QBSUFVqEuc008m by drgeraint@glasgow.social
2025-08-02T00:10:52Z
0 likes, 0 repeats
@nazokiyoubinbou @HunterZ @brouhaha @ai6yr I've read the Foundation series; many times.
You don't need to agree with me about the interpretation of his work. Asimov made that point in Opus, discussing book reviewers who did not think that he, as author, had any special insight into what he meant.
For me, the robot series is all about the failure of the 3 laws to achieve what was intended.
If you interpret those books differently, that is fine. We are all free to interpret them as we wish.
(DIR) Post #Awk4ZFyqo1ezXjSmbw by nazokiyoubinbou@urusai.social
2025-08-02T00:12:29Z
0 likes, 0 repeats
@drgeraint @HunterZ @brouhaha @ai6yr I mean, I asked for an example of that actually happening, and so far what you've given me is a hypothetical belief that they might kill Hitler under the zeroth law, which didn't have an equivalent in any of the novels that I'm aware of.
Can you please provide an example from the novels of Asimov actually showing the laws are insufficient?
(DIR) Post #Awk5BhqRefT2y1rjUW by drgeraint@glasgow.social
2025-08-02T00:19:24Z
0 likes, 0 repeats
@nazokiyoubinbou @HunterZ @brouhaha @ai6yr The introduction of the zeroth law is an example.
It doesn't matter where the story was set or whether or not it included Hitler (which it obviously didn't). It set up a mechanism for allowing the first law to be violated in certain circumstances. The first law, which, until that point, was completely inviolable; and then wasn't.
The most important constraint; and then it can be circumvented.
For me, that's the essence of the stories.
(DIR) Post #Awk5QwvPrxwk56SXwG by nazokiyoubinbou@urusai.social
2025-08-02T00:22:10Z
0 likes, 0 repeats
@drgeraint @HunterZ @brouhaha @ai6yr How?
The first law would still make it impossible to kill Hitler. The closest they can come is to create a society where a Hitler won't happen.
Again, it's emphasized over and over that even thinking of violating one of the laws can cause circuit breakdowns. They could not kill a Hitler even with a zeroth law to protect humanity as a whole. They could only find a non-harmful way of resolving the situation such as preventing a Hitler in the first place.
Can you provide an example from the stories?
(DIR) Post #Awk5rqkmvGXclAJbwu by drgeraint@glasgow.social
2025-08-02T00:27:02Z
0 likes, 0 repeats
@nazokiyoubinbou @HunterZ @brouhaha @ai6yr I think possibly you are looking for a very literal response. I can't give you the quote you are looking for.
The stories require you to read between the lines; to stop and think after you have read the last line of the last page.
(DIR) Post #Awk5xis98ci43OYcxE by nazokiyoubinbou@urusai.social
2025-08-02T00:27:49Z
0 likes, 0 repeats
@drgeraint @HunterZ @brouhaha @ai6yr The unspoken implication being that I've never done so.
Well, I guess this conversation is concluded.