Post ASaMI9YViv0EFvdUEi by jasonetheridge@qoto.org
       2023-02-11T02:11:11Z
       
       1 likes, 0 repeats
       
I'm sure this is unoriginal, but it seems that with ChatGPT and similar AI text bots, we have created philosophical zombies (p-zombies). They have learned to talk like us, based on everything we've said on the internet. However, there's no sentience present at all.

In other words, we have created a (mostly) convincing simulacrum of a human that we can text chat with. But it has no mind, no sense of self, no consciousness. There is no risk of it becoming self-aware, because that's not how these neural networks work.

Is this a step on the path towards AGI (Artificial General Intelligence)? Yes. But even AGI doesn't mean sentience. It leads to a fascinating ethical question: what rights does a p-zombie have?

If it talks like a human, but effectively the lights are on but no one's home, do we treat it like one of us? For now, I'd say no; they're just smart machines, constructs created to serve us. Ultimately, the test for AI rights has to be sentience, not convincing repartee.