Post #AjXAxFsvx9R6QlOnx2 by ZachWeinersmith@mastodon.social
2024-07-02T22:35:32Z
0 likes, 0 repeats
Offhand thought: I don't know if neural nets are the royal road to AGI, but I think objections of the form "but it just [simple thing]s!" are risky.

Here's my thought: if evolution produced all the complex stuff we can do, doesn't it seem much easier to evolve a system by just duplicating some functional part extensively, rather than one that follows some wildly complex ruleset?

Put another way, should we expect the magic to emerge from something LIKE neural nets?
Post #AjXBAIJ6G8Sxhtr6um by LouisIngenthron@qoto.org
2024-07-02T22:37:46Z
0 likes, 0 repeats
@ZachWeinersmith As a game developer who is intimately familiar with the concept of "emergent gameplay", hell yes. Everything about intelligence just seems to be made up of many simple rules interacting in complex ways.
Post #AjXBbM60HT1pHCdLSy by karadoc@aus.social
2024-07-02T22:42:45Z
0 likes, 0 repeats
@ZachWeinersmith Well yeah. That's why people thought to try artificial neural networks in the first place - to roughly mimic the neural network of our own brains.

Of course, we shouldn't just expect magic to appear though. Just like you can't get a functional human by just sticking a bunch of brains together, we probably can't get a functional AI by just sticking artificial neural networks together. (But that's why people are doing systematic experimentation with how layers interact, how to pre-process inputs, etc.)

I'd personally find the whole thing extremely interesting and exciting if we weren't staring down the barrel of a dystopian nightmare. Money corrupts everything. :(
Post #AjXBccXKdjEcYY1GtM by JustinH@twit.social
2024-07-02T22:42:58Z
0 likes, 0 repeats
@ZachWeinersmith I think it's important to distinguish between "intelligence" and "sentience", or even "personhood" and "humanlike". Corporations (for example) are intelligent "beings" (that some would consider to be people), but few humans would describe them as relatable.

Even if there is magic consciousness going on inside the "mind" of a corporation, it doesn't mean what I think most people mean when imagining what AGI looks like.
Post #AjXCExYExLKijhj3om by ucblockhead@hulvr.com
2024-07-02T22:49:54Z
0 likes, 0 repeats
@ZachWeinersmith I think it's a lot like the way people in the pre-flight era tried to fly by copying birds. It was kinda/sorta on the right track, but on the other hand, "add more/bigger wings" wasn't what got people into the air. It was trial and error and improved understanding of how flight actually worked.

I think neural nets are somewhat related to part of how AGI might happen, but I also think that we won't succeed without *understanding* what intelligence is.
Post #AjXCWrb7pVBRU5c2QC by hllizi@hespere.de
2024-07-02T22:53:06Z
0 likes, 0 repeats
@ZachWeinersmith Also, assertions like "an LLM is just this or that" don't amount to much by themselves without an argument that humans are not.
Post #AjXGjRoXP5Wb6C8sLI by mdiluz@mastodon.gamedev.place
2024-07-02T23:40:07Z
0 likes, 0 repeats
@ZachWeinersmith neural networks in general? Sure. LLMs or other tools optimised to hallucinate content that appears human made? No, not really.
Post #AjXJ48fA0DR1gxkYiG by lana@mstdn.science
2024-07-03T00:06:21Z
0 likes, 0 repeats
@ZachWeinersmith I think that argument has a PR function more than anything else. It never comes up in academic discussions (in my experience) because it obviously makes no sense, but I see academics use it when talking with non-academics. Because many people are swayed by equally simple and nonsensical arguments like "it makes sensible sentences, it must be alive" ....
Post #AjXJ7KW8NtrZKJ398K by aliengasmask@mas.to
2024-07-03T00:06:58Z
0 likes, 0 repeats
@ZachWeinersmith ANNs are pretty far from the physical, chemical ones. The problem with this angle is the physical/chemical/biological reality is extremely complex, and the maths behind ANNs is fairly basic in comparison.

I wouldn't say it's the simplicity of the approach that's the problem; it's the simplicity of the substrate it's working in that's different.
Post #AjXNlSuazbuElHsvPE by MyLittleMetroid@sfba.social
2024-07-03T00:59:02Z
0 likes, 0 repeats
@ZachWeinersmith The way I see it, we can't truly separate human (or animal) intelligence from the biological imperatives. Intelligence is a weird thing that evolved as a way for biological lifeforms to survive and thrive better. Intelligence is a better way to want things (food, a mate, not to get eaten…).

No one has the first clue how to make a computer want something.
Post #AjXOIDPHf2e0EJr64e by Phosphenes@glasgow.social
2024-07-03T01:04:57Z
0 likes, 0 repeats
@ZachWeinersmith I've always thought there must be some central rules to intelligence: that there are many messy ways to implement it, but the principle itself would be a single thing.

That no matter what, it would look beautiful.
Post #AjXR8JINDEOvSYak6K by dancingtreefrog@mastodon.social
2024-07-03T01:36:47Z
0 likes, 0 repeats
@ZachWeinersmith I think the first artificial general intelligence will arise from the swarms of bots, worms, and other network tools such as network management systems, realtime stock trading systems, covert attack systems, etc.

Along the lines of how the AI, Jane, arose in Orson Scott Card's book Xenocide.
Post #AjXRjcvLCGup3Dc1Sq by natrhein@c.im
2024-07-03T01:43:32Z
0 likes, 0 repeats
@ZachWeinersmith Hard agree. Literally all computer programs are *just* boolean algebra. That hasn't prevented computers from changing the world.
Post #AjXRqrLjxcE5opPQoK by dan131riley@federate.social
2024-07-03T01:44:50Z
0 likes, 0 repeats
@ZachWeinersmith I think evolution gets underestimated? It takes a way of encoding the information of life in a form that can reproduce and evolve, balancing fidelity of replication and variability. And that has to be adapted to a fitness landscape that provides a useful gradient.

I guess I'd agree that AGI could emerge from seemingly simple things, but I'd also say those things are likely to be much more subtle than we think. Simple, but subtle.
Post #AjXo1zPWWo42zqtJU8 by zauberlaus@chaos.social
2024-07-03T05:53:21Z
0 likes, 0 repeats
@ZachWeinersmith The scary bit is that you might not realize you've created something with a consciousness, because there is no easy test and the agents before it felt pretty similar
Post #AjXoffEwwPIM1QrbXM by zeborah@mastodon.nz
2024-07-03T06:00:29Z
0 likes, 0 repeats
@ZachWeinersmith Maybe? But not from neural nets that are divorced from the feedback mechanisms that come from a meaningful connection to reality. You can tell a kid something is hot all you like, but she'll never really *understand* that until she connects the word to the heat coming in through her skin. (Ideally a merely unpleasant heat rather than a scalding heat, of course...)
Post #AjXzXFFcmc5Ib6Mpbk by Vecna@mastodon.online
2024-07-03T08:02:15Z
0 likes, 0 repeats
@ZachWeinersmith I think the entire branch of Evolutionary Computation would like a word. :-)
Post #AjYIIVkr2ShCZHf9KS by gbwust@mastodon.social
2024-07-03T11:32:29Z
0 likes, 0 repeats
@ZachWeinersmith There are signals that the magic is already there.
Post #AjYtjSHZkkXjvDg9Ng by Philomorph@fosstodon.org
2024-07-03T18:31:49Z
0 likes, 0 repeats
@ZachWeinersmith I've been promoting neural nets as the path to AGI since I first learned Prolog in the 90s. But no one listens to me.