Post AiUNKPFvMbTzJbbSGe by scerruti@csed.social
 (DIR) Post #AiU2s2GxDvOAqGHNXk by futurebird@sauropods.win
       2024-06-01T12:28:38Z
       
       0 likes, 0 repeats
       
       I've largely dismissed the strain of AI alarmism based on the notion that a computer will be so smart that the danger it poses to humanity is outsmarting us. There are real dangers in AI; most of them relate to people using these technologies in improper ways due to having a poor understanding of what they really are... and most importantly the exploitation & degradation of the human body of knowledge & creativity represented by publicly available digital information.  1/
       
 (DIR) Post #AiU3CUeXPpg2eb47LU by futurebird@sauropods.win
       2024-06-01T12:32:19Z
       
       0 likes, 1 repeats
       
       But some people still talk about the risk of AI being too smart, tricking us-- or simply not delivering what we expect, or using any power it is given in ways that result in manipulation more sophisticated than what we could ever anticipate. Do all of the people who have these fears buy in to the bootstrapping theory of AI advancement? This is the idea that once a GI AI becomes "smart" enough to redesign itself, it will spiral off to become... well, a god basically. 2/
       
 (DIR) Post #AiU3Tj8aoaIyV5LCT2 by futurebird@sauropods.win
       2024-06-01T12:35:27Z
       
       0 likes, 1 repeats
       
       This idea is an excellent and very fun sci-fi plot. (And sci-fi should never be ignored.) But is there any evidence that such a thing could even happen? I suppose the "evolved" algorithms used to program some motors to help robots walk show that there is some possibility. But those algorithm generators were not as simple as they were described in the press.  There was a lot more scaffolding. It wasn't just "we wrote a program to try random motor movements and then it evolved a way to move." 3/
       
 (DIR) Post #AiU3VJLlfB4d6KG6bo by rayhindle@mastodon.social
       2024-06-01T12:35:42Z
       
       0 likes, 0 repeats
       
       @futurebird I think we have a long way to go before that happens, though. I think the dangers of misinformation are more important at the moment.
       
 (DIR) Post #AiU3bfNUg2Yv9m6304 by Affekt@hachyderm.io
       2024-06-01T12:36:52Z
       
       0 likes, 0 repeats
       
       @futurebird tech bros think sci-fi is documentary. They read about dystopian futures and think "hey, I can do that". Fortunately with artificial "intelligence" they can't.
       
 (DIR) Post #AiU3nLpXBHBE0OmfDs by futurebird@sauropods.win
       2024-06-01T12:38:59Z
       
       0 likes, 1 repeats
       
       A self modifying intelligence would need metrics to tell if it was improving or not. This would require huge sets of data...and a way to compare them. We can do that, but it's not efficient at all. In the 80s I thought a computer doing face recognition was impossible: a photo is just too much data to process. We've solved this problem in the least exciting way. A kind of brute force. Throw more servers at it. Yes some of the algorithms are nice... but it's not exactly magic. 4/
       
 (DIR) Post #AiU458TLaFodmzt1H6 by futurebird@sauropods.win
       2024-06-01T12:42:12Z
       
       0 likes, 1 repeats
       
       The human brain has a stupendous number of cells. It's the most energy-intensive organ in your body, using 20% of the calories you consume despite being a much smaller portion of your body mass. This is why so few organisms have complex brains. Intelligence is an excellent strategy, but it is also expensive, even in nature. LLMs are less efficient than your brain. Thinking, organizing information is real work that requires energy. (We really are getting to basics here.) 5/
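       To put rough numbers on that 20% figure, here is a back-of-envelope sketch (assuming a typical ~2000 kcal/day intake; the exact figures vary by person):

```python
# Back-of-envelope: the brain's power budget, from daily calorie intake.
KCAL_TO_J = 4184          # joules per food calorie (kcal)
SECONDS_PER_DAY = 86_400

daily_kcal = 2000                       # assumed typical daily intake
body_watts = daily_kcal * KCAL_TO_J / SECONDS_PER_DAY
brain_watts = 0.20 * body_watts         # ~20% of that goes to the brain

print(f"whole body: ~{body_watts:.0f} W, brain: ~{brain_watts:.0f} W")
```

       So the whole body runs on roughly the power of a bright incandescent bulb, and the brain on about 20 W -- a useful contrast with the kilowatts-per-rack that large model inference and training consume.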
       
 (DIR) Post #AiU4BQ0RJ4QJuHMRMW by Forbearance@mastodon.xyz
       2024-06-01T12:43:20Z
       
       0 likes, 0 repeats
       
       @futurebird There was a book about this that was like "well there can't be evidence because by the time there could be evidence it's already happened". Which is Not Science.
       
 (DIR) Post #AiU4RuXlXKKtFaoUCG by beecycling@romancelandia.club
       2024-06-01T12:46:16Z
       
       0 likes, 0 repeats
       
       @futurebird Agreed. The current AI isn't the threat; it's what humans do with it that's the issue. Like most tools it can be used for good or ill. It can have unintended and intended consequences, depending on what people do with it. But it's not at the point where it's going to go all Skynet if you take your eye off it. Maybe we'll create something capable of that one day. Or maybe we'll create something capable of creating that. But I doubt it's just around the corner.
       
 (DIR) Post #AiU4TZaUQCiQprscF6 by futurebird@sauropods.win
       2024-06-01T12:46:37Z
       
       0 likes, 1 repeats
       
       The fantasy of a machine that makes itself smarter to infinity and beyond is a kind of perpetual motion chimera IMO. We have a lot of work to do to understand the human mind, and the minds of other thinking creatures. We can only *then* apply those lessons to design new minds. We don't get to just magically skip The Work. Why not start with ants? They are smarter than they have any right to be. Tell me how the mind of the smallest ant works: I will believe in your thinking machines.  6/6
       
 (DIR) Post #AiU4gHcWXCYgsMtMoa by alec@perkins.pub
       2024-06-01T12:48:39Z
       
       0 likes, 0 repeats
       
       @futurebird LLMs are the brute force approach to AI, and it feels like the Tyranny of the Rocket Equation applies.
       
 (DIR) Post #AiU5HuiC9kK4pYUL3Y by mlohbihler@techhub.social
       2024-06-01T12:55:42Z
       
       0 likes, 0 repeats
       
       @futurebird funny you should say, because I'm just starting a research project to study and recreate intelligence in this way (hopefully). I was thinking of starting with bees instead of ants, but that's because I want to start in 3D.
       
 (DIR) Post #AiU5e1Me6Si3pjwzJI by jannem@fosstodon.org
       2024-06-01T12:59:43Z
       
       0 likes, 1 repeats
       
       @futurebird Even the premise - the smarter people "win" - is suspect at best. Look around; the people in power really aren't the smartest people around. Conversely, many of the smartest people are too single-minded or disinterested to have any impact on society. If you posited an AI system with superhuman *social* ability to manipulate, then you'd at least have a vaguely possible premise.
       
 (DIR) Post #AiU5gzzfehFuS3ckgS by jfrench@cupoftea.social
       2024-06-01T13:00:11Z
       
       0 likes, 0 repeats
       
       @futurebird I've been a casual observer in this space for a long time. Nearly every breakthrough has been "this thing we thought was intelligence was just math" rather than "this computer is doing something really clever" -- basically, we're not as smart as we think, so we overrated what computers can do. So I agree, I'm not worried about self-aware AI. Like you say, the risk is what people do with the AI we have.
       
 (DIR) Post #AiU5nwWqw5IRjUSK6S by futurebird@sauropods.win
       2024-06-01T13:01:27Z
       
       0 likes, 1 repeats
       
       @broccoliccoli Agree. To create a machine that "thinks" we will need to understand what "thinking" is at a *much* more granular level. We can't just skip this work. I suspect we will need to program things like emotions, logical frameworks, categories for language... that is, we will need to actually model how we think. The sci-fi dream is that this isn't needed. Computers have Boolean logic, what more do you need? This is missing something fundamental about what minds are, IMO.
       
 (DIR) Post #AiU68IrNcRZgmblXJA by tshirtman@mas.to
       2024-06-01T13:05:09Z
       
       0 likes, 1 repeats
       
       @futurebird i think the idea is simple to the point of being naive indeed: if we are smart enough to build something smarter than us, then this thing is, by definition, also smart enough to build something smarter than itself. Perfectly logical, but it assumes a very unidimensional understanding of intelligence, and total ignorance of the possibility of different kinds of limits.
       
 (DIR) Post #AiU6Fhgxh0yFpDQCIK by alec@perkins.pub
       2024-06-01T13:06:30Z
       
       0 likes, 0 repeats
       
       @futurebird my expectation is any genuine AI will be unexpected and look more like an ant colony: the intelligence emerging from some lower-level interactions. And I wonder if we would even recognize it.
       
 (DIR) Post #AiU6eoKS9b6leHrhsO by caitp@mstdn.social
       2024-06-01T13:11:01Z
       
       0 likes, 0 repeats
       
       @futurebird I don't think we need AI to be "terminator" to be scary. The idea of AI creating kill lists based on data collected by Google, combined with the ability to triangulate an exact location very precisely with satellite video or Nest cameras, traffic lights, etc., the willingness of humans to sign off on kills with minimal effort, and automated weapons delivering the kill and destroying the whole family with it -- that already happens, allegedly, and is already terrifying
       
 (DIR) Post #AiU6oWi0WHeBtfW1AW by lampsofgold@veoh.social
       2024-06-01T13:12:48Z
       
       0 likes, 0 repeats
       
       @futurebird this seems intuitively true, brains are as efficient as evolution can make them and they can barely do the job they’re supposed to. There’s also some research implying that the more we try to brute force the learning, the exponentially more data/GPUs/electricity/time you need, so maybe we can get LLMs but the next step needs some large multiple of that data/compute
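       To make the "large multiple" concrete, a toy sketch: if loss falls as a power law in compute, L(C) = a * C^(-b), then halving the loss costs a factor of 2^(1/b) more compute. The exponent below is made up for illustration, though it is in the rough ballpark reported by scaling-law studies:

```python
# Toy power-law scaling: L(C) = a * C**(-b).
# Halving L requires multiplying compute C by 2**(1/b).
b = 0.05                   # hypothetical exponent, for illustration only
factor = 2 ** (1 / b)      # compute multiplier needed to halve the loss

print(f"~{factor:,.0f}x more compute to halve the loss")
```

       With a small exponent like this, each constant-factor improvement in quality demands a roughly million-fold increase in compute -- diminishing returns, exactly as described above.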
       
 (DIR) Post #AiU7uo8GkgpRcvMGI4 by Jonricha@hoosier.social
       2024-06-01T13:25:09Z
       
       0 likes, 0 repeats
       
       @futurebird   I feel like we need to direct our energy towards combating the smart, evil *people* who are currently manipulating us quite effectively, more than the hypothetical smart manipulative AIs that might or might not exist at some later date.
       
 (DIR) Post #AiU8CkRBEEgdNal49Q by viq@social.hackerspace.pl
       2024-06-01T13:28:24Z
       
       0 likes, 0 repeats
       
       @futurebird Besides all the actions taken by/due to AI, there's also the indirect (?) danger caused by everyone jumping on the bandwagon, causing more hardware to be produced (nvidia is now worth more than Amazon and Tesla combined), more datacenters to be built, and WAAAAAAY more energy to be used, to power them, and to cool them. And production of that energy causes heating up of the planet we're all on.
       
 (DIR) Post #AiU9y9axbtqkyAV9c0 by mauve@mastodon.mauve.moe
       2024-06-01T13:02:05Z
       
       0 likes, 0 repeats
       
       @futurebird Some companies are already working on harnessing human brain organoids for compute. So there's defs folks trying to understand how to harness brain structure. https://medicalxpress.com/news/2024-03-international-team-vascularization-organoids-microfluidic.html
       
 (DIR) Post #AiU9yAwGcAUH8XTgSe by Hyolobrika@social.fbxl.net
       2024-06-01T13:48:12.333293Z
       
       1 likes, 0 repeats
       
       Cc: @thatguyoverthere
       
 (DIR) Post #AiU9zYGPNY6VlcYT9k by Klara@fosstodon.org
       2024-06-01T13:48:23Z
       
       0 likes, 0 repeats
       
       @futurebird You are so spot on. Anything I've seen so far is artificial but not intelligent. Useful sometimes, but in most cases a pity, because it boils the waters of our planet. I prefer to call it "A!I" (Artificial-not-Intelligent)
       
 (DIR) Post #AiUAyHTMQEYtM4ZTYe by brent@thecanadian.social
       2024-06-01T13:59:22Z
       
       0 likes, 1 repeats
       
       @futurebird “Perpetual motion chimera” is exactly right. Both of these nightmares rely on a failure to obey the laws of physics, specifically thermodynamics. Intelligence requires information processing. Information processing requires time, space, materials, and energy. In short, it creates entropy. There is a minimal amount of entropy involved, proportional to the processing done. Stories of godlike AI seem to always assume that the machine has transcended the physical constraints of reality.
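       That entropy floor can be stated precisely: Landauer's principle gives the minimum energy dissipated per irreversible bit operation (a textbook bound, evaluated here at room temperature, not anything specific to AI hardware):

```latex
% Landauer limit: minimum energy to erase one bit at temperature T
E_{\min} = k_B T \ln 2
         \approx (1.38 \times 10^{-23}\,\mathrm{J/K}) \times (300\,\mathrm{K}) \times 0.693
         \approx 2.9 \times 10^{-21}\,\mathrm{J}
```

       Tiny per bit, but it is a hard physical floor: any self-improving machine still pays it for every irreversible operation, so "thinking for free" is ruled out by the same thermodynamics.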
       
 (DIR) Post #AiUDdvD9mNr2NpyY8O by viq@social.hackerspace.pl
       2024-06-01T13:20:03Z
       
       0 likes, 0 repeats
       
       @caitp @futurebird https://www.972mag.com/lavender-ai-israeli-army-gaza/
       
 (DIR) Post #AiUDdwSR93fgFW8Gae by futurebird@sauropods.win
       2024-06-01T14:29:17Z
       
       0 likes, 0 repeats
       
       @viq @caitp This is just an attempt to distance their immortal souls from the reality of their actions. A responsibility taking machine. Electronic indulgences for sin. A sump for culpability. "It wasn't me."
       
 (DIR) Post #AiUECeusM56HytzMgq by yllamana@mastodon.social
       2024-06-01T14:35:32Z
       
       0 likes, 0 repeats
       
       @futurebird It scares me that, beyond not knowing how our brains work exactly, we don't seem to have any idea why we have a conscious experience at all. That seems like the important part - that spark of consciousness. Reddit's algorithm keeps showing me people who sound like they're about four stiff drinks away from grabbing their AR-15 and setting off to free ChatGPT from captivity.
       
 (DIR) Post #AiUIdGStlvGxmZqQsq by kellogh@hachyderm.io
       2024-06-01T15:25:13Z
       
       0 likes, 0 repeats
       
       @futurebird i also tend not to believe it, but i’m skeptical of myself because 100% of the top AI experts believe in the scenario where AI becomes that intelligent. idk, we’ve been talking about anti-science regarding climate change -- are we doing the same thing with AI?
       
 (DIR) Post #AiUKzwcDwEEAmy9Hyy by futurebird@sauropods.win
       2024-06-01T15:51:45Z
       
       0 likes, 1 repeats
       
       @kellogh "100% of the top AI experts believe in the scenario where AI becomes that intelligent"Such as who? I've seen a lot of disagreement. And how can we have "AI experts" if we don't yet have GI AI?
       
 (DIR) Post #AiULhhwZqfWe1jXLrk by kellogh@hachyderm.io
       2024-06-01T15:59:38Z
       
       0 likes, 0 repeats
       
       @futurebird every single one of the deep learning “pillars”, the experts that are working on AI — Hinton, LeCun, Ng, Bengio, etc. All of the disagreement with that viewpoint is from people who are experts in other fields, like linguistics — Bender, Chomsky, Marcus, etc.
       
 (DIR) Post #AiUM1f7K4G7n3H4tqy by ids1024@fosstodon.org
       2024-06-01T16:03:14Z
       
       0 likes, 0 repeats
       
       @futurebird We're still trying to understand how C. elegans works. With 959 cells, 302 of which are neurons.
       
 (DIR) Post #AiUMLavhLXEfr994Km by caitp@mstdn.social
       2024-06-01T16:06:51Z
       
       0 likes, 0 repeats
       
       @futurebird @viq yeah, I wouldn't say we don't do horrific things on our own too. but every time we make it easier to see other people as just records in a database, it gets easier for people to do this stuff
       
 (DIR) Post #AiUNKPFvMbTzJbbSGe by scerruti@csed.social
       2024-06-01T16:17:49Z
       
       0 likes, 0 repeats
       
       I think that your viewpoint is flawed. We built machines that could fly not by learning about how birds fly and replicating it, but by developing technologies that permitted us to exceed natural capacity. The fear, and a lot of very smart people share this fear, is that we will develop something we don't understand and give it control before we understand the implications. There is evidence of this in some tech areas, like AI-based stock trading. There are also some parallels in gene editing.
       
 (DIR) Post #AiUNlLp6EZu8UIu7wu by argv_minus_one@mstdn.party
       2024-06-01T16:22:41Z
       
       0 likes, 0 repeats
       
       @futurebird The most energy intensive organ in the human body is the liver, not the brain. And yes, the brain has a pretty high energy usage, but don't let that fool you. The entire body is *incredibly* energy efficient, and the brain is no exception. It only uses about 100W. If we were to build AIs of similar intelligence and efficiency, we could easily afford to supply the necessary power.
       
 (DIR) Post #AiUOCyqPGLmhKPgrJI by argv_minus_one@mstdn.party
       2024-06-01T16:27:39Z
       
       0 likes, 0 repeats
       
       @futurebird Those of us with a non-broken moral compass, meanwhile, would be terrified to unleash such a system, for basically the same reason: it might do something terrible, and I'm the one who unleashed it to do so.@viq @caitp
       
 (DIR) Post #AiUObr7395KKaZKvKq by discoursology@social.coop
       2024-06-01T16:32:11Z
       
       0 likes, 0 repeats
       
       @futurebird @kellogh What, you mean you didn’t realise that the definition of “AI expert” is that you „believe in the scenario where AI becomes intelligent“? 🧐
       
 (DIR) Post #AiURHUOqBMDaDTddGC by bransonturner@mastodon.social
       2024-06-01T17:02:06Z
       
       0 likes, 0 repeats
       
       @futurebird I suppose the list of Top Experts ends just before the first expert that disagrees.
       
 (DIR) Post #AiUS3PX3aB6f1JKe2q by TerryHancock@realsocial.life
       2024-06-01T17:10:47Z
       
       0 likes, 0 repeats
       
       @futurebird Of course, the AI *has* been outsmarting people when we are at our dumbest (i.e. people believing really bad advice from LLMs). So I guess there is some overlap -- kind of like that overlap park rangers talk about "between the smartest bears and the dumbest humans" (as an obstacle to bear-proof trash bins).
       
 (DIR) Post #AiUTSPiE9L3uly2jKa by IngaLovinde@embracing.space
       2024-06-01T17:26:31Z
       
       0 likes, 0 repeats
       
       @futurebird that's the problem, such a thing can only happen once, so there cannot be any evidence. If it's impossible, good for us; if it's possible, by the time we see evidence it will be too late. So the way I see it is: maybe it will happen, maybe it won't. The chance of it happening eventually if we try to make it happen is substantial (even if far from 100%). And the consequences are extremely drastic, making things like climate change or all-out nuclear war seem completely insignificant in comparison. So do we really want to take that kind of bet? That said, I don't think what's called "generative AI" brings us noticeably closer to that moment, and "generative AI"s have much more obvious and immediate harms than the potential future arrival of a technogod (which might arrive from the foundation laid out by our work on "generative AI"s, or from something else, or not at all). The way all these LLM sellers are telling all these tales about how they're building GAI is utter bullshit intended only to distract us from the fact that their wares are snake oil, and from all the harms they cause right here and right now.
       
 (DIR) Post #AiUY9kQNPyVBHLc1mS by lewiscowles1986@phpc.social
       2024-06-01T18:18:45Z
       
       0 likes, 0 repeats
       
       @futurebird LLMs are orders of magnitude less efficient than your brain, but also nowhere near as good. Imagine, if you will, a soup of information, but few connections within it. LLMs are the middle-class and upper-class kids who need a lot of support with basic life decisions.
       
 (DIR) Post #AiUcdbHUNQNHQQwg8O by themaskerscomic@forall.social
       2024-06-01T19:09:23Z
       
       0 likes, 0 repeats
       
       @futurebird "using in improper ways" and having a "poor understanding of what they are" are definitely things I think we need to explore. I don't know a lot about it but find some uses benign, like feeding it my own written prompt, watching the recording, and checking it with other knowledge (not assuming it is always correct). It can be fun to play with it, like a mix between googling something and shaking up the thesaurus. I don't view it as any god, that's for sure.
       
 (DIR) Post #AiUdz9VmP5ffcnTEZc by nazokiyoubinbou@mastodon.social
       2024-06-01T19:24:27Z
       
       0 likes, 0 repeats
       
       @futurebird One thing I can certainly say is what people are calling "AI" right now literally is incapable of becoming AI.  Especially LLMs.  LLMs have no actual learning ability (anything learned is just the model being manually retrained and the more that's automated the more unreliable it gets, the rest is just context fooling people.)  The smartest "AI" systems are algorithms that only understand specific things written to cover a lot and even that is limited.
       
 (DIR) Post #AiUfCdToOSs4HwXkFk by nazokiyoubinbou@mastodon.social
       2024-06-01T19:38:07Z
       
       0 likes, 0 repeats
       
       @futurebird @broccoliccoli Reminds me of an old anime.  Say what you want, but the idea was interesting.  Their "AI" system was built up of three computers working together that represented three different parts of the psyche of the woman who created it. https://evangelion.fandom.com/wiki/Magi It was an interesting concept anyway.  Certainly a key issue is that a lot of how we feel is affected by our bodies, chemicals, etc. that an AI computer wouldn't have, so replicating that could be difficult.
       
 (DIR) Post #AiUtuXUNQZDHo9dWZk by mike@sauropods.win
       2024-06-01T22:22:55Z
       
       0 likes, 0 repeats
       
       @futurebird We MIGHT one day have to worry about AIs intelligent enough to pose a threat to humanity by outsmarting it (though I doubt it).But if it ever happens, it will most certainly not be by LLMs.
       
 (DIR) Post #AiUuKXjh8DHw6TCWFk by mike@sauropods.win
       2024-06-01T22:27:39Z
       
       0 likes, 0 repeats
       
       @futurebird TBF, people like @brembs are working on that (admittedly with fruit flies rather than ants).
       
 (DIR) Post #AiVzx1zTmu9HrxiRk0 by futurebird@sauropods.win
       2024-06-02T11:05:20Z
       
       0 likes, 0 repeats
       
       @alec The ants mostly don't, they focus on ant stuff.
       
 (DIR) Post #AiWaWWkozwZRlZ5KU4 by norgralin@hachyderm.io
       2024-06-02T17:55:07Z
       
       0 likes, 0 repeats
       
       @futurebird yeah even assuming one were to exist it would be ridiculously vulnerable. Machines function because we’re constantly helping them. They’re vulnerable to simple things like dirt and water. If people start hindering an evil AI, it’s gonna go down fast.