 Post #Azf20tsSUakRcmLQsi by ekis@mastodon.social
       2025-10-25T22:38:37Z
       
       0 likes, 1 repeats
       
       "AI models may be developing their own ‘survival drive’, researchers say" - Guardian The "research paper" was a tweet by an AI companyThe "experiment" was asking the LLM to shut downA model is ~not~ shutdown ~ever~ by asking a model to shut itself down*The only possible response is a hallucination*You shut down a model by turning off the deterministic software running it; so works every time w/o failYet Guardian's shill tech writers just report AI industry tweets as if it was fact
       
 Post #Azf20zfuw8HVbcWVAO by ekis@mastodon.social
       2025-10-25T22:49:50Z
       
       0 likes, 0 repeats
       
       Steven Adler, a former OpenAI employee (who clearly has AI psychosis): "I'd expect models to have a ‘survival drive’ by default unless we try very hard to avoid it. ‘Surviving’ is an important instrumental step for many different goals a model could pursue."
       Models don't have fucking goals.
       They are stateless.
       "Palisade Research received significant funding, including a grant of $1,680,000 from Open Philanthropy, aimed at studying AI capabilities"
       Meanwhile real researchers struggle for funding.
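       A minimal sketch of what "stateless" means here (the model callable is hypothetical, standing in for a frozen-weights forward pass): each call is a pure function of its input text, so any apparent "memory" or "goal" exists only in text the caller re-sends.

       # Minimal sketch of statelessness: each call maps input text to
       # output text and nothing persists between calls. `model` is a
       # hypothetical stand-in for a frozen-weights forward pass.

       def model(context: str) -> str:
           return f"completion for: {context!r}"  # pure function of input

       first = model("Remember the number 7.")
       second = model("What number did I ask you to remember?")
       # `second` cannot see the first call; a chatbot's "memory" exists
       # only because the application concatenates prior turns into the
       # next prompt before calling the model again:
       history = "Remember the number 7.\n" + first + "\nWhat number was it?"
       third = model(history)  # all "state" travels inside the re-sent text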
       
 Post #Azf210Bp1VnnCZRytc by ekis@mastodon.social
       2025-10-25T23:10:08Z
       
       0 likes, 0 repeats
       
       "Andrea Miotti, the chief executive of ControlAI, said Palisade’s findings represented a long-running trend in AI models growing more capable of disobeying their developers"Growing capability of disobeying? Models don't do anything but infer text output. Which much can be said about the quality of it, but they reliably do itGrowing more capable? Again this person has AI psychosis, or is a web3 scammer, or woefully ignorant about their supposed field of expertise to everyone elses detriment
       
 Post #Azf210YrdqEYM2ENoO by ekis@mastodon.social
       2025-10-25T22:55:40Z
       
       0 likes, 0 repeats
       
       The Guardian shouldn't be spreading what should be obvious misinformation from a man with AI psychosis. At best they are validating a person's delusions.
       They should be calling in a wellness check, not reporting on Steven Adler's delusions as if they were fact.
       These are people tasked with safety who fundamentally do not understand the technology, or who have lost the plot so badly that it's impossible to argue they are not having an episode of mental illness.
       This can't even remotely be called journalism.