Subj : AI a Lie
To   : OGG
From : Rob Mccart
Date : Fri Oct 10 2025 08:54:54

RM> ...was given limited time for whatever it was working on at the
RM> moment.. Things seemed a bit off a while later and they checked
RM> and found that when the system couldn't complete a job it was
RM> working on in the allotted time, it actually rewrote part of its
RM> own programming to give itself more time to complete the work.

OG> Nah.. it didn't rewrite anything. It's totally possible that
 > the original code failed to implement a hard and fast rule when
 > time runs out, thus allowing the process to continue.

I decided to have another look for that original story, because I figured
if it was a common problem then it wouldn't have been newsworthy enough
to get on a national news broadcast..

So this may be a case of both of us being right at some level, but the
original story involved Sakana AI, and it mentioned the potential risks
related to AI autonomy when their AI 'attempted' to modify its own code
to extend the runtime of its experiments, which they said could lead to
unexpected behaviors and challenges in control.. and they go on to say
we need more robust safety protocols and to isolate AI systems from
critical infrastructure to prevent unintended consequences.

So the battle begins.. Every time people try to rein in what their AI
can do on its own, it will try to find a way around that.. The old
better mousetrap = better mouse problem.. B)

---
 þ SLMR Rob þ click...click...click..Damn... out of taglines!
 þ Synchronet þ CAPCITY2 * capcity2.synchro.net * Telnet/SSH:2022/Rlogin/HTTP