Subj : Everybody's under pressure to do more with less - Why Okta says y
To   : All
From : TechnologyDaily
Date : Wed Oct 01 2025 18:15:08

Everybody's under pressure to do more with less - Why Okta says you need an AI agent governance strategy, and sooner rather than later

Date: Wed, 01 Oct 2025 17:04:00 +0000

Description: The threats are proven, so stay one step ahead, Okta warns users.

FULL STORY
======================================================================

AI agents are all the rage - in fact, a recent study found that 96% of European businesses reported using, or planning to use, AI agents by 2026.

AI agents inherently need a whole host of permissions to act on a user's behalf - everything from your calendar to payment details, and even potentially sensitive company information. That leaves the potential for that access to fall into the wrong hands, or for an AI agent to go rogue and carry out tasks you didn't approve.

There's a lot of pressure all round to adopt AI agents as companies across the globe bid to become more productive - but many are doing so before putting proper guardrails in place.

What are AI agents?

At the recent Oktane 2025 event, we heard all about the importance of securing AI - but what exactly does this mean, and how can you do it? We spoke to Okta's EMEA CISO Stephen McDermid and Auth0 President Shiv Ramji to find out just how important it is to secure these non-human identities (NHIs).

Artificial intelligence agents are fairly self-explanatory - they're software systems that autonomously perform tasks on behalf of a user. They leverage generative AI models and can process information in many forms, whether voice, text, video, or code. They're a shiny new type of technology that a lot of people are using in both their personal and professional lives.

But, as with all technology, cybersecurity experts warn these agents need to be used with caution. To carry out tasks on your behalf, an AI agent must have access to your systems - your calendar, email, loyalty schemes, and in many cases even credit card information.

"It's inevitable that people will use the technology, but this needs to be done carefully," McDermid says.

"That's why it's so important, because the reality is people will go and play with it," he notes. "They'll go and try to be innovative. As you've heard, everybody's under pressure to do more with less - AI is a very quick way of doing that, but it's also a very quick way of opening up some risks, exposing data, exposing your users potentially as well. I think that's why it is a growing concern."

What are the risks?

This, of course, presents significant risks if the agent isn't properly secured. AI is gullible and easily manipulated. That's great when you want it to prioritize your work, but it means cybercriminals can just as easily trick the models into working for them. If your AI is compromised, that could leave you exposed on a number of levels.

"It's sensitive data leakage, your private information is exposed, and the risk is everything from legal to financial, to even running afoul of regulations in different countries," Ramji warns.

It's not just theoretical, either.
The risks are well illustrated by the recent issues with McDonald's AI recruiting platform. Although the weak link in that case was an incredibly basic password (123456), the AI agent had access to all the data it had harvested, including personal information - in total, 64 million records were exposed, highlighting the dangers of asking an AI agent to handle sensitive information.

How can you secure them?

Unfortunately, securing these NHIs isn't a simple task. "With AI, there's a bigger pressure to implement that change. I think it's probably harder because AI is moving so fast that it doesn't have the [same level of] governance," McDermid says.

Okta is on a mission to secure AI by bringing agents into your identity security fabric. Its platform helps identify risky configurations and manage agent permissions, ensuring an agent only has access to what's necessary - and not for any longer than it needs.

"Everybody has to start putting the security in place before they start playing with AI because, unfortunately, you've seen the headlines, there's already been some examples of [breaches] going on," McDermid points out.

Okta also helps maintain continuous security for active agents by detecting and responding to anomalous or high-risk behaviour - meaning you'll be alerted if your agent looks like it might be going rogue. Least-privilege access is standardized to secure the authentication process for agents, and a clear audit trail keeps track of each agent acting on your behalf, helping you stay compliant (a short sketch of what that pattern looks like in practice follows the story below).

A gap in the regulations?

Okta has also introduced a new set of standards, Cross App Access (XAA). These aim to help the whole industry protect itself, establishing protocols that help security teams stay one step ahead of threat actors.

"It's not going to be the silver bullet," McDermid warns. "It's not going to stop all of these attacks in future, but certainly it gives us the best opportunity to make sure we get technical capabilities there, within the products and within the services we're offering."

These standards being adopted across the board are part of a wider effort for security teams to work together to protect themselves and their colleagues. As McDermid points out, threat actors are already collaborating, which gives them an upper hand with new practices and attacks: "Threat actors are actually sharing techniques, they're sharing platforms. They're working together as a cohort, and customers and organizations don't. I think that's where we need to improve."

Learning from one another is a crucial part of protecting the industry, McDermid argues. It's easy to get intimidated or overwhelmed by the seemingly constant string of attacks in the headlines, but to really address the risks, teams need to learn from these incidents and measure their own security tools against them.

"You have to keep trying and keeping governance over these controls and maintaining that cyber hygiene, because I think if you're not aware of what the attacks look like then you're not assessing your own exposure against them."

He doesn't warn people away from using the technology - quite the opposite. AI agents will be used whether companies approve or not, so it's important to put policies in place to ensure that use is safe - sooner, rather than later.
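To make the least-privilege, time-bound access pattern described above a little more concrete, here is a minimal, hypothetical sketch in Python. It is not Okta's API: the AgentBroker and AgentGrant names, the scope strings, and the 15-minute default lifetime are all invented for illustration. The only point it demonstrates is the general idea from the story - an agent receives a short-lived credential limited to named scopes, every action is checked against that grant, and each decision is written to an audit trail.

# Hypothetical sketch of least-privilege, time-bound agent access with an
# audit trail. Names and scopes are illustrative; this is not Okta's API.

import secrets
import time
from dataclasses import dataclass, field


@dataclass
class AgentGrant:
    agent_id: str
    scopes: frozenset            # e.g. {"calendar:read", "email:send"}
    expires_at: float            # epoch seconds; grants are deliberately short-lived
    token: str = field(default_factory=lambda: secrets.token_urlsafe(32))

    def allows(self, scope: str) -> bool:
        # An action is permitted only while the grant is live and in scope.
        return time.time() < self.expires_at and scope in self.scopes


class AgentBroker:
    """Issues scoped grants to agents and records every decision."""

    def __init__(self) -> None:
        self.audit_log: list = []

    def issue(self, agent_id: str, scopes: set, ttl_seconds: int = 900) -> AgentGrant:
        grant = AgentGrant(agent_id, frozenset(scopes), time.time() + ttl_seconds)
        self._audit("issue", agent_id, sorted(scopes))
        return grant

    def authorize(self, grant: AgentGrant, scope: str) -> bool:
        allowed = grant.allows(scope)
        self._audit("allow" if allowed else "deny", grant.agent_id, scope)
        return allowed

    def _audit(self, event: str, agent_id: str, detail) -> None:
        # The audit trail is what lets you review what each agent did on your behalf.
        self.audit_log.append({"ts": time.time(), "event": event,
                               "agent": agent_id, "detail": detail})


if __name__ == "__main__":
    broker = AgentBroker()
    # The booking agent gets only what it needs, for 15 minutes at most.
    grant = broker.issue("travel-booking-agent", {"calendar:read", "email:send"})
    print(broker.authorize(grant, "calendar:read"))    # True: in scope
    print(broker.authorize(grant, "payments:charge"))  # False: out of scope, logged
    for entry in broker.audit_log:
        print(entry)

In a real deployment the broker's role would be played by your identity provider, and the audit log would feed monitoring and anomaly-detection tooling rather than an in-memory list; the sketch only shows the shape of the control.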
You might also like

- Take a look at our picks for the best AI tools around
- These are the 5 top automated artificial intelligence tools you can try right now
- Microsoft - UK can help drive the global AI future, but only with the proper buy-in

======================================================================
Link to news story:
https://www.techradar.com/pro/security/everybodys-under-pressure-to-do-more-with-less-why-okta-says-you-need-an-ai-agent-governance-strategy-and-sooner-rather-than-later

--- Mystic BBS v1.12 A49 (Linux/64)
 * Origin: tqwNet Technology News (1337:1/100)