What does 2026 hold for IT governance?
In 2026, businesses should shift their IT governance focus to tackle ‘non-human’ identity risks and clamp down on unauthorised actions. These are the views and predictions of Paul Walker, field strategist at the company Omada.
Walker shares with Digital Journal four key predictions about the issues he expects corporations to be contending with in the IT security space.
Prediction 1: By 2026, if you’re not treating NHIs as first-class citizens in your identity program, you’re fundamentally exposed
On the subject of security and identity, Walker says: “Traditional identity governance and administration (IGA) was built for humans. We’re discovering huge numbers of machine identities that have never been governed. OWASP released their Top 10 Non-Human Identity Risks for 2025, and the fact that “improper offboarding” ranks as the number one risk reveals a fundamental gap: organizations have no systematic process for deprovisioning machine identities when services are deprecated, applications are sunset, or integrations are discontinued.”
As an example, Walker raises: “Consider what happens when a development team spins up a service account for a proof-of-concept project. That credential often persists long after the project ends, maintaining broad access to production databases or cloud resources. Multiply this scenario across hundreds of development initiatives, and you have thousands of orphaned credentials each representing a potential attack vector.”
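To make Walker’s scenario concrete, here is a minimal sketch of how such orphaned credentials might be surfaced. It assumes an AWS estate and the boto3 SDK purely for illustration (Walker names no particular platform), and flags access keys that have never been used or have sat idle for more than 90 days:

from datetime import datetime, timedelta, timezone

import boto3

iam = boto3.client("iam")
STALE_AFTER = timedelta(days=90)
now = datetime.now(timezone.utc)

# Walk every IAM user and inspect each of its access keys.
for page in iam.get_paginator("list_users").paginate():
    for user in page["Users"]:
        keys = iam.list_access_keys(UserName=user["UserName"])["AccessKeyMetadata"]
        for key in keys:
            last_used = iam.get_access_key_last_used(
                AccessKeyId=key["AccessKeyId"]
            )["AccessKeyLastUsed"].get("LastUsedDate")
            # A key that was never used, or has been idle past the
            # threshold, is a candidate for review and deprovisioning.
            if last_used is None or now - last_used > STALE_AFTER:
                print(f"Stale credential: {user['UserName']} / {key['AccessKeyId']}")

A report like this is only a starting point: each flagged key still needs an owner to confirm the integration behind it is genuinely dead before it is revoked.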
In terms of the actions of rogue actors, Walker’s concern is: “Attackers increasingly target these “ghost” identities precisely because they’re unmonitored and frequently over-privileged. If your IGA can’t see it, you can’t govern it. The proliferation has been exponential. Cloud-native architectures, microservices, DevOps automation, and AI agents have each contributed to an explosion of machine-to-machine authentication. Every CI/CD pipeline, every containerised application, every automated integration creates new credentials that often live indefinitely, accumulate privileges over time, and remain invisible to traditional IGA platforms that were architected in an era when “identity” meant a person with an employee ID.”
Prediction 2: The privilege creep problem will continue to worsen, especially for machines
Privilege creep refers to the gradual accumulation of excessive access rights or permissions by users, often leading to security vulnerabilities within an organisation. On this subject, Walker thinks: “With human users, we at least have some natural forcing functions. People change roles, leave companies, trigger offboarding workflows. Not ideal, but it’s something. Over-permissive access is the norm, with identities being granted more permissions than necessary, increasing the likelihood of privilege abuse and unauthorized actions. Unlike humans, where we might notice someone has “Finance Analyst + HR Admin + Sales Manager” roles, machine identities accumulate permissions across platforms in ways that are completely opaque.”
Walker is also concerned: “Here’s the dirty secret that’s hard to admit: access reviews are failing. That’s not because people don’t do them, but because they’re rubber-stamping exercises that miss the real risk.”
As to the complexities, Walker notes: “SaaS scale is what makes this extremely challenging: NHIs are managed ad hoc by different teams like DevOps, IT, and data science without clear security accountability. Nobody owns these identities. The developer who created it left two years ago. The project moved teams three times. When you try to right-size permissions, nobody can tell you what it actually needs versus what it has. A primary obstacle in managing non-human identities is the difficulty in identifying their status accurately due to ambiguous ownership.”
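One way to attack the “what it actually needs versus what it has” question is to compare granted entitlements against observed usage. The sketch below again assumes AWS and boto3 for illustration (the role ARN is hypothetical), using IAM’s service-last-accessed report to list services a role is entitled to use but has never touched:

import time

import boto3

iam = boto3.client("iam")
ROLE_ARN = "arn:aws:iam::123456789012:role/example-service-role"  # hypothetical

# The report is generated asynchronously, so poll until the job finishes.
job = iam.generate_service_last_accessed_details(Arn=ROLE_ARN)
while True:
    details = iam.get_service_last_accessed_details(JobId=job["JobId"])
    if details["JobStatus"] != "IN_PROGRESS":
        break
    time.sleep(2)

for svc in details["ServicesLastAccessed"]:
    # Services the role can reach but has never authenticated to are
    # candidates for permission right-sizing.
    if "LastAuthenticated" not in svc:
        print(f"Granted but unused: {svc['ServiceName']}")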
Prediction 3: The gap between “digital transformation” and “basic identity hygiene” will remain catastrophically wide
The process of digital transformation can lead to security issues if badly handled. According to Walker: “2025 marked an inflection point where non-human identity security transitioned from niche concern to mainstream crisis. It is surprising that, in late 2025, mature organizations with significant security investments could still be completely paralyzed by compromised machine credentials that hadn’t been rotated in years and by social engineering attacks on third-party helpdesks.”
Citing examples, Walker says: “Take Jaguar Land Rover’s catastrophic breach, which forced a complete global production shutdown that lasted over four weeks and cost an estimated £50 million per week. Another example is Marks & Spencer’s devastating Easter weekend attack via a third-party vendor compromise, which shut down online operations for six weeks, resulting in £270-440 million in combined losses. What makes these incidents particularly alarming is the attack vector: both breaches originated through compromised non-human identities in partner systems (service accounts, API keys, and third-party access tokens) that had never been properly governed, rotated, or monitored. These weren’t theoretical risks. They were billion-dollar disasters caused by the exact NHI governance failures that security experts had been warning about. The UK government provided the first-ever government-backed loan ($2 billion) to a company for a cyber incident, signalling that this was considered a national economic crisis, not just a corporate problem.”
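The rotation failure Walker highlights is straightforward to detect, even if fixing ownership is not. A minimal sketch, assuming secrets are held in AWS Secrets Manager and queried via boto3 (neither company has said which secrets store was involved), flags anything not rotated within a year:

from datetime import datetime, timedelta, timezone

import boto3

sm = boto3.client("secretsmanager")
MAX_AGE = timedelta(days=365)
now = datetime.now(timezone.utc)

for page in sm.get_paginator("list_secrets").paginate():
    for secret in page["SecretList"]:
        # Prefer the last rotation timestamp; fall back to last change.
        rotated = secret.get("LastRotatedDate") or secret.get("LastChangedDate")
        if rotated is None or now - rotated > MAX_AGE:
            print(f"Not rotated in over a year: {secret['Name']}")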
Prediction 4: Regulators will demand that autonomous agents become glass boxes, not black boxes
Greater transparency about the use of AI and autonomous agents is likely to be demanded by regulators. Walker observes: “The EU AI Act and California’s transparency laws now mandate that organisations document every decision made by AI agents, justify their reasoning, and maintain complete audit trails of what systems agents accessed and what actions they took. High-risk AI systems must enable users to interpret outputs and understand how decisions were made. Translation: if your agent autonomously executes a transaction, fires an employee, or denies a loan, you’ll need to explain exactly why it made that decision in terms regulators and affected individuals can understand. The age of “the AI decided” as an acceptable answer is over.”
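In practice, a “glass box” means an audit trail attached to every action an agent takes. The sketch below is a hypothetical Python pattern, not any specific framework or the wording of either law: a decorator that records the action, the resource it touched, its parameters, and the stated rationale before the action runs:

import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("agent.audit")

def audited(action):
    """Log what the agent did, to what, and why, before executing it."""
    def wrapper(resource, rationale, **kwargs):
        audit_log.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "action": action.__name__,
            "resource": resource,
            "rationale": rationale,  # the reasoning regulators want on record
            "parameters": kwargs,
        }))
        return action(resource, **kwargs)
    return wrapper

@audited
def deny_loan(application_id, *, risk_score):
    ...  # the actual decision logic would live here (hypothetical example)

deny_loan("APP-1042", rationale="risk score 0.91 exceeds the 0.80 threshold",
          risk_score=0.91)

The point of the pattern is that the rationale is captured at decision time, in machine-readable form, rather than reconstructed after a regulator asks.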