
Jesper Olsen
Chief Security Officer (EMEA North)
Palo Alto Networks

February 2026
This is sponsored content.
Cyberattackers are increasingly leveraging AI and machine learning (ML) to execute more sophisticated attacks, amplifying both the scale and the impact of the threat. As a result, many organisations perceive that the scales are tipping in favour of the attackers.
Countering that shift requires a proactive, AI-driven defence strategy. To meet this moment, how can IT and security leaders cut through the complexity of AI-driven platforms and gain the visibility and control across security systems needed to better protect the enterprise?
Jesper Olsen, Chief Security Officer (EMEA North) at Palo Alto Networks, addressed this question of how to defend organisations in the AI era at the recent Benelux CISO Community Executive Summit. In his session, Preparing for Brand New Threats - Fighting AI with Precision, Olsen shared strategies to combat the evolving threat landscape, build a defence that combines conventional and new approaches, and leverage AI to scale security across the organisation.
AI’s true power is not in replacing human intelligence, but in accelerating breakthroughs by equipping experts with tools to move faster and smarter.
AI: Unprecedented Opportunity, Unprecedented Risk
Olsen began the presentation by highlighting both the extraordinary opportunities and the significant risks that have emerged with the adoption of AI. While AI is “fundamentally redefining what’s possible in business and society,” it is also introducing new and unprecedented risks.
Olsen shared some of the key risks and challenges for organisations:
- Accountability and Trust
As Olsen said, “AI systems don’t get ‘fired’ for mistakes – humans do.” There’s an inherent risk in over-trusting the output from AI, especially when the reasoning is opaque or unverifiable. Olsen noted that it’s necessary “to stop and question and verify” AI-generated results.
- Data Leakage and Manipulation
AI models may be manipulated, may leak sensitive data, and often cannot verify information against authoritative sources – contributing to a lack of trust.
- Skill Erosion
Olsen shared that if organisations begin to rely too heavily on AI, foundational skills among practitioners can erode, creating long-term talent and expertise gaps.
- Project Failure
According to Gartner, over 40% of agentic AI projects will be cancelled by 2027 due to costs, unclear outcomes, and immature risk management.
- Shadow AI
Organisations are experiencing a proliferation of unsanctioned GenAI apps – on average, there are 66 GenAI apps per company, according to Olsen. This increases risk exposure and often outpaces policy and oversight. (A minimal detection sketch follows this list.)
- Expanding Attack Surface
Modern AI apps, especially agentic systems, require broad data and system access, increasing the risk of excessive permissions, blind spots, and supply chain vulnerabilities.
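To make the shadow AI point concrete, the sketch below shows one way a security team might surface unsanctioned GenAI traffic from outbound proxy logs. It is a minimal illustration only: the log format, the domain catalogue, and the allow-list are assumptions for this example, not a description of any Palo Alto Networks capability.

```python
# Minimal sketch: surfacing unsanctioned ("shadow") GenAI traffic in proxy
# logs. The log format and both domain lists are illustrative assumptions.
import csv
from collections import Counter

# Hypothetical allow-list of sanctioned GenAI services.
ALLOWED_GENAI_DOMAINS = {"chat.corp-approved-ai.example"}

# Hypothetical catalogue of known GenAI domains to watch for.
KNOWN_GENAI_DOMAINS = ALLOWED_GENAI_DOMAINS | {
    "chat.openai.com", "gemini.google.com", "claude.ai",
}

def find_shadow_ai(log_path: str) -> Counter:
    """Count requests to GenAI domains that are not on the allow-list.

    Assumes a CSV proxy log with 'user' and 'domain' columns.
    """
    hits: Counter = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower()
            if domain in KNOWN_GENAI_DOMAINS and domain not in ALLOWED_GENAI_DOMAINS:
                hits[(row["user"], domain)] += 1
    return hits

if __name__ == "__main__":
    for (user, domain), count in find_shadow_ai("proxy_log.csv").most_common(10):
        print(f"{user} -> {domain}: {count} requests")
```

In practice, a report like this would feed an app inventory and policy process rather than a block decision alone.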
It is a reality that cannot be ignored: as AI adoption accelerates, so does the risk surface that enterprises must confront.
A Structured, Lifecycle Approach to Securing AI
Olsen shared that a recent study found that only 6% of enterprises have an advanced AI security framework in place. To remedy this, he outlined a security strategy for the AI era that integrates security into every stage of the AI lifecycle. In the planning stages, Olsen cited the need to align AI projects with business needs and regulatory requirements, conduct a security assessment covering risks, ethics, and data sensitivity, and define what security success looks like, including success criteria and allowable data access.
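One way to make that planning-stage assessment actionable is to capture it as a structured intake record that gates each AI project before build. The sketch below is a hypothetical reading of the criteria Olsen listed; the field names and the gating rule are assumptions, not a prescribed schema.

```python
# Illustrative sketch: a planning-stage intake record for an AI project,
# loosely mapping the assessment criteria above. All field names and the
# gating rule are assumptions for this example.
from dataclasses import dataclass

@dataclass
class AIProjectIntake:
    name: str
    business_goal: str               # alignment with business needs
    regulatory_scope: list[str]      # e.g. ["GDPR", "EU AI Act"]
    data_sensitivity: str            # e.g. "public", "internal", "restricted"
    ethics_review_done: bool
    success_criteria: list[str]      # what security success looks like
    allowed_data_sources: list[str]  # allowable data access

    def ready_for_build(self) -> bool:
        """Gate: the project does not advance until the assessment is complete."""
        return self.ethics_review_done and bool(self.success_criteria)

intake = AIProjectIntake(
    name="support-assistant",
    business_goal="deflect tier-1 support tickets",
    regulatory_scope=["GDPR"],
    data_sensitivity="internal",
    ethics_review_done=True,
    success_criteria=["no PII in prompts", "human review of escalations"],
    allowed_data_sources=["kb-articles"],
)
print(intake.ready_for_build())  # True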
As your organisation decides whether to build or select a security framework, Olsen recommended the following actions:
- Define your security strategy and risk appetite, including considerations for a platform-based model and a roadmap for agents.
- Ensure that you foster a culture of validation internally; don’t trust, but verify outputs (see the verification sketch after this list).
- Establish AI governance and operating models, including use policies and workforce AI guidelines, data governance and guardrails, and AI model provenance.
- Document your organisation’s compliance and assurance.
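As a concrete reading of the “verify outputs” point above, the sketch below gates an AI-generated answer behind a check against an authoritative system of record before the answer is used. The data source, extraction rule, and escalation path are illustrative assumptions.

```python
# Minimal sketch of a "don't trust, but verify" gate for AI output.
# The authoritative source and extraction logic are illustrative assumptions.
import re

# Hypothetical authoritative system of record (e.g. a price book).
AUTHORITATIVE_PRICES = {"SKU-1001": 249.00, "SKU-2002": 99.50}

def verify_quoted_price(ai_answer: str) -> bool:
    """Accept the AI answer only if every quoted price matches the record."""
    for sku, price in re.findall(r"(SKU-\d+)\D+(\d+(?:\.\d+)?)", ai_answer):
        if AUTHORITATIVE_PRICES.get(sku) != float(price):
            return False  # unverifiable or wrong: route to a human
    return True

print(verify_quoted_price("SKU-1001 costs 249.00"))  # True
print(verify_quoted_price("SKU-1001 costs 199.00"))  # False -> escalate
```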
In addition, Olsen shared that IT and security leaders should proactively assess and manage AI-related risks by conducting thorough risk assessments and integrating secure development practices throughout the development lifecycle. CIOs and CISOs should implement robust testing – such as AI red teaming – and establish GenAI-specific incident response plans, while also monitoring usage, vetting third-party models for provenance and compliance, and maintaining oversight of changes and model behaviour. This comprehensive approach ensures that AI systems remain secure, compliant, and resilient against evolving threats.
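To illustrate the provenance-vetting step, here is a minimal sketch that refuses to load a third-party model artifact unless its digest matches an internally approved registry. The registry, file naming, and placeholder digest are hypothetical; real deployments would typically also verify signatures.

```python
# Minimal provenance check: refuse to load a model artifact whose SHA-256
# digest is not in an internally approved registry. The registry contents,
# file naming, and placeholder digest below are illustrative assumptions.
import hashlib

# Hypothetical registry of vetted third-party model artifacts,
# populated when a model passes the vetting process.
APPROVED_MODELS = {
    "llm-7b-v3.bin": "placeholder-digest-recorded-at-vetting-time",
}

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def load_model_checked(path: str) -> None:
    name = path.rsplit("/", 1)[-1]
    expected = APPROVED_MODELS.get(name)
    if expected is None or sha256_of(path) != expected:
        raise PermissionError(f"{name}: unvetted or tampered artifact")
    # Hand off to the real model loader only after the check passes.
    print(f"{name}: provenance verified, loading")
```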
From the technology side, Olsen suggested that executives can strengthen AI security by enforcing robust data governance and lineage, scanning models and artifacts for transparency, and securing prompts and input/output channels. They should also manage agent identities and secrets, restrict tool permissions, and isolate environments to ensure safe execution. Comprehensive observability, unified audit logs, and controlled access – both managed and unmanaged – are essential for maintaining oversight and protecting AI systems.
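As one concrete reading of “restrict tool permissions” combined with unified audit logging, the sketch below wraps every agent tool call in a deny-by-default, per-identity allow-list and logs each attempt. The agent and tool names are hypothetical.

```python
# Minimal sketch: per-agent tool allow-list. An agent identity may only
# invoke tools it has been explicitly granted. Names are hypothetical.
from typing import Callable

TOOLS: dict[str, Callable[..., str]] = {
    "search_kb": lambda q: f"kb results for {q!r}",
    "send_email": lambda to, body: f"sent to {to}",
}

# Hypothetical grants: the support agent may read the KB but not send mail.
GRANTS = {"support-agent": {"search_kb"}}

def call_tool(agent_id: str, tool: str, *args) -> str:
    """Deny by default; log every attempt for the unified audit trail."""
    allowed = tool in GRANTS.get(agent_id, set())
    print(f"audit: agent={agent_id} tool={tool} allowed={allowed}")
    if not allowed:
        raise PermissionError(f"{agent_id} may not call {tool}")
    return TOOLS[tool](*args)

print(call_tool("support-agent", "search_kb", "reset password"))
# call_tool("support-agent", "send_email", "x@example.com", "hi")  # PermissionError
```

Isolating execution environments and managing agent secrets would sit alongside a control like this; the allow-list covers only the tool-permission slice of the list above.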
Key Takeaways from the Session
- Differentiating between “AI for security” and “security for AI.” Olsen noted that “AI for security” uses artificial intelligence to detect and respond to cyber threats, improving an organisation’s overall security posture. In contrast, “security for AI” focuses on protecting the AI systems themselves – their data, algorithms, and outputs – from external threats and vulnerabilities. He shared, “The first is about using AI as a tool to enhance security, while the second is about securing the AI technology itself.”
- Creating end-to-end lifecycle coverage and why it matters now. Olsen said that most organisations are experimenting with AI, but true transformation is lagging; pilots stall at scale, according to an MIT report he cited. As the AI revolution accelerates, so does the risk surface. Without robust, lifecycle-based security, organisations risk not only project failure but also regulatory penalties, data breaches, and loss of trust.
- Securing AI is not optional – it’s foundational to realising its promise. AI can deliver extraordinary value, but only if organisations proactively secure it at every stage – starting with a robust framework and factoring in full lifecycle protection. This requires new tools, processes, and mindsets, combining continuous monitoring, robust guardrails, and expert-led risk management to ensure AI is both powerful and safe.
For more discussions on securing AI and other critical topics for security leaders, join your local Gartner CISO Community. Or, if you are already a member, sign in to the app to explore opportunities to collaborate and exchange best practices with your CISO peers.
Special thanks to Palo Alto Networks.