Build Your AI Security Roadmap With the SAIL Framework

The SAIL Framework provides a practical, lifecycle-oriented strategy to manage AI-specific risks and build trustworthy AI systems. Developed by practitioners, for practitioners.

Shifting Upwards
The AI Security Challenge
Understanding SAIL
Shifting Upwards

AI introduces a powerful new abstraction layer—one that makes autonomous decisions and operates beyond human oversight.

To address its inherent risks, we must shift our security focus upwards: from simply protecting code to securing the business logic and processes AI now controls.

This “Shift Up” approach calls for purpose-built controls specifically designed to safeguard the AI decision-making layer, preventing risks from cascading into critical business impacts.

The AI Security Challenge

Why a Unified Framework is Essential

Over the past year, we have collaborated with experienced AI and cybersecurity leaders to understand their core challenges with AI security.


These conversations inspired us to create SAIL,
a framework that provides an overarching and practical approach for safeguarding AI systems across the entire AI lifecycle.

Uncharted Threat Waters

New AI-specific threats like prompt injection and model theft require specialized defenses beyond traditional security controls.

No common language

AI, security, and governance teams lack shared context and unified processes for AI-specific risks.

Lack of skills and knowledge

Organizations struggle to find expertise bridging AI and security, while industry best practices remain fragmented and evolving.

Navigational Overload

Conflicting frameworks and standards leave teams without a clear, actionable roadmap for implementing comprehensive AI security

Understanding SAIL

What is the SAIL Framework?

In essence, SAIL provides a holistic security methodology covering the complete AI journey, from development to continuous runtime operation. Built on the understanding that AI introduces a fundamentally different lifecycle than traditional software, SAIL bridges both worlds while addressing AI's unique security demands.


SAIL's goal is to unite developers, MLOps, security, and governance teams with a common language and actionable strategies to master AI-specific risks and ensure trustworthy AI. It serves as the overarching framework that integrates with your existing standards and practices.

Framework phases

SAIL

Your AI Security North Star

The SAIL Framework is a process-oriented methodology that systematically adds security to each phase of the AI journey. It provides a practical approach to unite development, MLOps, security, and governance teams around a common language to manage specific risks.

Last updated: 1.7.2025

1

Plan: AI Policy & Safe exp.

1. AI Policy & Safe experimentation

This foundational phase establishes AI security policy frameworks aligned with business objectives, regulatory requirements, and overall AI governance. It covers identifying AI use cases, assessing compliance needs, defining risk-based protection, and setting up secure AI experimentation environments for policy alignment validation. This phase incorporates dedicated threat modeling to proactively identify novel failures and inform architecture decisions. It also establishes initial data and model governance definitions, formalizing the introduction and vetting processes for new data or models.

2

Code/ No Code: AI Asset Discovery

2. Code/ No Code: AI Asset Discovery

This initial phase focuses on identifying, cataloging, and vetting all AI assets - including models, datasets, no code platforms and code components, whether developed in-house or sourced externally. This comprehensive inventory is crucial not only for understanding the AI system's composition and potential vulnerabilities but also for meeting emerging AI regulatory requirements.

3

Build: AI Security Posture Mgmt.

3. Build: AI Security Posture Mgmt (AI-SPM)

The Build phase is dedicated to performing a deep risk analysis of the AI assets identified in the discovery phase. It involves intelligently understanding, mapping, and graphing the landscape of these AI assets and their interconnections to establish a clear picture of the system's security posture and potential attack surfaces. Using protection requirements from the Plan phase, organizations can prioritize security controls for each AI asset based on risk levels and identify residual risks.

4

Test: AI Red Teaming

4. Test: AI Red Teaming

In the Test phase, AI systems undergo rigorous security assessments that simulate adversarial behaviors to uncover vulnerabilities, weaknesses, and risks. Unlike traditional AI testing focused on functionality and performance, AI Red Teaming goes beyond standard validation to include intentional stress testing, simulated attacks, and attempts to bypass safeguards, alongside validating security configurations (hardening). The depth and intensity of red teaming activities should align with the protection requirements of the AI-supported business processes, ensuring appropriate testing rigor for each risk level.

5

Deploy: Runtime Guardrails

5. Deploy - Runtime Guardrails

The Deploy phase ensures that AI systems are released into production with necessary runtime guardrails and security configurations activated. These measures are critical for the secure transition and ongoing operation, providing protection against runtime application security threats that may emerge once the system is live.

6

Operate: Safe Execution Env.

6. Operate - Safe Execution Env

During the Operate phase, AI systems, particularly agentic systems like coding agents and AI tools like MCP servers, run within secure and controlled execution environments. This phase implements sandboxing and zero-trust strategies to isolate AI agents from critical infrastructure and sensitive data while enabling their productive operation.

(Sandbox)

7

Monitor: AI Activity Tracing

7. Monitor - AI Activity Tracing

This phase continuously monitors system activity and collects telemetry. It is essential for detecting anomalies or potential attacks, also for generating audit trails and evidence required for regulatory compliance. This phase triggers automated responses such as containment or rollback upon detection. Monitoring also identifies when end-of-life conditions are met, initiating structured decommissioning procedures to safely archive relevant components and formally close the lifecycle loop.

We would like to extend our gratitude to the following for providing valuable feedback throughout the development of this framework:

Acknowledgements

Assaf Namer

Head of AI Security

Brandon Dixon

Former Partner AI Strategist

Steve Paek

Director, AI Security

Robert Oh

Digital & Information Officer (CDIO)

Sean Wright

CISO

Tomer Maman

CISO

Nir Yizhak

CISO & VP

Bill Stout

Technical Director, AI Product Security

Erika Anderson

Senior Security and Compliance

Raz Karmi

CISO

Manuel García-Cervigón

Security & Compliance Strategic Product Portfolio Architect

Steve Mancini

CISO

Vladimir Lazic

Deputy Global CISO

Allie Howe

vCISO

Kai Wittenburg

CEO

Ben Hacmon

CISO

James Berthoty

Founder & CEO

Steven Vandenburg

Security Architect

Mor Levi

VP Detection, Analysis & Response (DAR)

Chris Hughes

Founder

Francis Odum

Software Analyst Cybersecurity Research

Colton Ericksen

CISO

José J. Hernández

VP & Chief Information Security Officer

Moran Shalom

CISO

Casey Mott

Associate Director, Data & AI Security

Dušan Vuksanovic

CEO

Individual contributors

Cole Murray

AI Consultant

Generate Security

Matthew Steele

CPO

Individual contributors

Fabian Libeau

GTM Lead

Ready to
Get Your SAIL Analysis?

See how your AI security measures up against the SAIL framework and get a personalized roadmap for improvement - powered by the Pillar Security platform.

Step 1 of 2
Please enter valid work email
Next
Please enter your first name
Please enter your last name
Submit

See how your AI security measures up against the SAIL framework and get a personalized roadmap for improvement.

Thank you

We've received your message, and we'll follow up via email shortly