Last week, the Biden-Harris Administration made a landmark announcement by issuing the first-ever National Security Memorandum (NSM) on Artificial Intelligence (AI). This decisive action acknowledges that advancements at the forefront of AI technology will have significant implications for national security and foreign policy in the near future. The NSM emphasizes the importance of the United States leading the development of safe, secure, and trustworthy AI systems.
In tandem with the NSM, the administration released the Framework to Advance AI Governance and Risk Management in National Security. This framework outlines a coordinated approach to harness the power of AI in national security applications while effectively managing associated risks. It sets forth guidelines for responsible AI adoption, emphasizing the need for governance structures, risk assessments, accountability mechanisms, and transparency. The framework aims to ensure that AI technologies are developed and used in ways that uphold civil liberties, privacy, and international norms.
Below, we analyze the key technical requirements outlined in this framework and how Pillar Security's technology stack aligns with and supports these critical national security objectives.
| **Requirement** | **Technical Specification** | **Pillar’s Platform Capabilities** |
|---------------------------------------------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|
| Risk and Impact Assessments | Complete an AI risk and impact assessment before using new or existing high-impact AI systems. This includes identifying intended purposes, expected benefits, and potential risks. | Pillar's [Discover module] (https://www.pillar.security/platform/ai-development) provides continuous mapping of all AI assets, including applications, models, prompts, and tools. By tracking the entire flow of data and interactions, we offer comprehensive visibility that aids in early risk identification and effective risk assessments before deploying high-impact AI systems. |
| Testing AI Systems | Test AI systems sufficiently in realistic contexts to confirm they perform as intended and achieve expected benefits while mitigating risks. | With Pillar's [Evaluate module] (https://www.pillar.security/platform/ai-development), you can conduct tailored red-teaming exercises designed for your specific use cases. Pillar's engine automatically simulates realistic attack scenarios, helping you uncover hidden vulnerabilities and improve your defenses. |
| Independent Evaluation | Conduct independent evaluations specific to the intended purpose and deployment of AI systems, making evaluations available to AI Governance Boards or equivalent. | Pillar facilitates independent evaluations through comprehensive assessment tools and detailed reporting. Our platform's auditing capabilities provide insights into AI system performance and risks, supporting accountability and informed decision-making by governance boards. |
| Human Oversight | Ensure appropriate human consideration and oversight of AI-based decisions, establishing clear human accountability for decisions and actions. | Pillar's [Enforce module] (https://www.pillar.security/platform/ai-usage) implements safety and security policies across AI applications with configurable human-in-the-loop workflows and role-based access controls (RBAC), ensuring human oversight and accountability over AI system decisions and actions. |
| Reporting Mechanisms | Maintain processes and protections for AI operators to report unsafe, anomalous, inappropriate, or prohibited uses through appropriate channels. | Pillar includes secure reporting and feedback mechanisms within its platform, allowing operators to flag issues related to AI system operations,findings and anomalies. |
| Monitoring and Testing | Regularly monitor and test the operation, efficacy, and risks of AI systems, making assessments available to operators. | Pillar's [Observe module] (https://www.pillar.security/platform/ai-application) monitors the quality of your AI applications in real-time, sending alerts when problematic outputs are generated. Using our [Evaluate module] (https://www.pillar.security/platform/ai-development), operators can easily configure and run red teaming exercises on AI systems. |
| Periodic Human Reviews | Conduct periodic human reviews to assess changes in context, risks, benefits, and agency needs related to AI systems. | Pillar supports scheduled reviews and audits of AI systems by providing ongoing analytics and comprehensive reports, allowing for human assessment of AI resilience to risks and alignment with organizational objectives. |
| Emerging Risk Mitigation | Mitigate emerging risks identified through monitoring, reviews, or other mechanisms. | Pillar's adaptive protection, powered by threat intelligence, swiftly mitigates emerging risks. By analyzing extensive datasets of real-world app interactions, we deliver precise alerts with minimal false positives, enabling proactive security updates as new threats emerge. |
| Escalation Processes | Maintain processes for internal escalation and senior leadership approval for AI uses posing significant risks or affecting international norms. | Our platform includes escalation protocols and configurable notifications. Risk assessments and security alerts can be routed to appropriate stakeholders, ensuring senior leadership is involved in decisions regarding high-risk AI applications. |
| Data Handling Guidelines | Establish guidelines for handling AI models trained on sensitive or improperly obtained data. | Pillar empowers teams to establish strict data handling policies by ensuring AI models and datasets comply with regulatory standards, and by managing sensitive information securely within the company’s own cloud premise. |
| Standards for AI Evaluation | Develop standards for AI evaluations and auditing. | Pillar aligns with leading risk frameworks like MITRE ATLAS, OWASP, and NIST AMF. Techniques and methods are mapped to these guidelines in order to provide a standardized way for AI system evaluation and benchmarking. |
| AI Inventory | Conduct an annual inventory of high-impact AI use cases, including descriptions, purposes, benefits, risks, and risk management strategies. | Pillar's platform assists in mapping AI systems and maintaining detailed records, offering insights into AI assets across development, production, and usage phases for compliance and oversight purposes. |
| Oversight and Transparency | Appoint a Chief AI Officer to oversee AI governance and risk management practices. | Pillar provides governance tools, dashboards and reporting that support Chief AI Officers in overseeing AI initiatives, ensuring alignment with policies and facilitating effective risk management. |
| Incident Documentation and Reporting | Document AI misuse, incidents, and lessons learned, integrating into existing reporting requirements, and making reports available to the public when appropriate. | Pillar includes incident tracking and reporting functionalities, helping organizations document issues, analyze root causes, and inform stakeholders while maintaining transparency and compliance. |
| Accountability Mechanisms | Update policies and procedures to ensure accountability for those involved in AI development, deployment, and use, including approvals by appropriate officials and mechanisms for reporting incidents of misuse. | Pillar's platform supports policy enforcement and compliance management, with features like role-based access controls (RBAC) and owner assignment, ensuring accountability and adherence to organizational policies. |
| Data Retention Policies | Review and update data retention policies and procedures, considering the unique attributes of AI systems and prioritizing enterprise applications. | Pillar provides data governance tools to manage data flow and retention in compliance with organizational policies. We enable tracking and control over data used in AI systems, ensuring secure and compliant data handling throughout the AI lifecycle. |
| Cybersecurity Risk Mitigation | Follow directives to mitigate cybersecurity risks associated with AI systems. | Pillar offers robust security features, such as adaptive guardrails in the [Protect module] (https://www.pillar.security/platform/ai-application), to defend against runtime and AI-specific threats, aligning with national cybersecurity directives and standards. |
| Shared Information with Oversight Officials | Ensure oversight officials have sufficient information, expertise, training, and funding to effectively carry out functions related to AI oversight, including advising on managing risks to privacy, civil liberties, transparency, safety. | Pillar's platform provides thorough visibility and detailed reporting of AI activities, supporting oversight officials with the necessary information to manage risks effectively. Actionable insights aid in advising on privacy, civil liberties, transparency, and safety concerns. |
| Public Transparency Reports | Periodically submit reports on activities associated with AI oversight, making them available to the public to the greatest extent consistent with protection of classified or controlled information and applicable laws. | We assist in generating comprehensive reports on AI activities, incidents, and risk management efforts. These reports can be tailored for public disclosure while ensuring sensitive information remains protected, helping you meet transparency obligations and build public trust. |
Conclusion
The National Security Memorandum on AI marks a pivotal shift in federal AI governance, setting clear expectations for secure and responsible AI deployment in national security contexts. As organizations adapt to these new requirements, implementing robust technical solutions becomes crucial.
Pillar Security stands ready to support this transition, offering advanced technology and expertise that align with federal guidelines while ensuring operational efficiency. Our commitment to security, transparency, and ethical AI aligns with the framework's requirements, enabling organizations to innovate with AI technologies while maintaining the highest standards of safety, privacy, and compliance.