New NDAA provisions address mission critical gaps in AI security and governance
Artificial intelligence (AI) is rapidly reshaping how the Department of Defense fights, moves, and decides. But the speed of adoption has outpaced the systems needed to govern it. Models are deployed without clear oversight. Dependencies are undocumented. And when things go wrong, it’s often unclear what was used, or who was responsible.
Historically, the federal government has lagged behind industry in setting guardrails for emerging technologies, often reacting only after harm is done. Here, Congress deserves substantial credit for acting promptly and decisively on AI risk.
The new National Defense Authorization Act (NDAA) provision reflects a forward-looking and deeply pragmatic understanding: that national security now depends not just on using AI, but on governing it. This legislation doesn’t just mandate compliance. It sets a clear standard for transparency, accountability, and control in how AI is deployed across the DoD.
Now, the Department must act.
What the NDAA Requires
The legislation directs the Secretary of Defense to issue a DoD-wide AI cybersecurity and governance policy within 180 days. Under that policy, the DoD will be required to:
- Address threats like model tampering, adversarial inputs, and prompt injection
- Cover the full AI lifecycle, from training to deployment to retirement
- Train the DoD workforce on AI-specific security risks
By August 2026, the Department will be required to report to Congress on implementation, gaps, and resource needs.
This isn’t a symbolic move. It’s a forced reckoning with the reality that we are deploying powerful AI systems faster than we can track, validate, or secure them.
The Compliance Challenge
Most agencies aren’t set up to meet these requirements, because doing so is genuinely hard.
Today, AI models often come from vendors or open-source repositories with little documentation. Without automated tools, assessing, tracking, and monitoring those models is a highly manual, tedious process. Even internally developed models can be hard to account for: teams lack shared tools to track dependencies, confirm a model’s integrity and lineage, determine legal usage, or document where and how a model is used. Security reviews are often ad hoc or static. And the same difficulties apply to the datasets used to train AI models.
This is how AI risk goes undetected. And when that risk surfaces, the consequences are real:
- Trusted models can contain vulnerabilities that can be exploited to alter their performance
- The DoD could unknowingly deploy public models or datasets illegally, since some licenses prohibit use for defense or military applications (a minimal screening sketch follows this list)
- AI policies could be unknowingly circumvented, for example by relying on models derived from other prohibited models (prohibited because of legal restrictions, security concerns, or ties to near-peer adversaries such as China)
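The license problem in particular lends itself to automation. Below is a minimal sketch, in Python, of how a program office could screen a model’s declared license against a denylist of use-restricted licenses before approving it for a defense use case. The license identifiers and metadata fields are illustrative placeholders, not a vetted or exhaustive list.

```python
# Minimal sketch: screen a model's declared license against a denylist of
# use-restricted licenses. The identifiers below are illustrative, not vetted.

RESTRICTED_LICENSES = {
    "creativeml-openrail-m",   # example of a license with behavioral use restrictions
    "bigscience-openrail-m",   # example of a license with behavioral use restrictions
    "cc-by-nc-4.0",            # non-commercial use only
}

def screen_model(metadata: dict) -> list[str]:
    """Return findings that should block approval for a DoD use case."""
    findings = []
    license_id = (metadata.get("license") or "").lower()
    if not license_id:
        findings.append("No license declared; legal usage cannot be confirmed.")
    elif license_id in RESTRICTED_LICENSES:
        findings.append(f"License '{license_id}' carries use restrictions.")
    # Derived models can inherit restrictions from their base model.
    base_license = (metadata.get("base_model_license") or "").lower()
    if base_license in RESTRICTED_LICENSES:
        findings.append(f"Base model license '{base_license}' carries use restrictions.")
    return findings

if __name__ == "__main__":
    model_card = {  # hypothetical metadata pulled from a model card or registry
        "name": "example-classifier",
        "license": "creativeml-openrail-m",
        "base_model_license": "apache-2.0",
    }
    for finding in screen_model(model_card):
        print("BLOCK:", finding)
```

A check like this only works if license and lineage metadata are captured in the first place, which is exactly what the SBOM requirement below is meant to guarantee.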

Why AI SBOMs Matter
One of the NDAA’s most significant requirements is the creation of an AI Software Bill of Materials (AI SBOM) for every AI system. That means every model, dataset, library, and dependency must be tracked and disclosed.
Agencies must now:
- Understand where models came from and who built them
- Understand the process of how models were trained and developed, including the underlying software and datasets
- Ensure the legal usage of models and datasets for DoD use cases
- Ensure full traceability and transparency of the AI stack
- Update procurement processes to enforce SBOM compliance
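To make the requirement concrete, here is a minimal sketch of what an AI SBOM might capture, written as a Python script that emits a CycloneDX-style document covering a model, its training dataset, and one software dependency. The component names, versions, and licenses are placeholders, and a production SBOM would be generated by tooling against the full CycloneDX (or SPDX) schema rather than assembled by hand.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(artifact: Path) -> str:
    """Hash an artifact so its SBOM entry can be verified later."""
    return hashlib.sha256(artifact.read_bytes()).hexdigest()

# Illustrative CycloneDX-style AI SBOM. All field values are placeholders.
ai_sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {
            "type": "machine-learning-model",
            "name": "example-target-classifier",
            "version": "2.3.0",
            "supplier": {"name": "Example Vendor"},
            "licenses": [{"license": {"id": "Apache-2.0"}}],
            # Add a hash entry once the model artifact is on hand, e.g.:
            # "hashes": [{"alg": "SHA-256", "content": sha256_of(Path("model.onnx"))}],
        },
        {
            "type": "data",
            "name": "example-training-set",
            "version": "2024-07",
            "licenses": [{"license": {"id": "CC-BY-4.0"}}],
        },
        {
            "type": "library",
            "name": "torch",
            "version": "2.3.1",
        },
    ],
}

Path("ai-sbom.json").write_text(json.dumps(ai_sbom, indent=2))
print("Wrote ai-sbom.json with", len(ai_sbom["components"]), "components")
```

Even a skeletal document like this answers the questions the NDAA is driving at: what the model is, who supplied it, what it was trained on, and what it depends on.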
This isn’t just about detecting vulnerabilities; it’s about managing models as strategic assets. Without that discipline, there’s no way to validate that AI systems are behaving as intended, or even to know what’s deployed where.
Governance Is Just as Critical as Detection
It’s not enough to scan models for known risks. Agencies need governance mechanisms to:
- Track model provenance and updates
- Ensure policy compliance over time
- Flag unauthorized changes or shadow deployments
- Support audits, red-teaming, or incident response with reliable data
Today, most programs can’t answer basic questions like: What models are running in production? What data were they trained on? Have they been altered since deployment?
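As one hedged illustration of what such a mechanism can look like, the sketch below records a cryptographic baseline for each deployed model artifact and then re-hashes the artifacts on demand, alerting on any mismatch. The file paths and baseline registry format are hypothetical; the point is that “has this model been altered since deployment?” becomes a cheap, automatable check once a baseline exists.

```python
import hashlib
import json
from pathlib import Path

BASELINE_FILE = Path("model_baselines.json")  # hypothetical baseline registry

def fingerprint(artifact: Path) -> str:
    """SHA-256 of the model artifact as stored on disk."""
    return hashlib.sha256(artifact.read_bytes()).hexdigest()

def record_baseline(artifacts: list[Path]) -> None:
    """Capture known-good hashes at deployment time."""
    baselines = {str(p): fingerprint(p) for p in artifacts}
    BASELINE_FILE.write_text(json.dumps(baselines, indent=2))

def audit(artifacts: list[Path]) -> list[str]:
    """Return artifacts whose current hash no longer matches the baseline."""
    baselines = json.loads(BASELINE_FILE.read_text())
    drifted = []
    for artifact in artifacts:
        expected = baselines.get(str(artifact))
        if expected is None or fingerprint(artifact) != expected:
            drifted.append(str(artifact))
    return drifted

if __name__ == "__main__":
    deployed = [Path("models/classifier-v2.onnx")]  # hypothetical deployment paths
    if not BASELINE_FILE.exists():
        record_baseline(deployed)
    for path in audit(deployed):
        print(f"ALERT: {path} has changed since its recorded baseline")
```

Hashing alone does not cover provenance, training data, or policy drift, but it shows how governance questions turn into routine checks once the underlying inventory exists.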
Without AI governance, you don’t just lose visibility; you lose control.
Manifest: Purpose-Built and Ready to Deploy
DoD agencies don’t have to look far for a solution. Manifest has been building toward this moment from day one—developing a platform specifically designed to meet the AI security, transparency, and governance challenges now mandated by the NDAA.
Our platform is already FedRAMP High authorized, IL5 accredited, and in use across the national security community. It can be deployed immediately to help agencies gain control over their AI systems without waiting years for custom development or integration.
With Manifest, agencies get:
- AI SBOM generation and analysis: Instant visibility into every model’s components, dependencies, datasets, and provenance
- Continuous AI assessment and monitoring: Maintain real-time visibility across all AI assets to detect changes, assess risks, and maintain security posture
- Lifecycle tracking and monitoring: Trace changes over time, detect unauthorized modifications, and maintain version control
- Policy enforcement at scale: Automatically flag untrusted inputs, fine-tuning risks, and policy violations
- Audit and compliance readiness: Out-of-the-box alignment with NIST AI RMF, EO 14028, and DoD AI Ethical Principles
- Interoperability with existing tools: Seamlessly integrates into current cybersecurity and DevSecOps workflows
Manifest delivers what the NDAA requires, and what mission owners need: a trusted, deployable platform that turns AI transparency from an aspiration into reality. Talk to our team to learn how you can get a head start on the new AI SBOM requirements.
Final Take
The NDAA provisions don’t create new problems; they expose the ones we’ve already been living with. And they make clear what’s at stake: national security systems running on opaque, ungoverned AI are not just risky; they’re unacceptable.
Visibility, governance, and control over AI systems are now baseline requirements for doing business in the defense ecosystem. The question for every program office, acquisition lead, and mission owner is simple: can you see what’s inside your AI, and can you prove it?