AI Security Comes of Age in the FY26 NDAA

Daniel Bardenstein
December 17, 2025

What the New Law Gets Right and Where the Real Work Begins

Artificial intelligence (AI) adoption inside the Department of War (DoW) is accelerating faster than almost any previous technology shift. AI is already embedded in intelligence analysis, logistics, cyber operations, targeting, and kinetic weapons, and its footprint is growing by the day. Yet while the White House and Secretary Hegseth continue to forcefully promote AI adoption, AI security has lagged severely behind.

The FY26 National Defense Authorization Act (NDAA) meaningfully changes that trajectory.

This year’s NDAA, which finally cleared Congress after the government shutdown, represents the most serious congressional effort to date to treat AI not as an experimental capability, but as a first-class national security asset that must be governed, secured, and procured accordingly. We commend Congress, particularly the Armed Services Committees and their staff, for advancing pragmatic, security-focused language that builds on existing cyber and acquisition frameworks rather than reinventing the wheel. At Manifest, we were honored to inform some of these discussions by sharing lessons from the field about AI supply-chain risk, transparency, and operational security.

The result is a bill that acknowledges a hard truth: AI systems inherit all the risks of software and introduce new ones that traditional security controls were never designed to handle. The FY26 NDAA doesn’t solve every AI security problem, but it establishes a foundation that matters.

What the NDAA Actually Does and Why It Matters

The AI security provisions in the FY26 NDAA are notable not for their rhetoric, but for how concretely they integrate AI into the Department’s existing security machinery.

1. A Department-Wide AI Security Baseline

Section 1512 requires the DoW to develop and implement a unified policy for the cybersecurity and governance of AI and machine learning systems. Importantly, this policy is lifecycle-based: it applies from development through deployment and sustainment, not just at authorization time.

The law explicitly calls out AI-specific threats: model tampering, adversarial attacks, data leakage, prompt injection, model extraction, jailbreaks, and supply-chain compromise. This is a crucial shift. AI is no longer treated as “just software,” but as a class of systems with distinct failure modes that demand tailored defenses.

2. AI Security Meets Procurement Reality

Equally significant is how the NDAA ties AI security to acquisition. The legislation directs the Department to amend procurement rules so that AI security requirements flow down to contractors, calibrated by risk and mission criticality. This aligns AI with the same acquisition levers that reshaped software security over the last decade.

For defense contractors and commercial AI vendors alike, the message is clear: security is no longer optional, and opacity will not be acceptable. Understanding what is inside the AI systems you sell to the Department—and where those components come from—will increasingly determine whether you can sell them at all.

3. Supply-Chain Transparency for AI

Perhaps the most forward-leaning element of the NDAA is its explicit linkage between software bills of materials (SBOMs) and AI systems. Congress makes clear that policies governing SBOMs should apply, to the extent practicable, to AI models, systems, and software, and encourages the use of model cards and similar transparency mechanisms.

This matters. AI systems are built from layered dependencies: open-source libraries, pre-trained models, fine-tuned weights, training datasets, tooling, and infrastructure. Without visibility into those components, securing AI is impossible. The NDAA recognizes that supply-chain transparency is foundational, not optional.

4. Workforce, Governance, and Senior-Level Oversight

Beyond technical controls, the NDAA reinforces AI governance through workforce training, continuous monitoring expectations, and senior-level oversight structures. From updated cybersecurity training to the establishment of an AI Futures Steering Committee, the law signals that AI risk is strategic, not merely technical.

Where the NDAA Falls Short

For all its strengths, the FY26 NDAA stops short of addressing several realities that DoW AI leaders already confront.

First, the law does not require a comprehensive, department-wide inventory of AI systems. You cannot govern or secure what you cannot see. As AI proliferates across programs, commands, and vendors, basic asset visibility becomes a prerequisite for any meaningful risk management.

Second, while the NDAA references supply-chain risk, it does not fully define how AI components should be inventoried, tracked, or monitored over time. AI systems change constantly as models are retrained, weights are updated, and datasets are refreshed. Static documentation is not enough.

Third, the legislation does not yet establish clear processes for identifying, disclosing, and mitigating AI-specific vulnerabilities over a model’s operational life. Traditional software vulnerability management has decades of process behind it. AI does not, and the gap is growing.

These gaps are especially notable given recent national security guidance, including directives restricting the use of software and hardware from adversarial nations. Applying those principles to AI without detailed provenance and dependency data is effectively impossible.

Why SBOMs and AIBOMs Matter More Than Ever

The NDAA’s acknowledgment of AI supply-chain transparency is a critical first step, but the need goes further.

AI Bills of Materials (AIBOMs) extend the SBOM concept to AI systems, capturing not just code dependencies, but models, training data sources, weights, tooling, and lineage. With an AIBOM, mission owners can understand how a model was built, how it has changed, what it depends on, and where risk may enter the system.
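
To make this concrete, here is a minimal sketch of what a single AIBOM entry might look like, loosely modeled on the machine-learning-model component type from the CycloneDX 1.5 specification. The model, dataset, and version names below are hypothetical, and the exact fields any organization captures will vary:

  import json

  # Illustrative AIBOM sketch, loosely following CycloneDX 1.5's
  # "machine-learning-model" component type. All names and versions
  # are hypothetical examples, not a real program of record.
  aibom = {
      "bomFormat": "CycloneDX",
      "specVersion": "1.5",
      "components": [
          {
              "type": "machine-learning-model",
              "name": "route-planner-llm",   # hypothetical model
              "version": "2.3.0",            # tracks the weights, not just code
              "modelCard": {
                  "modelParameters": {
                      "task": "text-generation",
                      "architectureFamily": "transformer",
                      "datasets": [
                          # training-data lineage: a common entry point for risk
                          {"type": "dataset", "name": "logistics-corpus-v4"}
                      ],
                  },
              },
          },
          {
              # a conventional code dependency, as in a classic SBOM
              "type": "library",
              "name": "pytorch",
              "version": "2.4.1",
          },
      ],
  }

  print(json.dumps(aibom, indent=2))

The format itself matters less than what it records: model weights, training data, and conventional code dependencies side by side, with enough lineage to answer “what changed, and where did it come from?” when a component is flagged.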

Without this visibility, AI remains a black box. With it, security teams can trace exposure, assess impact, enforce procurement restrictions, and respond to compromise with speed and confidence. As AI becomes embedded in mission-critical systems, this level of transparency will become table stakes.

What This Means for DoW Security and Procurement Leaders, and the DIB

For Chief AI Officers, CISOs, and acquisition leaders, the implications are immediate:

  • AI systems must be treated as governed assets, not experimental tools
  • Vendors will increasingly be expected to provide transparency into their AI supply chains
  • Manual documentation and spreadsheets will not scale
  • Continuous monitoring, inventory management, and automated risk analysis will be required

Meanwhile, defense companies that sell AI-enabled software to the DoW must prepare to generate accurate SBOMs and AIBOMs at scale.

The NDAA sets the direction. Execution will determine whether it succeeds.

How Manifest Helps Close the Gap

Manifest was built for exactly this moment.

We help government and defense organizations gain visibility into the software and AI systems they build and buy—generating SBOMs and AIBOMs, tracking AI inventories, analyzing dependencies, and integrating AI security into existing cyber and acquisition workflows. As a FedRAMP High Authorized platform, Manifest enables teams to operationalize the intent of the NDAA at mission scale, without slowing delivery.

Congress has set a clear signal: AI security is no longer optional, and opacity is no longer acceptable. Manifest helps turn that policy signal into operational reality.

If you’re responsible for securing, procuring, or governing AI inside the Department of War, now is the time to prepare.

The direction is set. The hard work begins now.
