AI at Black Hat USA 2025

Daniel Bardenstein
August 14, 2025

Hot, Urgent, and Missing the Foundations

Last week in Las Vegas, I walked the Black Hat USA 2025 floor and sat in on conversations with CISOs, security engineers, and vendors. It’s hard to overstate: AI was everywhere, not just in the marketing decks but in the hallways, the briefings, and the private CISO dinners.

And yet, for all the noise and excitement, the gap between AI adoption and AI risk management is wide.

AI: Everyone’s Priority, Few Have a Plan

CISOs told me AI has moved into their top priorities for 2025. It’s not just “emerging tech” anymore — it’s part of board conversations, product roadmaps, and operational budgets. But this pressure is coming from the top down, not the ground up.

The gap?

  • No real inventories — Most orgs can’t answer basic questions like: What AI models are we running? Where are they hosted? Who trained them? On what data? (See the inventory sketch after this list.)
  • Blind spots in risk categories — Security teams might think about prompt injection or model poisoning, but miss legal exposures (e.g., copyrighted data in training sets) or business risks (e.g., models drifting and making bad decisions).
  • Fragmented tooling — There’s no single pane of glass for AI risk. Instead, teams are piecing together logs, vendor assurances, and manual spreadsheets.
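
To make the inventory point concrete, here’s a minimal sketch of the kind of record that could answer those four questions. The AIAsset structure and its field names are illustrative assumptions on my part, not a standard; the point is that even this spreadsheet-level detail is more than most orgs can produce today.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: "AIAsset" and its fields are assumptions,
# not a standard schema. Each record should answer the basics:
# what model, where hosted, who trained it, on what data.

@dataclass
class AIAsset:
    name: str                    # e.g. "support-chat-llm"
    model_family: str            # e.g. a vendor LLM, or an in-house model
    hosting: str                 # e.g. "vendor-api", "aws-us-east-1", "on-prem"
    owner: str                   # team accountable for the model
    trained_by: str              # vendor or internal team that trained it
    training_data: list[str] = field(default_factory=list)  # known datasets

inventory: list[AIAsset] = [
    AIAsset(
        name="support-chat-llm",
        model_family="vendor-hosted LLM",
        hosting="vendor-api",
        owner="customer-support-eng",
        trained_by="vendor",
        training_data=["unknown"],  # "unknown" is itself a finding
    ),
]

# Even a flat list like this beats not being able to answer
# "what AI models are we running?" at all.
for asset in inventory:
    print(f"{asset.name}: hosted={asset.hosting}, data={asset.training_data}")
```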

What struck me most is that even seasoned security leaders don’t yet have a mental model for AI risk that’s as well-developed as the ones they use for software or infrastructure. AI is still being treated as “special” — which means it’s often outside existing governance and detection pipelines.

Why This Matters Now

When AI adoption was experimental, you could argue this was fine: low risk, low stakes. That’s no longer the case. I spoke to one CISO whose company is now using an AI-powered decision engine in a core product line. That model has been retrained three times in the past six months, with minimal change control and no documented model provenance.

If that model makes a bad call, or worse, is manipulated, the impact won’t be an interesting academic case study. It will be a business outage, a regulatory nightmare, or a public breach of trust.

What’s Missing in AI Risk Programs

From my conversations, the missing pieces are consistent:

  • AI Bills of Materials (AIBOMs) — The AI analog to SBOMs, capturing models, datasets, dependencies, and provenance in a machine-readable format (see the sketch after this list).
  • Lineage tracking and version control — Being able to say, “This is Model v3.1, trained on Dataset X as of May 2025, with weights modified on July 14.”
  • Threat modeling specific to AI — Moving beyond generic “protect the endpoint” thinking to account for model theft, data poisoning, adversarial examples, and prompt injection.
  • Integration into existing security tooling — so AI assets are monitored, logged, and protected like any other critical system.
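
To make the first two items concrete, here’s a rough sketch of what a machine-readable AIBOM record might capture, using the lineage example above. The field names are illustrative assumptions, not an actual schema (CycloneDX’s ML-BOM work is the obvious place for this to standardize):

```python
import json

# Hand-rolled illustration of what an AIBOM record might capture.
# Field names are assumptions for clarity, not a real BOM schema.

aibom = {
    "model": {
        "name": "decision-engine",
        "version": "3.1",                      # "This is Model v3.1..."
        "weights_last_modified": "2025-07-14", # "...weights modified on July 14"
    },
    "training": {
        "dataset": "Dataset X",
        "dataset_snapshot": "2025-05",         # "...as of May 2025"
        "trained_by": "ml-platform-team",
    },
    "dependencies": [
        {"name": "pytorch", "version": "2.3.0"},
        {"name": "base-model", "version": "llama-3-70b"},
    ],
    "provenance": {
        "retrain_history": ["2025-02-10", "2025-04-22", "2025-07-14"],
        "change_control_ticket": None,         # None here is exactly the gap
    },
}

print(json.dumps(aibom, indent=2))
```

Because the record is machine-readable, it can feed the same pipelines that already consume SBOMs, which is exactly what the last bullet is asking for.
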
Still Drowning in Vulnerabilities

AI wasn’t the only pain point. Vulnerability management came up constantly, and the story hasn’t changed much from past years:

Too many findings.

Too little context.

Not enough automation to separate real threats from background noise.

The security teams that are winning here are finding, or building, tools that cut through the noise, automatically remove low-impact vulns, and elevate the ones that truly matter. Everyone else is still buried.
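
For illustration, here’s a toy version of that triage logic. The fields and thresholds are made up, but the idea is real: context, not raw severity, decides what surfaces.

```python
# Toy triage pass: field names and multipliers are illustrative assumptions.
# The idea: use context (exploitation, reachability, asset criticality)
# to suppress low-impact findings and surface the ones that matter.

findings = [
    {"id": "CVE-A", "cvss": 9.8, "exploited_in_wild": False, "reachable": False, "asset_critical": False},
    {"id": "CVE-B", "cvss": 7.5, "exploited_in_wild": True,  "reachable": True,  "asset_critical": True},
    {"id": "CVE-C", "cvss": 5.3, "exploited_in_wild": False, "reachable": True,  "asset_critical": False},
]

def priority(f: dict) -> float:
    score = f["cvss"]
    score *= 2.0 if f["exploited_in_wild"] else 1.0   # known exploitation dominates
    score *= 1.5 if f["reachable"] else 0.3           # unreachable code barely counts
    score *= 1.5 if f["asset_critical"] else 1.0
    return score

triaged = sorted(findings, key=priority, reverse=True)
actionable = [f for f in triaged if priority(f) > 10]  # everything else is backlog

for f in actionable:
    print(f"{f['id']}: priority={priority(f):.1f}")
```

Note what happens: the 9.8-CVSS finding gets suppressed because it isn’t reachable or exploited, while the 7.5 with real-world exploitation jumps to the top. That’s the noise reduction these teams are after.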

Third-Party Risk is Still a Blind Spot

Walking through the vendor hall was a reminder that our software supply chains are more complex, and opaque, than ever. Most orgs don’t have real visibility into what’s inside the tools they buy, especially when those tools are using AI components under the hood.

The TPRM problem isn’t new, but with AI in the mix, the stakes are higher. Without knowing the provenance of models, datasets, and dependencies, you’re trusting your risk posture to hope and vendor assurances. That’s not a strategy.

The Takeaway

Black Hat confirmed what I’ve been saying for months: AI is becoming foundational to enterprise operations, but AI security is still in its infancy. The risk is no longer hypothetical, but the tooling, processes, and governance are still catching up.

Until we treat AI like any other software, with inventories, provenance, and continuous monitoring, we’re playing defense in the dark.
