Governing AI in Practice

Alexa Rzasa
September 15, 2025

Challenges and Outcomes Across Security, Risk, and Compliance

AI adoption is moving fast, with more and more companies becoming “AI first”. Enterprises are experimenting with new models, embedding AI into software, and engaging vendors whose products now include AI features by default. But with this innovation comes a set of challenges that can’t be ignored: security gaps, opaque supply chains, and emerging compliance requirements.

Across industries, three groups are on the front lines of these challenges: product security teams, third-party risk management (TPRM), and governance, risk, and compliance (GRC). Each faces unique pressures, but they all share a need for visibility, governance, and accountability in AI adoption.

Challenge 1: Product Security Teams and the Hidden AI Supply Chain

Modern applications aren't just shipping software; they're increasingly shipping AI. Teams may embed open-weight models and fine-tuned custom ones, yet product security teams often lack visibility into which models and datasets are being used, how they were trained, and whether they comply with company policies.

This creates real risk when developers unknowingly build on top of unvetted models or expose proprietary data to AI components with unclear provenance. Consider a US-based healthcare organization that planned to adopt an open-weight model from the Hugging Face leaderboard: due diligence revealed risky training datasets and questionable licensing terms.

The Outcome
When product security teams gain visibility into the models embedded in source code and shipped software, they can:

  • Identify AI components early in the development cycle.
  • Assess the risks of open-weight or custom models before sensitive data is introduced.
  • Prevent non-compliant models from reaching production.

The most efficient path to this visibility is implementing AI Bill of Materials (AIBOM) tracking. This means not only generating BOMs for models and datasets, but also storing and inventorying them, scanning them, and monitoring them continuously. The result is faster, safer development: teams keep their velocity while managing risk appropriately and staying within internal AI governance policies.
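To make this concrete, here is a minimal sketch of AIBOM generation in Python, emitting a CycloneDX-style document for one embedded model and its fine-tuning dataset. The file paths, component names, and license identifiers are hypothetical placeholders, and real AIBOM tooling records far more metadata; this only illustrates the shape of the artifact.

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a local artifact (e.g., model weights)."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Minimal CycloneDX-style AIBOM: one embedded model plus its fine-tuning
# dataset. Names, paths, and licenses below are hypothetical placeholders.
aibom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.6",
    "components": [
        {
            "type": "machine-learning-model",
            "name": "example-open-weight-llm",
            "version": "1.0",
            "licenses": [{"license": {"id": "Apache-2.0"}}],
            "hashes": [{
                "alg": "SHA-256",
                "content": sha256_of(Path("models/example.safetensors")),
            }],
        },
        {
            "type": "data",
            "name": "example-finetune-dataset",
            "version": "2024-06",
        },
    ],
}

Path("aibom.json").write_text(json.dumps(aibom, indent=2))
```

Storing documents like this in a central inventory, and regenerating them whenever a model or dataset changes, is what turns a one-off BOM into continuous monitoring.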

Challenge 2: Third-Party Risk Management and Vendor AI

Vendors are rapidly integrating AI capabilities into their offerings to capture efficiency gains and competitive advantages. But organizations purchasing those products often have little visibility into which models are being used, where they come from, or whether they comply with licensing and governance requirements.

This lack of transparency turns vendor AI into a significant blind spot in the enterprise risk landscape. Traditional third-party risk assessments may not adequately address AI-specific concerns such as model provenance, training data sources, or ongoing model updates that could alter risk profiles. Critical questions often go unanswered: Is the vendor actually using AI in their product as claimed, or are they obscuring its use? Are they relying on trusted, well-vetted models like GPT-5 or Llama-3.3, or on a homegrown model of uncertain quality and security? And most importantly, how can organizations trust the model’s performance in high-stakes environments, such as a hospital purchasing AI-enabled diagnostic devices from a medical device manufacturer?

The challenge is compounded by the rapid pace of AI integration across vendor ecosystems. Suppliers may implement AI capabilities without explicit notification to customers, or may change underlying models in ways that affect data handling, performance, or compliance status.

The Outcome
Third-party risk management teams can address this by assessing vendor AI use with the same rigor applied to other software components. With this insight, they can:

  • Flag high-risk vendor AI before procurement is finalized.
  • Ensure AI adoption across the supply chain aligns with organizational policies.
  • Strengthen defensibility in vendor risk assessments and audits.

By extending existing third-party risk frameworks to encompass AI-specific considerations, organizations can reduce exposure while building more trustworthy vendor relationships. This includes implementing AI-focused vendor questionnaires, requiring transparency into model usage, and establishing ongoing monitoring of vendor AI practices.
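One lightweight way to operationalize AI-focused vendor questionnaires is to capture each vendor’s disclosure as a structured record and evaluate it automatically at procurement and renewal time. The sketch below assumes a hypothetical record format and vetted-model list; real questionnaires and risk scoring would be considerably richer.

```python
from dataclasses import dataclass

# Hypothetical allowlist of model families the organization has already vetted.
APPROVED_MODEL_FAMILIES = {"gpt-5", "llama-3.3"}

@dataclass
class VendorAIDisclosure:
    vendor: str
    uses_ai: bool
    model_family: str | None = None        # e.g., "gpt-5", "homegrown"
    training_data_described: bool = False  # did the vendor document data sources?
    notifies_on_model_change: bool = False # will they flag underlying model swaps?

def assess(d: VendorAIDisclosure) -> list[str]:
    """Return risk findings for one vendor; an empty list means nothing flagged."""
    findings: list[str] = []
    if not d.uses_ai:
        return findings
    if d.model_family not in APPROVED_MODEL_FAMILIES:
        findings.append(f"{d.vendor}: model family '{d.model_family}' is not vetted")
    if not d.training_data_described:
        findings.append(f"{d.vendor}: no transparency into training data sources")
    if not d.notifies_on_model_change:
        findings.append(f"{d.vendor}: no commitment to notify on model changes")
    return findings

# Example: a device maker disclosing a homegrown model with no transparency.
print(assess(VendorAIDisclosure(vendor="ExampleMedDevice", uses_ai=True,
                                model_family="homegrown")))
```

Running checks like this on every disclosure makes the ongoing-monitoring piece routine: when a vendor’s answers change at renewal, the new findings surface automatically.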

Challenge 3: GRC and the New AI Compliance Mandate

From the FDA to the DoD, regulators are moving quickly to extend oversight to AI systems, treating them much like software in terms of accountability and transparency requirements. The EU AI Act and the Cyber Resilience Act (CRA) are accelerating this momentum globally, setting stringent requirements around model documentation, risk classification, and system transparency. For GRC teams, this means developing comprehensive policies around licensing, model provenance, and dataset transparency, while proving that AI use complies with both existing and emerging regulatory frameworks.

Without visibility into AI components across the organization, GRC teams face significant challenges in ensuring compliance and demonstrating responsible AI practices. The dynamic nature of AI systems, where models can be fine-tuned, modified, or replaced, adds complexity to traditional compliance approaches.

Even forward-thinking organizations are hitting roadblocks. A multinational software company, for example, discovered a developer had downloaded a restricted model, DeepSeek, in violation of company policy, then fine-tuned it to masquerade as an approved alternative. This single incident underscored the urgent need for comprehensive AI governance capable of detecting, tracing, and preventing policy violations across the development lifecycle.
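One control that helps catch this class of violation is comparing the cryptographic digests of model artifacts discovered in repositories against known restricted and approved artifacts. A minimal sketch follows, with hypothetical placeholder digests. Note the limits: exact-hash matching only flags unmodified restricted files, and spotting a fine-tuned derivative masquerading as an approved model requires deeper lineage and provenance analysis.

```python
import hashlib
from pathlib import Path

# Hypothetical digest lists maintained by the governance team.
RESTRICTED_DIGESTS = {"3b4f9c0e..."}  # known restricted artifacts (placeholder)
APPROVED_DIGESTS = {"a81e02d7..."}    # vetted, policy-approved artifacts

def classify_artifact(path: Path) -> str:
    """Classify a model weight file by exact SHA-256 match.

    Exact matching only flags unmodified restricted files; a fine-tuned
    derivative has different weights and needs provenance analysis instead.
    """
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest in RESTRICTED_DIGESTS:
        return "restricted"
    if digest in APPROVED_DIGESTS:
        return "approved"
    return "unknown"  # unknown artifacts should trigger review, not pass silently

for weights in Path("models").rglob("*.safetensors"):
    print(weights, classify_artifact(weights))
```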

The stakes aren’t just theoretical. In one high-profile case, a senior executive at a US-based food and beverage company was hauled before Congress to testify on the company’s use of AI in hiring. Lawmakers pressed for evidence that the systems weren’t discriminatory. What followed was a hair-on-fire manual scramble: teams were forced to piece together model inventories, training data lineage, and compliance documentation just to prove the company’s practices were clean. The lesson was clear: without disciplined AI governance, even well-intentioned organizations risk public scrutiny, reputational damage, and regulatory penalties.

The Outcome
When GRC teams have a clear view of model inventory and usage, they can:

  • Enforce governance policies around acceptable AI use.
  • Ensure compliance with new regulatory frameworks as they evolve.
  • Provide evidence of responsible AI practices to auditors, regulators, and customers.
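As an illustration of the evidence piece, the sketch below queries a hypothetical model inventory for everything tied to a given business use, such as hiring, and flags documentation gaps. The field names and records are assumptions for illustration; a real inventory would be backed by stored AIBOMs and an asset database.

```python
import json

# Hypothetical in-memory model inventory; in practice this would be backed by
# stored AIBOMs and an asset management system.
INVENTORY = [
    {"model": "resume-ranker-v2", "business_use": "hiring",
     "license": "proprietary", "training_data_documented": True,
     "bias_evaluated": True},
    {"model": "example-open-weight-llm", "business_use": "support-chat",
     "license": "Apache-2.0", "training_data_documented": False,
     "bias_evaluated": False},
]

def evidence_report(use_case: str) -> list[dict]:
    """Collect every model tied to a business use and flag documentation gaps."""
    report = []
    for record in INVENTORY:
        if record["business_use"] != use_case:
            continue
        gaps = [k for k in ("training_data_documented", "bias_evaluated")
                if not record[k]]
        report.append({**record, "gaps": gaps})
    return report

# Example: respond to a regulator asking about AI use in hiring decisions.
print(json.dumps(evidence_report("hiring"), indent=2))
```

With an inventory in place, the congressional-testimony scramble described above becomes a query rather than a forensic project.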

The benefit extends beyond risk reduction to building stakeholder trust. Organizations that demonstrate disciplined AI governance position themselves as responsible adopters of AI technology, which becomes increasingly important as regulatory scrutiny intensifies and customers demand transparency in AI usage.

From Challenge to Confidence

The use of AI in the enterprise is no longer experimental; it’s operational. That makes transparency and governance essential. Product security, third-party risk, and GRC teams each play a role in addressing the risks of AI adoption, and the organizations that succeed will be those that treat AI with the same discipline they already apply to software.

The outcome is clear: when enterprises gain visibility into AI models and enforce policies consistently, they move from uncertainty to confidence. They can innovate with speed while reducing risk and meeting compliance demands, all while building AI systems their stakeholders can trust. Organizations that establish robust AI governance frameworks early in their adoption journey position themselves for sustainable competitive advantage in the AI-driven business environment.

Manifest is helping global organizations gain full transparency and control over their software supply chains. Chat with our team and learn how.

“Manifest knows the AIBOM and cybersecurity space, sees the problems arising, and always has a solution to showcase.”
Manager of Global Technology Legal Compliance,
Multinational Software Company