Some Assembly Required

A New Kind of Asset

Let’s be honest—there isn’t a single person walking around today who hasn’t heard the term “AI.” It’s everywhere, from your morning news feed to the checkout counter at your local grocery store. But here’s the thing: as the trend keeps accelerating, organizations aren’t just talking about AI anymore—they’re racing to embed it into everything they build and to retrofit it into everything they’ve already built. The era of “AI-powered” everything isn’t coming; it’s already here, and it arrived faster than most of us anticipated.

The AI Gold Rush Is Real

This isn’t just another technology trend that will fade away in a few quarters. Organizations across every industry are experiencing an AI revolution in real time, and enterprise strategists are putting their money where their mouth is. We’re witnessing a fundamental shift in how businesses operate, with AI adoption becoming not just a competitive advantage, but a necessity for staying relevant in rapidly evolving markets.

This isn’t just hype, either. Organizations are seeing real returns on their AI investments through automated processes, enhanced customer experiences, improved decision-making, and operational efficiencies that were previously impossible to achieve. From customer service chatbots that actually understand context to predictive analytics that optimize supply chains, AI is delivering tangible business value across the enterprise.

But here’s where things get interesting—and potentially problematic. As we’ve learned from previous conversations about application assets, when organizations rapidly adopt new technologies, they often create new categories of assets that need to be managed, secured, monitored, and understood. AI is no exception, and it might be the most complex exception we’ve encountered yet.

Enter the New Asset Class: AI Components

Think about it this way: if your organization is keen on using AI (and let’s face it, most are), then you need to start thinking about entirely new types of assets that didn’t exist in your infrastructure just a few years ago. We’re talking about Large Language Models, specialized AI frameworks, Model Context Protocol servers, training datasets, inference pipelines, and a whole ecosystem of AI-related dependencies that are now critical to your operations.

These new assets come in several distinct flavors, each with its own complexity, risk profile, and management requirements:

Third-Party Models accessed via API represent perhaps the most common entry point for organizations dipping their toes into AI waters. Whether you’re calling OpenAI’s GPT models, AWS Bedrock services, Google’s AI platform, or Azure OpenAI, these dependencies are now as critical to your applications as any traditional software library or database connection. The difference? These models are essentially black boxes with their own update cycles, performance characteristics, rate limits, and potential failure modes that you don’t directly control. When the model provider experiences downtime, changes their pricing structure, or deprecates a model version, your applications are directly impacted.
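One practical first step is to treat the model identifier itself as a pinned dependency. The sketch below, a minimal example assuming the OpenAI Python SDK, pins a dated model snapshot instead of a floating alias and logs which model actually served each request, so the dependency shows up in your records rather than being buried in a string literal:

    # Sketch: pin an exact model snapshot and record the dependency,
    # rather than calling a floating alias the provider can change.
    from openai import OpenAI

    # A dated snapshot of the kind OpenAI publishes; an alias like
    # "gpt-4o" can silently change behavior when the provider updates it.
    PINNED_MODEL = "gpt-4o-2024-08-06"

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str) -> str:
        response = client.chat.completions.create(
            model=PINNED_MODEL,
            messages=[{"role": "user", "content": prompt}],
        )
        # Record which model actually served the request, for the inventory.
        print(f"served by: {response.model}")
        return response.choices[0].message.content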

Self-Hosted Third-Party Models take things up a notch in terms of complexity and responsibility. Organizations are increasingly downloading and running popular models locally—whether it’s for privacy reasons, cost optimization, performance requirements, or regulatory compliance. Suddenly, you’re not just consuming an API; you’re managing infrastructure, model versions, resource allocation for AI workloads, storage requirements, and compute scaling. You need to think about GPU utilization, memory management, and model loading times. These assets behave more like traditional applications but require specialized expertise to operate effectively.
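Version pinning matters even more here, because the weights you pull down are the asset you will be operating. A minimal sketch, assuming Hugging Face transformers (the model name and revision are illustrative), shows the idea of loading exact, inventoried weights rather than whatever happens to be latest:

    # Sketch: load self-hosted weights at a pinned revision so the asset
    # you run is the asset you inventoried. Assumes Hugging Face
    # transformers; the model name and revision are illustrative.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "mistralai/Mistral-7B-v0.1"  # example model
    REVISION = "main"                       # pin a commit SHA in practice

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, revision=REVISION)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID,
        revision=REVISION,   # exact weights, not "whatever is latest"
        device_map="auto",   # requires accelerate; places layers on GPU/CPU
    )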

In-House Models represent the deep end of the AI asset pool. Some organizations are training, fine-tuning, and deploying their own models locally using proprietary data and specialized requirements. These assets require entirely new categories of management: training data provenance and lineage, model versioning and rollback capabilities, performance monitoring and drift detection, bias testing, and ongoing maintenance that looks nothing like traditional software maintenance. You’re not just deploying code; you’re deploying learned behavior that can change and degrade over time.
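Drift detection is a good example of maintenance with no traditional-software analogue. The following sketch is illustrative only: it compares a live quality metric against the baseline recorded when the model was promoted, using a metric and tolerance that are pure assumptions:

    # Illustrative drift check: compare a live metric against the baseline
    # captured at deployment. The metric and tolerance are assumptions.
    from dataclasses import dataclass

    @dataclass
    class ModelRecord:
        name: str
        version: str
        baseline_accuracy: float  # measured when the model was promoted

    def check_drift(record: ModelRecord, live_accuracy: float,
                    tolerance: float = 0.05) -> bool:
        """Return True if quality has degraded beyond tolerance."""
        drifted = (record.baseline_accuracy - live_accuracy) > tolerance
        if drifted:
            print(f"ALERT: {record.name}@{record.version} dropped from "
                  f"{record.baseline_accuracy:.2f} to {live_accuracy:.2f}")
        return drifted

    # Example: a model promoted at 0.91 accuracy, now measuring 0.83.
    check_drift(ModelRecord("churn-predictor", "2.3.1", 0.91), 0.83)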

Model Context Protocol (MCP) Servers are emerging as critical infrastructure pieces in modern AI-powered developer ecosystems. These servers enable AI models to connect with external data sources, APIs, databases, and tools, creating new integration points that act as bridges between AI capabilities and existing enterprise systems. They need to be secured, monitored, maintained, and scaled just like any other critical service, but they also introduce unique considerations around data access, context management, and AI model interaction patterns.
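Because clients typically declare these servers in configuration, one low-effort way to pull them into an asset inventory is to walk those config files. Here is a sketch, assuming the common mcpServers JSON layout used by clients such as Claude Desktop; the file path and fields read are examples:

    # Sketch: inventory MCP servers from a client config file. Assumes the
    # common "mcpServers" JSON layout; the path and fields are examples.
    import json
    from pathlib import Path

    CONFIG_PATH = Path("claude_desktop_config.json")  # example location

    def inventory_mcp_servers(path: Path) -> list[dict]:
        config = json.loads(path.read_text())
        assets = []
        for name, spec in config.get("mcpServers", {}).items():
            assets.append({
                "asset_type": "mcp-server",
                "name": name,
                "command": spec.get("command"),  # what actually runs
                "args": spec.get("args", []),    # hints at data it touches
            })
        return assets

    for asset in inventory_mcp_servers(CONFIG_PATH):
        print(asset)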

AI-Specific Infrastructure Components round out the ecosystem with specialized tools like vector databases for embeddings storage, model registries for version control, feature stores for machine learning pipelines, and monitoring tools designed specifically for AI workloads. Each of these represents a new category of asset with its own operational requirements.
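To make the vector database category concrete, the sketch below shows the core idea it industrializes: store embeddings, retrieve by similarity. It assumes the sentence-transformers library with an example model; a real vector database layers indexing, persistence, and access control on top of this:

    # Sketch of the idea a vector database industrializes: store
    # embeddings, retrieve by similarity. Assumes sentence-transformers
    # with an example model; real products add indexing and persistence.
    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")  # example model

    docs = ["reset your password", "update billing info", "cancel account"]
    doc_vecs = model.encode(docs, normalize_embeddings=True)

    query_vec = model.encode(["how do I change my card?"],
                             normalize_embeddings=True)

    # On normalized vectors, cosine similarity is just a dot product.
    scores = doc_vecs @ query_vec.T
    print(docs[int(np.argmax(scores))])  # likely "update billing info"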

The Visibility Problem

Here’s where we run into a familiar challenge that should make any seasoned IT professional’s eye twitch: without proper visibility into these new asset types, we’re flying blind into potentially dangerous territory. We can’t protect what we can’t see, we can’t optimize what we don’t measure, and we certainly can’t assess risk from components we don’t even know exist in our increasingly complex supply chain.

This isn’t just theoretical risk, either. Security teams are already recognizing the urgent need to flag AI dependencies, enforce version controls, monitor for known vulnerabilities in AI supply chains, and understand the data flow implications of AI integration. The traditional software supply chain is complex enough, but AI introduces entirely new attack vectors, privacy considerations, and risk categories that many organizations aren’t prepared to handle.

Consider this all-too-realistic scenario: your development team integrates a third-party language model into your customer service platform to handle initial customer inquiries. Six months later, that model provider has a security incident exposing customer data, changes their terms of service to allow training on your data, or simply discontinues the model you’re using in favor of a newer version with different behavior. Without proper asset tracking and impact analysis, you might not even realize the extent of your exposure until it’s too late. You could find yourself scrambling to understand which applications depend on that model, what data they process, and how quickly you can implement a replacement.

The complexity multiplies when you consider that AI models often have dependencies on other AI models, training data that may have its own licensing restrictions, and inference pipelines that span multiple cloud providers or on-premises infrastructure. A single AI-powered feature in your application might depend on dozens of components across the AI stack, each with its own lifecycle and risk profile.

The AI-BOM Solution

This is where the concept of an AI Bill of Materials (AI-BOM) becomes not just useful, but absolutely essential for any organization serious about AI governance. Just like a Software Bill of Materials (SBOM) helps us identify and catalog all the software components used within an application, an AI-BOM expands on this concept to include comprehensive documentation of algorithms, data collection methods, training datasets, model architectures, frameworks and libraries, licensing information, and compliance requirements.

Think of an AI-BOM as a detailed inventory that goes far beyond traditional software components. Much like a traditional Bill of Materials in manufacturing that meticulously lists out all the parts, components, suppliers, and specifications of a physical product, an AI-BOM provides a comprehensive inventory of all components within an AI system—both technical and procedural.

But an AI-BOM needs to capture information that simply doesn’t exist in traditional software inventories. We’re talking about training data sources and their provenance, model architectures and their theoretical foundations, inference endpoints and their performance characteristics, bias testing results, fairness metrics, performance baselines, regulatory compliance status, and ethical considerations. It’s a significantly more complex beast than a traditional SBOM, but the fundamental principles remain the same: you need to know what you have, where it came from, and how it works before you can manage it effectively and responsibly.

An effective AI-BOM should also track the relationships and dependencies between components. For example, if your customer recommendation system uses a fine-tuned version of a base model, your AI-BOM should capture not just the current model version, but also the base model it was derived from, the training data used for fine-tuning, the hyperparameters used during training, and the evaluation metrics that determined the model was ready for production.
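What might such an entry look like? Formats are still settling (CycloneDX, for instance, has been extending SBOM concepts toward machine learning components), so treat the record below as a sketch of the kinds of fields involved, with every name and value invented for illustration:

    # Illustrative AI-BOM entry for a fine-tuned model. Field names are
    # a sketch, not a standard; every value is invented for the example.
    aibom_entry = {
        "component": "support-ticket-classifier",
        "version": "1.4.0",
        "architecture": "transformer (sequence classification)",
        "base_model": {
            "name": "distilbert-base-uncased",  # what it was derived from
            "source": "huggingface.co",
            "license": "Apache-2.0",
        },
        "fine_tuning": {
            "dataset": "internal-tickets-2024Q1",  # provenance and lineage
            "dataset_license": "proprietary",
            "hyperparameters": {"epochs": 3, "learning_rate": 2e-5},
        },
        "evaluation": {"f1": 0.88, "bias_audit": "passed"},
        "deployment": {"endpoint": "inference.internal/classify",
                       "owner": "ml-platform-team"},
    }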

The Growing Urgency

The challenge isn’t getting easier, either—it’s accelerating at a pace that’s making traditional IT governance approaches look quaint. AI adoption is not stabilizing into predictable patterns; it’s expanding into new use cases and business functions faster than most organizations can establish proper governance frameworks. As AI becomes intrinsic to core operations and market offerings, companies need systematic, transparent, and scalable approaches to AI governance. The days of ad hoc AI implementation and shadow AI projects are numbered.

Organizations that don’t get ahead of this curve are setting themselves up for significant operational, security, and regulatory challenges. We’re already seeing regulatory pressure building around AI transparency, accountability, and explainability across multiple jurisdictions. Government agencies are developing AI governance frameworks, industry standards bodies are creating AI-specific compliance requirements, and even military organizations are seeking industry input on AI bill of materials practices that mirror the software supply chain security initiatives we’ve seen gain momentum over the past few years.

The stakes are particularly high because AI failures aren’t just technical failures—they can have direct business, legal, and ethical implications. When a traditional software component fails, you might experience downtime or performance issues. When an AI component fails, you might inadvertently discriminate against protected classes, leak sensitive information, or make decisions that violate regulatory requirements.

The Million-Dollar Question

So here we are, staring down a question that’s keeping IT leaders, security professionals, and compliance teams awake at night: how can we expand our visibility and governance capabilities to cover these new types of assets when their adoption and use within organizations is growing by the day?

The answer isn’t simple, and it’s definitely not something you can solve with traditional approaches or existing tools without significant adaptation. It starts with honest recognition that we’re dealing with a fundamentally different category of technology asset—one that learns, evolves, and can exhibit emergent behaviors that weren’t explicitly programmed.

We need to acknowledge that traditional asset management approaches, while providing a solid foundation, aren’t sufficient for AI components in their current form. These assets have lifecycles that don’t map neatly to traditional software development cycles, dependencies that extend beyond code libraries to include data and learned behaviors, and risk profiles that encompass technical, ethical, and regulatory dimensions simultaneously.

New, But Not Really

This reality requires new tools, new processes, new skills, and new ways of thinking about what constitutes a critical asset in modern infrastructure. It might mean investing in specialized AI-BOM tools, significantly extending existing asset management platforms to handle AI-specific metadata, or building custom solutions that can track and monitor AI dependencies across their entire lifecycle.

It definitely means having proactive conversations with development teams, data science teams, security teams, legal teams, and business stakeholders about the AI components they’re currently using, planning to use, or have already deployed without proper documentation. These conversations need to happen before AI components become deeply embedded in critical business processes.

The organizations that will thrive in the AI era aren’t necessarily the ones with the most advanced AI capabilities or the biggest AI budgets—they’re the ones that can deploy AI safely, securely, and sustainably at scale. That requires the same kind of disciplined asset management, risk assessment, and operational excellence that has kept IT infrastructure running reliably for decades, just applied to an entirely new and more complex category of technology.

The good news? We’ve solved similar problems before, and the fundamental principles of good asset management haven’t changed, even if the assets themselves have evolved dramatically. We just need to adapt our approaches, expand our toolsets, invest in new skills, and prepare for a world where AI components are as fundamental to our infrastructure as databases, web servers, and network equipment—but potentially more impactful when they fail or behave unexpectedly.

The AI revolution is here, and it’s bringing unprecedented challenges along with unprecedented opportunities. But for organizations willing to invest in proper visibility, governance, and management of their AI assets, it’s also opening up new possibilities for innovation, efficiency, and competitive advantage. The question isn’t whether AI will transform your organization—it’s whether you’ll be ready to manage that transformation safely, effectively, and responsibly.

The era of “AI-powered everything” is upon us. Let’s make sure we’re equipped to handle it with the same rigor and professionalism we’ve brought to every other technological revolution.