
The AI Risk Your Due Diligence Isn't Catching

18 March 2026

You are in diligence on a healthcare technology company. The model is impressive. The market is growing. The team is strong. Your technical diligence confirms the AI works as advertised.

Six months post-close, a regulator asks who authorized the AI's role in a clinical decision that produced an adverse outcome. Your portfolio company's legal team confirmed regulatory compliance at close. The technology passed every technical diligence test. But no one evaluated whether documented human authority existed over the AI's expanding role in clinical workflows, because that question was never part of the diligence framework.

The AI worked perfectly. The governance architecture never existed.

This is the risk that standard technical and legal diligence does not surface. And in 2026, it is the risk that will determine whether AI creates value or creates liability in your portfolio.

AI- and machine-learning-focused private equity deal value more than tripled, from about $41.7 billion in 2023 to roughly $140 billion in 2024, and the momentum continued into 2025. AI is no longer a sector play. It is embedded in every sector you invest in.

But here is the number that reframes the diligence question: Only one in five companies has a mature governance model for autonomous or agentic AI systems, according to Deloitte's 2026 State of AI in the Enterprise survey. Meanwhile, roughly 78% of health systems are engaged in AI projects, but only about 52% feel operationally ready to implement them at scale, according to the Guidehouse and HIMSS 2026 Healthcare AI Trends report. Looking ahead, Gartner predicts that more than 40% of agentic AI projects will be canceled or fail to reach production by 2027, a signal that governance maturity, not just technical capability, will determine which investments produce returns.

The implication for investors is straightforward. You are acquiring companies whose AI capabilities are scaling faster than their governance infrastructure. That gap is a material risk. And it does not show up in a SOC 2 audit.

Standard technical diligence evaluates the model. Does it perform? Is the data pipeline sound? Are there security vulnerabilities? These are necessary questions. They are not sufficient.

What they miss is the human layer: the governance architecture, or the absence of one, that determines what happens when the AI produces the wrong output in a high-stakes environment. Who is accountable. Whether anyone has authority to stop the system. Whether the decisions the AI is making were ever consciously delegated by a named human being, or whether they were silently absorbed by the algorithm one workflow at a time.

In regulated sectors (healthcare, education, financial services), this gap is existential. In a technology startup, an AI failure is a product iteration. In a regulated institution (a hospital, a university, a financial services firm), it is a regulatory inquiry, a compliance event, and a potential liability that affects valuation.

'The AI risk in a portfolio company is not in the model. It is in the human governance architecture that standard technical diligence never examines, and that determines whether AI creates value or creates liability post-close.' - Dr. Tiffany Masson, Falkovia

If your investment thesis depends on a portfolio company's AI capabilities, the regulatory landscape has shifted materially in the last twelve months. Consider the enforcement realities your portfolio companies face.

Texas's Responsible Artificial Intelligence Governance Act (TRAIGA), effective January 1, 2026, requires transparency around AI use in care delivery and documented human oversight of AI-assisted diagnosis and treatment decisions. No vendor contract transfers that obligation. Colorado's AI Act takes effect June 30, 2026, requiring annual impact assessments for high-risk AI systems in healthcare and education, with penalties that can reach up to $20,000 per violation under Colorado's consumer protection law. The EU AI Act's high-risk provisions begin to apply in 2026, with major requirements activating around August 2026. Over one thousand AI-related bills were introduced across U.S. states in 2025 alone.

For investors, the calculus is clear. AI governance is no longer a compliance checkbox. It is a board competency question. And the absence of documented human authority is a balance sheet liability that compounds with every deployment your portfolio company scales.

These are the questions I use when I assess the governance architecture of AI-dependent companies. They do not replace your technical diligence. They address the layer your technical diligence cannot reach.

1. Can the company produce a documented Human Authority Line for every high-risk AI system? Which decisions has the company explicitly designated as non-delegable to AI? Where is this documented? Who approved it? If this document does not exist, the algorithm is drawing the authority line by default. (A sketch of what such a register might look like follows this list.)

2. Is there a named individual with documented authority and organizational immunity to stop an AI system? Not a committee. Not a review process. A name. A threshold. A pre-authorized kill switch. If an AI system fails at 2 AM on a Saturday, does the governance hold when no committee is available?

3. Has the company completed a Shadow AI audit? Nearly half of employees who use generative AI do so through personal accounts their organizations cannot monitor. If the company has never measured its unsanctioned AI footprint, its actual risk surface is larger than anything in the data room.

4. What happens in the first 90 minutes of an AI failure? Not the first 90 days. The first 90 minutes. Does a documented incident response protocol exist? Has it been tested? In a regulated environment, the window between an AI failure and reputational damage is hours, not weeks.

5. Has the company addressed the workforce identity threat? AI adoption does not fail when the model breaks. It fails when the people who are supposed to use it quietly disengage. If adoption metrics look strong but key talent is leaving or disengaging, the human architecture was never built.
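
To make questions 1 and 2 concrete, here is a minimal sketch of what a Human Authority Line register might look like if it were maintained as a structured artifact rather than scattered policy prose. This is an illustration, not a prescribed format: the structure, the field names, and the example entry are all hypothetical, written in Python only to show the shape of the record. The point is that every high-risk AI decision maps to a named human, a documented approval, and a pre-authorized stop threshold.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class AuthorityLineEntry:
    """One row in a hypothetical Human Authority Line register."""
    decision: str            # the decision the AI system touches
    delegable_to_ai: bool    # explicitly designated, never assumed
    accountable_owner: str   # a named individual, not a committee
    stop_authority: str      # who may halt the system unilaterally
    stop_threshold: str      # pre-authorized condition for the kill switch
    approved_by: str         # who signed off on this delegation
    approved_on: date        # when the delegation was documented
    incident_protocol: str   # reference to the first-90-minutes playbook

def undocumented(register: list[AuthorityLineEntry]) -> list[str]:
    """Flag entries where authority is asserted but no human is named."""
    return [e.decision for e in register
            if not e.accountable_owner or not e.stop_authority]

# Hypothetical entry for a clinical decision-support system.
register = [
    AuthorityLineEntry(
        decision="Final sign-off on AI-suggested treatment plans",
        delegable_to_ai=False,
        accountable_owner="Chief Medical Officer",
        stop_authority="On-call clinical safety lead",
        stop_threshold="Any adverse-outcome signal or model-drift alert",
        approved_by="Board risk committee",
        approved_on=date(2026, 1, 15),
        incident_protocol="IR-90: first-90-minutes response playbook",
    ),
]

print(undocumented(register))  # [] means every decision has a named human
```

The format matters far less than the existence of the artifact. If nothing like this can be produced in diligence, the answer to question 1 is no, whatever the data room says.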

Here is the principle I bring to every governance assessment: AI adoption is 10 percent technology. It is 90 percent human architecture.

For every dollar your portfolio company has spent on AI technology, ask what they have invested in the human architecture required to make that technology viable: the trust, the authority structures, the identity protection, the governance infrastructure. If the ratio is inverted, you are looking at a digital skyscraper on human quicksand.

'The firms that lead in this era will not be the ones that moved the fastest. They will be the ones that understood what they were actually acquiring: not just the model, but the entire human system around it.' - Dr. Tiffany Masson, Falkovia

Next Step

Ready to govern AI, not just deploy it?

Schedule a confidential conversation about your institution's AI governance architecture.
