AI Labor Replacement TAM Fallacy

Vertical AI platforms are currently overvalued because their expected total addressable market (TAM) is predicated on capturing labor spend. This won't happen. The markets are much smaller than the labor spend suggests, and startups like Harvey will struggle to grow into their valuations.

Harvey illustrates the problem. Its recent raise at an $11B valuation (>50x revenue multiple) reflects the same TAM-expansion logic. The common argument used to justify these high revenue multiples runs as follows:

Yes, the current market looks limited: ~400K-500K addressable Big Law lawyers at Harvey's current ~$2K per lawyer works out to $800M-$1B in revenue. At current public-market multiples of 4x, that implies a $3.2B-$4B terminal valuation.[2]

But this ignores TAM expansion from replacing labor with AI. Roughly $200B is spent annually on non-partner lawyers. If Harvey replaces those lawyers with AI agents priced at $50K-$100K per agent, its 400K-500K agents could generate $20B-$50B in revenue, implying an $80B-$200B valuation at the same 4x multiple.[3]
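The arithmetic behind both framings can be run directly. This is just a sanity check of the figures quoted above; the 4x revenue multiple is the public-market median cited in the footnote.

```python
# Back-of-envelope check of the two TAM framings, using the figures
# quoted in the argument above.
def valuation(seats, price_per_seat, revenue_multiple=4):
    """Return (annual revenue, implied valuation) for a seat-priced product."""
    revenue = seats * price_per_seat
    return revenue, revenue * revenue_multiple

# Seat-based view: ~400K-500K Big Law lawyers at ~$2K per lawyer.
rev_lo, val_lo = valuation(400_000, 2_000)   # $0.8B revenue -> $3.2B valuation
rev_hi, val_hi = valuation(500_000, 2_000)   # $1.0B revenue -> $4.0B valuation

# Labor-replacement view: same seat count, repriced at $50K-$100K per agent.
agent_rev_lo, agent_val_lo = valuation(400_000, 50_000)    # $20B -> $80B
agent_rev_hi, agent_val_hi = valuation(500_000, 100_000)   # $50B -> $200B
```

The entire valuation gap comes from the per-seat price assumption, which is exactly what the rest of this piece disputes.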

20VC made this exact argument in a recent video.

This is a fallacy. It assumes that because we pay a person $200K to do a job, we will pay a replacement solution $200K, or anything close to it.

Elevator operators are the only occupation completely eliminated by automation since 1950. At their peak, 90,000 operators earned ~$52,000 annually (inflation-adjusted), representing $4.7 billion in labor costs. Automatic elevators captured a fraction of this: $3,000-$10,000 in annual maintenance versus $52,000 for a human operator. The technology displaced the labor but did not capture the spend.
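The elevator numbers make the point starkly. A quick sketch, using only the figures from the paragraph above:

```python
# How much of the elevator-operator labor spend did automation capture?
# All figures are the inflation-adjusted ones quoted above.
operators = 90_000
operator_salary = 52_000
labor_spend = operators * operator_salary          # ~$4.68B at the peak

# Automatic elevators: $3K-$10K in annual maintenance per unit.
maintenance_low, maintenance_high = 3_000, 10_000
capture_low = maintenance_low / operator_salary    # ~6% of per-unit labor cost
capture_high = maintenance_high / operator_salary  # ~19% of per-unit labor cost
```

Even at the high end, the replacement technology captured under a fifth of the per-unit labor spend it displaced.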

[Figure: AI TAM grows vs. software, shrinks vs. labor. Software TAM ($X) < AI TAM ($Y) << Labor TAM ($Z)]

1. Shift to Usage/Outcome Pricing

The TAM-expansion argument assumes labor will be replaced by AI while pricing stays pegged to headcount replaced. But replacing 500K humans at $200K each does not mean you get to charge 500K × $200K. Pricing will shift to outcomes, or to cost-plus margins on token costs.

2. Competition Drives Down Prices

"Your margin is my opportunity." - Jeff Bezos

Law firms pay $200K for a Harvard Law graduate because there is no cheaper alternative of similar quality; scarcity drives price. A Trader Joe's employee earns ~$18/hour because that skillset is abundant.

With AI, the "Harvard Brain" becomes abundant. Assuming providers use standard models (OpenAI, Anthropic, etc.), the intelligence is no longer supply-constrained.

When intelligence commoditizes, value shifts to the software layer: the harness, rails, and workflows built around that intelligence.

But software moats are eroding on two fronts:

First, building products has become faster and easier with agentic coding tools. The cost of creating software is approaching zero.

Second, switching costs are declining. Klarna CEO Sebastian Siemiatkowski argues that AI will enable one-click data migration, eliminating the traditional friction of moving between vendors. Years of operational data locked in vendor systems can now be migrated with AI assistance, eroding the stickiness that protected incumbent SaaS providers.

Scarce intelligence commanded a $200K premium. Abundant intelligence combined with weak software moats means competition will drive prices toward cost+margin.

Market Dynamics

Legal AI faces fierce competition from three angles:

  1. The labs (Anthropic/OpenAI) are sending forward-deployed engineers into top law firms
  2. Law firms are building internal tools (example: Clifford Chance Assist). In-house legal AI adoption doubled from 23% to 52% in one year, with 64% expecting to depend less on outside counsel due to internal AI capabilities
  3. Startups are competing (Harvey, Legora, etc.)

The exact all-in costs to replace an associate/paralegal are uncertain. They could be $10K, $30K, or higher depending on token usage, sales, marketing, and product development. The specific number matters less than the competitive dynamic.

Using $10K as an illustrative example: with these costs, will Harvey sustain $50K-$100K per seat pricing, or will competition drive it closer to cost+margin?

Even if you scale costs upward (say to $30K or $50K per seat), the argument holds. Harvey's revenues would scale with higher token costs, but most of that revenue increase goes straight to cost of goods sold (token payments to Anthropic/OpenAI), not margin. The fundamental question remains: can Harvey maintain the markup needed to capture current labor spend as high-margin revenue, or will competition compress margins toward commodity levels?

The only way the TAM holds…

Oligopoly Pricing

One market that has sustained relatively high margins despite largely undifferentiated products is cloud infrastructure. The top players do not compete aggressively on price.

However, this would likely not solve Harvey's TAM issue.

1. This cartel-like approach works because of the hyperscaler oligopoly, which is sustained by high barriers to entry (e.g., CAPEX) and high lock-in (switching costs). If startups can easily produce competitors and software switching costs are falling, an oligopolistic market structure with high margins is not sustainable.

2. Even cloud margins are not that high (AWS's operating margin was 32.9% in Q2 2025; Google Cloud's was 20.7%). At similar margins, charging $100K per replaced associate would mean Harvey's cost to serve comes in at $67K-$79K per seat. Given falling token costs and amortized product development, costs will likely land well below that.
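The implied cost-to-serve follows directly from the margin figures above. A minimal sketch:

```python
# If Harvey earned cloud-like operating margins, what would a $100K seat
# have to cost to serve? Margins are the Q2 2025 figures quoted above.
seat_price = 100_000
aws_margin = 0.329      # AWS operating margin
gcp_margin = 0.207      # Google Cloud operating margin

# cost = price * (1 - operating margin)
cost_at_aws_margin = seat_price * (1 - aws_margin)  # ~$67.1K per seat
cost_at_gcp_margin = seat_price * (1 - gcp_margin)  # ~$79.3K per seat
```

If actual cost to serve is far below $67K, cloud-like margins at $100K per seat would require pricing power the competitive dynamics above argue against.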

Compounding Moat

The only way to sustain differentiated margins and returns is to avoid competition altogether (create a monopoly, as Peter Thiel suggests). In other words, if you are the only one who can build a Harvard lawyer while everyone else is building Tier 3 university lawyers, you can charge far more. NVIDIA can charge what it does because it is the only company that can build its GPUs; supply is constrained. The moat can come from network effects, sales cycles, customization, being the system of record (Salesforce), proprietary data, proprietary models, and so on. The higher the moat, the higher the margin.[1]

This raises a critical question: Do we live in a world where we need a genius generalist with context or a smart specialist?

Genius Generalist: The labs (Anthropic/OpenAI) continue building smarter general models. Value comes from providing context to increasingly capable generalists.

Smart Specialist: Value comes from fine-tuning general models with proprietary data and domain expertise.

The evidence points to the genius generalist world. When LLMs were first released, the consensus was to train custom models (example: Poolside). This failed. The labs built orders of magnitude better models. Fine-tuning showed some success (Cursor's composer model for low latency), but for complex tasks, users reach for state-of-the-art Anthropic or OpenAI models.

Legora exemplifies this. Unlike Harvey's fine-tuned models, Legora uses general models (Azure OpenAI, Claude) with context engineering and RAG. Despite this approach, Legora grew from 250 to 400+ customers in five months and is reportedly out-competing Harvey on certain deals.

If intelligence is commoditized and value lies in the software layer, Harvey's moat weakens. Software is easier to build and migrate than proprietary models. The differentiator becomes a thin software layer, not breakthrough AI. This is exactly the kind of weak moat that cannot sustain the margins needed to capture current labor spend as high-margin revenue.

Conclusion

High revenue multiples for vertical AI platforms rest on the fallacy that these companies will capture current labor spend as high-margin revenue. History suggests otherwise. In a world where software is easier than ever to build and the cost of migrating data is at an all-time low, such value capture is possible only in industries with enduring moats.

Founders and investors should be acutely aware of this when evaluating a business and deciding on entry price.

[1] Software companies like Salesforce, ServiceNow, and Workday maintain exceptional margins, but only because they have strong moats (deep integration, network effects, high switching costs).

[2] There are ~1.3M active lawyers in the US. After excluding in-house counsel and lawyers using specialized tools (e.g., patent lawyers), and assuming 60% market share (Legora and competitors capture the rest), this leaves ~400K-500K Big Law lawyers. The Bessemer Cloud Index median revenue multiple is ~4x (February 2026).

[3] Jason Lemkin predicts AI will capture a large portion of the $200B spent annually on non-partner lawyers (associates and paralegals), with price per seat rising significantly from current levels to $50K-$100K as AI agents replace these roles.
