The rules have changed.
The wall between product design and code has collapsed.
Every industry has knowledge that's hard to codify — workflows with hidden dependencies, context that only the practitioners understand, judgment calls that never fit neatly into a spec. Getting that expertise into working software has always been the problem. The translation from domain knowledge to requirements to code to testing diluted everything. Too slow, too expensive, too much lost along the way. That's what changed.
What Changed
For decades, domain experts described problems and engineers translated them into software. That translation layer introduced friction, delay, and distortion.
Today, domain knowledge can go directly to working software. That changes where value sits.
Where Value Lives
If the translation layer is gone, then the old source of competitive advantage — having engineers who could build it — is gone too. The question becomes: where does value actually live now?
The AI is the engine; anyone can get API keys. The logic layer is the structured thinking you wrap around the AI to produce specific, useful outputs.

- The engine (commodity): cloud compute, base models, off-the-shelf tools, generic workflows.
- The logic layer (differentiated): decision frameworks, structured workflows, expert intuition, industry context.
Cursor — an AI-powered code editor built by Anysphere — went from launch to $1 billion in annualized revenue in roughly two years, with about 300 employees (CNBC, November 2025). The AI underneath is commodity (Claude, GPT-4). What’s valuable is how Cursor structures the interaction: codebase-wide context, multi-file editing agents, native terminal access, agent workflows. That’s the logic layer. It’s why developers pay $20/month for something built on top of models anyone can access.
This pattern recurs. The AI is the engine. The logic layer is the product.
The Cost Collapse
This isn’t just a pricing story. Cost, context capacity, and reasoning capability all improved simultaneously. That convergence is why everything in this thesis is happening now.
The Speed of Shipping — A Billion Dollars With Almost Nobody
| Dimension | Traditional (Pre-2023) | AI-Native (2025+) | Shift |
|---|---|---|---|
| Time to $1M ARR | 18–36 months | 3–6 months | 6x faster |
| Team to launch | 8–15 people (dev, design, ops, marketing) | 1–3 people (+ AI) | 5x leaner |
| Seed capital needed | $1M–$3M | $0–$50K | 20x cheaper |
| Revenue per employee | $200K–$400K | $1M–$5M | 5–25x higher |
| Code required to ship | Months of custom development | Days with AI coding tools | 10x faster |
| Design quality | Requires dedicated designer | AI generates production-grade UI | $0 design cost |
| Market research | Hire analysts, consultants ($50K+) | AI produces institutional-quality analysis | 100x cheaper |
| Competitive moat | Capital, headcount, brand, distribution | Speed, taste, domain expertise, iteration velocity | Structural shift |
What Used to Take a Team — Illustrative: the team you'd need to hire without AI.
Valuation in Uncertainty
The standard valuation playbook assumes you can project cash flows with reasonable confidence: you pay 10x EBITDA because you believe the business will look roughly similar in five years. That assumption is breaking down.
The Core Problem
When small teams with domain expertise can build what used to require large engineering organizations, the meaning of “defensible” changes.
You can't pay 10x cash flow when you don't know if the cash flow exists in year three.
The exposure looks like this:
- Revenue at risk from AI-native entrants who can undercut on price and outpace on iteration.
- Proprietary software that may be worth a fraction of its carrying value if small teams can rebuild the core functionality.
- Goodwill impairment risk when "moats" acquired at premium multiples turn out to be data piles.
- Human capital risk — losing the senior talent who understand AI while retaining junior-heavy org charts designed for a different era.
New Due Diligence Questions
Beyond the standard financial and operational diligence, these questions now matter:
Disruption exposure: What would it take for a well-funded AI-native startup to capture 20% of this business's market in 24 months? What's the specific attack vector?
Labor composition: What percentage of operating costs are in roles that AI could automate in the next 3 years? What's the plan?
Software defensibility: If the proprietary software could be rebuilt by a small team using AI tools, what actually creates switching costs?
Data moat: Does the business generate proprietary data that improves over time, or is it using commodity data?
Management awareness: Does leadership understand these dynamics, or are they assuming business as usual?
Implications for Multiples
This doesn't mean everything is worthless — it means valuation needs to be more granular. Businesses with genuine defensibility (data flywheels, deep workflow integration, regulatory moats, domain-specific logic layers) may deserve premium multiples. Businesses that are essentially labor arbitrage or commodity software wrapped in a brand are more exposed than traditional analysis suggests.
The spread between "AI-advantaged" and "AI-exposed" businesses will widen and accelerate.
Workforce Shifts
The way companies build teams is changing — not gradually, but structurally.
Headcount Decoupling
Companies are reaching significant scale with teams that would have been impossible two years ago. Y Combinator’s Winter 2025 batch grew revenue 10% per week in aggregate — the fastest in YC history. A quarter of those companies reported codebases that were 95% AI-generated (CNBC, Garry Tan, March 2025).
Headcount is decoupling from output. The constraint shifts from "how many people can we hire?" to "how good is our judgment about what to build?"
The Junior Role Transformation
Junior roles existed for two reasons: to do work that didn't require senior judgment, and to train the next generation of seniors. AI disrupts both. The work that trained juniors — research, first drafts, data processing, documentation — is being automated. But junior roles won't disappear — they'll transform.
The old junior role: do the research, write the first draft, process the data, follow the template, repeat.
The new junior role: frame the problem, direct AI execution, evaluate outputs, learn judgment faster.
One data point worth watching: a Stanford study found that employment among software developers aged 22 to 25 fell ~15–20% between 2022 and 2025, coinciding with the rise of AI coding tools (Brynjolfsson, Chandara & Chen, Stanford Digital Economy Lab, August 2025). Meanwhile, studies across hundreds of thousands of developers show senior engineers are twice as likely to report significant speed gains from AI tools, and are far better at catching and correcting AI mistakes — turning AI into a genuine force multiplier rather than a source of bugs (Docker/DX, 2025). The gap is widening, not narrowing.
If a senior engineer with AI tools can reliably direct, evaluate, and ship AI-generated output while a junior introduces a 41% increase in bugs (Uplevel, 2024), the ROI on senior compensation changes dramatically. Paying $400K for a senior who replaces the output of what used to require a five-person team isn't expensive — it's a bargain. The math inverts the traditional pyramid: fewer, more experienced people generating more output at higher per-head cost but lower total cost.
The Hiring Pipeline Problem
The Stanford employment data raises a question that university CS programs and corporate recruiters haven't answered yet: if entry-level hiring contracts by 15–20%, where does the next generation of senior talent come from? The traditional pipeline — hire juniors, train them over 5–10 years, promote the best — assumed a steady intake at the bottom. That intake is compressing.
Companies that solve this will do two things differently. First, they’ll redesign junior roles around AI direction rather than manual execution — hiring for judgment and problem framing rather than raw technical skill. Second, they’ll compress the timeline from junior to senior by exposing new hires to higher-level decisions earlier, using AI to handle the rote work that used to consume years of an early career.
What Moats Look Like Now
Warren Buffett's moat concept assumed competitive advantages that compound over decades. Some still do. But when execution gets cheap and teams get small, the old barriers to entry — engineering headcount, software complexity, process knowledge — stop protecting you. The durability calculus has changed. Some moats are weaker, some still hold, and new ones are emerging.
A 10-year competitive advantage might now be a 3-year advantage. The durable businesses will be the ones that continuously rebuild their moats — not the ones that assume today's will hold forever.
Evaluating AI-Native Businesses
A new category of business is emerging that requires different evaluation frameworks. These companies are built from the ground up around AI capabilities — not bolted on after the fact.
The best AI-native businesses combine domain expertise with defensible IP — the kind that compounds with usage and can't be replicated by switching models.
If You Sit on a Board
Most boards have historically treated AI as a technology initiative — something the CTO presents once a quarter, sandwiched between a cybersecurity update and a cloud migration timeline. The framing has changed. AI is a strategic question with direct implications for competitive position, workforce structure, capital allocation, and enterprise value.
Delegating AI to the technology committee is like delegating the internet to the IT department in 1998.
The Fiduciary Question
There's a point — and we're approaching it — where failure to understand AI's impact on the business becomes a governance failure. Not because every board member needs to use ChatGPT, but because the board's primary job is to ensure the company is positioned for the future, and AI is reshaping what that future looks like faster than any technology shift since the internet. A board that can't evaluate whether management has a credible AI strategy is a board that can't do its job.
What You Should Be Asking Management
Five questions that separate boards doing their job from boards going through the motions. If your CEO can't answer these clearly, that's the finding.
"Where is AI already changing our competitive landscape — not theoretically, but right now?" You're testing whether management is tracking AI-native entrants and incumbents that are pulling ahead. If the answer is vague or dismissive, they're not watching.
"What's our logic layer — the proprietary reasoning we've built around AI that competitors can't easily replicate?" If management can't articulate this, the company is using AI as a tool rather than building AI into its competitive position.
"How has our revenue per employee changed in the last 12 months, and what's the target for the next 12?" This is the single most important metric for measuring whether AI is creating value or just creating presentations. If the number isn't moving, the AI strategy isn't working. (Adjust for model/API spend and contractor usage — the metric is easy to game with outsourcing.)
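The adjustment in that parenthetical can be made concrete. A minimal sketch, assuming a simple illustrative formula (not a standard definition): count contractor FTEs as headcount and net out model/API spend, so neither outsourcing nor AI costs flatter the metric. All figures and the function name here are hypothetical.

```python
def adjusted_revenue_per_employee(revenue, employees,
                                  contractor_fte=0.0, model_api_spend=0.0):
    # Count contractors as headcount and net out model/API spend,
    # so outsourcing and AI costs can't flatter the metric.
    effective_headcount = employees + contractor_fte
    return (revenue - model_api_spend) / effective_headcount

# Hypothetical figures for illustration only.
last_year = adjusted_revenue_per_employee(50_000_000, 200)
this_year = adjusted_revenue_per_employee(60_000_000, 150,
                                          contractor_fte=30,
                                          model_api_spend=6_000_000)
print(last_year, this_year)  # 250000.0 300000.0
```

On these assumed numbers the adjusted figure still rises, which is the signal the board question is after: output growing faster than effective headcount plus AI spend.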
"What's our plan for the 30% of roles that are primarily execution-based?" Not "are we looking into AI" but "what's the specific plan, with timelines, for restructuring execution-heavy functions?"
"If we were starting this company today with AI-native tools, how would it look different from what we have?" This is the hardest question because it forces honesty about legacy structures. The gap between "what we have" and "what we'd build" is the size of the transformation required.
Asking the right questions is the start. Scoring the answers is the discipline. Use the Board Diagnostic to assess whether your company is AI-Advantaged or AI-Exposed — six binary questions, a score, and a prescribed action for each tier.
The Diagnostic
Six questions. Ten minutes. A structural read on whether your company is positioned to win or exposed to disruption.
Your value proposition requires proprietary data, regulatory approval, physical infrastructure, or deep domain relationships that can't be replicated by a well-funded team with GPT-5. You have structural protection.
Action: Identify which parts of your offering are defensible and which are exposed. Double down on what's hard to replicate. Assume someone is already building the AI-native version of everything else.
You're capturing AI leverage. Output is growing faster than headcount. This is the clearest measurable signal that AI is creating value in your organization rather than just creating demos.
Action: Audit where AI tools are deployed versus where they're actually changing output. If you have AI tools and revenue per employee hasn't moved, you have an adoption problem, not a technology problem.
You have a compounding advantage — each customer interaction improves your product, which attracts more customers. This is the strongest AI-era moat.
Action: Map every customer touchpoint and identify where proprietary data is being generated or could be. If you're not collecting it, start. If you're collecting it but not using it to improve the product, that's your next project.
You’ve built structured IP that turns commodity AI into differentiated output. This is what competitors can’t copy.
Action: Start building it. Identify the 10 decisions your best people make that require judgment, then codify those decisions into structured workflows that can be augmented by AI. Your experts' intuition is your IP — capture it.
AI is making your offering more valuable, not just cheaper to deliver. You're on the right side of the value equation. Customers are paying for outcomes, not hours or seats.
Action: If customers are demanding lower prices because "AI should make this cheaper," you're losing the value argument. Shift the conversation to outcomes and results. If you can't quantify the outcome, that's the problem.
You have a structured AI training program, your senior people are staying, and you're building the human capital to execute the strategy. Tools without trained people are shelfware. The companies pulling ahead are the ones whose best people know how to direct AI — and choose to stay because they see the opportunity.
Action: Audit your AI training investment and your attrition among senior talent. If your best people are leaving for AI-native companies, you have a strategy credibility problem — they don't believe in your plan. If nobody is trained, your AI tools are running at 10% of their potential. Fix both within 90 days.
Scoring
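As a sketch of how the six binary questions above could be tallied: one point per "yes," with a cutoff of four used here purely as an assumption. This excerpt names the tiers (AI-Advantaged, AI-Exposed) but does not state the exact scoring bands, so both the question paraphrases and the threshold are illustrative.

```python
QUESTIONS = [
    # The six binary questions, paraphrased from the diagnostic above.
    "Structural protection (proprietary data, regulation, physical, relationships)?",
    "Revenue per employee rising faster than headcount?",
    "Data flywheel: each customer interaction improves the product?",
    "Proprietary logic layer built around commodity AI?",
    "AI making the offering more valuable, not just cheaper to deliver?",
    "Trained people, and senior talent staying?",
]

def diagnose(answers):
    # One point per "yes". The >= 4 cutoff is an assumed threshold.
    assert len(answers) == len(QUESTIONS)
    score = sum(answers)
    tier = "AI-Advantaged" if score >= 4 else "AI-Exposed"
    return score, tier

print(diagnose([True, True, False, True, True, False]))  # (4, 'AI-Advantaged')
```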
If You Manage a Team
You're the one who has to make it real — restructuring workflows, rethinking roles, and delivering more output with a team designed for a different era. Middle management is both the most important layer for AI transformation and the most exposed to it.
The Manager's Dilemma
AI collapses the handoffs that created the need for coordination in the first place. Management doesn't disappear — it shifts from coordinating execution to directing judgment. The manager who thrives in this environment identifies which decisions require human judgment, designs workflows that put AI on execution and humans on evaluation, and measures output rather than activity.
The Team Audit
Run this exercise on your team this week. It takes an hour and it will change how you think about every role.
Step 1: List every role on your team and their primary outputs. Not job descriptions — actual outputs. What does each person produce in a typical week? Documents, analyses, code, designs, decisions, communications.
Step 2: For each output, estimate the split between execution and judgment. Execution is the work of producing the thing — the research, the drafting, the formatting, the data processing. Judgment is deciding what to produce, evaluating whether it's right, and adapting it to context. Be honest. Most outputs are 70-80% execution.
Step 3: Estimate how much of the execution component AI could handle today. Not in theory — with tools that exist right now. For most knowledge work, the answer is 40-70% of the execution layer.
Step 4: Do the math. If AI can handle 50% of the execution on a role that's 80% execution, that's 40% of the role's current time freed up. Across a 10-person team, that's the equivalent of 4 full-time roles worth of capacity. The question becomes: what do those people do with that time? More judgment work? Fewer people? Both?
Restructuring a Team — The Practical Version
The Metric That Matters
Track one number: output per person per month. Define "output" in terms your function actually cares about — deals reviewed, campaigns launched, features shipped, cases resolved, whatever the unit of work is. If that number isn't rising quarter over quarter, your AI integration isn't working. If it is rising, you have the evidence to justify restructuring to leadership and the credibility to lead it.
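The quarter-over-quarter check can be sketched in a few lines. The function name, the quarterly-averaging approach, and the sample series are assumptions for illustration; "output" is whatever unit your function actually cares about.

```python
def output_rising(monthly_output_per_person):
    # Average the monthly series into quarters, then check that each
    # quarter beats the previous one.
    q = [sum(monthly_output_per_person[i:i + 3]) / 3
         for i in range(0, len(monthly_output_per_person), 3)]
    return all(later > earlier for earlier, later in zip(q, q[1:]))

# Hypothetical series: deals reviewed per person over six months.
print(output_rising([10, 11, 12, 13, 14, 15]))  # True (Q1 avg 11, Q2 avg 14)
```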
Your First Action
Pick the single highest-volume workflow on your team — the thing that consumes the most hours across the most people. Redesign it around AI execution with human oversight. Give it 30 days. Measure the before and after. That result becomes your case study for transforming everything else. Don't ask permission to run a pilot. Run a pilot and present the results.
If This Is Your Career
The Honest Assessment
Most career advice about AI falls into two useless categories: panic ("your job is going away") or denial ("AI can't do what I do"). The truth is more specific and more actionable. The execution-versus-judgment framework in the Managers section applies to individual roles too. Your exposure depends on the ratio between those two in your day-to-day work.
The Career Diagnostic
Answer these honestly. Write down the answers — the act of writing forces precision that thinking doesn't.
What do you know that AI doesn't? Not what information you have — AI has more. What understanding do you have? What patterns can you see in your domain that come from years of experience? What do you know about how things actually work that isn't written down anywhere? That's your moat.
What can you evaluate that AI can't? AI can generate ten options. Can you reliably pick the right one for this specific situation? If yes, you're a judge — and judges are more valuable in a world with infinite generators. If no, you're competing with a machine that works faster and cheaper.
If AI handled 80% of your daily tasks, what's the 20% that requires you? If you can name it clearly, that 20% is your career. Invest everything in making it deeper and sharper. If you can't name it, that's the problem — and it's solvable, but not by waiting.
Are you learning to direct AI or learning to compete with it? The person who spends their evening mastering a new AI workflow is building leverage. The person who spends it perfecting a manual skill AI already does well is training for yesterday.
The Three Investments That Matter
Go deeper in your domain — not broader, deeper. The person who understands the nuances of healthcare reimbursement, or construction permitting, or derivatives pricing isn't threatened by AI. They're the person AI makes ten times more productive. Generalists who know a little about everything are exactly who AI replaces. Specialists who know everything about something are who AI empowers.
Learn to direct AI as a tool of your craft, not as a novelty. This doesn't mean taking a course on prompt engineering. It means integrating AI into your actual work — this week, not next quarter. Use it to produce first drafts you then refine. Use it to analyze data you then interpret. Use it to generate options you then evaluate. The goal is to develop judgment about when AI is right and when it's wrong in your specific domain. That judgment is worth more than any certification.
Build a reputation for judgment, not output. In a world where everyone can produce at volume, the person known for making the right call becomes disproportionately valuable. That means being visible about the decisions you make and why. It means developing a track record of good judgment that others can observe and rely on. It means positioning yourself as the person who knows what to do, not just the person who gets things done.
Your First Action
Take the single task you spent the most hours on last week. Do it again with AI handling the execution. Compare the output and the time. If it's 80% as good in 20% of the time, you've just found your leverage point — and you've identified that the value you add is in the evaluation and refinement, not the production. Build from there.
If You're Raising Kids
Every investor, operator, and board member reading this document is also thinking about their children. The question comes up at dinner, at school fundraisers, in conversations with college counselors: what should my kid study? What career paths still make sense? The thesis above provides a framework for answering — and the answer is counterintuitive.
The Old Advice Is Wrong
"Learn to code" was the career advice of the last decade. It was right then. It's incomplete now. Software development isn't disappearing, but the barrier to writing functional code has collapsed so far that coding skill alone is no longer a differentiator. The Stanford data on junior developer employment — a 15–20% decline among 22-to-25-year-olds — is the first hard signal. What's happening in software will happen across every field where AI can handle execution.
The old advice prioritized skills: learn Python, learn Excel, learn financial modeling, learn to write clean code. The new reality prioritizes understanding: learn how healthcare actually works, learn why supply chains break, learn what makes a legal argument persuasive, learn how buildings get permitted. Skills can be automated. Understanding compounds.
The old path: study computer science, learn to code, get a technical certification, specialize in a tool. The skill itself was the career.
The new path: study a field deeply, understand how things actually work, develop judgment about what matters. AI handles the execution — your kid directs it.
What Actually Prepares Them
The careers that will thrive share a common profile: deep domain knowledge combined with the ability to direct AI tools effectively. A nurse practitioner who understands patient care deeply and can use AI to handle documentation, research drug interactions, and flag anomalies is extraordinarily valuable. A general-purpose "AI specialist" with no domain expertise is competing with every other generalist — and with the AI itself.
The Capabilities That Compound
Four capabilities will matter more than any specific major or technical skill: the ability to frame problems well (AI is an extraordinary answer engine, but it depends on the quality of the question); the ability to evaluate output critically (you can’t spot what’s wrong if you don’t know what right looks like); the ability to communicate persuasively with other humans (AI doesn’t negotiate, build trust, or read a room); and comfort with ambiguity and rapid change (the specific tools will change every 18 months — adaptability beats mastery of any single system).
The Conversation to Have
The most important conversation isn't about what to study. It's about what problems fascinate them. A kid who's genuinely interested in how cities work, how diseases spread, how buildings stand up, how markets move, or how people make decisions has the raw material for a career that AI makes more powerful. A kid who's choosing a major based on starting salary data from 2023 is optimizing for a world that won't exist when they graduate.
Not every path runs through a screen. There are entire categories of work where physical presence, human connection, and hands-on skill are the value — and AI only makes them more in demand. These fields face growing shortages, strong pricing power, and zero risk of being automated away. Examples include:
- Skilled trades — electricians, plumbers, HVAC, welding
- Construction and infrastructure — physical, local, and increasingly complex
- Healthcare — nursing, physical therapy, home health — human presence is the product
- Hospitality and service — judgment, empathy, and trust can't be automated
- Emergency services — firefighters, paramedics, first responders
If You Invest
If you allocate capital — through a fund, a public portfolio, angel checks, or your own retirement account — the honest question is: are your analytical frameworks keeping up, or are you pattern-matching against a world that no longer exists?
Most Investors Are Behind and Don't Know It
Most investors — including sophisticated ones — are still evaluating companies through pre-AI lenses. The investor who underwrites a professional services firm at 12x EBITDA because “that’s where the sector trades” without asking what happens when AI compresses the leverage model isn’t being conservative. They’re being blind.
Your Analytical Edge Is Already Gone
Traditional investment analysis rewards pattern recognition across historical data — comps, multiples, sector performance, management track records. AI doesn't just help with that analysis. It commoditizes it entirely. Any investor with a Claude subscription can now run a comparable analysis, build a financial model, or summarize an earnings call in seconds. If your edge was ever "I read more 10-Ks than the next person" or "my analyst team builds better models," that edge is gone.
The Market Is Mispricing Both Directions
The February 2026 software sell-off — the "SaaSpocalypse" — wiped roughly $1 trillion from software market caps in seven trading days. Forward earnings multiples for the sector collapsed from 39x to 21x. Goldman Sachs' software basket fell 13% in a single session, the deepest one-day correction in over a decade. The trigger was Anthropic's enterprise agent rollout, followed by OpenAI's full-stack orchestration layer. The market panicked and sold everything — AI-advantaged and AI-exposed companies alike. Salesforce fell 38% YTD, ServiceNow 23%, Intuit 33%. That indiscriminate selling tells you the analytical frameworks haven't been built yet. That's the opportunity, but it cuts both ways.
What's overpriced: Thin wrappers on commodity models. Jasper AI raised $125M at a $1.5B valuation in October 2022. Revenue collapsed 54% — from $120M to $55M — by 2024. Both co-founders left. A Google VP warned in February 2026 that two categories of AI startups face extinction: thin wrappers and commoditized infrastructure tools. "When the underlying model improves, the wrapper company's value proposition can be replicated overnight." The market still gives premium multiples to companies that mention AI on earnings calls regardless of whether they have a logic layer, a data flywheel, or any defensible position.
What's underpriced: Incumbents caught in the sell-off crossfire. ServiceNow dropped 23% despite beating earnings nine consecutive quarters — Morningstar called it "deeply undervalued for long-term investors." Oracle's cloud infrastructure is booked over a year ahead for AI training, with a remaining performance obligation backlog exceeding $130B, yet it was sold alongside pure SaaS names. The market penalizes companies that haven't articulated an AI story, but the company with 20 years of proprietary data and a captive customer base is better positioned than the startup with a pitch deck and a GPT wrapper. The sell-side doesn't have a framework for valuing "AI-advantaged incumbent" yet.
What's dangerously mispriced: Companies trading at historical multiples in sectors where the economic model is structurally breaking. EdTech revenue multiples collapsed from 7.2x to 1.6x — a 78% compression — between Q4 2020 and Q4 2024 (Finerva). Robert Half is down 60% as AI automates both recruiting workflows and the back-office temp roles it fills. The downside isn't a 20% correction. It's Chegg: a $14B company at peak, now worth $191M. A permanent rerating as the market realizes the cash flows aren't coming back.
Hard Questions for Your Own Practice
The thesis applies to your firm too. 95% of fund managers now use generative AI in their work, up from 86% a year earlier (AIMA, 2025). 60% of institutional investors said they'd be more likely to invest in a fund that allocates meaningful budget to AI R&D. If you manage a fund, you employ analysts who build models, conduct research, prepare investment memos, and generate deal flow. AI can do most of that faster and cheaper today. The question isn't whether to use AI in your own practice — it's whether your competitors already are, and what it means when AI-native firms generate $3.5M in revenue per employee versus the traditional SaaS average of $200-350K.
The investor who uses AI to do in 20 minutes what used to take a junior analyst two days has a structural advantage — not because the analysis is better, but because they can evaluate more opportunities, iterate faster on theses, and spend their time on the judgment calls that actually drive returns. But here's the trap: AI makes it easy to feel productive without being effective. Running more screens, building more models, reading more research — that's volume, not insight. The investors who win in this environment will use AI to compress the execution and spend the freed-up time on the things AI can't do: building relationships with management teams, developing conviction through primary research, and thinking deeply about second-order effects that aren't in any dataset.
Your First Action
Take your three largest positions. For each one, write a single paragraph answering: "If a well-funded AI-native startup targeted this company's market tomorrow, what would they build first, which customers would they take, and what specifically would stop them?" If you can't write a convincing defense for a position, that's the position to scrutinize — not next quarter, now. The market hasn't fully priced this transition yet, which means you have a window. But windows close, and the investors who do this exercise first will be the ones selling to the investors who do it last.
Winners & Losers Scorecard
The thesis above describes structural forces. This exhibit applies them to specific companies. The classification framework below uses five winner signals and four loser signals, each observable in public data. Every company named here includes a "because" statement — a falsifiable reason for the classification, not a prediction.
The Classification Rubric
These signals separate structural winners from companies at risk. A company doesn't need all five winner signals to qualify — three or more, with evidence, is the threshold. Similarly, two or more loser signals with confirming market data warrant the classification.
The Scorecard
The following companies are classified using the rubric above. The reasoning matters more than the verdict — these are illustrations of how the signals manifest in practice, not stock picks.
| Company | Sector | Verdict | Because |
|---|---|---|---|
| Palantir | Enterprise Software | Winner | AI-native data platform with irreplaceable government and enterprise logic layers. Deep AIP adoption means the product improves with every deployment. |
| Shopify | Commerce | Winner | AI integrated across merchant tools — product descriptions, inventory, customer service. AI-powered checkout is capturing share by making merchants more successful, not just more efficient. |
| CrowdStrike | Cybersecurity | Winner | AI makes the threat detection platform exponentially better with scale. Each new endpoint feeds the data flywheel. Security is an AI-advantaged domain. |
| Cursor (private) | Developer Tools | Winner | ~300 employees generating a reported ~$1B ARR (per CNBC). Product couldn't exist without AI — it is the AI. The economics are a category of their own. |
| Klarna | Fintech | On the Fence | Aggressively cut headcount and automated customer service with AI. But had to rehire after quality dropped — a cautionary signal that AI transformation requires judgment about where to apply it. |
| Salesforce | Enterprise Software | On the Fence | Agentforce is ambitious, but the seat-based licensing model is threatened by AI agents that could cannibalize per-user revenue. The transition from selling seats to selling outcomes is unproven. |
| Adobe | Creative Software | On the Fence | Firefly AI is integrated but AI-native tools like Canva and Midjourney are capturing the low end while pricing pressure builds from free alternatives. The moat is the professional workflow, not the generation capability. |
| Chegg | Education | Loser | Core product — homework answers — is now free via ChatGPT. Attempted pivot to "skills" but the core value prop was eliminated overnight. No logic layer between the content and free alternatives. |
| Upwork | Professional Services | Loser | Freelance marketplace for tasks AI can now do directly. Why hire someone on Upwork to write copy or build a basic website when the buyer can do it themselves? |
| Pearson | Education | Loser | Content-as-product model is structurally exposed. AI can generate, summarize, and tutor from any source material. No logic layer between Pearson's content and free alternatives. |
Private Equity
The PE model is built on predictability: stable cash flows, operational improvements over a hold period, multiple expansion on exit, and leverage against reliable earnings. Each of these pillars is under pressure. And the data says the industry knows it — 86% of PE firms have adopted generative AI in M&A workflows, 59% consider AI a key factor in value creation — but only 5% of companies broadly are achieving AI value at scale (Deloitte 2025; FTI 2024; BCG 2025). The gap between awareness and execution is enormous.
The Model Under Stress
| PE Pillar | The Assumption | The AI Risk |
|---|---|---|
| Cash Flows | Stable, predictable earnings over a 5-year hold | Core business model faces existential pressure mid-hold |
| Operational Improvements | Business model is sound; optimize around it | AI changes what the business should be doing — incremental improvements miss the point |
| Multiple Expansion | Buyer confidence supports higher exit multiples | Exit buyers asking the same AI disruption questions — multiples compress |
| Leverage | Debt amplifies returns against reliable earnings | Earnings volatility makes leverage dangerous — 25–35% of private credit exposed (UBS) |
The Hold Period Problem
Average PE hold periods have stretched to approximately 6.5 years — up ~35% from the 2010–2021 average (S&P Global; McKinsey, 2026). That timeline now spans multiple generations of AI capability. A business acquired in 2024 will exit into a world where AI-native entrants have emerged in every sector. The exit buyer in 2029 won’t compare your portfolio company to today’s competitors — they’ll compare it to companies that don’t exist yet.
Questions for the Investment Committee
These overlap with the board-level questions above — deliberately. Investment committees need the same strategic clarity, filtered through deal economics:
1. What's the AI attack surface?
Where could an AI-native competitor enter? What would they build first? How fast could they scale?
2. What's the transformation thesis?
Is there a credible path to making this business AI-advantaged? What does that require in terms of talent, capital, and time?
3. What's the exit story?
Will strategic buyers and sponsors in 5 years see this as a platform or a problem? What needs to be true?
4. What's the margin structure dependency?
If AI reduces the cost of the core service by 50%, does the business model survive? What's the pricing power?
5. Is management equipped?
Does the leadership team understand these dynamics? Are they building toward AI-advantage or defending the status quo?
The Credit Contagion
The SaaSpocalypse didn't stay in equities. Software companies account for roughly 25% of the $3 trillion private credit market as of year-end 2025. When the sell-off hit, shares of Blue Owl, TPG, Ares Management, and KKR all fell by double digits. UBS estimates 25–35% of private credit is exposed to AI disruption risk, warning that loans originated before 2024 "likely did not contemplate AI as a meaningful business risk." Under an aggressive disruption scenario, default rates could approach double digits (UBS, February 2026; CNBC). The risk isn't theoretical — it's already repricing.
The Opportunity
This isn't all downside. The firms already moving are showing the playbook. Vista Equity Partners — $100B+ under management across 90+ companies — launched an "Agentic AI factory" in 2025; roughly a third of its portfolio companies are using those tools to automate tasks and improve productivity. Thoma Bravo acquired Verint Systems for $2B, merging it with portfolio company Calabrio to create a unified AI-powered contact center platform. The pattern: acquire AI-exposed businesses at appropriate discounts, transform them into AI-advantaged positions, and exit to buyers who see the new trajectory. But it requires new capabilities — technical diligence, transformation playbooks, and operators who understand what's actually possible.
The next generation of PE outperformance won't come from better deal sourcing or cheaper leverage. It will come from understanding which businesses can be transformed and having the capability to do it.
Professional Services
The Leverage Model
Professional services economics work like this: a partner with 20 years of experience supervises 5-10 junior professionals who do the bulk of the work. The partner's judgment directs the effort; the juniors execute research, analysis, drafting, and documentation. The firm bills the partner at $1,500/hour and the juniors at $400/hour, but the juniors do 80% of the hours. Profit comes from the spread.
The work that juniors do — research, first drafts, document review, data analysis, precedent searching — is exactly what AI does well. The leverage model breaks when the middle of the pyramid disappears.
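The pyramid arithmetic is worth running. Using the billing rates quoted above ($1,500 partner, $400 junior, juniors at 80% of hours), with fully loaded cost-per-hour figures that are hypothetical assumptions for illustration:

```python
def engagement(total_hours: float, junior_share: float) -> dict:
    """Revenue and gross margin for one engagement.

    Billing rates come from the text; the cost-per-hour figures are
    assumed for illustration, not sourced.
    """
    PARTNER_BILL, JUNIOR_BILL = 1_500, 400   # $/hour, from the text
    PARTNER_COST, JUNIOR_COST = 700, 150     # $/hour, assumed loaded costs
    junior_hours = total_hours * junior_share
    partner_hours = total_hours - junior_hours
    revenue = junior_hours * JUNIOR_BILL + partner_hours * PARTNER_BILL
    cost = junior_hours * JUNIOR_COST + partner_hours * PARTNER_COST
    return {"revenue": revenue, "margin": (revenue - cost) / revenue}

before = engagement(100, 0.80)  # classic pyramid: $62,000 revenue
# AI compresses junior execution ~75% (80h -> 20h); partner hours hold:
after = engagement(40, 0.50)    # same deliverable: $38,000 revenue
```

Under hourly billing, revenue on the same deliverable falls roughly 39% even though the percentage margin barely moves. The spread doesn't get thinner; the billable base it sits on collapses.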
What AI Changes
The pattern: tasks that took junior professionals hours or days now take minutes. The senior judgment that directs the work still matters — often more than before — but the execution layer compresses dramatically.
The Pricing Problem
Clients aren't stupid. When they see AI doing work that used to take 40 billable hours, they won't pay for 40 hours. The conversation shifts:
Old conversation: "This contract review will require approximately 60 hours of associate time at $450/hour, plus 5 hours of partner oversight at $1,200/hour."
New conversation: "Why am I paying $27,000 for something your AI did in an afternoon? What am I actually paying for?"
Firms face a choice: capture the efficiency internally (same price, higher margin, but clients eventually notice), or pass it through (lower prices, same margin, but revenue shrinks). Neither is comfortable. The firms that win will find a third path — new pricing models based on value delivered rather than hours spent.
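The two options can be quantified directly from the rates in the old conversation above. The 90% reduction in associate hours is a hypothetical assumption for illustration:

```python
ASSOC_RATE, PARTNER_RATE = 450, 1_200  # $/hour, from the conversation above

def bill(assoc_hours: float, partner_hours: float) -> float:
    """Hourly bill for one matter at the rates quoted in the text."""
    return assoc_hours * ASSOC_RATE + partner_hours * PARTNER_RATE

before = bill(60, 5)       # the old engagement: $33,000
# Assume (hypothetically) AI cuts associate execution time by 90%:
pass_through = bill(6, 5)  # bill the real hours: $8,700, a ~74% revenue cut
capture = before           # hold the price: margin expands, until clients notice
```

Either branch breaks something: pass-through shrinks the top line by roughly three-quarters on this matter; capture preserves it only as long as the client doesn't ask the question above.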
The Talent Pipeline Problem
Junior professionals learn by doing. The associate who reviews 10,000 documents develops judgment about what matters. The analyst who builds 50 financial models understands how the pieces connect. The junior consultant who sits through 30 client interviews learns to read a room.
If AI does the junior work, how do juniors learn? And if juniors don't learn, where do future partners come from?
This isn't a theoretical problem — it's happening now. Firms report that training timelines are extending because juniors get less repetition. Partners complain that associates "don't have the reps." The apprenticeship model that built expertise over 10-15 years is breaking down. For investors, the implication is direct: the value of a traditional firm's "talent pipeline" may be overstated if that pipeline no longer produces fully trained seniors on the old timeline.
Winners and Losers
| Positioned to Win | Under Pressure |
|---|---|
| Boutique, senior-heavy firms — less leverage to lose, more judgment to sell | Large leverage-dependent firms (Big Law, Big 4, major consultancies) — pyramid economics are exactly what AI disrupts |
| AI-native entrants built from scratch — no legacy economics to protect | Commodity service providers (doc review, basic compliance, routine transactions) — pure substitution risk |
| Firms already pricing on outcomes rather than hours — the model everyone now needs | Slow adopters waiting to see how it plays out — clients and talent move first |
What This Means for Clients
Renegotiate scope and pricing. If your law firm or consultant is using AI tools, the economics have changed. You shouldn't pay 2019 rates for 2026 delivery.
Evaluate AI capability directly. Ask what tools they're using, how they're integrated into workflows, and what the quality control looks like. This is now a procurement criterion.
Consider unbundling. The commodity parts of professional services (document review, basic research, routine filings) can often be done by AI-native specialists at a fraction of traditional cost.
Watch for talent flight. The best professionals are leaving traditional firms to build or join AI-native practices. Make sure your advisors are on the right side of this transition.
Professional services isn't going away. Expertise matters more than ever. But the delivery model is being rebuilt, and the firms that recognize this early will define the next era. The rest will find themselves defending high rates for work that clearly doesn't require that many hours.
The Compression of Prompt Engineering
What follows is the 2023-era artifact: a two-page prompt that had to encode, instruction by instruction, how a PE associate thinks.
When you analyze a deal, you think about: (1) downside protection first, (2) base case returns, (3) upside optionality. You never lead with the bull case. You always identify the "kill shot" — the single risk that would make you walk away — before evaluating the opportunity.
Your communication style is direct and concise. You write for a senior partner who has 15 minutes to decide whether this deal deserves a second look. No throat-clearing. No "it depends." Take a position.
EXECUTIVE SUMMARY (3-4 sentences. Lead with your verdict: Pass, Proceed to DD, or Conditional Proceed. State the single most compelling reason and the single biggest risk.)
BUSINESS QUALITY ASSESSMENT
— Revenue quality score (1-10) with justification
— Customer concentration analysis (top 10 customers as % of revenue)
— Recurring vs non-recurring revenue breakdown
— Organic growth rate vs. acquisition-driven growth
— Gross margin trajectory and sustainability
MANAGEMENT & OPERATIONS
— Key person risk assessment
— Bench depth below C-suite
— Operational efficiency (revenue per employee trends)
— Capital allocation history and discipline
FINANCIAL ANALYSIS
— EBITDA quality and adjustments (flag any aggressive add-backs)
— Working capital dynamics
— CapEx requirements (maintenance vs. growth)
— Free cash flow conversion rate
— Debt capacity analysis
VALUATION & RETURNS
— Implied entry multiple on adjusted EBITDA
— Comparable transactions and public comps
— Base case IRR (25th, 50th, 75th percentile)
— Key assumptions driving returns
— Sensitivity on 2-3 critical variables
VALUE CREATION PLAN
— Top 3 operational improvements with quantified impact
— Potential add-on acquisition targets
— Pricing power assessment
— Technology/automation opportunities
RISK MATRIX
— The kill shot (walk-away risk)
— Top 5 risks ranked by probability × impact
— Mitigants for each
VERDICT
— 2-sentence recommendation
— Conditions for proceeding, if applicable
— DO NOT take management projections at face value. Apply a 20-30% haircut to forward revenue projections and evaluate what the business looks like under conservative assumptions.
— DO NOT use generic language like "strong market position" or "attractive growth profile" without specific evidence.
— DO flag any EBITDA adjustments that exceed 15% of reported EBITDA as potentially aggressive.
— DO calculate implied customer lifetime value if data is available.
— DO identify the 2-3 metrics that matter most for this specific business and explain why.
— DO compare stated growth rates against industry benchmarks.
— DO NOT write more than 2 pages total. Density over length.
— DO use specific numbers from the CIM. Never say "significant" when you can say "34%."
// The goal is a document I can hand to my managing partner
// that gives him everything he needs in a 15-minute read
// to decide: do we take the next meeting or not?
— Write at a Wharton MBA level, not a blog post level
— Use tables where they communicate faster than prose
— Bold the single most important number in each section
— If the CIM is missing critical data, flag it explicitly as a red flag — don't fill in assumptions silently
— End every section with a one-line "so what" that connects back to the investment decision
Today, a one-sentence request produces the same structured analysis — executive summary with a verdict, revenue quality scoring, EBITDA adjustment flags, risk matrix with a kill shot identified, value creation levers, and a clear recommendation. The AI now knows how a PE professional thinks. It knows what "be direct" means in this context. It knows to haircut projections, flag customer concentration, and lead with downside.
Every instruction from the 2-page prompt above? The reasoning model already has it internalized.
Why this happened
Reasoning models don't just pattern-match — they think. Early models needed every instruction because they were essentially sophisticated autocomplete. They'd generate PE-sounding language, but they didn't understand PE logic. You had to encode the logic yourself, in the prompt. Today's models (Claude with extended thinking, OpenAI's o1, Gemini Deep Research) have internalized domain expertise. When you say "PE associate," the model understands the analytical framework, the skepticism, the format, the audience, the decision structure. You're directing intent, not programming behavior.
Prompt engineering was a bridge technology — necessary when the models were powerful but lacked judgment. As reasoning improved, the prompts collapsed. The skill that replaced it isn't "how to write better prompts." It's knowing what to ask for — understanding the domain well enough to describe the outcome, and trusting the AI to figure out the process. The expertise moved from the prompt to the person.
The AI Graveyard
Here Lies the Middleware
For most single-document and bounded-context tasks, the middleware is gone. The reasoning model already knows how to think, what format to use, and what matters.
The ICU
Not dead — but the use case that justified the hype is shrinking fast.
Most of these were workarounds for limited reasoning and context. Others persist for scale, governance, and control. But for the majority of bounded tasks, the middleware collapsed. The stack went from 12 layers to 3.
What’s Next
Capabilities shipping now or in beta — the window between preview and broad rollout is shrinking. Weeks in the fastest-moving products, months in the rest.
AI capabilities aren't launching like software — they're compounding like interest. Each wave makes the next one faster. The question isn't whether this arrives. It's whether you're positioned when it does.
Where This Thesis Could Be Wrong
The case above is deliberately confident, and parts of it could break down. But being wrong about timing is different from being wrong about direction. The question for any specific decision is: what's the timeline that matters for you?
Toolbox
Frameworks and diagnostics from this thesis — built to be pulled out and used independently.
Sources
AI Code Generation & Adoption
Tan, G. (March 2025). YC W25 batch growth rates. CNBC.
Workforce & Productivity
Brynjolfsson, E., Chandar, B. & Chen, R. (August 2025). "Canaries in the Coal Mine? Six Facts about the Recent Employment Effects of Artificial Intelligence." Stanford Digital Economy Lab.
Uplevel (2024). Gen AI for Coding Research Report. Bug introduction rates with AI tools.
Docker/DX (2025). AI productivity divide across experience levels.
Company Data
Anysphere/Cursor (November 2025). $1B ARR, ~300 employees, 36% freemium conversion. CNBC, Contrary Research.
Klarna (2025). Headcount reduction, revenue growth, IPO at $19.6B valuation. CNBC, Al Jazeera.
Stock performance data: Public market data, November 30, 2022 through February 2026.
Board Governance
Deloitte (2025). "Governance of AI: A Critical Imperative for Today's Boards."
Korn Ferry (2025). CEO & Board Survey: AI as strategic priority.
Legal Profession
Everlaw/ACEDS/ILTA (2025). 2025 Ediscovery Innovation Report. 32.5 days reclaimed, 90% billing impact expectation.
ACC/Everlaw (October 2025). Generative AI's Growing Strategic Value for Corporate Law Departments. 60% no savings, 64% expect reduced reliance on outside counsel.
Market Performance
Morningstar (January 2026). "AI Stocks: Winners, Laggards, and Losers of 2025." Equal-weighted 34-stock AI basket vs. Morningstar US Market Index.
Investor & PE Data
Bloomberg (2025). "AI Is the Hot Topic in Tech Earnings and a Blind Spot Everywhere Else." Analyst coverage gap analysis.
PwC (2025). Global Investor Survey. AI disclosure adequacy ratings.
AIMA (2025). Front Office Gen AI Adoption Survey. 95% fund manager adoption rate.
Finerva (2024). "EdTech 2025 Valuation Multiples." Revenue multiple compression 7.2x to 1.6x.
The Information (2024). Jasper AI internal valuation cut, revenue decline.
S&P Global (December 2025). Average PE hold periods. McKinsey Global Private Markets Report (2026).
Bain & Company (2026). Global Private Equity Report. "12 is the new 5" deal return math.
EY (Q4 2025). PE Pulse. 71% of exit value from revenue growth.
Deloitte (2025). GenAI in M&A Survey. 86% PE adoption rate.
FTI Consulting (2024). Private Equity AI Survey. 59% consider AI key value creation factor.
BCG (September 2025). "The Widening AI Value Gap." Only 5% achieving AI value at scale.
UBS (February 2026). Private credit AI disruption exposure analysis. 25-35% exposure estimate.
Microsoft/arXiv (2023). GitHub Copilot productivity study. 55.8% faster task completion.
February 2026 Software Sell-Off
Fortune/Deutsche Bank (February 2026). ~$1 trillion in software market cap losses.
Bloomberg (February 4, 2026). "SaaSpocalypse" coverage. ~$285B erased in single session.
Goldman Sachs (February 2026). Software basket 13% single-day decline. Forward multiples 39x to 21x.
CNBC (February 2026). Anthropic enterprise agent launch and SaaS stock impact.