The rules have changed.

The wall between product design and code has collapsed.

Every industry has knowledge that's hard to codify — workflows with hidden dependencies, context that only the practitioners understand, judgment calls that never fit neatly into a spec. Getting that expertise into working software has always been the problem. The translation from domain knowledge to requirements to code to testing diluted everything. Too slow, too expensive, too much lost along the way. That's what changed.

What Changed

For decades, domain experts described problems and engineers translated them into software. That translation layer introduced friction, delay, and distortion.

Before: Domain Expertise → Requirements → Specs → Scoping → Product Managers → Engineers → Designers → Handoffs → Architects → Coordinators → Sprints → Meetings → Delay → Budget → Final Product. Months. Millions. Meaning lost.

Now: Domain Expertise → Direct → Final Product. Days. Direct. Nothing lost.

Today, domain knowledge can go directly to working software. That changes where value sits.

Ideas are cheap, execution is expensive → Execution is cheap, judgment is expensive
Technical talent is the bottleneck → Problem selection is the bottleneck
Domain experts pay engineers to build → Domain experts build directly
Months to prototype → Hours to prototype
50–100 engineers for scale → Small teams at unprecedented scale

The Cost of Translation

Before: Expert's Vision → Requirements Document → Engineer's Understanding → Shipped Product. Domain knowledge is preserved at the start and lost in translation at every handoff after it.

Now: Expert's Vision → Working Product. High fidelity.

Time Compression

The Old Process: Expert → Define → Handoff (PRD) → Engineer → Build → QA/Revise → Test. 3–6 months to prototype.

The New Process: Expert → Define → Build → Iterate. Days to weeks to prototype.

This Feels Different

35 Years (1990s through 2025) — Software. Data. Internet. Mobile. Cloud.
  1990s: Email, Excel, Bloomberg — faster access to information
  2000s: SaaS, data platforms, Salesforce — smarter workflows
  2010s: Cloud, mobile, big data — scale everything
How they sold: efficiency and intelligence. What happened: head counts increased.

Now — AI. Feels fundamentally different. What happens: not sure yet.

Where Value Lives

If the translation layer is gone, then the old source of competitive advantage — having engineers who could build it — is gone too. The question becomes: where does value actually live now?

The Commodity: AI Access

The AI is the engine. Anyone can get API keys.

The Value: The Logic Layer

The structured thinking you wrap around the AI to produce specific, useful outputs.

That's where domain expertise meets design.
  • What questions get asked
  • In what sequence
  • With what context
  • How outputs get structured
  • What gets surfaced vs. filtered
  • Product testing, re-envisioned
These aren't "AI tools." They're information products and decision engines.

Where IP Lives Now

A three-layer stack:
  • The Logic Layer — where IP lives. The structured reasoning that turns commodity AI into a differentiated product.
  • Domain Knowledge — what you know. Years of expertise, industry context, edge cases, and judgment calls.
  • AI / LLM — what everyone has. The engine. Powerful, but commodity.

Where to Invest

Plot every input on two axes — generic vs. unique, low value vs. high value — and four quadrants emerge:
  • Generic + Commodity: LLM APIs, cloud compute, base models
  • Generic + Low Value: basic prompts, off-the-shelf tools, generic workflows
  • Unique + Emerging: domain knowledge, expert intuition, industry context
  • Unique + High Value: the logic layer — decision frameworks, structured workflows

The top-right quadrant is where defensible value lives. Not the AI itself — everyone has that. The structured thinking you wrap around it.

Cursor — an AI-powered code editor built by Anysphere — went from launch to $1 billion in annualized revenue in roughly two years, with about 300 employees (CNBC, November 2025). The AI underneath is commodity (Claude, GPT-4). What’s valuable is how Cursor structures the interaction: codebase-wide context, multi-file editing agents, native terminal access, agent workflows. That’s the logic layer. It’s why developers pay $20/month for something built on top of models anyone can access.

This pattern recurs. The AI is the engine. The logic layer is the product.

The Cost Collapse

This isn’t just a pricing story. Cost, context capacity, and reasoning capability all improved simultaneously. That convergence is why everything in this thesis is happening now.

This Is Faster Than Moore's Law

Moore's Law (transistors): 2x every ~2 years. Transistor density doubling held for 50+ years — one of the most reliable trends in technology history.

AI cost curve: 6x cheaper, 250x more context, in 35 months. The best model on earth costs 1/6th what the frontier cost three years ago. Context capacity grew 250x. Reasoning went from "autocomplete" to "analyst." Simultaneously.

Moore's Law improved one dimension (density) on a predictable curve. AI is improving three dimensions simultaneously — cost, capacity, and capability — on a curve steeper than anyone predicted. That's why everything is accelerating now.

Cost per Million Tokens: $30 in March 2023 (GPT-4) → $5 in February 2026 (Claude Opus 4.6). 6x cheaper — and far more capable. Smaller Claude models run under $1/M.

Context Window: 4K tokens (~6 pages) in March 2023 → 1M tokens (~1,500 pages) in February 2026. 250x larger in under three years. You went from pasting a paragraph to pasting an entire book. No chunking, no retrieval pipelines. Just paste it.

Real-world equivalence — what the collapse means in practice:

  • Analyzing one 50-page CIM: ~$100–200 in 2023, through GPT-4 with a RAG pipeline (embedding + retrieval + multiple calls + reranking). Under $1 in 2025 — same analysis, better result. Paste the whole CIM into a single 1M-context call.
  • Fine-tuning a legal domain model: $50K+ in 2023 — curate training data, run training jobs, evaluate, iterate; a 3–6 month project. ~$0 in 2025 — the base model already outperforms most fine-tunes, with marginal cost near zero and no dedicated infra required.
  • Building and maintaining a RAG pipeline: ~$2,500 in 2023 — vector DB, embedding API, retrieval logic, reranking, chunking strategy. ~$0 in 2025 — with a 1M-token context window you paste the whole thing; no pipeline needed for most use cases.

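
The figures above reduce to simple arithmetic. A quick sanity check in Python — the prices and window sizes are the section's own; the ~650 tokens-per-page estimate is a rough rule of thumb, not a spec:

```python
# Back-of-envelope check of the cost-collapse figures in this section.
PRICE_2023 = 30.0       # $ per million input tokens (GPT-4, Mar 2023)
PRICE_2026 = 5.0        # $ per million input tokens (frontier model, Feb 2026)
CTX_2023 = 4_000        # context window, tokens
CTX_2026 = 1_000_000

price_drop = PRICE_2023 / PRICE_2026        # 6.0x cheaper
ctx_growth = CTX_2026 / CTX_2023            # 250.0x larger

# A 50-page CIM at ~650 tokens/page now fits in a single call:
cim_tokens = 50 * 650                       # 32,500 tokens
fits_in_one_call = cim_tokens <= CTX_2026   # True; impossible in a 4K window
single_call_cost = cim_tokens / 1e6 * PRICE_2026   # $0.16, i.e. "under a dollar"

print(f"{price_drop:.0f}x cheaper, {ctx_growth:.0f}x more context, "
      f"${single_call_cost:.2f} per CIM")
```

The point of the exercise: the "under $1" claim above is not marketing rounding; a whole-document call at current frontier pricing costs cents.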
Why this matters for builders

The cost collapse isn't just a pricing story — it's an access story. When a CIM analysis costs $100+, only PE firms run them. When it costs less than a dollar, anyone can. When fine-tuning costs $50K, only enterprises customize AI. When base models are good enough, everyone gets the customized version. The technology didn't just get cheaper. It got democratic. And that's why one person can now do what took a team.

The Speed of Shipping

Exhibit: A Billion Dollars With Almost Nobody

The old playbook is dead

It used to take 1,000+ employees, a decade, and hundreds of millions in funding to build a billion-dollar company. AI-native companies are doing it with teams smaller than a high school basketball roster. The structural advantage isn't just technology — it's that the technology eliminated the need for the army.

Cursor — AI-powered code editor by Anysphere. $1B ARR, ~300 employees, ~2 years to $1B. Revenue per employee: $3.3M, vs. Salesforce at ~$375K — roughly 9x the efficiency.

Midjourney — AI image generation platform. An estimated $200M+ in revenue, ~40 employees, $0 VC funding. Revenue per employee: $5M+. No VC. No office. Bootstrapped to absurdity.

Bolt.new — AI full-stack app builder in the browser. $40M+ ARR (est.), <50 employees, ~6 months to product. Revenue per employee: ~$800K+. Lets non-developers build full apps — the tool that builds tools.

Perplexity — AI-native search engine. $100M+ ARR, ~200 employees, $9B valuation. Revenue per employee: ~$500K. Taking on Google Search with a fraction of the headcount. AI-native from day one.
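
The revenue-per-employee comparison is worth checking. A minimal sketch using only the figures stated in this exhibit:

```python
# Sanity check of the Cursor card above: $1B ARR across ~300 employees,
# against the stated Salesforce baseline of ~$375K per employee.
cursor_rpe = 1_000_000_000 / 300           # ~$3.33M per employee
salesforce_rpe = 375_000
efficiency = cursor_rpe / salesforce_rpe   # ~8.9x, the "9x" in the card

print(f"${cursor_rpe/1e6:.1f}M per employee, {efficiency:.1f}x Salesforce")
```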
It's not just companies — individuals are shipping products from home:

  • AI tools — solo SaaS builders: one person building and selling AI-powered tools (form builders, email assistants, analytics dashboards) with zero employees. $10K–$50K MRR is common.
  • Content — AI-assisted newsletters: writers using AI for research, drafting, and distribution; one person producing institutional-quality analysis weekly. Paid subscriber bases of 5K–50K.
  • Development — vibe coders: non-developers shipping real applications using Claude Code, Cursor, and Bolt, building SaaS products without a CS degree. From zero to deployed in days.
  • Services — AI-augmented consultants: solo consultants delivering enterprise-level analysis and deliverables. AI handles the production; the human handles the relationships. Premium pricing, solo delivery.

The structural comparison — Old Playbook vs. AI-Native Playbook

Dimension              | Traditional (Pre-2023)                    | AI-Native (2025+)                                  | Shift
Time to $1M ARR        | 18–36 months                              | 3–6 months                                         | 6x faster
Team to launch         | 8–15 people (dev, design, ops, marketing) | 1–3 people (+ AI)                                  | 5x leaner
Seed capital needed    | $1M–$3M                                   | $0–$50K                                            | 20x cheaper
Revenue per employee   | $200K–$400K                               | $1M–$5M                                            | 5–25x higher
Code required to ship  | Months of custom development              | Days with AI coding tools                          | 10x faster
Design quality         | Requires dedicated designer               | AI generates production-grade UI                   | $0 design cost
Market research        | Hire analysts, consultants ($50K+)        | AI produces institutional-quality analysis         | 100x cheaper
Competitive moat       | Capital, headcount, brand, distribution   | Speed, taste, domain expertise, iteration velocity | Structural shift

The implication

Cursor didn't beat Salesforce's engineering team by hiring better engineers. Midjourney didn't beat Adobe by spending more on R&D. They won by building AI-native from day one — no legacy architecture, no bloated org chart, no traditional development cycles. The advantage isn't just speed. It's that the old advantages — capital, headcount, infrastructure — are becoming liabilities. The companies that moved fastest moved lightest. That's the structural lesson for everyone watching.

What Used to Take a Team — Illustrative

7 roles that would have been hired. $415K+ in annual fully-loaded cost. 1 person who actually did the work. ~$2,400 in annual AI subscription and API costs.

The Team You'd Need to Hire — Without AI

  • Project Lead / You — vision and direction
  • Research Analyst ($75K) — market research, competitive analysis, data sourcing, statistics verification, CIM analysis. Not hired.
  • Copywriter / Editor ($65K) — investment thesis prose, report writing, newsletter copy, editing for tone and clarity. Not hired.
  • UI/UX Designer ($80K) — visual design systems, typography, color palettes, responsive layouts, data visualization. Not hired.
  • Front-End Developer ($90K) — HTML/CSS/JS, responsive code, navigation systems, interactive elements, browser extension. Not hired.
  • Data Viz Specialist ($70K) — charts, scorecards, matrices, Bloomberg-style layouts, infographics, sector grids. Not hired.
  • DevOps / Deploy ($85K) — Netlify config, Cloudflare DNS, GitHub CI/CD, SSL, domain setup, API integration. Not hired.
  • QA Tester ($55K) — cross-browser testing, mobile responsiveness, link verification, print/PDF QA. Not hired.

vs.

One Person + AI — every role, every deliverable. ~$2,400/yr: Claude Pro $240/yr · Claude Code $240/yr · API calls ~$600/yr · domains ~$100/yr · Netlify, Cloudflare, GitHub, and Supabase free tiers.

  • Research & Analysis — AI reasons through CIMs, market data, competitive dynamics. You direct the inquiry.
  • Writing & Editing — investment-grade prose. Iterative refinement. Tone-matching. 24-page thesis quality.
  • Design Systems — typography, color, layout, information hierarchy. Professional visual output.
  • Front-End Development — HTML, CSS, JS. Responsive. Interactive. Production-ready code.
  • Data Visualization — scorecards, matrices, Bloomberg-density layouts, charts, sector analysis grids.
  • Infrastructure — Netlify deploy, Cloudflare DNS, GitHub repos, API integration, environment config.
  • QA & Testing — mobile responsiveness, cross-browser, print CSS, PDF conversion with Playwright.
  • Project Management — context docs, CLAUDE.md files, iterative feedback loops, version control.

Annual cost comparison: a traditional team at $415,000+ fully loaded vs. one person + AI at ~$2,400 — a 173x cost reduction. Not a rounding error — a structural shift in what it costs to produce professional-grade work. The team didn't get cheaper. It got unnecessary.

The point

This isn't about AI being "good enough." It's about the cost structure of professional-grade output fundamentally changing. The barrier to creating institutional-quality work just went from a headcount problem to a skills problem. And AI is solving the skills problem too.

Valuation in Uncertainty

The standard valuation playbook assumes you can project cash flows with reasonable confidence. Pay 10x EBITDA because you believe the business will look roughly similar in five years. That assumption is breaking down.

The Core Problem

When small teams with domain expertise can build what used to require large engineering organizations, the definition of "defensible" changes.

You can't pay 10x cash flow when you don't know if the cash flow exists in year three.

The exposure looks like this:

  • Revenue at risk from AI-native entrants who can undercut on price and outpace on iteration.
  • Proprietary software that may be worth a fraction of its carrying value if small teams can rebuild the core functionality.
  • Goodwill impairment risk when "moats" acquired at premium multiples turn out to be data piles.
  • Human capital risk — losing the senior talent who understand AI while retaining junior-heavy org charts designed for a different era.

New Due Diligence Questions

Beyond the standard financial and operational diligence, these questions now matter:

Disruption exposure: What would it take for a well-funded AI-native startup to capture 20% of this business's market in 24 months? What's the specific attack vector?

Labor composition: What percentage of operating costs are in roles that AI could automate in the next 3 years? What's the plan?

Software defensibility: If the proprietary software could be rebuilt by a small team using AI tools, what actually creates switching costs?

Data moat: Does the business generate proprietary data that improves over time, or is it using commodity data?

Management awareness: Does leadership understand these dynamics, or are they assuming business as usual?

Implications for Multiples

This doesn't mean everything is worthless — it means valuation needs to be more granular. Businesses with genuine defensibility (data flywheels, deep workflow integration, regulatory moats, domain-specific logic layers) may deserve premium multiples. Businesses that are essentially labor arbitrage or commodity software wrapped in a brand are more exposed than traditional analysis suggests.

The spread between "AI-advantaged" and "AI-exposed" businesses will widen and accelerate.

Workforce Shifts

The way companies build teams is changing — not gradually, but structurally.

Headcount Decoupling

Companies are reaching significant scale with teams that would have been impossible two years ago. Y Combinator’s Winter 2025 batch grew revenue 10% per week in aggregate — the fastest in YC history. A quarter of those companies reported codebases that were 95% AI-generated (CNBC, Garry Tan, March 2025).

Headcount is decoupling from output. The constraint shifts from "how many people can we hire?" to "how good is our judgment about what to build?"

The Junior Role Transformation

Junior roles existed for two reasons: to do work that didn't require senior judgment, and to train the next generation of seniors. AI disrupts both. The work that trained juniors — research, first drafts, data processing, documentation — is being automated. But junior roles won't disappear — they'll transform.

Old Junior
Executor

Do the research. Write the first draft. Process the data. Follow the template. Repeat.

New Junior
AI Director

Frame the problem. Direct AI execution. Evaluate outputs. Learn judgment faster.

One data point worth watching: a Stanford study found that employment among software developers aged 22 to 25 fell ~15–20% between 2022 and 2025, coinciding with the rise of AI coding tools (Brynjolfsson, Chandara & Chen, Stanford Digital Economy Lab, August 2025). Meanwhile, studies across hundreds of thousands of developers show senior engineers are twice as likely to report significant speed gains from AI tools, and are far better at catching and correcting AI mistakes — turning AI into a genuine force multiplier rather than a source of bugs (Docker/DX, 2025). The gap is widening, not narrowing.

If a senior engineer with AI tools can reliably direct, evaluate, and ship AI-generated output while a junior introduces a 41% increase in bugs (Uplevel, 2024), the ROI on senior compensation changes dramatically. Paying $400K for a senior who replaces the output of what used to require a five-person team isn't expensive — it's a bargain. The math inverts the traditional pyramid: fewer, more experienced people generating more output at higher per-head cost but lower total cost.

The Hiring Pipeline Problem

The Stanford employment data raises a question that university CS programs and corporate recruiters haven't answered yet: if entry-level hiring contracts by 15–20%, where does the next generation of senior talent come from? The traditional pipeline — hire juniors, train them over 5-10 years, promote the best — assumed a steady intake at the bottom. That intake is compressing.

Companies that solve this will do two things differently. First, they’ll redesign junior roles around AI direction rather than manual execution — hiring for judgment and problem framing rather than raw technical skill. Second, they’ll compress the timeline from junior to senior by exposing new hires to higher-level decisions earlier, using AI to handle the rote work that used to consume years of an early career.

What Moats Look Like Now

Warren Buffett's moat concept assumed competitive advantages that compound over decades. Some still do. But when execution gets cheap and teams get small, the old barriers to entry — engineering headcount, software complexity, process knowledge — stop protecting you. The durability calculus has changed. Some moats are weaker, some still hold, and new ones are emerging.

Weakening:
  • Software complexity — small teams can rebuild core functionality in months
  • Engineering headcount — 20 AI-native engineers can outship 200
  • Process knowledge — AI can execute, so knowing how matters less than knowing what
  • Content libraries — AI generates and curates at scale

Still holds:
  • Regulatory barriers — AI can't lobby; compliance creates real friction
  • Physical infrastructure — AI doesn't build warehouses or lay fiber
  • True network effects — AI makes it easier to build, not easier to get adoption
  • Trusted relationships — AI accelerates delivery but not trust

The Moats That Matter Now

  • Data flywheels — proprietary data that improves the product, which generates more data. The loop compounds.
  • Iteration speed — when AI commoditizes building, the moat is how fast you run the full loop: sense → build → ship → learn.
  • Domain logic layers — proprietary frameworks for how AI gets applied to specific problems. The thinking, not the technology.
  • Workflow integration — deep embedding in how customers work. Not a tool they use — a system they can't remove.

A 10-year competitive advantage might now be a 3-year advantage. The durable businesses will be the ones that continuously rebuild their moats — not the ones that assume today's will hold forever.

Evaluating AI-Native Businesses

A new category of business is emerging that requires different evaluation frameworks. These companies are built from the ground up around AI capabilities — not bolted on after the fact.

AI-Enabled:
  • AI added to an existing product
  • Traditional team structure with AI tools
  • 80%+ gross margins, low marginal costs
  • Could exist (worse) without AI

AI-Native:
  • Product impossible without AI
  • Small team; AI handles execution
  • 40–70% gross margins; costs scale with usage
  • Business model requires AI to function

Red flags:
  • Thin wrapper — a nice UI on a single API call, no logic layer
  • No answer for model dependency or provider lock-in
  • "We're collecting data" with no compounding flywheel
  • ARR growth that hides negative gross margins

Green flags:
  • Deep domain expertise — the team knows the edge cases and real workflows
  • Model-agnostic architecture that can switch providers
  • A clear data flywheel — usage improves the product, which drives more usage
  • Honest unit economics with a path to sustainable margins

The best AI-native businesses combine domain expertise with defensible IP — the kind that compounds with usage and can't be replicated by switching models.

If You Sit on a Board

Most boards have historically treated AI as a technology initiative — something the CTO presents once a quarter, sandwiched between a cybersecurity update and a cloud migration timeline. The framing has changed. AI is a strategic question with direct implications for competitive position, workforce structure, capital allocation, and enterprise value.

Delegating AI to the technology committee is like delegating the internet to the IT department in 1998.

The Fiduciary Question

There's a point — and we're approaching it — where failure to understand AI's impact on the business becomes a governance failure. Not because every board member needs to use ChatGPT, but because the board's primary job is to ensure the company is positioned for the future, and AI is reshaping what that future looks like faster than any technology shift since the internet. A board that can't evaluate whether management has a credible AI strategy is a board that can't do its job.

72% of directors say AI is a top-three strategic priority — but only 23% say their board has sufficient AI expertise to provide effective oversight (Deloitte/Korn Ferry Board Surveys, 2025). That gap is the governance risk.

What You Should Be Asking Management

Five questions that separate boards doing their job from boards going through the motions. If your CEO can't answer these clearly, that's the finding.

"Where is AI already changing our competitive landscape — not theoretically, but right now?" You're testing whether management is tracking AI-native entrants and incumbents that are pulling ahead. If the answer is vague or dismissive, they're not watching.

"What's our logic layer — the proprietary reasoning we've built around AI that competitors can't easily replicate?" If management can't articulate this, the company is using AI as a tool rather than building AI into its competitive position.

"How has our revenue per employee changed in the last 12 months, and what's the target for the next 12?" This is the single most important metric for measuring whether AI is creating value or just creating presentations. If the number isn't moving, the AI strategy isn't working. (Adjust for model/API spend and contractor usage — the metric is easy to game with outsourcing.)

"What's our plan for the 30% of roles that are primarily execution-based?" Not "are we looking into AI" but "what's the specific plan, with timelines, for restructuring execution-heavy functions?"

"If we were starting this company today with AI-native tools, how would it look different from what we have?" This is the hardest question because it forces honesty about legacy structures. The gap between "what we have" and "what we'd build" is the size of the transformation required.

Asking the right questions is the start. Scoring the answers is the discipline. Use the Board Diagnostic to assess whether your company is AI-Advantaged or AI-Exposed — six binary questions, a score, and a prescribed action for each tier.

The Diagnostic

Six questions. Ten minutes. A structural read on whether your company is positioned to win or exposed to disruption.

A Board-Level Diagnostic for the AI Era
AI-Advantaged or AI-Exposed?
01. Could an AI-native startup replicate your core value proposition within 18 months?
If No → +1 Point

Your value proposition requires proprietary data, regulatory approval, physical infrastructure, or deep domain relationships that can't be replicated by a well-funded team with GPT-5. You have structural protection.

If Yes → 0 Points

Action: Identify which parts of your offering are defensible and which are exposed. Double down on what's hard to replicate. Assume someone is already building the AI-native version of everything else.

02. Is your revenue per employee higher today than it was two years ago?
If Yes → +1 Point

You're capturing AI leverage. Output is growing faster than headcount. This is the clearest measurable signal that AI is creating value in your organization rather than just creating demos.

If No → 0 Points

Action: Audit where AI tools are deployed versus where they're actually changing output. If you have AI tools and revenue per employee hasn't moved, you have an adoption problem, not a technology problem.

03. Does your business generate proprietary data that makes your AI capabilities better over time?
If Yes → +1 Point

You have a compounding advantage — each customer interaction improves your product, which attracts more customers. This is the strongest AI-era moat.

If No → 0 Points

Action: Map every customer touchpoint and identify where proprietary data is being generated or could be. If you're not collecting it, start. If you're collecting it but not using it to improve the product, that's your next project.

04. Can you articulate your "logic layer" — the proprietary reasoning between a base AI model and your customer?
If Yes → +1 Point

You’ve built structured IP that turns commodity AI into differentiated output. This is what competitors can’t copy.

If No → 0 Points

Action: Start building it. Identify the 10 decisions your best people make that require judgment, then codify those decisions into structured workflows that can be augmented by AI. Your experts' intuition is your IP — capture it.

05. Is your pricing power intact — are customers paying the same or more, not less, since AI became available?
If Yes → +1 Point

AI is making your offering more valuable, not just cheaper to deliver. You're on the right side of the value equation. Customers are paying for outcomes, not hours or seats.

If No → 0 Points

Action: If customers are demanding lower prices because "AI should make this cheaper," you're losing the value argument. Shift the conversation to outcomes and results. If you can't quantify the outcome, that's the problem.

06. Are you actively upskilling your workforce on AI and retaining the senior talent who can direct it?
If Yes → +1 Point

You have a structured AI training program, your senior people are staying, and you're building the human capital to execute the strategy. Tools without trained people are shelfware. The companies pulling ahead are the ones whose best people know how to direct AI — and choose to stay because they see the opportunity.

If No → 0 Points

Action: Audit your AI training investment and your attrition among senior talent. If your best people are leaving for AI-native companies, you have a strategy credibility problem — they don't believe in your plan. If nobody is trained, your AI tools are running at 10% of their potential. Fix both within 90 days.

Scoring

0–2: Exposed — structural risk is high.
Your business model is vulnerable to AI disruption on multiple fronts. This doesn't mean you're doomed — it means the current trajectory leads to compression. The playbook: identify what's truly defensible (relationships, regulatory position, physical assets), restructure around those moats, and treat everything else as a candidate for AI-native redesign.
Monday: Commission an honest assessment of which business lines are structurally threatened. Identify the 2–3 things you do that AI can't replicate and rebuild around them. Set a 90-day deadline to show measurable AI integration in core workflows — not a pilot, not a proof of concept, a production deployment.
3–4: On the Fence — transition underway but incomplete.
You have some structural advantages but haven't fully leveraged them. Most incumbents sit here. The risk is complacency — "we're using AI" is not the same as "AI is transforming our model." The playbook: accelerate what's working, kill what isn't, and measure ruthlessly.
Monday: Measure revenue per employee quarterly and report it to the board. Identify where your logic layer is thinnest and assign senior people to build it deeper. Run a red team exercise: have your best people build the AI-native competitor that would kill your business, then steal their playbook.
5–6: AI-Advantaged — structural position is strong.
You have the ingredients — logic layer, data flywheel, pricing power, and operational leverage. The playbook: widen the gap. Invest in the data flywheel, hire senior people who can build the logic layer deeper, and resist the temptation to slow down because you're ahead.
Monday: Explore whether your logic layer can become a platform that others build on. Watch your blind spots: the February 2026 software sell-off hit winners and losers alike. Being right about structure doesn't protect you from being wrong about timing or valuation.
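
For boards that want to run the diagnostic on more than one business line, the scoring bands above reduce to a few lines of code. A minimal sketch — the function name and the example answer vector are illustrative, not part of the diagnostic itself:

```python
def diagnostic_tier(points: int) -> str:
    """Map a six-question diagnostic score (0-6) to its tier,
    per the scoring bands described above."""
    if not 0 <= points <= 6:
        raise ValueError("score must be between 0 and 6")
    if points <= 2:
        return "Exposed"
    if points <= 4:
        return "On the Fence"
    return "AI-Advantaged"

# One point per protective answer across the six questions
# (hypothetical company used purely as an example):
answers = [True, True, False, True, False, False]
score = sum(answers)                      # 3
print(score, diagnostic_tier(score))      # 3 On the Fence
```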

If You Manage a Team

You're the one who has to make it real — restructuring workflows, rethinking roles, and delivering more output with a team designed for a different era. Middle management is both the most important layer for AI transformation and the most exposed to it.

The Manager's Dilemma

AI collapses the handoffs that created the need for coordination in the first place. Management doesn't disappear — it shifts from coordinating execution to directing judgment. The manager who thrives in this environment identifies which decisions require human judgment, designs workflows that put AI on execution and humans on evaluation, and measures output rather than activity.

The Team Audit

Run this exercise on your team this week. It takes an hour and it will change how you think about every role.

Step 1: List every role on your team and their primary outputs. Not job descriptions — actual outputs. What does each person produce in a typical week? Documents, analyses, code, designs, decisions, communications.

Step 2: For each output, estimate the split between execution and judgment. Execution is the work of producing the thing — the research, the drafting, the formatting, the data processing. Judgment is deciding what to produce, evaluating whether it's right, and adapting it to context. Be honest. Most outputs are 70-80% execution.

Step 3: Estimate how much of the execution component AI could handle today. Not in theory — with tools that exist right now. For most knowledge work, the answer is 40-70% of the execution layer.

Step 4: Do the math. If AI can handle 50% of the execution on a role that's 80% execution, that's 40% of the role's current time freed up. Across a 10-person team, that's the equivalent of 4 full-time roles worth of capacity. The question becomes: what do those people do with that time? More judgment work? Fewer people? Both?
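
The Step 4 arithmetic generalizes to any team. A minimal sketch — the inputs are the section's own illustrative estimates, not measurements:

```python
# Capacity freed by AI on a team, per the audit's Step 4 arithmetic.
execution_share = 0.80   # fraction of the role that is execution work (Step 2)
ai_coverage = 0.50       # fraction of that execution AI handles today (Step 3)
team_size = 10

freed_per_role = execution_share * ai_coverage   # 0.40 of each role's time
freed_fte = freed_per_role * team_size           # about 4 full-time equivalents

print(f"{freed_per_role:.0%} of each role freed, "
      f"about {freed_fte:.0f} FTEs of capacity")
```

Swap in your own estimates from Steps 1 through 3; the output is the capacity number you bring to the "more judgment work, fewer people, or both?" conversation.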

Restructuring a Team — The Practical Version

What to stop → what to start:
  • Hiring junior roles to handle volume → hiring fewer, more senior people who can direct AI at volume
  • Measuring hours worked or tasks completed → measuring outcomes delivered and the quality of judgment calls
  • Multi-step review chains for routine output → AI first-pass review with human evaluation of edge cases
  • Specialist silos that require coordination overhead → generalists with AI tools who own end-to-end outcomes
  • Training programs focused on execution skills → training programs focused on AI direction and judgment development

The Metric That Matters

Track one number: output per person per month. Define "output" in terms your function actually cares about — deals reviewed, campaigns launched, features shipped, cases resolved, whatever the unit of work is. If that number isn't rising quarter over quarter, your AI integration isn't working. If it is rising, you have the evidence to justify restructuring to leadership and the credibility to lead it.

Your First Action

Pick the single highest-volume workflow on your team — the thing that consumes the most hours across the most people. Redesign it around AI execution with human oversight. Give it 30 days. Measure the before and after. That result becomes your case study for transforming everything else. Don't ask permission to run a pilot. Run a pilot and present the results.

If This Is Your Career

The Honest Assessment

Most career advice about AI falls into two useless categories: panic ("your job is going away") or denial ("AI can't do what I do"). The truth is more specific and more actionable. The execution-versus-judgment framework in the Managers section applies to individual roles too. Your exposure depends on the ratio between those two in your day-to-day work.

The Career Diagnostic

Answer these honestly. Write down the answers — the act of writing forces precision that thinking doesn't.

What do you know that AI doesn't? Not what information you have — AI has more. What understanding do you have? What patterns can you see in your domain that come from years of experience? What do you know about how things actually work that isn't written down anywhere? That's your moat.

What can you evaluate that AI can't? AI can generate ten options. Can you reliably pick the right one for this specific situation? If yes, you're a judge — and judges are more valuable in a world with infinite generators. If no, you're competing with a machine that works faster and cheaper.

If AI handled 80% of your daily tasks, what's the 20% that requires you? If you can name it clearly, that 20% is your career. Invest everything in making it deeper and sharper. If you can't name it, that's the problem — and it's solvable, but not by waiting.

Are you learning to direct AI or learning to compete with it? The person who spends their evening mastering a new AI workflow is building leverage. The person who spends it perfecting a manual skill AI already does well is training for yesterday.

The Three Investments That Matter

Go deeper in your domain — not broader, deeper. The person who understands the nuances of healthcare reimbursement, or construction permitting, or derivatives pricing isn't threatened by AI. They're the person AI makes ten times more productive. Generalists who know a little about everything are exactly who AI replaces. Specialists who know everything about something are who AI empowers.

Learn to direct AI as a tool of your craft, not as a novelty. This doesn't mean taking a course on prompt engineering. It means integrating AI into your actual work — this week, not next quarter. Use it to produce first drafts you then refine. Use it to analyze data you then interpret. Use it to generate options you then evaluate. The goal is to develop judgment about when AI is right and when it's wrong in your specific domain. That judgment is worth more than any certification.

Build a reputation for judgment, not output. In a world where everyone can produce at volume, the person known for making the right call becomes disproportionately valuable. That means being visible about the decisions you make and why. It means developing a track record of good judgment that others can observe and rely on. It means positioning yourself as the person who knows what to do, not just the person who gets things done.

Your First Action

Take the single task you spent the most hours on last week. Do it again with AI handling the execution. Compare the output and the time. If it's 80% as good in 20% of the time, you've just found your leverage point — and you've identified that the value you add is in the evaluation and refinement, not the production. Build from there.

If You're Raising Kids

Every investor, operator, and board member reading this document is also thinking about their children. The question comes up at dinner, at school fundraisers, in conversations with college counselors: what should my kid study? What career paths still make sense? The thesis above provides a framework for answering — and the answer is counterintuitive.

The Old Advice Is Wrong

"Learn to code" was the career advice of the last decade. It was right then. It's incomplete now. Software development isn't disappearing, but the barrier to writing functional code has collapsed so far that coding skill alone is no longer a differentiator. The Stanford data on junior developer employment — a 15–20% decline among 22-to-25-year-olds — is the first hard signal. What's happening in software will happen across every field where AI can handle execution.

The old advice prioritized skills: learn Python, learn Excel, learn financial modeling, learn to write clean code. The new reality prioritizes understanding: learn how healthcare actually works, learn why supply chains break, learn what makes a legal argument persuasive, learn how buildings get permitted. Skills can be automated. Understanding compounds.

The Old Path
Learn the Skill

Study computer science. Learn to code. Get a technical certification. Specialize in a tool. The skill itself was the career.

The New Path
Learn the Domain

Study a field deeply. Understand how things actually work. Develop judgment about what matters. AI handles the execution — your kid directs it.

What Actually Prepares Them

The careers that will thrive share a common profile: deep domain knowledge combined with the ability to direct AI tools effectively. A nurse practitioner who understands patient care deeply and can use AI to handle documentation, research drug interactions, and flag anomalies is extraordinarily valuable. A general-purpose "AI specialist" with no domain expertise is competing with every other generalist — and with the AI itself.

Lower Value Trajectory → Higher Value Trajectory

Lower: Generic CS degree, no domain focus
Higher: Domain degree (engineering, healthcare, law, finance) with AI fluency

Lower: Learning to perform tasks
Higher: Learning to evaluate whether tasks were performed well

Lower: Breadth across many tools
Higher: Depth in one field with AI as a multiplier

Lower: Optimizing for first-job salary
Higher: Optimizing for judgment that compounds over a career

Lower: "What job can I get?"
Higher: "What problem do I understand better than anyone?"

The Capabilities That Compound

Four capabilities will matter more than any specific major or technical skill: the ability to frame problems well (AI is an extraordinary answer engine, but it depends on the quality of the question); the ability to evaluate output critically (you can’t spot what’s wrong if you don’t know what right looks like); the ability to communicate persuasively with other humans (AI doesn’t negotiate, build trust, or read a room); and comfort with ambiguity and rapid change (the specific tools will change every 18 months — adaptability beats mastery of any single system).

The Conversation to Have

The most important conversation isn't about what to study. It's about what problems fascinate them. A kid who's genuinely interested in how cities work, how diseases spread, how buildings stand up, how markets move, or how people make decisions has the raw material for a career that AI makes more powerful. A kid who's choosing a major based on starting salary data from 2023 is optimizing for a world that won't exist when they graduate.

Not every path runs through a screen. There are entire categories of work where physical presence, human connection, and hands-on skill are the value — and AI only makes them more in demand. These fields face growing shortages, enjoy strong pricing power, and carry zero risk of being automated away. Examples include:

  • Skilled trades — electricians, plumbers, HVAC, welding
  • Construction and infrastructure — physical, local, and increasingly complex
  • Healthcare — nursing, physical therapy, home health — human presence is the product
  • Hospitality and service — judgment, empathy, and trust can't be automated
  • Emergency services — firefighters, paramedics, first responders

If You Invest

If you allocate capital — through a fund, a public portfolio, angel checks, or your own retirement account — the honest question is: are your analytical frameworks keeping up, or are you pattern-matching against a world that no longer exists?

Most Investors Are Behind and Don't Know It

Most investors — including sophisticated ones — are still evaluating companies through pre-AI lenses. The investor who underwrites a professional services firm at 12x EBITDA because “that’s where the sector trades” without asking what happens when AI compresses the leverage model isn’t being conservative. They’re being blind.

80% vs. 33%
Over 80% of tech companies get asked about AI on earnings calls. For the rest of the S&P 500, that number drops to under a third. The analytical consensus treats AI disruption as a tech-sector story — not a structural force reshaping professional services, staffing, education, and every other knowledge-work industry in this thesis (Bloomberg, 2025). Meanwhile, only 23% of investors say companies adequately disclose AI's impact on headcount (PwC Global Investor Survey, 2025). The gap between what's happening and what's being analyzed is where the mispricing lives.

Your Analytical Edge Is Already Gone

Traditional investment analysis rewards pattern recognition across historical data — comps, multiples, sector performance, management track records. AI doesn't just help with that analysis. It commoditizes it entirely. Any investor with a Claude subscription can now run a comparable analysis, build a financial model, or summarize an earnings call in seconds. If your edge was ever "I read more 10-Ks than the next person" or "my analyst team builds better models," that edge is gone.

The Market Is Mispricing Both Directions

The February 2026 software sell-off — the "SaaSpocalypse" — wiped roughly $1 trillion from software market caps in seven trading days. Forward earnings multiples for the sector collapsed from 39x to 21x. Goldman Sachs' software basket fell 13% in a single session, the deepest one-day correction in over a decade. The trigger was Anthropic's enterprise agent rollout, followed by OpenAI's full-stack orchestration layer. The market panicked and sold everything — AI-advantaged and AI-exposed companies alike. Salesforce fell 38% YTD, ServiceNow 23%, Intuit 33%. That indiscriminate selling tells you the analytical frameworks haven't been built yet. That's the opportunity, but it cuts both ways.

What's overpriced: Thin wrappers on commodity models. Jasper AI raised $125M at a $1.5B valuation in October 2022. Revenue collapsed 54% — from $120M to $55M — by 2024. Both co-founders left. A Google VP warned in February 2026 that two categories of AI startups face extinction: thin wrappers and commoditized infrastructure tools. "When the underlying model improves, the wrapper company's value proposition can be replicated overnight." The market still gives premium multiples to companies that mention AI on earnings calls regardless of whether they have a logic layer, a data flywheel, or any defensible position.

What's underpriced: Incumbents caught in the sell-off crossfire. ServiceNow dropped 23% despite beating earnings nine consecutive quarters — Morningstar called it "deeply undervalued for long-term investors." Oracle's cloud infrastructure is booked over a year ahead for AI training, with a remaining performance obligation backlog exceeding $130B, yet it was sold alongside pure SaaS names. The market penalizes companies that haven't articulated an AI story, but the company with 20 years of proprietary data and a captive customer base is better positioned than the startup with a pitch deck and a GPT wrapper. The sell-side doesn't have a framework for valuing "AI-advantaged incumbent" yet.

What's dangerously mispriced: Companies trading at historical multiples in sectors where the economic model is structurally breaking. EdTech revenue multiples collapsed from 7.2x to 1.6x — a 78% compression — between Q4 2020 and Q4 2024 (Finerva). Robert Half is down 60% as AI automates both recruiting workflows and the back-office temp roles it fills. The downside isn't a 20% correction. It's Chegg: a $14B company at peak, now worth $191M. A permanent rerating as the market realizes the cash flows aren't coming back.

Hard Questions for Your Own Practice

The thesis applies to your firm too. 95% of fund managers now use generative AI in their work, up from 86% a year earlier (AIMA, 2025). 60% of institutional investors said they'd be more likely to invest in a fund that allocates meaningful budget to AI R&D. If you manage a fund, you employ analysts who build models, conduct research, prepare investment memos, and generate deal flow. AI can do most of that faster and cheaper today. The question isn't whether to use AI in your own practice — it's whether your competitors already are, and what it means when AI-native firms generate $3.5M in revenue per employee versus the traditional SaaS average of $200-350K.

The investor who uses AI to do in 20 minutes what used to take a junior analyst two days has a structural advantage — not because the analysis is better, but because they can evaluate more opportunities, iterate faster on theses, and spend their time on the judgment calls that actually drive returns. But here's the trap: AI makes it easy to feel productive without being effective. Running more screens, building more models, reading more research — that's volume, not insight. The investors who win in this environment will use AI to compress the execution and spend the freed-up time on the things AI can't do: building relationships with management teams, developing conviction through primary research, and thinking deeply about second-order effects that aren't in any dataset.

Your First Action

Take your three largest positions. For each one, write a single paragraph answering: "If a well-funded AI-native startup targeted this company's market tomorrow, what would they build first, which customers would they take, and what specifically would stop them?" If you can't write a convincing defense for a position, that's the position to scrutinize — not next quarter, now. The market hasn't fully priced this transition yet, which means you have a window. But windows close, and the investors who do this exercise first will be the ones selling to the investors who do it last.

Exhibit A

Winners & Losers Scorecard

The thesis above describes structural forces. This exhibit applies them to specific companies. The classification framework below uses five winner signals and four loser signals, each observable in public data. Every company named here includes a "because" statement — a falsifiable reason for the classification, not a prediction.

The Classification Rubric

These signals separate structural winners from companies at risk. A company doesn't need all five winner signals to qualify — three or more, with evidence, is the threshold. Similarly, two or more loser signals with confirming market data warrant the classification.

Winner Signals
1 Revenue per employee is rising. The business is growing output without proportionally growing headcount. This is the single clearest indicator of AI leverage.
2 AI is core to the product, not bolted on. The product couldn't exist — or would be fundamentally worse — without AI capabilities baked into the architecture.
3 Logic layer is visible and proprietary. There's a defensible layer of domain-specific reasoning between the base model and the end user.
4 Data flywheel is compounding. Usage generates proprietary data that improves the product, creating a widening advantage over time.
5 Pricing power is intact or expanding. The company can charge more because AI makes its offering demonstrably better, not just cheaper to deliver.
Loser Signals
1 Core value proposition is now free. What the company charges for can be replicated by a general-purpose AI tool at zero marginal cost to the user.
2 Revenue is declining with no structural response. The company is losing customers and hasn't redesigned its model around AI capabilities.
3 Margin structure is compressing. Costs haven't decreased commensurately with AI availability, or competitors are using AI to undercut pricing.
4 No logic layer between commodity AI and customer. The business is a thin intermediary that AI can route around entirely.
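The thresholds above can be expressed as a small decision rule. A sketch, assuming the signal counts have already been tallied with supporting evidence:

```python
def classify(winner_signals: int, loser_signals: int,
             loser_data_confirmed: bool = False) -> str:
    """Apply the rubric's thresholds: three or more of the five winner signals
    qualifies a Winner; two or more of the four loser signals, with confirming
    market data, qualifies a Loser; anything else is On the Fence."""
    is_winner = winner_signals >= 3
    is_loser = loser_signals >= 2 and loser_data_confirmed
    if is_winner and not is_loser:
        return "Winner"
    if is_loser and not is_winner:
        return "Loser"
    return "On the Fence"

print(classify(4, 0))                             # Winner
print(classify(1, 3, loser_data_confirmed=True))  # Loser
print(classify(2, 1))                             # On the Fence
```

Note the deliberate middle bucket: a company showing both strong winner and strong loser signals — the Klarna pattern — lands On the Fence rather than being forced into a verdict.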

The Scorecard

The following companies are classified using the rubric above. The reasoning matters more than the verdict — these are illustrations of how the signals manifest in practice, not stock picks.

Company | Sector | Verdict | Because
Palantir | Enterprise Software | Winner | AI-native data platform with irreplaceable government and enterprise logic layers. Deep AIP adoption means the product improves with every deployment.
Shopify | Commerce | Winner | AI integrated across merchant tools — product descriptions, inventory, customer service. AI-powered checkout is capturing share by making merchants more successful, not just more efficient.
CrowdStrike | Cybersecurity | Winner | AI makes the threat detection platform exponentially better with scale. Each new endpoint feeds the data flywheel. Security is an AI-advantaged domain.
Cursor (private) | Developer Tools | Winner | ~300 employees generating a reported ~$1B ARR (per CNBC). Product couldn't exist without AI — it is the AI. The economics are a category of their own.
Klarna | Fintech | On the Fence | Aggressively cut headcount and automated customer service with AI. But had to rehire after quality dropped — a cautionary signal that AI transformation requires judgment about where to apply it.
Salesforce | Enterprise Software | On the Fence | Agentforce is ambitious, but the seat-based licensing model is threatened by AI agents that could cannibalize per-user revenue. The transition from selling seats to selling outcomes is unproven.
Adobe | Creative Software | On the Fence | Firefly AI is integrated, but AI-native tools like Canva and Midjourney are capturing the low end while pricing pressure builds from free alternatives. The moat is the professional workflow, not the generation capability.
Chegg | Education | Loser | Core product — homework answers — is now free via ChatGPT. Attempted pivot to "skills" but the core value prop was eliminated overnight. No logic layer between the content and free alternatives.
Upwork | Professional Services | Loser | Freelance marketplace for tasks AI can now do directly. Why hire someone on Upwork to write copy or build a basic website when the buyer can do it themselves?
Pearson | Education | Loser | Content-as-product model is structurally exposed. AI can generate, summarize, and tutor from any source material. No logic layer between Pearson's content and free alternatives.

Private Equity

The PE model is built on predictability: stable cash flows, operational improvements over a hold period, multiple expansion on exit, and leverage against reliable earnings. Each of these pillars is under pressure. And the data says the industry knows it — 86% of PE firms have adopted generative AI in M&A workflows, 59% consider AI a key factor in value creation — but only 5% of companies broadly are achieving AI value at scale (Deloitte 2025; FTI 2024; BCG 2025). The gap between awareness and execution is enormous.

The Model Under Stress

PE Pillar | The Assumption | The AI Risk
Cash Flows | Stable, predictable earnings over a 5-year hold | Core business model faces existential pressure mid-hold
Operational Improvements | Business model is sound; optimize around it | AI changes what the business should be doing — incremental improvements miss the point
Multiple Expansion | Buyer confidence supports higher exit multiples | Exit buyers asking the same AI disruption questions — multiples compress
Leverage | Debt amplifies returns against reliable earnings | Earnings volatility makes leverage dangerous — 25–35% of private credit exposed (UBS)

The Hold Period Problem

Average PE hold periods have stretched to approximately 6.5 years — up ~35% from the 2010–2021 average (S&P Global; McKinsey, 2026). That timeline now spans multiple generations of AI capability. A business acquired in 2024 will exit into a world where AI-native entrants have emerged in every sector. The exit buyer in 2029 won’t compare your portfolio company to today’s competitors — they’ll compare it to companies that don’t exist yet.

71%
of value created in 2024 PE exits came from revenue growth — the highest in five years — not multiple expansion. The era of buying at 8x and selling at 12x on the same business is over. "12 is the new 5": deals now require 10-12% EBITDA growth to generate 2.5x returns (EY PE Pulse Q4 2025; Bain Global PE Report, 2026).
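A back-of-envelope model shows why double-digit EBITDA growth maps to roughly 2.5x. The deal parameters here are my illustrative assumptions — 10x entry and exit multiple (no expansion), 50% debt held flat, a six-year hold — not figures from the cited reports:

```python
def equity_moic(entry_ebitda: float, growth: float, years: int,
                multiple: float, debt_pct: float) -> float:
    """Equity multiple of invested capital with no multiple expansion:
    exit at the entry multiple, debt unchanged over the hold
    (a deliberate simplification for illustration)."""
    entry_ev = entry_ebitda * multiple
    debt = entry_ev * debt_pct
    equity_in = entry_ev - debt
    exit_ev = entry_ebitda * (1 + growth) ** years * multiple
    return (exit_ev - debt) / equity_in

for g in (0.05, 0.10, 0.12):
    print(f"{g:.0%} EBITDA growth -> {equity_moic(100, g, 6, 10, 0.5):.2f}x equity")
```

Under these assumptions, 10% annual EBITDA growth produces roughly a 2.5x equity multiple, while 5% growth — enough in the multiple-expansion era — produces under 1.7x. The return engine has to be the income statement.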

Questions for the Investment Committee

These overlap with the board-level questions above — deliberately. Investment committees need the same strategic clarity, filtered through deal economics:

1. What's the AI attack surface?
Where could an AI-native competitor enter? What would they build first? How fast could they scale?

2. What's the transformation thesis?
Is there a credible path to making this business AI-advantaged? What does that require in terms of talent, capital, and time?

3. What's the exit story?
Will strategic buyers and sponsors in 5 years see this as a platform or a problem? What needs to be true?

4. What's the margin structure dependency?
If AI reduces the cost of the core service by 50%, does the business model survive? What's the pricing power?

5. Is management equipped?
Does the leadership team understand these dynamics? Are they building toward AI-advantage or defending the status quo?

The Credit Contagion

The SaaSpocalypse didn't stay in equities. Software companies account for roughly 25% of the $3 trillion private credit market through year-end 2025. When the sell-off hit, shares of Blue Owl, TPG, Ares Management, and KKR all fell by double digits. UBS estimates 25-35% of private credit is exposed to AI disruption risk, warning that loans originated before 2024 "likely did not contemplate AI as a meaningful business risk." Under an aggressive disruption scenario, default rates could approach double digits (UBS, February 2026; CNBC). The risk isn't theoretical — it's already repricing.

The Opportunity

This isn't all downside. The firms already moving are showing the playbook. Vista Equity Partners — $100B+ under management across 90+ companies — launched an "Agentic AI factory" in 2025; roughly a third of its portfolio companies are using those tools to automate tasks and improve productivity. Thoma Bravo acquired Verint Systems for $2B, merging it with portfolio company Calabrio to create a unified AI-powered contact center platform. The pattern: acquire AI-exposed businesses at appropriate discounts, transform them into AI-advantaged positions, and exit to buyers who see the new trajectory. But it requires new capabilities — technical diligence, transformation playbooks, and operators who understand what's actually possible.

The next generation of PE outperformance won't come from better deal sourcing or cheaper leverage. It will come from understanding which businesses can be transformed and having the capability to do it.

Professional Services

The Leverage Model

Professional services economics work like this: a partner with 20 years of experience supervises 5-10 junior professionals who do the bulk of the work. The partner's judgment directs the effort; the juniors execute research, analysis, drafting, and documentation. The firm bills the partner at $1,500/hour and the juniors at $400/hour, but the juniors do 80% of the hours. Profit comes from the spread.

The work that juniors do — research, first drafts, document review, data analysis, precedent searching — is exactly what AI does well. The leverage model breaks when the middle of the pyramid disappears.
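A quick sketch makes the spread concrete per 100 billed hours, and shows what happens when the junior layer compresses. The 70% compression figure is an assumption for illustration, not a measured number:

```python
# Billing mix from the leverage model above: juniors do 80% of the hours.
total_hours = 100
partner_rate, junior_rate = 1500, 400
partner_hours = 0.2 * total_hours
junior_hours = 0.8 * total_hours

revenue = partner_hours * partner_rate + junior_hours * junior_rate
# 20 * 1500 + 80 * 400 = 62,000 per 100 billed hours

# Assumed for illustration: AI compresses junior execution hours by 70%.
junior_hours_ai = junior_hours * 0.3
revenue_ai = partner_hours * partner_rate + junior_hours_ai * junior_rate
# 30,000 + 9,600 = 39,600 -- a roughly 36% revenue decline at unchanged rates
print(revenue, revenue_ai)
```

Partner hours and rates are untouched in this sketch, yet revenue still drops by more than a third — which is why the damage concentrates in pyramid-shaped firms rather than senior-heavy ones.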

What AI Changes

Traditional Model → AI-Native Model

Traditional: Junior associate researches precedents (hours)
AI-Native: AI surfaces relevant precedents (minutes)

Traditional: Analyst builds financial model (days)
AI-Native: AI generates model from parameters (hours)

Traditional: Team reviews 10,000 documents (weeks)
AI-Native: AI reviews and flags issues (hours)

Traditional: Associate drafts contract from template (hours)
AI-Native: AI drafts from requirements (minutes)

Traditional: Consultant synthesizes interview notes (a day)
AI-Native: AI synthesizes with themes (minutes)

The pattern: tasks that took junior professionals hours or days now take minutes. The senior judgment that directs the work still matters — often more than before — but the execution layer compresses dramatically.

32.5
working days per year reclaimed by lawyers using AI — with nearly half saving 1–5 hours weekly
90%
of legal professionals believe AI has already altered or will alter conventional billing practices within two years
60%
of in-house counsel report “no noticeable savings yet” from outside counsel’s use of AI — the gap between capability and passed-through value
60%+
of in-house counsel are planning to push for change in how legal services are delivered and priced
Source: ACC/Everlaw & Everlaw/ACEDS, 2025

The Pricing Problem

Clients aren't stupid. When they see AI doing work that used to take 40 billable hours, they won't pay for 40 hours. The conversation shifts:

Old conversation: "This contract review will require approximately 60 hours of associate time at $450/hour, plus 5 hours of partner oversight at $1,200/hour."

New conversation: "Why am I paying $27,000 for something your AI did in an afternoon? What am I actually paying for?"

Firms face a choice: capture the efficiency internally (same price, higher margin, but clients eventually notice), or pass it through (lower prices, same margin, but revenue shrinks). Neither is comfortable. The firms that win will find a third path — new pricing models based on value delivered rather than hours spent.
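The two options can be quantified with the contract-review example above. The four remaining hours of senior review are an assumed figure for illustration:

```python
hours_old, rate = 60, 450
old_bill = hours_old * rate          # $27,000 under hourly billing

# Assumed for illustration: AI drafts, leaving 4 hours of review to bill.
hours_new = 4
new_bill = hours_new * rate          # $1,800 if savings pass through as fewer hours

revenue_drop = 1 - new_bill / old_bill
print(f"Capture: still bill ${old_bill:,}. "
      f"Pass through: bill ${new_bill:,} ({revenue_drop:.0%} less revenue).")
```

Capturing keeps the $27,000 and widens margin until the client notices; passing through cuts revenue by over 90% on the engagement. The gap between those two numbers is the pressure pushing firms toward value-based pricing.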

The Talent Pipeline Problem

Junior professionals learn by doing. The associate who reviews 10,000 documents develops judgment about what matters. The analyst who builds 50 financial models understands how the pieces connect. The junior consultant who sits through 30 client interviews learns to read a room.

If AI does the junior work, how do juniors learn? And if juniors don't learn, where do future partners come from?

This isn't a theoretical problem — it's happening now. Firms report that training timelines are extending because juniors get less repetition. Partners complain that associates "don't have the reps." The apprenticeship model that built expertise over 10-15 years is breaking down. For investors, the implication is direct: the value of a traditional firm's "talent pipeline" may be overstated if that pipeline no longer produces fully trained seniors on the old timeline.

Winners and Losers

Positioned to Win
  • Boutique, senior-heavy firms — less leverage to lose, more judgment to sell
  • AI-native entrants built from scratch — no legacy economics to protect
  • Firms already pricing on outcomes rather than hours — the model everyone now needs

Under Pressure
  • Large leverage-dependent firms (Big Law, Big 4, major consultancies) — pyramid economics are exactly what AI disrupts
  • Commodity service providers (doc review, basic compliance, routine transactions) — pure substitution risk
  • Slow adopters waiting to see how it plays out — clients and talent move first

What This Means for Clients

Renegotiate scope and pricing. If your law firm or consultant is using AI tools, the economics have changed. You shouldn't pay 2019 rates for 2026 delivery.

Evaluate AI capability directly. Ask what tools they're using, how they're integrated into workflows, and what the quality control looks like. This is now a procurement criterion.

Consider unbundling. The commodity parts of professional services (document review, basic research, routine filings) can often be done by AI-native specialists at a fraction of traditional cost.

Watch for talent flight. The best professionals are leaving traditional firms to build or join AI-native practices. Make sure your advisors are on the right side of this transition.

Professional services isn't going away. Expertise matters more than ever. But the delivery model is being rebuilt, and the firms that recognize this early will define the next era. The rest will find themselves defending high rates for work that clearly doesn't require that many hours.

The Compression of Prompt Engineering

The counterintuitive truth

Everyone thinks prompts got longer and more complex over time. The opposite happened. Early AI models were powerful but dumb about context — you had to spell out every detail of who you were, how you thought, what format you wanted, what to avoid. Today's reasoning models understand intent. You describe the outcome. The AI figures out how to get there.

Use Case
Analyze a CIM for a PE firm's investment decision
Same task. Same quality expectation. Different era of AI.
Early AI — 2023/Early 2024 ~1,800 words · 47 lines · 2 full pages
Section 1: Persona Definition
You are a senior private equity associate at a mid-market PE firm with $2-5B AUM. You have 8+ years of experience evaluating platform acquisitions in the $50M-$500M enterprise value range. You think in terms of EBITDA multiples, revenue quality, management team depth, and value creation levers. You are skeptical by default — your job is to find reasons NOT to invest, not to confirm a thesis. You understand the difference between recurring and non-recurring revenue, the importance of customer concentration, and how to evaluate management's forward projections against historical performance.

When you analyze a deal, you think about: (1) downside protection first, (2) base case returns, (3) upside optionality. You never lead with the bull case. You always identify the "kill shot" — the single risk that would make you walk away — before evaluating the opportunity.

Your communication style is direct and concise. You write for a senior partner who has 15 minutes to decide whether this deal deserves a second look. No throat-clearing. No "it depends." Take a position.
Section 2: Output Format Specification
Structure your analysis in the following format exactly:

EXECUTIVE SUMMARY (3-4 sentences. Lead with your verdict: Pass, Proceed to DD, or Conditional Proceed. State the single most compelling reason and the single biggest risk.)

BUSINESS QUALITY ASSESSMENT
— Revenue quality score (1-10) with justification
— Customer concentration analysis (top 10 customers as % of revenue)
— Recurring vs non-recurring revenue breakdown
— Organic growth rate vs. acquisition-driven growth
— Gross margin trajectory and sustainability

MANAGEMENT & OPERATIONS
— Key person risk assessment
— Bench depth below C-suite
— Operational efficiency (revenue per employee trends)
— Capital allocation history and discipline

FINANCIAL ANALYSIS
— EBITDA quality and adjustments (flag any aggressive add-backs)
— Working capital dynamics
— CapEx requirements (maintenance vs. growth)
— Free cash flow conversion rate
— Debt capacity analysis

VALUATION & RETURNS
— Implied entry multiple on adjusted EBITDA
— Comparable transactions and public comps
— Base case IRR (25th, 50th, 75th percentile)
— Key assumptions driving returns
— Sensitivity on 2-3 critical variables

VALUE CREATION PLAN
— Top 3 operational improvements with quantified impact
— Potential add-on acquisition targets
— Pricing power assessment
— Technology/automation opportunities

RISK MATRIX
— The kill shot (walk-away risk)
— Top 5 risks ranked by probability × impact
— Mitigants for each

VERDICT
— 2-sentence recommendation
— Conditions for proceeding, if applicable
Section 3: Analysis Instructions
When analyzing the CIM:

DO NOT take management projections at face value. Apply a 20-30% haircut to forward revenue projections and evaluate what the business looks like under conservative assumptions.
DO NOT use generic language like "strong market position" or "attractive growth profile" without specific evidence.
DO flag any EBITDA adjustments that exceed 15% of reported EBITDA as potentially aggressive.
DO calculate implied customer lifetime value if data is available.
DO identify the 2-3 metrics that matter most for this specific business and explain why.
DO compare stated growth rates against industry benchmarks.
DO NOT write more than 2 pages total. Density over length.
DO use specific numbers from the CIM. Never say "significant" when you can say "34%."

// The goal is a document I can hand to my managing partner
// that gives him everything he needs in a 15-minute read
// to decide: do we take the next meeting or not?
Section 4: Tone & Constraints
Additional constraints:
— Write at a Wharton MBA level, not a blog post level
— Use tables where they communicate faster than prose
— Bold the single most important number in each section
— If the CIM is missing critical data, flag it explicitly as a red flag — don't fill in assumptions silently
— End every section with a one-line "so what" that connects back to the investment decision
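Two of the quantitative screens in the prompt above, the 20-30% projection haircut and the 15% add-back flag, are mechanical enough to sanity-check in code. A minimal sketch with illustrative figures (none of the numbers below come from a real CIM):

```python
def haircut_projections(projections, haircut=0.25):
    """Apply a conservative haircut (20-30% per the rule above; 25% here)
    to management's forward revenue projections."""
    return [round(rev * (1 - haircut), 1) for rev in projections]

def flag_addbacks(reported_ebitda, addbacks, threshold=0.15):
    """Flag any EBITDA adjustment exceeding 15% of reported EBITDA
    as potentially aggressive."""
    limit = threshold * reported_ebitda
    return {name: amt for name, amt in addbacks.items() if amt > limit}

# Illustrative CIM figures, in $M:
projected_revenue = [120.0, 160.0, 200.0]           # years 1-3
print(haircut_projections(projected_revenue))        # -> [90.0, 120.0, 150.0]

addbacks = {"owner compensation": 1.2, "one-time legal": 4.8, "rebranding": 0.9}
print(flag_addbacks(reported_ebitda=25.0, addbacks=addbacks))
# only items above 15% of $25M (= $3.75M) are flagged -> {'one-time legal': 4.8}
```

In the 20-word version of the prompt, neither rule needs stating; the model applies this kind of conservatism unprompted.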
Reasoning improves
Today — 2025/2026 · ~20 words · 2 lines
Analyze this CIM as if you're a senior PE associate deciding whether to recommend this deal to your managing partner. Be direct. Take a position.
That's it. Attach the CIM. Hit send.
What you get back

The same structured analysis — executive summary with a verdict, revenue quality scoring, EBITDA adjustment flags, risk matrix with a kill shot identified, value creation levers, and a clear recommendation. The AI now knows how a PE professional thinks. It knows what "be direct" means in this context. It knows to haircut projections, flag customer concentration, and lead with downside.

Every instruction from the 2-page prompt on the left? The reasoning model already has it internalized.

All that white space is the point
The prompt got smaller. The output got better.

Why this happened

Reasoning models don't just pattern-match — they think. Early models needed every instruction because they were essentially sophisticated autocomplete. They'd generate PE-sounding language, but they didn't understand PE logic. You had to encode the logic yourself, in the prompt. Today's models (Claude with extended thinking, OpenAI o1, Gemini Deep Research) have internalized domain expertise. When you say "PE associate," the model understands the analytical framework, the skepticism, the format, the audience, the decision structure. You're directing intent, not programming behavior.

The same pattern across every domain
Legal Contract Review
800 words defining what a senior M&A attorney looks for, every clause type to flag, risk categorization framework, output format for a partner memo, jurisdictional considerations...
"Review this contract as outside counsel advising on an acquisition. Flag anything that would concern the buyer."
Financial Modeling
1,200 words specifying every assumption, cell formula logic, scenario structure, sensitivity table layout, chart formatting, base/bull/bear definitions...
"Build a 3-statement model for this company with base, bull, and bear cases. Focus on FCF sensitivity to revenue growth and margins."
Market Research
1,500 words defining research methodology, source hierarchy, competitive framework (Porter's + additional), output structure with TAM/SAM/SOM, evidence standards...
"Map the competitive landscape for [company] as if you're briefing a potential acquirer. Include market sizing and defensibility assessment."
Board Memo
900 words on tone, audience awareness (board members with varying technical depth), structure (situation-complication-resolution), what to bold, how to handle sensitive topics, appendix format...
"Draft a board memo on our AI strategy. This is for a mixed board — some technical, some not. Be honest about risks."
Investment Thesis (The Shift)
2,000+ words defining analytical framework, design system, typography, color palette, data sourcing methodology, prose style, audience personas, section structure...
"Build this as a 24-page investment thesis for PE professionals. Think Bloomberg meets Economist. Make it investment-grade."
Prompt length over time
GPT-3.5
~2,000 words
GPT-4
~800 words
Claude 3.5
~200 words
Claude Opus 4.6 / o1
~20 words
The takeaway

Prompt engineering was a bridge technology — necessary when the models were powerful but lacked judgment. As reasoning improved, the prompts collapsed. The skill that replaced it isn't "how to write better prompts." It's knowing what to ask for — understanding the domain well enough to describe the outcome, and trusting the AI to figure out the process. The expertise moved from the prompt to the person.

The AI Graveyard

2023 — 2026

Here Lies the Middleware

Prompt Engineering
2023 — 2025
"Write 2,000 words to explain how a PE professional thinks."
Killed by: Reasoning Models
Obsolete
AutoGPT & Early Agents
2023 — 2024
"Give it a goal. Watch it burn $40 in tokens. Achieve nothing."
Killed by: Competent Agents
Obsolete
AI Wrapper Apps
2023 — 2025
"A nice UI over the OpenAI API with a preset prompt. That'll be $20/month."
Killed by: ChatGPT & Claude
Obsolete
Prompt Marketplaces
2023 — 2024
"Buy this prompt for $5. It tells the AI to think like a marketer."
Killed by: Models Already Know
Obsolete
🏥

The ICU

Not dead — but the use case that justified the hype is shrinking fast.

🤕
RAG Pipelines
Narrowing
"Chunk, embed, retrieve, rerank, pray it finds the right paragraph."
Threatened by: 1M Token Context
Still useful at massive scale. But the 90% of use cases that fit in a context window? Gone.
🩼
LangChain Orchestration
Fading
"200 lines of Python to make AI think in steps."
Threatened by: Native Reasoning
Models now reason natively. The orchestration layer is collapsing into the model itself.
🩹
Casual Fine-Tuning
Narrowing
"Spend $100K training a model to be a lawyer. Base model does it for free now."
Threatened by: Base Model Quality
Enterprise fine-tuning for specific formats and compliance survives. The rest doesn't justify the cost.
🦽
Vector DB Hype
Narrowing
"Pinecone, Weaviate, Chroma. Built empires on 4K context limits."
Threatened by: Just Dump It In
Real use cases remain at true enterprise scale. But the startup gold rush is over.
The stack collapse — every killed layer was middleware between you and the answer
The 2023 Stack 12 layers
1
You
The human with the question
You
2
Prompt Template
2,000-word persona + format + constraints
Obsolete
3
Prompt Marketplace
Buy the "right" template from PromptBase
Obsolete
4
LangChain Orchestration
Chain-of-thought pipeline in Python
Obsolete
5
Document Chunking
Split your 50-page CIM into 500-token pieces
Obsolete
6
Embedding Model
Convert chunks to vectors with ada-002
Obsolete
7
Vector Database
Store embeddings in Pinecone / Weaviate
Obsolete
8
Retrieval Query
Semantic search for "relevant" chunks
Obsolete
9
Reranker
Re-score results with Cohere Rerank
Obsolete
10
LLM (GPT-3.5 / GPT-4)
Finally — the AI gets your mangled context
Evolved
11
Output Parser
Force JSON structure on the response
Obsolete
12
Result
Hope it used the right chunks
You
Reasoning collapses the stack
The 2026 Stack 3 layers
You
Claude / GPT
Result
9 layers eliminated for most bounded tasks. All that middleware? The model handles it now.
Attach the document. Ask the question. Get the analysis.
For most single-document and bounded-context tasks, the middleware is gone.
The reasoning model already knows how to think, what format to use, and what matters.

Most of these were workarounds for limited reasoning and context. Others persist for scale, governance, and control. But for the majority of bounded tasks, the middleware collapsed. The stack went from 12 layers to 3.
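The arithmetic behind that collapse is simple context-window math. A sketch under stated assumptions (the 600 tokens-per-page figure and the 2,000-token overhead are rough guesses, not measurements):

```python
def needs_retrieval(doc_tokens, context_window, overhead=2_000):
    """If the document plus prompt/response overhead fits in the model's
    context window, attach it directly; a chunk-embed-retrieve pipeline
    only pays for itself beyond that point."""
    return doc_tokens + overhead > context_window

# Assumption: a dense 50-page CIM runs roughly 600 tokens per page.
cim_tokens = 50 * 600  # ~30,000 tokens

print(needs_retrieval(cim_tokens, context_window=4_000))      # 2023-era 4K window -> True
print(needs_retrieval(cim_tokens, context_window=1_000_000))  # 1M-token window -> False
```

Once the answer flips to False, layers 5 through 9 of the 2023 stack have nothing left to do.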

What’s Next

Capabilities shipping now or in beta — the window between preview and broad rollout is shrinking. Weeks in the fastest-moving products, months in the rest.

Capability
What It Enables
What It Displaces
Computer Use Agents
Automating any web workflow end-to-end without APIs. Every SaaS product becomes scriptable.
Manual ops teams, SaaS “glue work,” brittle RPA
Persistent Memory
AI retains every project, preference, and conversation across sessions — a colleague with institutional knowledge.
Onboarding overhead, tribal knowledge loss, repeated context-setting
Voice-to-Application
Describe an app out loud, watch it build itself. Deployable software from natural speech.
Low-code platforms, offshore prototyping, agency wireframe cycles
Multi-Agent Orchestration
Multiple AI agents coordinating complex tasks — research, write, review, deploy. One person scales to an AI team.
Cross-functional project staffing, coordination overhead, junior generalist roles
Deep Research Agents
Autonomous multi-step research producing analyst-grade reports with citations.
Junior analysts, consultants doing desk research, manual due diligence
Autonomous Coding Agents
AI plans features, implements across files, runs tests, debugs, and submits PRs.
Mid-level dev teams, QA cycles, outsourced development shops
Year
Where the Value Is
2023 — Prompt Craft
Knowing how to talk to the AI. Structured prompts, template marketplaces.
2024 — Tool Selection
Knowing which tools to combine. Right model + right context + right workflow.
2025 — Judgment & Taste
AI can build anything. Value is knowing what to build, for whom, and why.
2026 — Orchestration
Directing AI teams toward complex goals. Management of machines, not people.
Signal
Implication
Biggest opportunity
Domain experts who build. 10+ years in a field plus AI execution speed. The combination is unfair.
Watch closely
The middle management layer. Roles that exist to coordinate and translate requirements. AI agents are learning to do exactly this.

AI capabilities aren't launching like software — they're compounding like interest. Each wave makes the next one faster. The question isn't whether this arrives. It's whether you're positioned when it does.

Where This Thesis Could Be Wrong

The case above is deliberately confident. Here's where it might break down:

Risk
What Breaks
AI reliability limits
Hallucination rates stay high in high-stakes domains. The "last mile" of reliability proves expensive.
Regulatory friction
Liability frameworks, licensing, and AI-specific regulation slow adoption. Incumbents use compliance as a real moat.
Enterprise inertia
Cultural resistance, IT security concerns, and change management mean large orgs move slower than the technology.
Data moats hold
Proprietary datasets and integration depth matter more than execution speed. New entrants can't get the data they need.
Stress Test: What If AI Progress Stalls for 24 Months?
Cost collapse stalls. Enterprise adoption slows. Incumbent moats hold longer. Multiples recover. The structural shift still happens — just on a longer curve. Nothing in this thesis requires AI to keep improving at the current rate. It requires the improvements that have already shipped to be adopted. That process is underway and doesn't reverse.

Being wrong about timing is different from being wrong about direction. The question for any specific decision is: what's the timeline that matters for you?

Toolbox

Frameworks and diagnostics from this thesis — built to be pulled out and used independently.

Board Diagnostic
Six yes/no questions that score whether a company is AI-Advantaged, On the Fence, or Exposed. Ten minutes per company.
Board-Level Questions
The questions every board member should be asking management — from logic layer articulation to workforce restructuring timelines.
Investor Evaluation Rubric
Framework for reclassifying holdings as AI-advantaged or AI-exposed, with concrete scoring criteria and case studies.
PE Investment Committee Questions
Five questions every investment committee should ask before deploying capital: attack surface, transformation thesis, exit story, margin dependency, management capability.
Winners & Losers Scorecard
Green flags vs red flags framework for evaluating whether a company is positioned to win or exposed to disruption.
Team Audit
Four-step exercise to map every role against AI capability. Identifies where capacity is hiding and which roles to restructure first.
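As described, the Board Diagnostic reduces to counting yes answers across six questions. A hypothetical sketch; the cutoffs (5-6 yes, 3-4, 0-2) are illustrative assumptions, not the thesis's actual rubric:

```python
def board_diagnostic(answers):
    """Bucket six yes/no answers into the three categories named above.
    Cutoffs here are illustrative assumptions, not the official scoring."""
    if len(answers) != 6:
        raise ValueError("the diagnostic is exactly six questions")
    score = sum(answers)  # True counts as 1
    if score >= 5:
        return "AI-Advantaged"
    if score >= 3:
        return "On the Fence"
    return "Exposed"

print(board_diagnostic([True] * 5 + [False]))      # -> AI-Advantaged
print(board_diagnostic([True] * 3 + [False] * 3))  # -> On the Fence
print(board_diagnostic([False] * 6))               # -> Exposed
```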

Sources

AI Code Generation & Adoption

Tan, G. (March 2025). YC W25 batch growth rates. CNBC.


Workforce & Productivity

Brynjolfsson, E., Chandar, B. & Chen, R. (August 2025). "Canaries in the Coal Mine? Six Facts about the Recent Decline in Entry-Level Employment." Stanford Digital Economy Lab.

Uplevel (2024). Gen AI for Coding Research Report. Bug introduction rates with AI tools.

Docker/DX (2025). AI productivity divide across experience levels.


Company Data

Anysphere/Cursor (November 2025). $1B ARR, ~300 employees, 36% freemium conversion. CNBC, Contrary Research.

Klarna (2025). Headcount reduction, revenue growth, IPO at $19.6B valuation. CNBC, Al Jazeera.

Stock performance data: Public market data, November 30, 2022 through February 2026.


Board Governance

Deloitte (2025). "Governance of AI: A Critical Imperative for Today's Boards."

Korn Ferry (2025). CEO & Board Survey: AI as strategic priority.


Legal Profession

Everlaw/ACEDS/ILTA (2025). 2025 Ediscovery Innovation Report. 32.5 days reclaimed, 90% billing impact expectation.

ACC/Everlaw (October 2025). Generative AI's Growing Strategic Value for Corporate Law Departments. 60% no savings, 64% expect reduced reliance on outside counsel.


Market Performance

Morningstar (January 2026). "AI Stocks: Winners, Laggards, and Losers of 2025." Equal-weighted 34-stock AI basket vs. Morningstar US Market Index.


Investor & PE Data

Bloomberg (2025). "AI Is the Hot Topic in Tech Earnings and a Blind Spot Everywhere Else." Analyst coverage gap analysis.

PwC (2025). Global Investor Survey. AI disclosure adequacy ratings.

AIMA (2025). Front Office Gen AI Adoption Survey. 95% fund manager adoption rate.

Finerva (2024). "EdTech 2025 Valuation Multiples." Revenue multiple compression 7.2x to 1.6x.

The Information (2024). Jasper AI internal valuation cut, revenue decline.

S&P Global (December 2025). Average PE hold periods.

McKinsey (2026). Global Private Markets Report.

Bain & Company (2026). Global Private Equity Report. "12 is the new 5" deal return math.

EY (Q4 2025). PE Pulse. 71% of exit value from revenue growth.

Deloitte (2025). GenAI in M&A Survey. 86% PE adoption rate.

FTI Consulting (2024). Private Equity AI Survey. 59% consider AI key value creation factor.

BCG (September 2025). "The Widening AI Value Gap." Only 5% achieving AI value at scale.

UBS (February 2026). Private credit AI disruption exposure analysis. 25-35% exposure estimate.

Microsoft/arXiv (2023). GitHub Copilot productivity study. 55.8% faster task completion.


February 2026 Software Sell-Off

Fortune/Deutsche Bank (February 2026). ~$1 trillion in software market cap losses.

Bloomberg (February 4, 2026). "SaaSpocalypse" coverage. ~$285B erased in single session.

Goldman Sachs (February 2026). Software basket 13% single-day decline. Forward multiples 39x to 21x.

CNBC (February 2026). Anthropic enterprise agent launch and SaaS stock impact.