We are living through the most significant period of upheaval in human history. The next decade will compress 20,000 years of technological progress into a single generation. This dashboard maps the intelligence revolution's impact on M&A, law, and the future of work — with verified data and a futurist's perspective on what it all means.
Here's the thing about exponential change: it looks like nothing is happening — until everything happens at once. The human brain was never designed to think exponentially. Every cognitive bias we have — normalcy bias, anchoring, the availability heuristic — conspires to make these curves invisible until you're already on the steep part. By then, it's too late to adapt. Even ARK Invest's Cathie Wood has said in interviews that their own Big Ideas projections are probably conservative. Let that sink in.
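The "nothing happens, then everything happens" dynamic is easy to demonstrate with toy numbers. A minimal sketch (illustrative figures only, not a forecast): a quantity that doubles every year trails a steadily growing linear process for a decade, then overtakes it and never looks back.

```python
# Illustrative only: a process doubling yearly vs. one adding 100 units/year.
for year in range(13):
    exp_val = 2 ** year          # starts at 1, doubles every year
    lin_val = 100 * (year + 1)   # starts at 100, grows steadily
    print(f"year {year:2d}: exponential={exp_val:5d}  linear={lin_val:5d}")

# For ten full years the exponential process trails badly; by year 11 it
# has passed the linear one and is pulling away for good.
```

This is the anchoring trap in miniature: anyone judging the race in year 8 concludes the exponential curve is a non-story.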
| Company | Revenue (ARR) | Founded | Sector |
|---|---|---|---|
| OpenAI | $20 billion | 2015 | General AI |
| Anthropic | $9 billion | 2021 | Enterprise AI (850% CAGR) |
| Cursor | $1 billion | 2022 | AI Code Editor (>1000% YoY) |
| Harvey | $100 million | 2022 | Legal AI |
| Sierra | $100 million | 2023 | Customer Service AI |
| Hyperliquid | $800+ million | — | DeFi (15 employees!) |
Source: ARK Invest Big Ideas 2026 — "AI Productivity"
ARK projects the AI software market could grow from $1.43 trillion today to $14 trillion by 2030. Total surplus value unlocked: $117 trillion. Enterprise value creation potential exceeds $80 trillion. ARK's GDP forecast: 7.3% real growth by 2030 — more than double the IMF's 3.1%.
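The growth rate implied by that projection is worth making explicit. A sketch of the arithmetic, assuming the "$1.43 trillion today" base is 2026 and the window runs to 2030 (the exact base year and window are assumptions here, not ARK's stated methodology):

```python
def implied_cagr(start, end, years):
    """Compound annual growth rate implied by growing from start to end."""
    return (end / start) ** (1 / years) - 1

# ARK's projection: $1.43T -> $14T. Over a four-year window that implies
# roughly 77% compound growth per year; over five years, roughly 58%.
print(f"4-year window: {implied_cagr(1.43, 14, 4):.1%}")
print(f"5-year window: {implied_cagr(1.43, 14, 5):.1%}")
```

Either way, the projection requires the market to grow faster every year than most software categories have grown in their best single year.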
Here's my take: the establishment forecasters — McKinsey, Gartner, the IMF — have been systematically wrong on every major technology transition, typically by one to two orders of magnitude. They rely on linear extrapolation models that structurally cannot capture exponential dynamics. ARK is one of the few institutions that even attempts exponential modeling, and even they hedge. The real numbers will likely exceed these projections. That's the nature of exponential S-curves: they surprise even the optimists.
Sources: ARK Invest Big Ideas 2026. For the systematic failure of establishment forecasting, see: IEA solar projections (off by 10-100x since 2000), Gartner's 2024 humanoid robotics projections (contradicted within months), Morgan Stanley's "1 billion robots by 2050" (likely 10-15 years early).
Malcolm Gladwell wrote about tipping points. Ray Kurzweil mapped the exponential curves. Nassim Taleb warned about the Black Swans institutions can't see. All three phenomena are converging in M&A right now. The adoption numbers doubled in a single year. If you're not using AI in your deal process today, you're not just behind — you're operating with a structural disadvantage that compounds every quarter.
| M&A Phase | Adoption | AI Application |
|---|---|---|
| Strategy & Assessment | 40% | Market landscape analysis, strategic fit modeling |
| Target Identification | 35% | Pattern matching across millions of companies, predictive screening |
| Due Diligence | 35% | Automated document review, risk flagging, contract analysis |
| Integration Planning | Growing | Cultural assessment, system integration planning |
| Regulatory | Emerging | Compliance checking, antitrust risk modeling |
Sources: Deloitte 2025, Bain 2026, McKinsey 2025
67% of organizations cite data security as their leading concern with GenAI in M&A. 65% are concerned about data quality and availability. The organizations succeeding are those that build robust AI governance frameworks alongside deployment.
Source: Deloitte 2025 GenAI in M&A Study
We live in the Never Normal — each day is the slowest day of the rest of your life. There is no stable state to return to, no "new normal" to settle into. 44% of legal tasks are technically automatable today. Not in five years. Today. And the capabilities are doubling, while costs collapse 91% in eight months. The firms that understand this aren't just automating — they're fundamentally reimagining what a legal practice even is. The billable hour was already under pressure. AI is the extinction event.
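That "91% in eight months" figure deserves translation into a monthly rate, because compound decline is as counterintuitive as compound growth. A sketch of the arithmetic (treating the decline as smooth, which real-world pricing is not):

```python
def monthly_decline(total_drop, months):
    """Smooth monthly decline rate equivalent to a total fractional drop."""
    remaining = 1 - total_drop            # fraction of the price left
    return 1 - remaining ** (1 / months)  # per-month decline, compounded

# A 91% collapse over eight months is equivalent to prices falling
# roughly 26% every single month, month after month.
rate = monthly_decline(0.91, 8)
print(f"equivalent monthly decline: {rate:.1%}")
```

No billing model built on stable unit costs survives input prices that fall by a quarter every month.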
ELTEMATE, Hogan Lovells' legal tech brand, now has ~100 employees and 4,400+ active users worldwide. Its flagship AI platform, CRAIG, has earned:
🏆 Disruptive Technology of the Year — European Legal Innovation & Technology Awards 2025
🏆 Top 100 AI Tech Companies — World Future Awards 2025
| Company | Focus | Traction |
|---|---|---|
| Harvey | AI for legal professionals | $100M ARR (founded 2022) |
| Luminance | AI contract intelligence | 600+ law firms |
| Kira Systems (Litera) | ML-powered contract analysis | Enterprise-grade |
| Relativity | AI-powered e-discovery | Market leader |
| ELTEMATE/CRAIG | Full-stack legal AI | 4,400+ users, award-winning |
This is where it gets uncomfortable. Multiple credible researchers — including former OpenAI staff — converge on a timeline that most people dismiss as science fiction. But dismissal is the Rutherford Syndrome in action: Ernest Rutherford, the father of nuclear physics, declared nuclear energy "moonshine" in 1933. Twelve years later, it ended a world war. The experts most qualified to understand a technology are often the most blind to its trajectory. The question is not whether superintelligence arrives — it's whether your institutions, your career, and your worldview are ready for the most consequential event in human history.
From the AI 2027 scenario, by Daniel Kokotajlo (former OpenAI), Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean
The authors note that their pre-2026 predictions are "substantially more grounded," while the post-2026 portion is speculative. But here's what I tell every audience: even if this timeline is off by 3-5 years, the trajectory doesn't change. And if it's off by 3-5 years, that still means superintelligence arrives before most institutions have finished their first AI strategy document. Read the full analysis →
Source: ARK Invest Big Ideas 2026
| Declining Skills | Rising Skills |
|---|---|
| Manual document review | AI prompt engineering & tool orchestration |
| Basic financial modeling | AI-augmented scenario analysis |
| Keyword-based legal research | Natural language AI querying |
| Manual compliance checking | AI risk assessment interpretation |
| Data room organization | AI-powered data extraction & synthesis |
Here's what I believe the data points to, and what most commentators get wrong: mass automation by AI is not the end of human purpose — it's the end of forced labor. It is De Grote Bevrijding — the Great Liberation. For the first time in 200,000 years, humanity has the technology to free itself from repetitive, soul-crushing work.
Yes, the transition will be brutal. 22% of jobs disrupted by 2030 is probably an underestimate — I project 40-60% displacement across white-collar and blue-collar sectors by 2030. But the endpoint is abundance. AI doesn't replace the M&A professional — it amplifies them. One AI-augmented analyst can do the work of 5-10 traditional ones. The premium shifts from information gathering to judgment, relationships, and wisdom — the things that make us irreducibly human.
The question for this room: will you be the professionals who ride this wave, or the ones swept away by it?
AI doesn't just automate work — it generates arguments more compelling than humans can write. And here's the uncomfortable truth: it does this without a truth function. Philosopher Harry Frankfurt defined "bullshit" as speech produced without concern for truth — not lying (which requires knowing the truth), but speaking with complete indifference to it. By that definition, large language models are the most efficient bullshit machines ever built. They produce fluent, confident, persuasive text that happens to be correct most of the time — but when it's wrong, it's wrong with the same confidence. This is the central paradox of the AI era for professionals who deal in truth and evidence.
A peer-reviewed study published in Nature Human Behaviour found that GPT-4, when given access to basic personal information about its debate opponent, had 81.2% higher odds of persuading that opponent than a human persuader did.
Salvi et al. (2025), Nature Human Behaviour, 9(8), 1645-1653
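"81.2% higher odds" is an odds ratio, not an 81.2-point jump in success rate, and the two are easy to conflate. A sketch of the conversion (the 50% baseline below is illustrative, chosen for clarity; the paper reports odds, not a single baseline probability):

```python
def apply_odds_ratio(baseline_prob, odds_ratio):
    """Probability after multiplying the baseline odds by odds_ratio."""
    odds = baseline_prob / (1 - baseline_prob)  # convert probability to odds
    new_odds = odds * odds_ratio                # apply the reported ratio
    return new_odds / (1 + new_odds)            # convert back to probability

# If a human persuader succeeds 50% of the time, 1.812x the odds lifts
# the AI's success rate to about 64% -- a large edge, but not "81% more wins."
print(f"{apply_odds_ratio(0.50, 1.812):.1%}")
```

Reading odds ratios correctly is exactly the kind of verification discipline this section argues for.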
Can AI detect its own nonsense? Mostly, no. BullshitBench v2 tested 70+ AI models with plausible-sounding but nonsensical prompts — including legal and financial scenarios.
AI can produce compelling due diligence reports, persuasive pitchbooks, and sophisticated valuations — but without verification, they may contain confident errors. The role of human judgment in verifying AI output is becoming the single most critical professional skill of the 21st century.
This is the paradox of AI-augmented cognition: AI makes us smarter and dumber simultaneously. When AI handles cognitive work for us, it becomes part of our extended mind — but the brain itself remains unchallenged and undeveloped. The neural pathways that should be strengthened through reasoning and review atrophy when bypassed. The future isn't AI or humans. It's AI plus humans who maintain the discipline to think critically — who use AI to augment their cognition, not replace it.
Everything we've discussed so far is the digital intelligence revolution. But intelligence is about to get a body. 2024 was the breakout year for humanoid robotics. By 2030, humanoid robots will begin displacing human labor at scale. The humanoid form isn't arbitrary or nostalgic — it's engineering optimization. The entire built environment was designed for the human body. One general-purpose form factor for all tasks is vastly more efficient than building specialized robots for each application. Morgan Stanley projects one billion robots by 2050. I think they're 10-15 years early on their own forecast — they'll hit that number by 2035-2040.
Figure, Tesla Optimus, 1X Technologies, Boston Dynamics Atlas, Agility Robotics Digit, Unitree, Fourier Robotics, UBTECH Walker, AGIBOT
If humanoid robots penetrate 80% of US households over five years, GDP growth could accelerate from 2-3% to double digits.
Knowing the data is not enough. The gap between understanding what's coming and actually preparing for it is where careers are made or broken. Here are concrete, prioritized recommendations for the three professional tracks in this room — based on where the technology is right now, not where it might be in five years.
A Critical Note on Confidentiality & Data Privacy
You work in M&A. You handle non-public, price-sensitive, legally privileged information every day. Do not blindly upload confidential client data to any public AI model.
When I recommend using frontier AI tools below, I mean it — but with professional judgment. Consider these layers:
The recommendations below are a starting point. You have a professional and ethical responsibility to exercise judgment about what data goes where. Consult your firm's information security policies. If your firm doesn't have an AI usage policy yet — that is itself a red flag.
Many of you use internal tools like Hogan Lovells' CRAIG or similar firm-specific AI platforms. These are valuable — but understand what you're working with. CRAIG launched in December 2023 and, like most enterprise legal AI tools, its underlying model likely lags 12-18 months behind the frontier. Enterprise deployments require compliance review, security audits, and integration testing — all of which create delay.
Meanwhile, frontier models like Claude Opus 4.6, GPT-4.5, and Gemini 3.1 Pro (March 2026) are operating at a level that would have been science fiction two years ago — complex legal reasoning, multi-step analysis, nuanced judgment calls across hundreds of pages. The gap between what your firm's tool can do and what the frontier can do is enormous, and it's growing.
This doesn't mean your firm's tools are useless — they're tuned for your specific workflows and data. But if you only interact with enterprise AI, you have no idea what's actually possible. You owe it to yourself — and your clients — to understand the frontier. Use consumer-tier frontier models (with non-sensitive data) to calibrate your understanding of where AI actually is. Then push your firm to close the gap.
For the legal professionals — the majority of this room: your profession will be unrecognizable within 36 months.
This Week:
This Month:
This Year:
AI is already trading, already modeling, already writing pitch books. The question is whether you're the one directing it.
This Week:
This Month:
This Year:
PE is about to experience the most dramatic shift in value creation since the leveraged buyout was invented.
This Week:
This Month:
This Year:
The Common Thread
Regardless of your track — legal, finance, or PE — the professionals who thrive in the next decade will share three traits: AI fluency (not expertise, fluency — you need to think in AI capabilities the way you currently think in spreadsheets), speed of adaptation (the window between "new technology" and "table stakes" has collapsed from decades to months), and intellectual honesty (the courage to admit that the world you trained for no longer exists). The ones who move first don't just survive. They define the new rules.
One thing I insist on: no hallucinated data, no fake papers, no broken links. Every statistic on this dashboard is traceable to its original source. In a world where AI can produce confident nonsense, the ability to verify claims is your most valuable skill. Don't take my word for it — read the reports yourself.