AI and the Collapse of the Big Four?
How automation is cracking the billable-hours pyramid and forcing Deloitte, PwC, EY, and KPMG to reinvent — or be replaced.
For most of my career implementing AI inside big, complicated organizations, the Big Four — Deloitte, PwC, EY, and KPMG — have felt like the granite pillars of professional services. In the UK, they audited 99% of FTSE 100 companies as of 2011. That dominance isn’t a fluke; it’s the compound interest of scale, brand, and a pyramid model that rewards lots of smart people billing lots of hours.
The shift now isn’t cyclical. It’s structural. The real question isn’t whether AI will touch the Big Four — it already has — but whether it collapses the old operating model or forces a reinvention strong enough to preserve their leadership.
Over the past three years I’ve watched prototypes escape the lab. CFOs who once asked me for “an AI strategy slide” now want to know which agents can pass internal audit. Boards aren’t debating whether to use AI, but how to govern it and how to price services that are part software, part human judgment. If you want a rule of thumb for impact, it’s this: where the work is structured and data-heavy — audit, tax compliance, due diligence, risk and forensics — AI lands first. Strategy and change management follow, but later.
Their model is world-class — and under pressure
The “Big Four” emerged from mergers (and Arthur Andersen’s 2002 collapse), consolidating what used to be the “Big Eight”. The engine has been a classic pyramid: many juniors at the base, a tight group of partners at the top, and billable hours as the fuel. It’s worked for decades.
AI challenges the core economics. When machines do a big share of the tasks people used to do, hours shrink, teams shrink, and clients ask harder questions about fees. Even if demand holds, the mix shifts: fewer time-based engagements, more platform + outcomes.
I often suggest a simple test to leadership teams: “If your top 10 offerings had to be delivered with 50% fewer human hours in 18 months, what would you change first — pricing, process, or product?” That conversation surfaces the operational debt hidden inside the current model.
What AI does to audit and advisory: automate the core
Audit, forensics, due diligence, compliance — these are structured, data-heavy domains. That’s exactly where AI shines. KPMG already uses AI to scan millions of accounting entries and flag anomalies for human review. In due diligence, models can read/summarize thousands of contracts and financials in minutes, surfacing risks humans might miss.
Insiders are blunt about the direction of travel. Alan Paton, ex-PwC, predicts “most structured, data-heavy tasks in audit, tax, and strategic advisory” will be automated within 3–5 years, potentially eliminating about 50% of roles in those areas — and says AI solutions capable of handling 90% of the audit process already exist. If you’re a CFO, you’ll ask why you’re paying for weeks of human testing when a system can do it near-instantly.
The firms are not asleep at the wheel. Deloitte launched Zora AI — agentic tooling (with Nvidia tech) that automates invoice processing, trend analysis, and more. EY’s AI now assists 80,000 tax pros, handling 3 million+ compliance cases and tens of millions of routine processes each year. These tools free up thousands of hours, which is great for productivity but squeezes a billing model built on human hours.
Where this lands practically: audits become exception-driven. Humans shift from sampling to investigating AI-flagged outliers; advisory work shifts from “build the analysis” to “stress-test scenarios and steer change.” The winners will formalize human-in-the-loop controls — clear thresholds for escalation, evidence trails for regulators, and documented model limitations.
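To make that concrete, here is a minimal Python sketch of an exception-driven triage step with documented escalation thresholds and an evidence trail. The threshold values, field names, and log path are illustrative assumptions, not any firm's actual controls.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

# Hypothetical thresholds; a real audit methodology would calibrate its own.
MATERIALITY_THRESHOLD = 50_000.0      # escalate entries at or above this amount
ESCALATION_SCORE = 0.80               # escalate anomaly scores at or above this

@dataclass
class JournalEntryFlag:
    entry_id: str
    amount: float
    anomaly_score: float              # 0..1 from an upstream anomaly model
    model_version: str
    decision: str = "pending"         # becomes "auto_clear" or "escalate_to_human"
    rationale: str = ""
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

def triage(flag: JournalEntryFlag) -> JournalEntryFlag:
    """Route an AI-flagged journal entry: clear it automatically or send it to a human."""
    if flag.amount >= MATERIALITY_THRESHOLD or flag.anomaly_score >= ESCALATION_SCORE:
        flag.decision = "escalate_to_human"
        flag.rationale = "Above documented threshold; human sign-off required."
    else:
        flag.decision = "auto_clear"
        flag.rationale = "Below documented thresholds; cleared, evidence retained."
    return flag

def append_evidence(flag: JournalEntryFlag, path: str = "evidence_log.jsonl") -> None:
    """Append the decision to a log so a regulator (or partner) can follow the trail."""
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(flag)) + "\n")

if __name__ == "__main__":
    flag = triage(JournalEntryFlag("JE-2024-0091", amount=72_500.0,
                                   anomaly_score=0.91, model_version="gl-anomaly-v3"))
    append_evidence(flag)
    print(flag.decision, "-", flag.rationale)
```

The point is the shape, not the rule: every automated clearance leaves the same evidence a human escalation does.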
From pyramid to diamond: the talent model flips
The pyramid relied on large graduate intakes doing repetitive work to learn the ropes. AI does much of that work now. Firms are already tightening the base: graduate openings fell 44% year-over-year in 2024 across the Big Four; KPMG cut some cohorts by nearly 30%.
As Ian Pay (ICAEW) put it, firms are talking about a “diamond model” — a thinner base, a wider middle of technical and managerial experts — because AI can’t yet make all the judgment calls. Expect demand to shift from rote task-doers to AI-fluent mid-career pros who can interpret outputs and manage edge cases.
Offshoring? Also pressured. If AI can do the work anywhere, labor arbitrage loses bite. Paton’s view: any model “based on the number of people you have” is “really vulnerable” now. We’ve seen offices shrink in high-cost markets and grow elsewhere (e.g., Deloitte NL down 5%, Malaysia up 9% in a year). AI could both accelerate that shift and reduce the total need for offshored roles.
If I were designing a 2026 intake, I’d weight hiring toward three roles:
1. Controls Engineers
These are the people who make AI safe and auditable. They stand up a live model inventory, wire in accuracy/bias/drift tests, ensure runs are reproducible, and document everything so a regulator (or partner) can follow the trail. In the first 90 days, I expect a working registry, basic telemetry, and signed model cards for the top use cases (see the sketch after this list). I hire for baseline thinking, data lineage instincts, and the ability to turn a policy line into an automated check. I track simple numbers: models registered and tested, time to approval, incident rate, audit findings closed.
2. Domain Translators
Translators turn a CFO’s question into a solvable AI problem — and back again into plain-English value. They define metrics a board cares about (coverage, speed, errors, savings), write sensible data contracts, and narrate results clients can act on. Quarter one should produce two board-safe playbooks (e.g., GL anomaly review, vendor DD) with clear definitions of success. I look for structured assumptions, a quick value tree on a whiteboard, and crisp writing. Success shows up as faster cycle times, real adoption, delivered outcomes, and better NPS.
3. Exception Handlers
When the system flags something, these people investigate, decide, and explain — calmly. They triage alerts, chase root causes, and choose escalate/resolve/suppress with clear reasoning and a paper trail. Early on, I want a usable exception taxonomy, runbooks anyone can follow, and SLAs baked into workpapers and client comms. I hire for curiosity under pressure and empathy with facts. I track mean time to resolution, false positives, reoccurrence, and — most important — trust.
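For illustration, here is a minimal Python sketch of the registry entry and drift test a controls engineer might stand up in those first 90 days. The class names, thresholds, and the "gl-anomaly" model are invented for the example; a production setup would sit on real registry tooling rather than an in-memory dict.

```python
import statistics
from dataclasses import dataclass

@dataclass
class ModelCard:
    name: str
    version: str
    owner: str
    intended_use: str
    known_limitations: str
    approved_by_partner: bool = False

REGISTRY: dict[str, ModelCard] = {}

def register(card: ModelCard) -> None:
    """Add a model to the firm-wide inventory; nothing unregistered reaches client work."""
    REGISTRY[f"{card.name}:{card.version}"] = card

def drift_check(baseline_scores: list[float], live_scores: list[float],
                max_mean_shift: float = 0.05) -> bool:
    """Crude drift test comparing mean scores between two windows; production
    checks would use PSI or KS tests and per-segment cuts."""
    shift = abs(statistics.mean(live_scores) - statistics.mean(baseline_scores))
    return shift <= max_mean_shift

register(ModelCard(
    name="gl-anomaly", version="3.1", owner="controls-engineering",
    intended_use="Flag unusual journal entries for human review",
    known_limitations="Not validated for entities with under 12 months of history",
))

print("drift within tolerance:",
      drift_check([0.12, 0.10, 0.11, 0.13], [0.14, 0.12, 0.15, 0.13]))
```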
Team Shape
I’d staff small pods: one controls engineer, one translator, two exception handlers, with an ML engineer on tap. Year one, bias toward controls to build the governance spine (roughly 40/35/25 across controls/translators/exception). As playbooks stabilize and we shift from “make it safe” to “make it adopted,” move toward 30/40/30.
Onboarding
Make the runway short and real: a two-week bootcamp on tools and policy, sixty days shadowing a flagship use case, and a ninety-day certification tied to promotion. Rotate translators through client demos, exception handlers through red-team drills, and controls engineers through audit dry-runs. The goal is simple: safe by design, valuable by default, ready for scrutiny.
When “insight” gets commoditized
Clients paid a premium because the firms’ knowledge and analysis were scarce. AI changes that equation. LLMs synthesize industry data, regs, and best practices in seconds. Smaller firms and in-house teams can now punch above their weight with AI.
As Alibek Dostiyarov observes, AI lets boutiques proliferate — one client handled 10–12 inquiries simultaneously where before they could handle only a few. Mid-market West Monroe says win rates are up and pipelines are at records, aided by AI; they’re also attracting talent from the Big Four. That’s a real threat: brain drain plus nimble, tech-forward rivals.
Clients’ expectations are shifting to outcomes over hours. One senior advisor in India notes engagements once priced at ₹20–25 lakh are now done for about half, with margins and timelines under pressure. “Outcome-based advisory will be the norm… soon.”
For Big Four partners, that means sharpening three edges:
Differentiated IP (not just decks — tools clients reuse),
Decision rights (clarity on when AI speaks vs. when humans decide),
Implementation muscle (change management, data plumbing, and risk sign-off).
Disrupt yourself — or be disrupted
To their credit, the Big Four are investing billions in AI. KPMG announced $2B over five years, targeting $12B in added revenue. Each firm has built proprietary platforms: Deloitte’s AI-driven audit analytics; PwC’s GL.ai with H2O.ai; EY’s Helix; KPMG’s Ignite. The pitch: human expertise + AI scale.
But scale cuts both ways. As Hywel Ball (ex-EY UK chair) notes, the bigger you are, the slower change can be; even small process tweaks can take months or years. Meanwhile, smaller firms can roll out a tool in weeks. That adoption gap matters.
The commercial model also has to change. If hours shrink, pricing must pivot to subscriptions or outcomes. EY leaders have hinted at a SaaS-style future for services. The wider industry is already moving: McKinsey reportedly prices ~25% of projects on outcomes, has its internal GPT (Lilli) in 70% of consultants’ daily work, has cut 30% of drudge time, eliminated 5,000+ support jobs, and gets 40% of revenue from tech-enabled services. Different firm, same lesson: bite the bullet, restructure, and productize what you can.
What I advise leadership to stand up in the next 12 months:
An AI “trust kit” for the whole firm
One place to register every model, simple checks for accuracy/bias/drift, prompt logs, and clear “a human signs off here” moments.
Month 1: draft the rules and pilot on two busy use cases. Month 2: turn on the registry and basic telemetry. Month 3: do a mock audit so we know it holds up.
A shelf of reusable AI building blocks
The stuff we do over and over — extract, reconcile, flag anomalies, summarize — packaged as clean, versioned APIs with owners and SLAs (see the sketch after this list).
Month 1: pick the top six blocks. Month 2: ship v1 with docs. Month 3: wire them into three flagship services and stop reinventing the wheel.
A pricing council that tests outcomes and subscriptions
Define what “value” looks like per service (coverage, speed, error rate, savings) and run real pilots.
Month 1: choose five offers to test. Month 2: A/B the pricing. Month 3: publish what worked and expand. Start with vendor due diligence or GL anomaly reviews — they’re measurable and fast.
A skills ladder that actually maps to the work
Name the roles we need now — Exception Handler, Domain Translator, Controls/ML Engineer — and tie micro-certs to promotion.
Month 1: publish the ladder and baseline skills. Month 2: certify 15% of staff. Month 3: make certification table stakes for leading engagements.
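As a sketch of what one "building block" from the second item could look like, here is a minimal, assumed Python contract: a versioned capability with a named owner and an SLA, called through a catalog rather than rebuilt per engagement. The catalog, block names, and the toy anomaly rule are invented for the example.

```python
import statistics
from dataclasses import dataclass
from typing import Callable, Iterable

@dataclass(frozen=True)
class Block:
    """A reusable capability with a version, a named owner, and an SLA."""
    name: str
    version: str
    owner: str
    sla_hours: int
    run: Callable[[Iterable[dict]], list[dict]]

def flag_anomalies(rows: Iterable[dict]) -> list[dict]:
    """Toy rule for the sketch: flag rows whose amount is more than 10x the batch median."""
    rows = list(rows)
    median = statistics.median(r["amount"] for r in rows)
    return [r for r in rows if r["amount"] > 10 * median]

# One catalog entry; a real shelf would version extract, reconcile, and summarize too.
CATALOG = {
    "flag_anomalies:1.0": Block(name="flag_anomalies", version="1.0",
                                owner="audit-analytics", sla_hours=24,
                                run=flag_anomalies),
}

# Engagement teams call the catalog instead of rebuilding the logic per client.
hits = CATALOG["flag_anomalies:1.0"].run(
    [{"id": "a", "amount": 100}, {"id": "b", "amount": 120}, {"id": "c", "amount": 9_000}])
print([h["id"] for h in hits])   # ['c']
```

The design choice is the versioned contract with an owner and an SLA; the logic inside any given block can change without every engagement team noticing.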
The global wrinkle: regulation and geopolitics
The Big Four operate everywhere, and rules aren’t uniform. The EU’s AI Act, with obligations phasing in from 2025, will ratchet up requirements on data, transparency, and risk. The firms see an opening: become the auditors of AI itself — checking effectiveness, bias, and compliance — much like they did with ESG. PwC UK, for example, is gearing up AI assurance that tests chatbot accuracy and examines algorithms for unfair bias. Deloitte calls such assurance critical to adoption. A new revenue line beckons.
But regulation cuts both ways. The EU may slow adoption; the US (for now) is lighter-touch, which could accelerate disruption. China is a separate puzzle: heavy domestic AI investment plus tighter scrutiny of foreign auditors (reports of MOF scrutiny and talk of reducing reliance on the Big Four) complicate the landscape.
Regulators also want proof that AI improves audit quality. The UK’s FRC found wide use of AI tools but inconsistent monitoring of their impact — few clear KPIs to show quality gains. Expect demands for transparency on when and how AI is used in audits.
Operationally, global firms should design for regional variance: data residency, explainability requirements, and local model registries. One-size-fits-all deployment will create either compliance gaps or needless friction.
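As a sketch of what designing for regional variance could mean in configuration terms, here is a minimal, assumed Python profile map; the jurisdictions, fields, and retention figures are placeholders, not legal guidance.

```python
# Illustrative per-region profiles; the jurisdictions and fields are assumptions
# for this sketch, not a statement of any regulator's actual requirements.
REGION_PROFILES = {
    "EU": {
        "data_residency": "eu-only",
        "explainability_required": True,   # client-facing rationale for every flag
        "model_registry": "registry-eu",
        "retention_days": 3650,
    },
    "US": {
        "data_residency": "us-preferred",
        "explainability_required": False,
        "model_registry": "registry-us",
        "retention_days": 2555,
    },
}

def deployment_profile(region: str) -> dict:
    """Fail closed: an unknown region inherits the strictest profile, not an open default."""
    return REGION_PROFILES.get(region, REGION_PROFILES["EU"])

print(deployment_profile("EU")["explainability_required"])   # True
```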
So — collapse or reinvention?
A literal collapse is unlikely. These firms are too well capitalized, too embedded. What’s far more likely is the collapse of the old operating model — and the rise of a new one. By the early 2030s, we may see a “diamond” org shape, lower total headcount, many more AI engineers, data people, and hybrid operators, and revenue mixes that look more like platform + outcomes than pure hours.
Could we end up with a Big Five or Six (as tech-enabled challengers scale) or a Big Three (if one stumbles)? Yes — plausible scenarios now. As one industry voice put it, firms must change “talent, mix of people, systems, operating model to make it through the valley of death.” I agree.
If they get it right: better audits (full-population testing, earlier anomaly detection), faster advisory (weeks of analytics in days), and clearer value (priced for outcomes). If they don’t: a slow bleed of clients, talent, and relevance to nimbler competitors.
My view after a decade in the trenches: the winners will practice productive paranoia — move fast, measure impact, price on value, and treat AI as a teammate. The old Big Four may indeed be fading. But a new Big Four — leaner, tech-enabled, and wiser — can absolutely take their place. The clock’s ticking.
References
Accountancy Age. (2025, June 27). FRC questions rising use of AI by auditors.
Accountancy Age. (2025, August 6). Big Four lag behind smaller firms in AI adoption, says ex-EY chair.
Axios. (2023, July 11). Microsoft strikes $2 billion AI partnership with KPMG.
Business Insider. (2025, April 14). Uber cofounder says AI means some consultants are in “big trouble”.
Business Insider. (2025, May 20). AI is coming for the Big Four too.
Business Insider. (2025, April). Inside the AI boom that’s transforming how consultants work at McKinsey, BCG, and Deloitte.
CIO.com. (2025, March 18). Deloitte unveils agentic AI platform, Zora AI.
Competition & Markets Authority. (2019, April 18). Statutory audit services market study: Final report.
Deloitte. (2025, March 18). Deloitte unveils Zora AI: Agentic AI for tomorrow’s workforce [Press release].
Deloitte. (2025, July 15). Deloitte expands AI capabilities in Omnia global audit platform [Press release].
Deloitte. (n.d.). Digital audit technology — Deloitte Omnia.
EY. (2025, March 18). EY launching EY.ai Agentic Platform, created with NVIDIA AI, to drive multi-sector transformation — starting with tax, risk and finance domains [Press release].
EY. (2025, April 9). EY announces large-scale integration of leading-edge AI technology into global assurance technology platform [Press release].
EY. (n.d.). EY Helix — Audit technology.
Financial Times. (2025, July). Big accounting firms fail to track AI impact on audit quality, says regulator.
Financial Times. (2025, August 6). Big Four face AI competition from smaller firms, says former EY UK boss.
Fortune. (2023, July 12). KPMG is committing $2 billion-plus to A.I. — and estimates significant revenue growth.
The Guardian. (2011, May 17). Big Four auditors face OFT consultation.
The Guardian. (2023, February 22). China instructs state firms to phase out Big Four auditors on data risk.
The Guardian. (2025, June 30). Number of new UK entry-level jobs has dived since ChatGPT launch — research.
H2O.ai. (n.d.). PwC: AI-powered audit [Case study].
House of Lords Economic Affairs Committee. (2010). Auditors: Market concentration and their role.
ICAEW. (2025, July 8). FRC publishes landmark guidance on the uses of AI for audit.
KPMG. (n.d.). KPMG Ignite — Artificial intelligence platform.
KPMG. (2023, December 13). KPMG global FY2023 revenues grow to US$36 billion [Press release].
New York State Society of CPAs. (2025, March 24). Big Four now using agentic AI to boost staff productivity.
NVIDIA. (2025, March 18). NVIDIA launches family of open reasoning AI models for developers and enterprises to build agentic AI platforms [Press release].
People Matters. (2025, July 20). Big Four cut entry-level jobs amid AI and cost pressures.
PwC. (n.d.). Harnessing the power of AI to transform the detection of fraud and error [Press release].
PwC Mauritius. (n.d.). Audit and General Ledger Analysis using Halo.
Reuters. (2024, July 10). China dials up scrutiny of Big Four audit firms after Evergrande probe — sources.
Reuters. (2024, December 19). China introduces new rule to tighten scrutiny on foreign accounting firms’ domestic operations.
Scottish Financial News. (2025, June 23). Big Four slash graduate jobs as AI takes over entry-level tasks.
The Finance Story. (2024, June 4). Big 4 firms scramble to win the consulting race. Investing…
The Times. (2025, January). Robots could speed up audits — but they’re unlikely to cut fees.
The Wall Street Journal. (2025, August). As AI comes for consulting, McKinsey faces an “existential” shift.


