I spent over a decade on the inside of companies going through due diligence.
I was the operator inside the org chart - watching consultants ask questions that confirmed what the deck said and miss what was actually broken. I was the one who later had to fix what those DDs missed: the tech debt that wasn't pitched as debt, the founder dependencies that weren't called out, the AI claims that turned out to be thin wrappers around someone else's model.
Now I'm building a DD practice from the other side - and I'm building it differently because I've been the company being assessed.
This piece outlines the framework I'd apply, the things pure-analyst DD typically misses, and what operator-led DD should look like. It's written for VCs evaluating how to add this lens to their existing process - and for founders who want to understand what an operator-led assessor would actually look at.
Why Pure-Analyst DD Misses Things
Most pre-investment DD follows a familiar pattern: management interviews, document review, customer calls, expert calls, and a polished deck of findings. It's thorough on what's documentable - financials, contracts, market data, customer concentration.
It's weak on what isn't documented:
- How the team actually makes decisions
- Whether the architecture will scale or buckle
- Whether the founders will execute or freeze under pressure
- Where critical knowledge is concentrated in one person
- Whether the AI strategy is real or marketing
Pure analysts work from frameworks. Operators work from scars.
The best DD comes from people who've been the company being assessed - and who got it wrong themselves before they learned to spot the patterns in others.
The 5-Phase Operator DD Framework
Phase 1: Operational Diagnostic (Days 1-3)
Most "operational DD" reports stop at the org chart. That's where mine starts.
What I actually look for:
- Decision flow: Who really makes which decisions, regardless of titles? Founder bottlenecks become deal-breakers post-Series B.
- Process maturity: Do "processes" exist as documented artifacts nobody follows? Or as actual operating rhythm? The gap between the two predicts post-close pain.
- Tribal knowledge mapping: What lives only in one person's head? Quantify the bus factor on critical functions.
- Communication patterns: Do teams talk to each other or only through leadership? Cross-functional friction is invisible in interviews but visible in workflow analysis.
- Cultural cracks: Companies look great when revenue's growing. Cultural problems surface during the first hiring crunch, the first big customer loss, or the first AI competitor.
Output: Operational maturity score across 8 dimensions, with prioritized risks.
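To make the scoring concrete, here is a minimal sketch of how a maturity scorecard with prioritized risks could be structured. The dimension names, scores, and notes are hypothetical illustrations - the actual 8-dimension scorecard is not reproduced here:

```python
from dataclasses import dataclass


@dataclass
class DimensionScore:
    name: str        # e.g. "decision_flow" - hypothetical dimension name
    score: int       # 1 (ad hoc) to 5 (mature)
    risk_note: str   # evidence behind the score


def prioritized_risks(scores: list[DimensionScore], threshold: int = 3) -> list[DimensionScore]:
    """Return dimensions scoring below the threshold, weakest first."""
    return sorted((s for s in scores if s.score < threshold), key=lambda s: s.score)


# Illustrative assessment of three dimensions (made-up data).
scores = [
    DimensionScore("decision_flow", 2, "founder approves every hire and every deal"),
    DimensionScore("process_maturity", 4, "documented processes are actually followed"),
    DimensionScore("tribal_knowledge", 1, "billing logic lives in one engineer's head"),
]

for risk in prioritized_risks(scores):
    print(f"{risk.name}: {risk.score}/5 - {risk.risk_note}")
```

The point of the structure is that every score carries its evidence, so the prioritized list reads as a risk register rather than a bare number.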
Phase 2: Technical Architecture Review (Days 3-7)
Most VCs outsource technical DD to ex-CTOs who've been out of the building for 5 years. They evaluate against frameworks from when they were last hands-on. Tech moves fast - DD evaluators need to be moving with it.
What I actually look for:
- Architecture vs. business plan alignment: Does the architecture support the next 18 months? Or only the last 18?
- Tech debt that's been rebranded: "Fast iteration" sometimes means "we never refactored." Decode the language.
- Real engineering practices: Code review culture, deployment frequency, incident response - these signal engineering maturity better than headcount.
- Build vs. buy decisions: Companies that built what they should have bought reveal poor judgment. Companies that bought what they should have built reveal weak technical leadership.
- Hidden infrastructure costs: Cloud bills, third-party APIs, data licensing - costs that grow with usage and surprise teams at scale.
Phase 3: AI Readiness Assessment (Days 5-9)
This is where I see the biggest gap in current DD practice. Companies pitch AI capabilities they're not positioned to deliver.
Pure-analyst DD reads the AI section of the deck, takes management's word, and moves on. Operator-led AI DD evaluates against the 5 dimensions of AI readiness:
- Data foundation: Is the data clean enough for AI to produce useful output?
- Team capability: Does the team understand AI well enough to use it correctly?
- Infrastructure: Can existing systems support AI integration?
- Strategy & governance: Is there an AI strategy beyond "we use ChatGPT"?
- Use case identification: Are AI use cases tied to measurable business outcomes?
"AI-powered" companies whose AI is a thin GPT wrapper without proprietary data, model fine-tuning, or differentiated workflow. Looks impressive in demo. Disappears as soon as a competitor builds the same wrapper.
Phase 4: Team & Founder Evaluation (Days 7-11)
This is the highest-stakes module - and the one most underdeveloped in traditional DD.
Pure-analyst DD does management interviews, captures management's stated philosophy, and submits a write-up. Operator DD looks at decision patterns under pressure.
- Founder decision archeology: What are the 3 hardest decisions you made in the last 12 months? How did you make them? What would you do differently? The answers reveal more than any psychometric.
- Conflict resolution: When co-founders disagree, how does it actually resolve? Companies with unresolved founder conflict rarely survive scaling pressure.
- Hiring philosophy: Show me the last 5 senior hires. Who interviewed them? Who decided? The answer reveals whether founders can let go.
- What scares them: Ask founders what would kill the company. Vague answers ("execution") reveal weak strategic clarity. Specific answers ("we have 90 days to fix our churn before LTV inverts") reveal operators.
- Bus factor: If the founder disappeared for 60 days, what breaks? The answer measures the company's actual independence.
Phase 5: Risk Inventory & Post-Close Roadmap (Days 11-14)
This is where operator DD differs most from pure-analyst DD. Pure-analyst DD ends with findings. Operator DD ends with an actionable post-close roadmap.
What goes in the inventory:
- Key-person dependencies that need redundancy planning
- Process gaps that will fracture at the next growth stage
- Technical debt requiring immediate attention vs. eventual refactor
- AI claims requiring deal terms (e.g., milestone-based payments)
- Cultural risks needing hiring or coaching intervention
- Governance gaps requiring board attention
The output: A go/caution/no-go signal, plus a prioritized 90-day post-investment value creation plan. If you invest, here's what to fix first.
The Red Flags Pure-Analyst DD Misses
Some patterns I've learned to spot - usually invisible to pure-analyst DD:
1. The "Architectural debt" Hidden in Velocity
Companies bragging about deploy frequency sometimes achieve it because nothing is actually tested. Investigate: do they have automated tests? A code review culture? Or is "ship fast" a euphemism for "ship without rigor"?
2. The Founder-As-Hero Dependency
If the founder personally closes every enterprise deal, personally interviews every senior hire, personally makes every product trade-off - the company has no business without the founder. Look for what the founder has successfully delegated.
3. AI as Marketing, Not Capability
Test: ask the head of engineering to explain how AI is integrated into the core product. If the answer is "we use ChatGPT for X" without describing data flows, model selection, fine-tuning approach, or governance - the AI claim is marketing.
4. Process Theater
Companies show you their Notion workspace with 47 documented processes. Ask people on the ground: do you actually follow this? In 70% of cases, the documented processes are theater - performed for investors, ignored in practice.
5. Customer Concentration Risk Hidden in Logos
"We have 50 customers" sometimes means "1 customer is 60% of revenue, 49 are 40%." Look at revenue concentration, not logo count.
6. The Fast-Growth Cultural Crack
Companies that grew 10x in 18 months almost always have unresolved cultural debt - hires made too fast, processes broken to ship, leadership stretched thin. The crack isn't visible until growth slows. Plan for it.
What Modern Deal Pace Requires
Big consulting firms quote 6-8 week DD timelines. That doesn't match how deals actually move today.
Modern DD timelines I work to:
- Light DD (specific risk area): 3-5 days
- Standard DD (operational + technical): 1-2 weeks
- Comprehensive DD (full coverage): 2-3 weeks
- Express DD (time-sensitive deals): 24-72 hours
Speed is a feature, not a compromise. The depth of analysis isn't lower - the framework just runs faster because there's no committee approval, no junior associate doing the first draft, no 12-person team meeting to align on findings. Operator-led DD is one experienced person plus their network of specialists when needed.
Why DD-to-Portfolio Continuity Matters
The pattern I want to build my practice around: VCs engage me for pre-investment DD. I surface what's real about operational maturity, technical posture, and team scalability. If they invest, I continue with the portfolio company to help fix the issues I diagnosed.
Why this design works:
- No re-onboarding cost. The same person already understands the company in depth.
- Credibility with founders. The DD report becomes a roadmap, not a hit job - because the assessor stays to help execute it.
- Continuity for the VC. One point of contact for both pre- and post-investment work on a high-risk asset.
- Aligned incentives. An assessor who has to live with the findings has every reason to be calibrated and honest.
This pattern doesn't fit every VC's model. But for funds investing in companies that need operational support to reach the next stage - it's substantially more efficient than serial consulting engagements with different vendors at each stage.
Have a Deal in Motion?
Operator-led DD for VCs, angel investors, and family offices. 1-2 week delivery. Honest signals. Starting from $1,500.
See DD Service Details

Frequently Asked Questions
What's operator-led DD vs traditional DD?
Traditional DD applies frameworks from an analyst's perspective. Operator-led DD applies pattern recognition from someone who has built and scaled the systems being assessed. Both have value - operator-led is faster, catches things frameworks miss, and produces actionable post-close roadmaps.
Should I always do operator DD or sometimes pure analyst?
Pure-analyst DD wins when financial rigor matters most, the deal is in a sector you're not familiar with, or regulatory complexity dominates. Operator DD wins when scaling/execution risk is the main question, AI capabilities need verification, or post-investment value creation is part of the thesis.
How honest can DD really be when fees come from the VC?
The same way audit honesty works - reputation is the currency. A DD assessor who hides issues to please a fund loses every future engagement once those issues surface post-close. The economic incentive aligns with honesty over flattery, regardless of who pays.
Can DD findings actually kill a deal?
Yes - and sometimes they should. The value of operator-led DD isn't producing "no-go" signals on every deal. It's producing accurate signals: "go with these post-close priorities," "go but with these deal terms," or "pause until X is fixed." A DD that always says "go" isn't honest. A DD that always says "no" isn't useful. The job is calibration based on what's actually true about the company.
Related Reading
- Enterprise AI Transformation: The Operator's Playbook - The companion flagship for post-investment AI transformation in portfolio companies.
- AI Readiness Assessment Guide - The diagnostic framework underneath the AI readiness section of any DD.
- The Operator-Consultant Method - The four-phase methodology behind post-investment portfolio support.
Working With May on a Live Deal
If you have a deal in motion that needs operator-led DD, the full DD service overview covers the engagement formats: Light DD (3-5 days, from $1,500), Standard DD (1-2 weeks, from $3,500), Comprehensive DD (2-3 weeks, from $7,500), and Express DD (24-72 hours for time-sensitive deals). Post-close, the same operator can continue with the portfolio company through Organizational Assessment, fractional support, or AI Transformation - cleaner handoff, faster value creation.
About the author: May Mor is a Scale Architect and AI Builder with 10+ years scaling tech companies across fintech, digital banking, and adtech - leading R&D from 30 to 150 engineers and shipping products to 100K+ users. She holds an M.Sc in Intelligent Systems & AI from Afeka and is certified in Organizational Consulting from Bar-Ilan University. She runs Scale with May, offering Enterprise AI Transformation, operator-led organizational consulting, and investment due diligence for VCs and growing tech companies.