Beyond Forecasts: AI for Deal Analysis
By Marco Diaz, March 15, 2025
Forecasts look outward. Deal analysis looks inward—at the messy signals that decide whether a committee will actually say “yes.” The shift isn’t about robo-selling; it’s about a quiet analyst that scores risk, suggests the next best move, and turns call notes into coachable moments—so cycles tighten and surprises fade.
The scene
Three weeks from quarter close, the pipeline review feels good. The champion is engaged, the pilot’s green, and the forecast says 70%. Then the machine flags something human eyes slid past: Security never restated the “go/no-go” control in their own words; Finance asked for “ranges, not point estimates”; and the last call transcript shows a wobble on data residency. No drama, just drift.
What changed isn’t the product. It’s the surface area of evidence. An AI quietly stitched together transcripts, email trails, CRM fields, and meeting minutes; it didn’t predict the future—it analyzed the deal you already have.
Why this moment (and why it’s not hype)
Adoption is real and broadening. Across functions, reported AI use jumped sharply in 2024–25, with sales among the most active adopters. Executives increasingly point to measurable benefits when AI is embedded in the sales motion: productivity, data quality, and personalization.
The economic upside is non-trivial. Independent analyses estimate generative AI could unlock $0.8–$1.2 trillion in additional productivity across sales and marketing alone, on top of gains from earlier analytics. The direction of travel is clear: from dashboards to decisions.
And the trust bar is rising. We’re entering an “agentic” phase: AI that doesn’t just summarize but proposes actions, and in that world governance and transparency matter as much as cleverness.
What AI can see that humans won’t (or can’t)
1) Risk signals hiding in plain sight
Machines are good at “boring vigilance.” They notice when Security/Legal never acknowledged a control posture in writing, when Finance keeps asking for ranges (a proxy for uncertainty), or when a contact’s language slips from confident, active phrasing to hedged qualifiers across calls. None of these is decisive alone; together, they’re a risk signature.
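The mechanics don’t need to be exotic. Here is a minimal sketch of the idea in Python; the signal names, weights, and the single additive score are illustrative assumptions, not any vendor’s actual model.

```python
# Illustrative sketch only: signal names, weights, and the additive score are
# invented for explanation, not taken from any specific tool.
from dataclasses import dataclass

@dataclass
class Signal:
    name: str
    present: bool
    weight: float  # how strongly this signal suggests hidden risk

def risk_signature(signals: list[Signal]) -> float:
    """Combine weak, individually inconclusive signals into one risk score."""
    return sum(s.weight for s in signals if s.present)

signals = [
    Signal("security_never_acknowledged_control_in_writing", True, 0.35),
    Signal("finance_keeps_asking_for_ranges", True, 0.25),
    Signal("contact_language_shifted_to_hedging", True, 0.20),
]

score = risk_signature(signals)
print(f"risk signature: {score:.2f}")  # 0.80 here -> worth a review, not a verdict
```

No single signal triggers anything; the point is that the sum crosses a review threshold while each piece still looks ignorable on its own.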
2) Consensus, not charisma
AI can now approximate consensus surface area: how many functions are truly engaged, in what sequence, and with what strength of evidence. That’s different from counting meetings. It’s closer to the committee reality that decides enterprise deals.
3) Coachable moments at scale
Conversation intelligence has matured from keyword bingo to pattern detection: missed questions, talk-to-listen ratios, drowned-out stakeholders, promises with no follow-up. The value isn’t a score; it’s the clip reel that turns a rep’s last call into tomorrow’s improvement.
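One of those patterns is plain arithmetic. The sketch below computes a talk-to-listen ratio from transcript segments; the segment format and speaker labels are assumptions for illustration, not any conversation-intelligence product’s data model.

```python
# Illustrative only: assumes transcript segments already labeled by speaker,
# not any specific conversation-intelligence vendor's schema.
def talk_to_listen_ratio(segments: list[tuple[str, float]], rep: str) -> float:
    """Seconds the rep talked divided by seconds everyone else talked."""
    rep_time = sum(sec for speaker, sec in segments if speaker == rep)
    other_time = sum(sec for speaker, sec in segments if speaker != rep)
    return rep_time / other_time if other_time else float("inf")

call = [("rep", 310.0), ("buyer_security", 95.0), ("buyer_finance", 140.0)]
print(round(talk_to_listen_ratio(call, "rep"), 2))  # ~1.32: rep talked more than listened
```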
From prediction to prescription
Next best move, not next best email.
The difference between marketing automation and deal automation is context. A good system won’t just suggest “send a case study”; it will suggest which function to engage (Security vs. Finance), what artifact to use (control matrix vs. benefits ranges), and why now (an upcoming audit window). This is AI as orchestration, not spam.
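To make the contrast concrete, here is a toy version of that orchestration logic; the deal-state fields, artifacts, and the audit-window trigger are hypothetical placeholders, not a real recommendation engine.

```python
# Toy illustration: deal-state fields, artifacts, and triggers are placeholders.
def next_best_move(deal: dict) -> dict:
    """Return which function to engage, with what artifact, and why now."""
    if not deal.get("security_acknowledged_controls"):
        return {
            "engage": "Security",
            "artifact": "control matrix",
            "why_now": "audit window opens next month",
        }
    if deal.get("finance_wants_ranges"):
        return {
            "engage": "Finance",
            "artifact": "benefits ranges, not point estimates",
            "why_now": "budget review lands before quarter close",
        }
    return {
        "engage": "Sponsor",
        "artifact": "decision brief",
        "why_now": "consensus is broad enough to ask for a date",
    }

print(next_best_move({"security_acknowledged_controls": False}))
```

The rules themselves are trivial; the value is that each recommendation names a function, an artifact, and a timing reason instead of a generic touchpoint.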
Better forecasts are a by-product.
When risk is exposed earlier and actions are grounded in governance (Security/Legal/Finance checkpoints), forecast quality improves, because the committee has already argued with itself. Some vendors tout accuracy and cycle-time lifts; treat these as directional, but the operating logic is sound.
What changes for the human
AI has shown consistent productivity gains across knowledge work, but there’s a catch: if teams treat it like a crutch, motivation can dip. The fix isn’t to throttle the tools; it’s to put humans at the center of judgment and narrative, and let AI do the vigilance and synthesis.
Reps get a mirror, not a script. They still run the room; the system just highlights where the room went quiet.
Managers stop policing activity and start coaching decision quality (are we advancing governance gates?).
Executives trade anecdote-driven reviews for evidence-driven ones: risk codes, owner names, and dated checkpoints.
The ESO stance: AI inside the operating system
We treat AI as a neutral analyst and research assistant embedded across planning and execution, not a black box, and not a vendor pitch. Its job is to:
Research the account & context. Synthesize public filings, earnings calls, press, job posts, product docs, tech signals, risk disclosures, and incidents into a living Account Brief your team can trust.
Surface business opportunities. Detect triggers (regulatory deadlines, audit windows, budget waves, incident fallout, competitive gaps) and map them to revenue protection, cost avoidance, and risk reduction in that specific organization.
Tailor frameworks to purpose. Select and adapt the right ESO framework to the account’s situation: Decision Brief, Buyer-Owned MAP, Risk Register, Temporal Warfare Calendar™ alignment, or domain frameworks (e.g., Digital Infrastructure Resilience, compliance plays)—and propose the minimum viable artifacts to move the deal.
Score deal risk against committee reality. Flag missing functions (Security/Legal/Finance/Procurement), unacknowledged controls, weak sponsor signals, and consensus gaps.
Recommend the next best move. Who to engage, with which artifact, and why now (anchored to governance gates and timing windows).
Turn conversations into coaching. Generate concise highlight reels and pattern feedback tied to the plan.
This isn’t peripheral; it’s central to ESO. AI increases consensus surface area, reduces cognitive load, and accelerates both deals and decisions by enriching analysis and strategic decision-making, not by “automating the seller.” All recommendations include source transparency, permissions, and an auditable “why this” note to preserve trust.
What it means for execs (and your champion)
Design for trust. Document data sources, access controls, and “why this recommendation” notes; people will use what they understand.
Instrument governance, not vanity. Track functions engaged, gate acknowledgments, and slippage reasons—not just activity counts.
Make the model coachable. When a recommendation is ignored and the deal still closes, feed that back; when it saves a slip, capture the clip.
Tie AI to cadence. Weekly reviews should read like flight ops: risk codes, mitigations, owners, dates. Forecasts improve as a result, not as a ritual.
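If it helps to picture what “flight ops” looks like as data, a hypothetical review record might carry exactly those fields and nothing more; the field names below are illustrative, not a prescribed schema.

```python
# Hypothetical shape of one weekly-review line item; field names are illustrative.
from dataclasses import dataclass
from datetime import date

@dataclass
class DealReviewItem:
    deal: str
    risk_code: str     # e.g. "SEC-ACK-MISSING"
    mitigation: str    # the agreed next move
    owner: str         # a named person, not a team
    due: date          # a dated checkpoint, not "soon"

item = DealReviewItem(
    deal="ACME renewal",
    risk_code="SEC-ACK-MISSING",
    mitigation="Walk Security through the control matrix; get written acknowledgment",
    owner="J. Rivera",
    due=date(2025, 3, 28),
)
print(item)
```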
A closing scene
In the end, the “AI win” rarely looks cinematic. It looks like one more voice at the table that never gets tired, never forgets a promise, and never confuses energy with progress. It doesn’t replace the seller or the sponsor. It just gives the committee the evidence it needed to say “yes” once.
Sources
Adoption & value: McKinsey State of AI 2024–2025 (AI and gen-AI use rising; sales among leading functions).
Economic potential: McKinsey estimate of gen-AI productivity in sales/marketing ($0.8–$1.2T).
Outcomes with AI in sales teams: Salesforce State of Sales (teams using AI outperformed peers; data quality/productivity benefits).
Agentic AI & trust: Salesforce State of the AI Connected Customer (action-taking AI requires trust and governance).
Coaching & conversation intelligence trends: Gong Labs roundups (deal and conversation pattern insights drawn from large datasets).
Human impact: HBR research on gen-AI effects on productivity and motivation.