Illustrative engagement · representative of a 90-day Department Build · all names fictional
Department 02 · Revenue Ops

How a Revenue Ops department ships in ninety days.

A walkthrough of what a forward-deployed engineering engagement looks like end-to-end. Synthetic client (Northwind Treasury, $420M ARR fintech infrastructure), real methodology, real artifacts. Eight agents shipped to production. Built on the customer’s stack, owned by the customer, instrumented for evals from day one.

Client archetype
$420M ARR · post-PLG enterprise GTM
Sales org
95 quota-carriers · 8 RVPs · 2 sales ops
Engagement window
90 days · 3w discovery · 12w build
Year-one ROI envelope
$2.4–3.6M · methodology below
Chapter 01

The premise

Northwind’s sales team had outgrown its tooling. The product was a developer-led PLG fintech infra platform, but the deals had become enterprise — security questionnaires, redlines, multi-stakeholder buying committees. Reps were closing deals by force of will, not because the system was working. The CRO wanted forecasts she could defend at board meetings. The CFO wanted commission disputes to stop. Three weeks in, we had the picture below.

Systems detected & mapped
Salesforce · connected · CRM · REST + Bulk API
Gong · connected · Call intelligence · Webhook + REST
Clari · connected · Forecasting · REST
Outreach · connected · Sequencing · REST
DocuSign CLM · connected · Contract lifecycle · REST + Webhook
Loopio · connected · RFP / SecQ · REST
Slack · connected · Comms / routing · Bolt + Web API
Snowflake · connected · Data warehouse · JDBC
8 systems mapped · 14 handoff points documented · 23 days median internal idle time per deal · 95 reps the system would touch
Where time was actually leaking

Heatmap from time-and-motion shadowing of 14 reps and 4 RVPs across two weeks. Reps and managers were spending more time on internal coordination than on the customer.

Rep admin tax (CRM, Slack, handoffs) · 38%
Pipeline review prep & data archaeology · 22%
Forecast assembly & re-forecasting · 14%
Security questionnaires & RFPs · 11%
Deal desk routing & approvals · 8%
Customer handoff packets · 4%
Other (one-offs, exception edge cases) · 3%
Opportunity map · prioritized by ROI

The deliverable from week three: which agent to build first and why. P1 is the foundation — every downstream agent depends on clean CRM data and structured call intelligence.

P1 · CRM Hygiene + Call Intelligence
$0.9–1.2M year-one · 88% automatable · 6w build
P2 · Pipeline Review Compiler
$0.6–0.9M year-one · 100% automatable · 4w build
P3 · Forecast Intelligence
$0.5–0.7M year-one · 89% automatable · 8w build
P4 · Deal Desk Routing
$0.3–0.4M year-one · 73% automatable · 4w build
P5 · Security Questionnaire Automation
$0.2–0.3M year-one · 84% automatable · 6w build
P6 · Contract Redline Triage
$0.2–0.3M year-one · 71% automatable · 6w build
P7 · Customer Handoff Compiler
$0.1–0.2M year-one · 92% automatable · 4w build
What was surprising

The forecast wasn’t the broken thing — the rep admin tax was. Reps were spending 38% of their week on internal work, and pipeline-review prep alone consumed every Sunday night for every RVP. The team had been chasing forecast accuracy for two quarters; we found it was a downstream symptom of upstream signal loss. Fix the upstream, the forecast follows.

Chapter 02

Tribal knowledge becomes executable

Half of how a sales org actually runs is in nobody’s playbook. It’s in Marcus’s head, or in a Slack thread from 2024, or in the way the deal-desk lead grants exceptions in Q4 but not Q1. Before agents could run, we needed those rules captured, scored, and made revisable. We call this the SOP Compiler step.

Documented rules · extracted from existing playbook

Northwind had a 38-page deal playbook in Notion. The agent extracted 12 executable rules from it; 6 of the most-cited shown below. Confidence scores reflect how unambiguously each rule could be applied without further interpretation.

Extracted rules · 12 rules · 6 shown
01 · Deals >$500k require deal-desk review; >$1M require RVP review prior to proposal generation · 98%
02 · Discounts >15% on multi-year contracts require VP Sales sign-off; >25% require CRO sign-off · 97%
03 · Security questionnaires matched against KB before manual review; auto-respond if confidence ≥92% · 94%
04 · Commission calculated at standard rate unless deal includes non-standard terms flagged in CPQ · 96%
05 · Territory assignment follows named-account list first, then geo/segment rules, then round-robin · 95%
06 · CS handoff packet must include deal summary, success criteria, and onboarding requirements within 24h of signature · 99%
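To make "executable" concrete, here is a minimal sketch of how an extracted rule might be represented so an agent can apply it. The schema, names, and helper are illustrative assumptions, not Northwind's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """One executable rule extracted from the playbook (hypothetical schema)."""
    rule_id: int
    description: str
    confidence: float                   # how unambiguously the rule applies
    predicate: Callable[[dict], bool]   # does the rule fire for this deal?
    action: str                         # what the agent does when it fires

# Rule 01 above, expressed as a predicate over a deal record.
rule_01 = Rule(
    rule_id=1,
    description="Deals >$500k require deal-desk review",
    confidence=0.98,
    predicate=lambda deal: deal["acv"] > 500_000,
    action="route_to_deal_desk",
)

def fired_rules(deal: dict, rules: list[Rule]) -> list[Rule]:
    """Every rule whose predicate matches the deal."""
    return [r for r in rules if r.predicate(deal)]

matches = fired_rules({"acv": 1_400_000}, [rule_01])
```

The point of the structure is that rules stay inspectable: each carries its confidence score and a named action, so it can be audited and revised as the org changes.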
Unwritten rules · captured from interviews

What lived in people’s heads. Captured through structured interviews with the AE bench, the deal-desk lead, the CRO, the VP Customer Success, and the regional sales managers. Each rule has a named source so it can be revisited as the org changes.

Marcus always handles Meridian Health deals because he ran the relationship at his last company. Don't auto-route them.

Source · Senior Enterprise AE · Relationship-specific

Q4 pricing exceptions get more flexibility because annual quota pressure; deals pushed to Jan get standard rates.

Source · Deal Desk Lead · Period-specific

Healthcare-vertical deals always need a HIPAA addendum even when the prospect doesn't ask for it.

Source · VP Sales · Vertical-specific

Compensation disputes from the top 10% of quota carriers get escalated to VP Sales Ops, not the standard queue.

Source · Sales Ops Manager · Priority override

If a deal has been in negotiation >60 days, the forecast probability should be downgraded one tier regardless of stage.

Source · CRO · Timing rule

EMEA renewals route to the regional rep, not the original closer, because of follow-up timezone & language match.

Source · VP Customer Success · Region-specific
Agent configuration · per-workflow confidence thresholds

Every agent ships with an explicit confidence threshold. Below it, the agent escalates to a named human; it never silently fails. These thresholds are tuned weekly during the first quarter, then quarterly thereafter.

CRM Hygiene · threshold 90%
96% auto-resolved · 4% escalated to human
Call Intelligence (MEDDPICC writes) · threshold 85%
81% auto-resolved · 19% escalated to human
Forecast Intelligence · threshold 90%
89% auto-resolved · 11% escalated to human
Deal Desk Routing · threshold 95%
73% auto-resolved · 27% escalated to human
Security Questionnaire · threshold 90%
84% auto-resolved · 16% escalated to human
Contract Redline · threshold 88%
71% auto-resolved · 29% escalated to human
Why thresholds, not certainties

Agents are bad at pretending to be right when they're not. The whole stack is built around the agent declaring uncertainty rather than guessing. Below threshold, work routes to a named human with the full reasoning trace attached. The human decides; the agent learns from the decision; the threshold tunes itself.
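The escalation mechanic fits in a few lines. This is a sketch; the function name, payload shape, and assignee field are assumptions for illustration, not the production interface.

```python
def route(result_confidence: float, threshold: float, payload: dict,
          escalate_to: str) -> dict:
    """Below-threshold work goes to a named human with the reasoning trace
    attached; the agent never silently guesses (illustrative sketch)."""
    if result_confidence >= threshold:
        return {"status": "auto_resolved", **payload}
    return {
        "status": "escalated",
        "assignee": escalate_to,      # a named human, not an anonymous queue
        "reasoning_trace": payload,   # full context travels with the work
    }

# Deal Desk Routing ships with a 95% threshold, so an 89%-confidence
# result is handed to the deal-desk lead rather than auto-applied.
decision = route(0.89, 0.95,
                 {"deal": "OPP-4824", "action": "approve_discount"},
                 escalate_to="deal_desk_lead")
```

Tuning then reduces to moving one number per workflow, which is what makes the weekly threshold reviews cheap.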

Chapter 03

Day-in-the-life

Three perspectives on the same Monday morning, three weeks after deployment. Each role saw a different surface of the same agent system — tuned to the decisions that role actually needed to make.

Marcus Chen, Enterprise AE, opened his queue Monday morning and found four items, down from 23 unresolved CRM tasks before the agents shipped. Everything else was handled overnight.

What agents handled overnight
847
CRM fields auto-populated
12
Calls analyzed · MEDDPICC updated
3
Security questionnaires drafted
1
Forecast updated for Marcus’s territory
Before deployment, this took ~4 hours/day of Marcus’s time. After deployment: 38 minutes spent reviewing the queue below.
Queue snapshot · 4 items required human judgment
Sorted by ROI-at-risk · agents’ recommended actions attached to each
OPP-4824 · high
Cobalt Labs · pricing approval pending VP
18% discount vs 15% policy. RVP review in flight; SLA 8h. Suggested response drafted.
94%
OPP-4823 · high
Velocity Inc · champion engagement decay
Priya Rao no activity 11d. Forecast Intelligence flagged commit-risk. Recommended: VP-to-VP outreach this week.
83%
OPP-4828 · medium
Apex Manufacturing · EB still unnamed
Discovery call 3 surfaced new stakeholder Tom Yi. CFO escape vector still unclear. Suggested EB-id question library.
78%
OPP-4830 · medium
Stellar Foods · re-engagement after champion departure
Reassigned to Eli Park (Director PM). Re-engagement email drafted from prior intro context. Ready for review.
91%
Weekly time saved
Hours previously spent on internal admin · now reclaimed for selling
Mon +18h · Tue +22h · Wed +14h · Thu +26h · Fri +31h
Chapter 04

Under the hood — how a single deal moves

One deal enters the system. The Main Sales Agent spawns five specialized sub-agents in parallel — each applying a different rule set, checking different thresholds, surfacing different decisions. The orchestrator synthesizes the output. The feedback loop catches the cases where humans correct the agents — and the next similar pattern auto-adjusts.
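The fan-out pattern might look like this in miniature. The sub-agent stubs and their return shapes are invented for illustration; the real agents do far more than return a tuple.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical sub-agent callables; each returns (name, confidence, summary).
def crm_enrichment(deal):          return ("crm_enrichment", 0.96, "fields updated")
def deal_routing(deal):            return ("deal_routing", 0.97, "routed to deal desk")
def pricing_validation(deal):      return ("pricing_validation", 0.89, "discount flagged")
def security_questionnaire(deal):  return ("security_questionnaire", 0.87, "KB matched")
def contract_prep(deal):           return ("contract_prep", None, "awaiting approval")

SUB_AGENTS = [crm_enrichment, deal_routing, pricing_validation,
              security_questionnaire, contract_prep]

def orchestrate(deal: dict) -> list[tuple]:
    """Main Sales Agent sketch: fan out to all five sub-agents in
    parallel, then collect their outputs for synthesis."""
    with ThreadPoolExecutor(max_workers=len(SUB_AGENTS)) as pool:
        futures = [pool.submit(agent, deal) for agent in SUB_AGENTS]
        return [f.result() for f in futures]

results = orchestrate({"id": "OPP-4821", "acv": 1_400_000})
```

Running the sub-agents concurrently is why the whole pass below completes in seconds rather than the sum of the individual agent runtimes.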

Active deal · entered Negotiation 2h ago
Meridian Health Systems
OPP-4821 · Feb 3 2026 · Marcus Chen, Enterprise AE
$1.4M ACV
Platform License (3-year)
$960,000
Implementation & Onboarding
$280,000
Premium Support Package
$160,000
Main Sales Agent
Orchestrates 5 sub-agents in parallel
spawns sub-agents →
CRM Enrichment Agent · 1.4s · 96%

Updated 14 fields: MEDDPICC score, decision criteria, champion identified, timeline confirmed, competitive landscape populated from last 3 Gong calls.

Deal Routing Agent · 0.8s · 97%

Enterprise deal >$500k: routed to deal desk. Named-account match (Marcus Chen). Non-standard discount (22%) flagged for VP approval.

Rule 10: Enterprise (>$500k) to named AE; requires deal desk review
Pricing Validation Agent · 1.6s · 89%

Discount at 22% exceeds 15% standard threshold. Multi-year premium applies. Competitive displacement justification attached. Recommend: VP approval with supporting data.

Rule 2: Discounts >15% require VP Sales approval; >25% require CRO sign-off
Security Questionnaire Agent · 4.2s · 87%

Matched 142/200 questions (71%) from knowledge base. 46 routed to security team. 12 flagged for updated SOC2 attestation. Draft response document assembled.

Rule 3: Auto-respond if confidence ≥92%. Below threshold, route to manual.
Contract Prep Agent

Draft contract assembled. Awaiting pricing approval resolution before generating final DocuSign package.

The feedback loop

When a human corrects an agent’s output, the correction is captured, the relevant rule is updated, and the next similar pattern auto-adjusts. The system learns explicitly, with the rule update visible in the audit trail.

Forecast Intelligence Agent · last week’s correction
Agent predicted
Stage 4 · forecast 60% probability
Based on stage + days-in-stage cohort
Human corrected
Stage 4 · forecast 45% probability
RVP override: champion engagement decay weighted higher
Probability model updated. Deals with declining engagement signals in Stage 4 are now weighted 15% lower. The next 9 similar patterns auto-adjusted before the agent surfaced them.
Before correction
84%
After correction
93%
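A toy version of that rule update, assuming the "15% lower" weighting means 15 probability points (which is what reconciles the agent's original 60% with the RVP's 45%):

```python
def forecast_probability(stage_base: float, engagement_decay: bool,
                         decay_penalty_pp: float) -> float:
    """Stage-cohort baseline probability, minus a penalty (in probability
    points) when champion engagement is decaying. Illustrative sketch."""
    if engagement_decay:
        return round(stage_base - decay_penalty_pp, 2)
    return stage_base

# Before the correction the model carried no decay penalty, so a
# Stage 4 deal with a decaying champion still forecast at 60%.
before = forecast_probability(0.60, engagement_decay=True, decay_penalty_pp=0.0)

# The RVP override is captured as a rule update: decaying Stage 4
# deals are now weighted 15 points lower, matching the 45% correction.
after = forecast_probability(0.60, engagement_decay=True, decay_penalty_pp=0.15)
```

The correction becomes a parameter change in an auditable rule, not a one-off edit to one deal, which is why the next nine similar deals adjusted automatically.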
Change detection · auto-absorbed during Q1

Org changes always break agent systems unless they’re absorbed automatically. Three real changes from this engagement; all auto-detected, rules auto-updated, with an audit log of what changed.

Apr 18, 2026
New pricing tier added: Enterprise Plus

Deal routing, pricing validation, and commission calculation rules updated automatically. Zero manual reconfiguration.

4 rules auto-updated
Apr 21, 2026
Territory realignment: West region split into NW and SW

Deal routing, lead assignment, and territory rules updated. 34 active opportunities re-mapped to correct AEs without disruption.

6 rules auto-updated
Apr 25, 2026
Comp plan update: H2 accelerator threshold changed from 110% to 105%

Commission calculation rules auto-adjusted. Retroactive impact analysis flagged 3 reps who cross the new threshold.

2 rules auto-updated
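A change-detection pass like the ones above can be sketched as a diff between org-config snapshots plus a rule-dependency map. The keys, rule names, and dependency sets here are illustrative, not the deployed schema.

```python
def detect_changes(old: dict, new: dict) -> set:
    """Config keys whose values changed between snapshots."""
    return {k for k in new if old.get(k) != new.get(k)}

def rules_to_update(changed: set, rule_deps: dict) -> list:
    """Rules whose declared dependencies intersect the changed keys."""
    return [rule for rule, deps in rule_deps.items() if deps & changed]

old = {"pricing_tiers": ["Starter", "Growth", "Enterprise"],
       "accelerator_threshold": 1.10}
new = {"pricing_tiers": ["Starter", "Growth", "Enterprise", "Enterprise Plus"],
       "accelerator_threshold": 1.10}

changed = detect_changes(old, new)
affected = rules_to_update(changed, {
    "deal_routing":         {"pricing_tiers"},
    "pricing_validation":   {"pricing_tiers"},
    "commission_calc":      {"pricing_tiers", "accelerator_threshold"},
    "territory_assignment": {"regions"},
})
```

Because every rule declares what it depends on, adding a pricing tier mechanically identifies which rules need updating, and the same diff feeds the audit log of what changed.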
Chapter 05

What changed

Month 1 vs Month 4. The agents got smarter, pipeline execution got faster, and the team shifted from process work to selling. What follows is the full set of measurable outcomes — plus the audit trail that backs every number.

Before & after · eight measurable outcomes
Metric · Before → After · Δ
Sales cycle (enterprise) · 7.2 months → 5.4 months · −25%
Internal idle time per deal · 23 days → 6 days · −74%
Rep admin time · 41% of week → 18% of week · −23pp
Forecast accuracy variance · ±28% → ±11.4% · −59%
Win rate (deals >$250k) · baseline → +4.5pp · +4.5pp
CS handoff time · 10–15 days → same day · −95%
Security Q turnaround · 5–8 days → 18–30 hours · −85%
New rep ramp to 80% quota · 11 months → 7 months · −4 months
The accuracy curve

Agents get smarter every week. Human feedback and SOP changes are absorbed automatically. Overall agent accuracy lifted from 82% in week 1 to 96.4% by week 16.

Agent accuracy · 82% week 1 → 89% week 6 (feedback loop activated) → 94% week 12 → 96.4% week 16
Full audit trail · every action, timestamped, traceable

Every agent action with timestamp, reasoning, confidence, and human approvals. Searchable. Filterable. Exportable for compliance review.

11:42:18 · Call Intelligence · Auto-processed
MEDDPICC fields updated for Acme Corp from Gong call G-77412. Champion confirmed.
89%
11:43:01 · CRM Hygiene · Auto-processed
Enriched 4 contacts at BlueRiver Industries. Tier 2 → Tier 1 reclassification.
94%
11:44:47 · Forecast Intelligence · Escalated to human
Velocity Inc commit-risk flag. Champion engagement decay detected. Escalated to Forecast Council.
83%
11:45:12 · Deal Desk Routing · Escalated to human
Cobalt Labs 18% discount routed to RVP. Above 15% standard policy. Multi-year premium attached.
11:45:53 · Security Q · Auto-processed
TridentCloud SecQ: 168/200 auto-filled. 23 routed to Legal, 9 to Sec.
94%
11:47:09 · Contract Redline · Auto-processed
GreenTech 4 redlines triaged. 3 standard auto-handled. 1 novel (liability cap) escalated.
91%
11:48:31 · Customer Handoff · Auto-processed
Hexagon AI handoff packet generated. CSM assigned. Kickoff booked 8 May.
96%
11:49:14 · Call Intelligence · Escalated to human
Apex Manufacturing call analyzed. New stakeholder identified. EB still unnamed — flagged for AE.
78%
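Entries like these export naturally as structured, append-only records. A sketch of one possible record shape follows; the field names are assumptions, not the actual audit schema.

```python
import json
from datetime import datetime, timezone
from typing import Optional

def audit_record(agent: str, action: str, detail: str,
                 confidence: Optional[float], status: str) -> dict:
    """One append-only audit entry (illustrative shape only)."""
    return {
        "ts": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "status": status,          # "auto_processed" or "escalated"
        "action": action,
        "detail": detail,
        "confidence": confidence,  # None when not applicable
    }

entry = audit_record("security_q", "auto_fill",
                     "TridentCloud SecQ: 168/200 auto-filled",
                     0.94, "auto_processed")
line = json.dumps(entry)   # exportable, e.g. one JSON Lines row per action
```

Serializing each action as one self-contained row is what makes the trail searchable, filterable, and exportable for compliance review without touching the agents themselves.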
They own all of this

Northwind owns the agents, the data, the rules, the methodology. We did the work; they keep everything.

The client owns the agents

Every workflow, every rule, every model. Deployed on their infrastructure, inside their VPC, within their security perimeter.

Their data never leaves

Processing happens in their environment. No deal data sent to external servers. Full compliance with their security policies.

Walk away anytime

Zero platform lock-in. They keep everything if the engagement ends. Our IP is in the methodology, not the output.

Like hiring an architect

They own the building. We designed and built it. The blueprints, the structure, the systems. All theirs.

Working capital impact

$2.4–3.6M year one.

Range, not point estimate. We publish the methodology because mid-market CFOs read these numbers carefully and ours need to survive scrutiny. Below is how the value gets created — with each line tied to a specific agent and a specific measurable outcome.

$0.9–1.4M
Reclaimed-headcount equivalent

Rep admin time fell from 41% → 18% of the week. At 95 reps × 23pp × 40h × $180/hr fully-loaded, that’s 6.4–9.5 FTE-equivalent capacity reclaimed. This did not result in headcount cuts; the capacity was redeployed to expansion and new-logo motion.

Driven by · Agents 01, 02, 03, 08
$0.7–1.0M
Cycle compression value

Average enterprise cycle 7.2mo → 5.4mo (25% compression). Cash flow value of pulling forward closes computed at quarterly weighted-average value of the closed-won cohort. Conservative — doesn’t count expansion deals.

Driven by · Agents 02, 04, 05, 06, 07
$0.5–0.8M
Loss-rate avoidance

+4.5pp win rate on >$250k deals, attributable to faster SecQ turnaround (5–8d → 18–30h) and cleaner deal desk routing (median 2.8d → 8h). Computed against the prior-quarter loss cohort with a stage-weighted attribution model.

Driven by · Agents 05, 06, 07
$0.3–0.4M
Expansion enabled

CS handoffs same-day instead of 10–15 days. Earlier cohort onboarding correlates with 22% higher first-90-day expansion-conversation rate in our control group. Conservative; the lift may be larger as cohort matures.

Driven by · Agent 08
Methodology footnotes

All baselines are pre-engagement (the prior fiscal quarter at Northwind). Headcount equivalent uses fully-loaded comp ($180k median for AE benchmarked against 2025 Pavilion data). Cycle compression value uses the contribution-margin-weighted close cohort. The range exists because cohort sizes are still small (n=47 for forecast accuracy, n=23 for win rate); we’ll tighten the band as more deals close. We never bill more than the lower bound of created value.

What we don’t claim

What agents don’t do well.

We have to say this because most consultancies won’t. Knowing the limits is the only way to deploy something that survives contact with reality.

  • Closing deals. Agents don’t replace AEs. The deals where the human relationship matters most (enterprise champion-build, executive alignment, deeply political procurement) still belong to humans. The agents amplify the AE; they don’t substitute for one.
  • Inventing strategy. Agents are excellent at compressing the gap between known-good rules and execution. They don’t set GTM strategy, name new ICPs, or decide which competitors to displace. Those are CRO decisions; the agent surfaces evidence to support them.
  • Adapting to surprises without help. When the buying committee changes, when a champion leaves, when a regulatory shift reshapes a clause, the agent flags and escalates the first time. The second time, after a human has handled it, the agent has learned. Initial novel-event handling is always slower than steady-state.
  • Operating without instrumentation. Every agent ships with eval gates. If the org isn’t willing to spend the operating budget on continuous evaluation, the agents drift. Most failed AI deployments we’ve seen die at this stage: not from the model, but from the absence of measurement.
  • Making humans irrelevant. The thresholds explicitly route below-confidence work to humans with full context. The agents are accountable to a metric; the humans are accountable to the customer. Both stay in the loop.
Continuation

What ships next at Northwind.

Quarter two: scope expansion to outbound prospecting agent, pre-call brief generator, and renewal-risk scoring. Quarter three: BOT (build-operate-transfer) optionality. The methodology is portable; the agents are theirs.