Insurance · Agentic AI
Cutting new agent ramp time from 11 weeks to 4 using AI voice roleplay training
A mid-size US insurance carrier onboarding 40–60 new cold-calling agents per quarter was losing 11 weeks of productive capacity per agent to classroom training and shadowing. Managers were spending 6+ hours per week per new hire on manual roleplay sessions that were inconsistent, unscalable, and undocumented. We built an AI voice training simulator using VAPI and n8n — six distinct AI customer personas that agents call and practice against, with automated post-call scorecards delivered to managers. Ramp time dropped from 11 weeks to 4. Manager coaching time dropped by 70%.
Business Context
New agents were practicing on real prospects.
There was no other option.
The carrier ran a direct sales operation — 200+ agents cold-calling homeowners and small business owners to sell P&C policies. Turnover was high, as it is across the industry, meaning 40–60 new agents were onboarded every quarter. The training programme consisted of two weeks of classroom product knowledge, one week of call shadowing, and then live calls with a manager listening in. The problem: agents were not ready. The first 20–30 live calls were effectively practice — real prospects absorbing the cost of an undertrained agent fumbling objections, mispronouncing coverage terms, and failing compliance language requirements. Conversion rates for new agents in their first 60 days were 34% below the team average.
The cost of undertrained agents on live calls
- 11 wks: average time to full productivity per new agent (from hire date to hitting 80% of the experienced agent conversion rate)
- 34%: below-average conversion rate in the first 60 days (new agents practicing objection handling on real prospects, with real pipeline impact)
- 6 hrs: weekly manager time per new hire on manual roleplay (inconsistent, undocumented, unscalable, and still insufficient practice volume)
The manager roleplay problem was structural. Each manager ran sessions differently. There was no standard scorecard, no consistent objection set, and no documentation of what was practiced or how the agent performed. A new agent could complete onboarding having never practiced the three most common objection types — price objections, "I already have coverage" deflections, and compliance-sensitive questions about pre-existing conditions — because their manager happened not to include them.
The carrier had evaluated off-the-shelf sales training platforms. None of them offered voice-based roleplay with insurance-specific personas. The closest options were text-based scenario tools that bore no resemblance to the actual experience of a cold call. What agents needed was to pick up the phone, dial, and have a realistic conversation with a difficult customer — as many times as they needed, at any hour, with immediate feedback.
Scope of Work
What we were asked to build
AI customer persona library — 6 voice personas
Six distinct AI customer personas built on VAPI with GPT-4o, each with a unique personality profile, objection set, and behavioural pattern: the Price Shopper, the Skeptic, the Already Covered deflector, the Confused Elderly caller, the Aggressive Objector, and the Warm Lead. Each persona responds dynamically to agent language — not from a script, but from a character brief that drives realistic, unpredictable conversation.
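A character brief of this kind can be represented as structured data that is rendered into the voice model's system prompt. A minimal sketch of the idea — the field names, the `build_system_prompt` helper, and the Price Shopper details shown here are illustrative, not the production schema:

```python
from dataclasses import dataclass


@dataclass
class PersonaBrief:
    """Illustrative character brief for one AI customer persona."""
    name: str
    temperament: str          # how the persona opens and reacts
    objections: list[str]     # objection types this persona raises
    escalation_trigger: str   # agent behaviour that hardens resistance
    softening_trigger: str    # agent behaviour that earns cooperation


def build_system_prompt(p: PersonaBrief) -> str:
    """Render the brief into a system prompt for the voice model."""
    return (
        f"You are '{p.name}', a prospect on a cold call about P&C insurance. "
        f"Temperament: {p.temperament}. "
        f"Raise these objections naturally, not from a script: "
        f"{', '.join(p.objections)}. "
        f"If the agent {p.escalation_trigger}, become more resistant. "
        f"If the agent {p.softening_trigger}, gradually open up."
    )


# Example brief (details invented for illustration).
price_shopper = PersonaBrief(
    name="The Price Shopper",
    temperament="friendly but fixated on cost comparisons",
    objections=["price objection", "competitor quote", "discount demand"],
    escalation_trigger="dodges a direct question about premiums",
    softening_trigger="quantifies value against the competitor quote",
)
```

Because the brief describes behaviour rather than scripting lines, the same agent gets a different conversation on every call — which is what makes repeat practice worthwhile.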
Practice call infrastructure
Agents dial a dedicated training number from any phone. An n8n workflow routes the call to the selected persona via VAPI. The agent experiences a realistic cold call — hold music, ring tone, persona pickup — indistinguishable from a live call in feel. Sessions can be initiated on demand, 24/7, without manager involvement. Each session is recorded and transcribed automatically.
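In outline, the routing step is a small webhook handler: the agent's keypad selection maps to a persona, and the workflow hands the call to that persona's voice assistant with recording and transcription switched on. A hedged sketch — the persona identifiers and the shape of the webhook payload are assumptions, not the actual VAPI or n8n contract:

```python
# Hypothetical mapping from keypad digit to persona identifier.
PERSONA_ROUTES = {
    "1": "price_shopper",
    "2": "skeptic",
    "3": "already_covered",
    "4": "confused_elderly",
    "5": "aggressive_objector",
    "6": "warm_lead",
}


def route_training_call(payload: dict) -> dict:
    """Pick the persona for an inbound training call.

    `payload` stands in for the webhook body the telephony layer posts
    to the workflow; only `digits` (the agent's keypad choice) is used.
    """
    digit = payload.get("digits", "")
    persona = PERSONA_ROUTES.get(digit)
    if persona is None:
        # Unrecognised choice: fall back to a default persona rather
        # than dropping the call.
        persona = "warm_lead"
    # Recording and transcription are always on, since the scorecard
    # pipeline depends on the transcript.
    return {"assistant": persona, "record": True, "transcribe": True}
```

Keeping this step stateless is what allows sessions to run 24/7 with no manager involvement: any phone, any hour, same routing logic.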
Automated post-call scorecard
After each session, an n8n workflow processes the transcript through GPT-4o with a structured scoring rubric: objection handling (0–25), compliance language accuracy (0–25), talk-to-listen ratio, empathy and tone markers, product knowledge accuracy, and call structure adherence. Scorecard generated and delivered to the agent and their manager within 90 seconds of call end.
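The post-processing step is essentially validation and flagging around the model's raw scores. A minimal sketch — the two 0–25 scales are stated in the rubric above, while the scales for the remaining dimensions and the 80% compliance threshold's placement here are assumptions for illustration:

```python
# Maximum points per rubric dimension. The first two scales come from
# the rubric; the rest are assumed for this sketch.
RUBRIC_MAX = {
    "objection_handling": 25,
    "compliance_language": 25,
    "talk_to_listen_ratio": 10,
    "empathy_and_tone": 10,
    "product_knowledge": 10,
    "call_structure": 10,
}

COMPLIANCE_THRESHOLD = 0.80  # below this, flag for extra practice


def build_scorecard(raw_scores: dict) -> dict:
    """Validate model-returned scores against the rubric and flag gaps."""
    card = {}
    for dim, max_pts in RUBRIC_MAX.items():
        pts = raw_scores.get(dim, 0)
        # Clamp anything the model returns outside the rubric bounds.
        pts = max(0, min(pts, max_pts))
        card[dim] = {"points": pts, "pct": round(pts / max_pts, 2)}
    card["compliance_flagged"] = (
        card["compliance_language"]["pct"] < COMPLIANCE_THRESHOLD
    )
    return card
```

Clamping and flagging in code, rather than trusting the model's arithmetic, keeps the legal-approved rubric authoritative over whatever the scoring model emits.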
Manager coaching dashboard
Web dashboard aggregating all practice session data per agent — session count, score trends over time, weakest scoring dimensions, most-failed objection types, and compliance language error frequency. Managers see exactly where each agent needs coaching before their 1:1 sessions, replacing generic roleplay with targeted skill remediation.
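The aggregation behind that view reduces to a small summary function over an agent's session scorecards. A minimal sketch — the per-session dict shape and the three-session trend window are assumptions for illustration:

```python
from statistics import mean


def coaching_summary(sessions: list[dict]) -> dict:
    """Aggregate one agent's practice sessions for the manager view.

    Each session dict maps rubric dimension -> score as a fraction
    (0.0-1.0); this shape is illustrative, not the production schema.
    Sessions are assumed to be in chronological order.
    """
    if not sessions:
        return {"session_count": 0}
    dims = sessions[0].keys()
    averages = {d: round(mean(s[d] for s in sessions), 2) for d in dims}
    weakest = min(averages, key=averages.get)

    def window_mean(window: list[dict]) -> float:
        return mean(mean(s.values()) for s in window)

    # Trend: recent sessions vs. earliest sessions (positive = improving).
    trend = round(window_mean(sessions[-3:]) - window_mean(sessions[:3]), 2)
    return {
        "session_count": len(sessions),
        "dimension_averages": averages,
        "weakest_dimension": weakest,
        "trend": trend,
    }
```

The manager's 1:1 prep then starts from `weakest_dimension` rather than a hunch, which is the shift from generic roleplay to targeted remediation.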
Constraints we worked within
- Personas had to pass a realism test with experienced agents — if they felt scripted or robotic, agents would not engage seriously
- Compliance language scoring required legal team sign-off on the rubric — two revision cycles before approval
- Call recordings required consent handling — agents briefed and consented at onboarding; no customer data involved
- VAPI latency had to stay under 800ms for the conversation to feel natural — required prompt engineering and model selection tuning
Explicitly not in scope
- Live call monitoring or real-time coaching during actual prospect calls
- CRM integration or lead management
- Product knowledge assessment or licensing exam preparation
- Manager performance evaluation or HR workflow integration
System Architecture
Agent dials in. AI answers. Scorecard lands in the manager dashboard 90 seconds later.
How We Worked
4 months. Agents in the loop from week 3. Full rollout in month 4.
Persona Design & Call Infrastructure
Interviewed 8 experienced agents and 3 sales managers to map the most common objection types, call structures, and failure modes. Built the 6 persona character briefs. VAPI infrastructure set up with dedicated training numbers. First persona — the Price Shopper — built and tested internally. Latency tuning required 2 weeks to get below 800ms consistently.
Remaining Personas & Scoring Rubric
Remaining 5 personas built and tested. Scoring rubric drafted with sales training lead and submitted to legal for compliance language review. First revision returned in week 3 — compliance section required more specific language around state-regulated disclosure requirements. Second revision approved. Scorecard pipeline built on n8n.
Pilot with New Hire Cohort
Piloted with a cohort of 12 new agents in their second week of onboarding. Agents completed 8–12 practice sessions each over 3 weeks. Manager feedback: scorecards were accurate and surfaced skill gaps they had not identified in manual roleplay. Agent feedback: the Aggressive Objector persona was "more realistic than most real calls." One agent completed 31 sessions in 3 weeks.
Full Rollout & Dashboard Launch
Rolled out to all new hire cohorts. Manager dashboard launched. Training programme restructured — classroom time reduced from 2 weeks to 1, with the second week replaced by 15 mandatory simulator sessions before live calls begin. Ramp time tracked from first cohort through full rollout: average time to 80% productivity dropped from 11 weeks to 4.
Working rhythm
- Cadence: Two-week sprints, weekly sales training team reviews
- Decision owner: VP of Sales and Head of Sales Training
- Primary metric: Time to 80% productivity vs. experienced agent baseline
- Escalation SLA: 24 hours with written recommendation
Results
Measured across 3 full new hire cohorts post-rollout.
~64% reduction in time to full agent productivity
Was: 11 weeks average to reach 80% of experienced agent conversion rate
Ramp time dropped from 11 weeks to 4 weeks across the 3 post-rollout cohorts. The primary driver: agents arriving at their first live call having already handled 15+ realistic objection scenarios, including the 3 most common failure modes. First-60-day conversion rate for new agents improved from 34% below average to 8% below average.
70% reduction in manager time spent on new hire roleplay coaching
Was: 6 hours per week per new hire on manual, undocumented roleplay sessions
Managers now spend recovered time on targeted coaching based on scorecard data — addressing specific identified weaknesses rather than running generic practice sessions. Manager satisfaction with the coaching process improved significantly; they report higher confidence in new hire readiness before live calls.
4–7× more practice sessions per agent vs. manual roleplay
Was: 3–5 manager-led roleplay sessions during onboarding
Average new agent completes 22 simulator sessions before their first live call. The previous programme delivered 3–5 manager-led sessions. One agent in the pilot cohort completed 31 sessions in 3 weeks — a volume of practice that would have required 15+ hours of manager time under the old model.
compliance language accuracy score at first live call
Was: no baseline — compliance accuracy was not measured during onboarding
The scoring rubric introduced a measurable compliance language standard for the first time. Agents scoring below 80% on compliance in simulator sessions are flagged for additional practice before live calls. No compliance-related customer complaints from the post-rollout cohorts in the 90-day measurement window.
What This Means for You
Every call centre operation with high agent turnover has this problem. New agents practicing on real prospects is not a training strategy. It is a pipeline tax — paid in lost conversions, compliance risk, and manager time.
1. New agent conversion rates are significantly below experienced agent rates for the first 60–90 days
2. Managers spend a disproportionate share of their week on new hire roleplay that is inconsistent and undocumented
3. Compliance language errors from new agents are a recurring risk that your current training programme cannot reliably prevent
This system was built in 4 months on VAPI and n8n — no enterprise platform licences, no proprietary infrastructure. The personas, scoring rubric, and call routing are all configurable. Adding a new persona for a new product line or a new objection type takes days, not months.
See how we approach Agentic AI for sales and training