An AI Agent That Actually Works: Automating Client Onboarding at a Professional Services Firm

Neil Simpson
ai-engineering · domain-intelligence · consulting

Client

Professional services firm (anonymised)

Platform

Next.js / Claude API / PostgreSQL / Slack

Industry

Professional Services

  • 70% of tasks handled autonomously
  • Onboarding time: 2 weeks → 3 days
  • 94% client satisfaction (post-onboarding NPS)
  • $180K annual ops savings

The Client

A professional services firm with 150 employees, onboarding 15–20 new clients per month. Each new client requires a structured onboarding process: gathering documentation, verifying credentials, setting up accounts across three internal systems, scheduling kickoff meetings, and producing a tailored engagement plan.

The process involved four different teams and took an average of two weeks from contract signature to first delivery.

The Problem

Onboarding was the firm's biggest operational bottleneck. Not because it was complex — because it was repetitive, distributed across teams, and full of handoff delays.

The typical flow: a partner closes a deal and sends an email to operations. Operations sends the client a questionnaire. The client fills it out (eventually). Operations processes the responses, creates accounts, requests credentials, schedules meetings, and drafts an engagement plan. Each step involves waiting — for the client, for another team, for system access.

Two weeks elapsed time. About four hours of actual work. The rest was waiting and chasing.

They'd tried automating with Zapier and workflow tools. The result was a fragile chain of integrations that broke whenever a client sent information in an unexpected format — which was most of the time. Human names in the wrong field. Documents attached to emails instead of uploaded to the portal. Phone numbers with country codes, without country codes, with spaces, without spaces.

The structured automation tools couldn't handle the unstructured reality of how clients actually communicate.

What We Built

Discovery: Mapping the Real Process (1 week)

We shadowed the operations team for a week. Not the documented process — the actual process. The documented process had eight steps. The actual process had 23, including 11 that existed only in the operations manager's head.

The critical insight: about 70% of the work was information extraction and transformation. Taking unstructured client input (emails, PDFs, phone calls, forms filled out wrong) and turning it into structured data that the internal systems needed. The other 30% required genuine human judgement — assessing client complexity, assigning the right team members, making scope decisions.

This split defined the architecture. The AI handles extraction and transformation. Humans handle judgement and relationships.

Build: The Onboarding Agent (3 weeks)

The agent operates through a simple loop: receive input, extract structured data, take action, report status.
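
The loop can be sketched as follows. This is a minimal illustration, not the production code: all type and function names are assumptions, and the extractor is stubbed where the real system calls the Claude API.

```typescript
// Minimal sketch of the agent's core loop: receive input, extract
// structured data, take action, report status. Names are illustrative.

type AgentInput = { engagementId: string; documents: string[] };
type Extracted = { fields: Record<string, string>; missing: string[] };
type Action = { kind: string; detail: string };

// Stub extractor: in production this step is a Claude API call.
function extract(input: AgentInput): Extracted {
  return { fields: { engagementId: input.engagementId }, missing: [] };
}

// Decide what to do with the extracted data; missing fields become
// follow-up actions rather than failures.
function plan(data: Extracted): Action[] {
  const actions: Action[] = [
    { kind: "create-records", detail: data.fields.engagementId },
  ];
  for (const m of data.missing) actions.push({ kind: "follow-up", detail: m });
  return actions;
}

function report(actions: Action[]): string {
  return actions.map((a) => `${a.kind}: ${a.detail}`).join("\n");
}

// One pass of the loop: receive → extract → act → report.
export function runOnce(input: AgentInput): string {
  return report(plan(extract(input)));
}
```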

Input processing. When a new client engagement is created, the agent receives all available information — the signed contract, the client questionnaire (if completed), any emails from the sales process, and the partner's notes. It extracts: client entity details, key contacts, engagement scope, required credentials, system access needs, and scheduling preferences.

The extraction handles the messiness that broke the Zapier workflows. Phone numbers in any format get normalised. Names get parsed regardless of ordering. Missing fields get flagged for follow-up rather than causing failures.
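
Two of those behaviours can be sketched deterministically; this is a simplified illustration (in practice the model does most of the heavy lifting), with helper names that are assumptions.

```typescript
// Illustrative normalisation helpers for the extraction step.

// Strip spaces, dashes, dots and parentheses from a phone number,
// keeping a leading "+" if one was present.
export function normalisePhone(raw: string): string {
  const trimmed = raw.trim();
  const digits = trimmed.replace(/[^\d]/g, "");
  return trimmed.startsWith("+") ? "+" + digits : digits;
}

// Flag missing required fields for follow-up instead of failing the run.
export function missingFields(
  record: Record<string, string | undefined>,
  required: string[],
): string[] {
  return required.filter((f) => !record[f] || record[f]!.trim() === "");
}
```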

Automated actions. With structured data in hand, the agent:

  • Creates client records in all three internal systems (via API integrations)
  • Sends a welcome email to the primary contact with a secure document upload link
  • Generates a draft engagement plan based on the scope and similar past engagements
  • Proposes a kickoff meeting schedule based on team availability and client timezone
  • Creates a Slack channel for the engagement team with a briefing summary
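
The action sequence above could be orchestrated roughly like this. The three internal systems and downstream services are stubbed as log entries; in production each step goes through that system's API, and all names here are illustrative.

```typescript
// Sketch of running the five onboarding actions in order.

type Engagement = { id: string; contactEmail: string; scope: string };

export function onboard(e: Engagement): string[] {
  const log: string[] = [];
  // 1. Create client records in all three internal systems (API calls in production).
  for (const system of ["crm", "billing", "delivery"]) {
    log.push(`record created in ${system} for ${e.id}`);
  }
  // 2. Welcome email with a secure document upload link.
  log.push(`welcome email queued for ${e.contactEmail}`);
  // 3. Draft engagement plan from scope and similar past engagements.
  log.push(`draft engagement plan generated for scope "${e.scope}"`);
  // 4. Kickoff proposal from team availability and client timezone.
  log.push("kickoff meeting schedule proposed");
  // 5. Slack channel for the engagement team with a briefing summary.
  log.push("slack channel created with briefing summary");
  return log;
}
```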

Follow-up management. The agent tracks outstanding items — missing documents, unsigned forms, pending credential requests — and sends contextual follow-up messages. Not generic reminders. Messages that reference the specific missing item and explain why it's needed, in the tone the operations team uses.
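
A sketch of that tracking logic, with an assumed item shape and a hard-coded template standing in for the model-drafted, firm-tone message:

```typescript
// Illustrative outstanding-item tracking and contextual follow-up.

type OutstandingItem = { item: string; reason: string; daysOutstanding: number };

// Only chase items that have actually been outstanding for a while.
export function itemsToChase(
  items: OutstandingItem[],
  thresholdDays = 3,
): OutstandingItem[] {
  return items.filter((i) => i.daysOutstanding >= thresholdDays);
}

// A follow-up that names the specific item and why it is needed.
export function followUpMessage(contactName: string, o: OutstandingItem): string {
  return (
    `Hi ${contactName}, quick nudge on the ${o.item}: ` +
    `we still need it ${o.reason}. Could you send it over when you get a chance?`
  );
}
```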

Escalation. When the agent encounters something it can't handle — ambiguous scope, conflicting information, a client request outside normal parameters — it escalates to the operations team with context. The escalation includes what it knows, what it doesn't, and a suggested action. The human resolves it; the agent learns the resolution pattern for next time.
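
The escalation payload (what it knows, what it doesn't, and a suggested action) might look like this; the shape is an assumption for illustration.

```typescript
// Sketch of an escalation message for the operations team.

type Escalation = {
  engagementId: string;
  known: Record<string, string>; // what the agent has established
  unknown: string[];             // what it could not resolve
  suggestedAction: string;       // its proposed resolution
};

export function renderEscalation(e: Escalation): string {
  const knownLines = Object.entries(e.known).map(([k, v]) => `  ${k}: ${v}`);
  return [
    `Escalation for ${e.engagementId}`,
    "Known:",
    ...knownLines,
    "Unknown: " + e.unknown.join(", "),
    "Suggested: " + e.suggestedAction,
  ].join("\n");
}
```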

Evaluation: The Domain Expert Gate (2 weeks)

Before going live, we ran 30 past onboardings through the agent and compared its outputs against what the operations team actually did.

The evaluation dataset was reviewed by the operations manager and two senior operations staff. They scored each output on:

  • Data accuracy: Did the agent extract the right information?
  • Action correctness: Did it take the right steps in the right order?
  • Communication quality: Would the client-facing messages be appropriate to send?
  • Escalation judgement: Did it escalate the right things and handle the rest?
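
Aggregating the four dimensions into the overall accuracy figure might look like the sketch below. Equal weighting across dimensions is an assumption; the source does not specify the scoring formula.

```typescript
// Illustrative aggregation of the four review dimensions.

type Scores = {
  dataAccuracy: number;         // 0..1
  actionCorrectness: number;    // 0..1
  communicationQuality: number; // 0..1
  escalationJudgement: number;  // 0..1
};

// Mean of per-case means, returned as a percentage (one decimal place).
export function overallAccuracy(cases: Scores[]): number {
  const perCase = cases.map(
    (s) =>
      (s.dataAccuracy +
        s.actionCorrectness +
        s.communicationQuality +
        s.escalationJudgement) / 4,
  );
  const mean = perCase.reduce((a, b) => a + b, 0) / perCase.length;
  return Math.round(mean * 1000) / 10;
}
```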

First pass: 82% overall accuracy. The main failure modes were scope interpretation (the agent was too literal about contract language) and communication tone (too formal for this firm's culture).

After two rounds of context refinement — updating the system prompt with examples of the firm's actual communication style and adding structured scope interpretation guidelines — accuracy reached 96%.

The Result

The agent went live handling new onboardings alongside the operations team for two weeks (human review on every action). After the shadow period, it moved to autonomous operation with spot-check review on 20% of cases.
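
One simple way to select the 20% of cases for spot-check review is deterministic sampling on the engagement id, so the same cases are always chosen and reviews are reproducible. The scheme below is an assumption, not the firm's actual mechanism.

```typescript
// Deterministic spot-check sampling: hash the engagement id and
// compare against the review rate.

function hash(id: string): number {
  let h = 0;
  for (const ch of id) h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  return h;
}

export function needsSpotCheck(engagementId: string, rate = 0.2): boolean {
  return hash(engagementId) % 100 < rate * 100;
}
```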

Metric                              Before          After
Onboarding elapsed time             2 weeks avg     3 days avg
Operations staff hours per client   4 hours         1.2 hours
Client follow-ups needed            3.5 avg         1.1 avg
Data entry errors                   8% of fields    0.3% of fields
Client NPS (post-onboarding)        72              94

The 70% autonomous task completion rate means the operations team handles 30% of tasks — the ones requiring judgement. They're spending their time on relationship building and complex problem-solving instead of copying data between systems and chasing missing documents.

Annual operational savings: approximately $180K in redirected staff time. The agent's running cost (API calls, infrastructure): approximately $2,400 per month.

What Made This Work

Starting with the domain, not the technology. The week spent shadowing the operations team was the highest-value investment in the project. The 70/30 split between automatable and judgement-required tasks came directly from observation, not assumption. If we'd started with the technology, we'd have tried to automate everything and failed at the 30% that requires human judgement.

Structured escalation. The agent knows what it doesn't know. This is the single feature that builds trust with the operations team. When something is ambiguous, it doesn't guess — it asks. And it asks with context, so the human can resolve it quickly.

Evaluation before deployment. Running 30 historical onboardings through the agent before going live caught every major failure mode. The operations team saw the agent's mistakes in a safe environment and helped fix them. By the time it went live, they trusted it — because they'd already seen it fail, and they'd seen the failures get fixed.

Gradual autonomy. Two weeks of shadow mode, then autonomous with 20% spot-checks. The operations team had veto power at every stage. Trust was earned incrementally, not demanded at launch.

Need similar results?

Every engagement starts with understanding your problem. We'll tell you honestly whether we're the right fit.