The Parlay MCP server exposes 63 tools, organized into 10 domains. Tools that incur AI cost or take noticeable time are flagged.
Cost notation: $0 = read or static, ¢ = under one cent per call, $$ = a few cents. Latency notation: fast = under 1s, medium = 1–10s, slow = 10–90s, very slow = up to 5 min.
Cheap reads, no AI cost. Use these to validate slugs before passing them to other tools.
| Tool | Cost | Latency | Notes |
|---|---|---|---|
| list_personas | $0 | fast | 5 sales personas (challenger, hard_worker, lone_wolf, problem_solver, relationship_builder) |
| list_methodologies | $0 | fast | SPIN, Sandler, Challenger, MEDDIC, etc. |
| list_industries | $0 | fast | Industry slugs for create_or_update_org |
| list_environments | $0 | fast | cold_call, video_conference, in_person, etc. |
| list_sales_motions | $0 | fast | inbound_sdr, outbound_ae, field_sales, etc. |
| list_recording_types | $0 | fast | discovery_call, demo, close_call, etc. |
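Because these lookups are free and fast, a client can fail fast on a bad slug before spending money downstream. A minimal sketch in Python, where `fetch_personas` is a hypothetical stand-in for a real list_personas call and the five slugs are the ones listed above:

```python
# The slugs come from the list_personas row above; in a real client,
# fetch_personas would invoke the tool instead of returning a constant.
VALID_PERSONAS = {
    "challenger", "hard_worker", "lone_wolf",
    "problem_solver", "relationship_builder",
}

def fetch_personas() -> set:
    # Stand-in for a list_personas tool call.
    return VALID_PERSONAS

def validate_persona(slug: str) -> str:
    """Raise early with a helpful message instead of a server-side error."""
    personas = fetch_personas()
    if slug not in personas:
        raise ValueError(
            f"unknown persona {slug!r}; expected one of {sorted(personas)}"
        )
    return slug
```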
| Tool | Cost | Latency | Notes |
|---|---|---|---|
| create_or_update_org | $0 | fast | Upsert by org_id. Always call before creating reps |
| get_org | $0 | fast | Throws org_not_found if missing |
| list_orgs | $0 | fast | Cursor-paginated, newest first |
| archive_org | $0 | fast | Soft delete. Confirm with user first |
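list_orgs is cursor-paginated, so collecting every org means looping until the server stops returning a cursor. A sketch of that loop, with assumed field names (`items`, `next_cursor`) and a two-page stub in place of the real tool call:

```python
def fetch_page(cursor=None, _pages=(["org3", "org2"], ["org1"])):
    # Stand-in for a list_orgs call: two pages, newest first.
    page = 0 if cursor is None else cursor
    items = list(_pages[page]) if page < len(_pages) else []
    next_cursor = page + 1 if page + 1 < len(_pages) else None
    return {"items": items, "next_cursor": next_cursor}

def list_all_orgs():
    """Walk every page until the server stops returning a cursor."""
    orgs, cursor = [], None
    while True:
        page = fetch_page(cursor)
        orgs.extend(page["items"])
        cursor = page["next_cursor"]
        if cursor is None:
            return orgs
```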
| Tool | Cost | Latency | Notes |
|---|---|---|---|
| create_or_update_rep | $0 | fast | Upsert by (org_id, rep_id). Email must be unique within an org |
| get_rep | $0 | fast | |
| list_reps | $0 | fast | |
| get_rep_stats | $0 | fast | Window-bucketed score aggregates + trend. Default 90-day window, weekly buckets |
| archive_rep | $0 | fast | Soft delete. Confirm with user first |
| Tool | Cost | Latency | Notes |
|---|---|---|---|
| analyze_recording | $$ real / $0 mock | slow real / fast mock | Auto-polls. Use mock://perfect-pitch etc. for testing. Set wait_for_completion=false to return job id immediately |
| get_analysis | $0 | fast | Poll a queued job, or re-read a finished one |
| list_analyses | $0 | fast | Filters: org_id, rep_id, status, time window, min_score |
| archive_analysis | $0 | fast | Default mode archive (reversible). Mode purge is irreversible — never call without explicit user confirmation |
| rescore_analysis | $$ | slow | Currently not_implemented — landing in a future release |
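When you set wait_for_completion=false on analyze_recording, you get a job id back and poll get_analysis yourself. A deadline-bounded polling sketch; the "queued"/"processing" status names are assumptions, and `get_analysis` here is whatever callable wraps the real tool:

```python
import time

def poll_analysis(get_analysis, analysis_id, timeout_s=90.0, interval_s=2.0):
    """Poll until the job leaves its pending states or the deadline passes."""
    deadline = time.monotonic() + timeout_s
    while True:
        result = get_analysis(analysis_id)
        # Assumed pending-state names; check the real status enum.
        if result["status"] not in ("queued", "processing"):
            return result
        if time.monotonic() >= deadline:
            raise TimeoutError(
                f"analysis {analysis_id} still pending after {timeout_s}s"
            )
        time.sleep(interval_s)
```

The default 90s ceiling matches the "slow" band in the notation above; raise it for long recordings.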
The assign_* and generate_* calls are async with internal polling. Default timeout: 3 min for analysis_ids path, 10 min for recording_urls path.
| Tool | Cost | Latency | Notes |
|---|---|---|---|
| assign_rep_persona | ¢ | slow | Meta-analyzes a batch of calls → assigns persona. Either recording_urls or analysis_ids, not both |
| assign_rep_methodology | ¢ | slow | Same shape as assign_rep_persona, returns primary + secondary methodology |
| generate_rep_synthesis | ¢ | slow | Coaching summary: strengths, gaps, action plan, score trends |
| get_rep_persona | $0 | fast | Latest assignment (manual or AI). Throws synthesis_not_found if never assigned |
| get_rep_methodology | $0 | fast | Returns { primary, secondary } |
| get_latest_rep_synthesis | $0 | fast | |
| set_rep_persona_manually | $0 | fast | Override AI assignment. Appends to history with source: "manual" |
| set_rep_methodology_manually | $0 | fast | Same, for primary + optional secondary |
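The either/or input rule and the path-dependent timeouts (3 min for analysis_ids, 10 min for recording_urls) can both be enforced client-side before dispatching an assign_* call. A sketch; the payload field names are assumptions:

```python
def assignment_request(recording_urls=None, analysis_ids=None):
    """Build an assign_* payload: exactly one input path, with the
    documented default timeout for that path."""
    if bool(recording_urls) == bool(analysis_ids):
        raise ValueError("pass exactly one of recording_urls or analysis_ids")
    # recording_urls must be analyzed first, hence the longer budget.
    timeout_s = 600 if recording_urls else 180
    return {
        "recording_urls": recording_urls,
        "analysis_ids": analysis_ids,
        "timeout_s": timeout_s,
    }
```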
Multi-turn coaching chat scoped to a single completed analysis. History is persisted server-side per analysis.
| Tool | Cost | Latency | Notes |
|---|---|---|---|
| ask_scout_about_call | ¢ | medium | One Q&A turn. Returns { response, model, tokens, cost_usd_cents } |
| get_scout_welcome_questions | ¢ | medium | 4 first-person starter questions tailored to the rep's weakest pillars |
| clear_scout_chat | $0 | fast | Wipe persisted history for one analysis. Irreversible — confirm with user |
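Since every ask_scout_about_call turn reports cost_usd_cents, a session wrapper can total spend as it goes. A sketch where `ask` is a hypothetical callable wrapping the real tool, returning the { response, ..., cost_usd_cents } shape above:

```python
def run_scout_session(ask, analysis_id, questions):
    """Ask a list of questions against one analysis and total the spend."""
    answers, total_cents = [], 0.0
    for q in questions:
        turn = ask(analysis_id, q)
        answers.append(turn["response"])
        total_cents += turn["cost_usd_cents"]
    return answers, total_cents
```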
| Tool | Cost | Latency | Notes |
|---|---|---|---|
| create_playbook | $0 | fast | Manual upload. Use generate_playbook_draft for AI synthesis instead |
| get_playbook | $0 | fast | |
| list_playbooks | $0 | fast | active_only=true filters to currently-active |
| update_playbook | $0 | fast | Content change snapshots prior version + bumps version number |
| archive_playbook | $0 | fast | Soft delete + sets is_active=false |
| Tool | Cost | Latency | Notes |
|---|---|---|---|
| upload_playbook_source | $0 | fast | Up to 500,000 chars per upload. Title + content |
| list_playbook_sources | $0 | fast | |
| generate_playbook_draft | ¢ | slow | Async. Synthesizes 1+ source materials into a markdown playbook via Gemini 2.5 Flash |
| publish_playbook_draft | $0 | fast | Promotes a completed draft to an active playbook. Fires playbook.published webhook |
Org-level prompts injected into every analysis. Distinct from playbooks: prompts are many short, focused criteria, each scored separately.
| Tool | Cost | Latency | Notes |
|---|---|---|---|
| create_custom_prompt | $0 | fast | Up to 20,000 chars per prompt. prompt_type: scoring_criterion / evaluation_focus / disqualifier / other |
| list_custom_prompts | $0 | fast | Ordered by order_index ascending |
| update_custom_prompt | $0 | fast | |
| reorder_custom_prompts | $0 | fast | Atomic bulk reorder by ordered ids array |
| archive_custom_prompt | $0 | fast | Soft delete + sets is_active=false |
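reorder_custom_prompts takes an ordered array of ids, and list_custom_prompts returns prompts by order_index ascending. A small sketch of how the ordered array maps onto resulting indices (the zero-based indexing is an assumption):

```python
def build_reorder_payload(ordered_ids):
    """Translate a desired display order into the order_index each
    prompt would end up with, rejecting duplicate ids up front."""
    if len(set(ordered_ids)) != len(ordered_ids):
        raise ValueError("duplicate prompt ids in reorder request")
    return {pid: idx for idx, pid in enumerate(ordered_ids)}
```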
Tag the outcome (sold / lost / canceled / undispositioned) of an analysis. Lets org insights correlate scoring patterns to deal outcomes.
| Tool | Cost | Latency | Notes |
|---|---|---|---|
| tag_disposition | $0 | fast | Upsert: re-tagging an analysis overwrites the prior disposition |
| get_disposition | $0 | fast | |
| list_dispositions | $0 | fast | Org-scoped. Filters: disposition value, time window |
| clear_disposition | $0 | fast | Removes the disposition. Analysis itself untouched |
Org-level rollups across every rep + every analysis. The async generate_org_insights runs a Gemini meta-analysis; get_leaderboard is pure SQL with no AI cost.
| Tool | Cost | Latency | Notes |
|---|---|---|---|
| generate_org_insights | $$ | slow | Async. Aggregates the org's completed analyses in a window, runs a Gemini meta-analysis, persists insights_snapshots row. Default window 30 days. Polls up to 5 minutes |
| get_latest_org_insights | $0 | fast | Most recent snapshot for an org. Throws synthesis_not_found if generate_org_insights has never run |
| list_org_insights_history | $0 | fast | Paginated history of snapshots (without full AI summary — fetch by id for that) |
| get_leaderboard | $0 | fast | Pure SQL. Ranks reps by score, volume, or improvement (first-half → second-half delta) |
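One plausible reading of the leaderboard's improvement metric (first-half → second-half delta): mean score of the second half of a rep's calls minus mean of the first half, in chronological order. The exact server formula may differ; this sketch just illustrates the idea:

```python
def improvement_delta(scores):
    """Second-half mean minus first-half mean over chronological scores.
    With an odd count, the middle score lands in the second half."""
    if len(scores) < 2:
        return 0.0
    mid = len(scores) // 2
    first, second = scores[:mid], scores[mid:]
    return sum(second) / len(second) - sum(first) / len(first)
```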
| Tool | Cost | Latency | Notes |
|---|---|---|---|
| register_webhook | $0 | fast | Returns signing_secret exactly once — store immediately. HMAC-SHA256 verification |
| list_webhooks | $0 | fast | Newest first. NEVER returns the signing_secret |
| update_webhook | $0 | fast | URL, events, paused, name, description. Use rotate_webhook_secret to change the secret |
| test_webhook | $0 | fast | Fires a synthetic webhook.test event at the registered URL |
| rotate_webhook_secret | $0 | fast | Returns new signing_secret exactly once. Old secret stops working immediately — update your endpoint first |
| archive_webhook | $0 | fast | Soft delete. No further deliveries attempted |
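Deliveries are signed with HMAC-SHA256 using the signing_secret. A receiver-side verification sketch; how the signature travels (header name, hex vs base64 encoding, any timestamp prefix) is not specified here, so check the webhooks guide for the wire format:

```python
import hashlib
import hmac

def verify_signature(signing_secret: str, payload: bytes, signature_hex: str) -> bool:
    """Recompute the HMAC-SHA256 of the raw request body and compare in
    constant time. Assumes a hex-encoded signature."""
    expected = hmac.new(
        signing_secret.encode(), payload, hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```

Always verify against the raw bytes of the request body, not a re-serialized JSON object, since re-serialization can reorder keys and change whitespace.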
| Tool | Cost | Latency | Notes |
|---|---|---|---|
| echo_ping | $0 | fast | Returns { pong, echo, timestamp, base_url, key_environment }. Doesn't call the API — purely for testing the MCP transport. Try this first if other tools feel broken |
These let the LLM answer “how does X work” questions by pulling content directly from docs.goparlay.io. Cached in memory per session — no separate Mintlify MCP needed.
| Tool | Cost | Latency | Notes |
|---|---|---|---|
| search_parlay_docs | $0 | ~500ms first call, ~5ms cached | Keyword search across all docs pages. Returns top 5 with title, URL, slug, and excerpt |
| read_parlay_doc | $0 | ~200ms first per page, ~5ms cached | Fetch full markdown of a specific page (mcp/installation, guides/webhooks, etc.) |
| list_parlay_docs | $0 | ~200ms first, ~5ms cached | Return the page index with titles, URLs, descriptions |
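The "first call slow, cached fast" numbers above describe per-session in-memory memoization. A minimal sketch of that shape, where `fetch` stands in for whatever actually hits docs.goparlay.io:

```python
def make_cached_fetcher(fetch):
    """Wrap a fetch function so each doc slug is fetched at most once
    per session; later calls return the cached copy."""
    cache = {}
    def cached(slug):
        if slug not in cache:
            cache[slug] = fetch(slug)
        return cache[slug]
    return cached
```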
What’s not in the catalog
Intentionally excluded — these stay REST-only:
- Admin (/v1/admin/*) — creating partners, minting keys, suspending partners
- Internal infra (/v1/internal/recover) — crash recovery scanner
- Health / status / version endpoints
- Webhook delivery log + replay (debug-focused; access via REST)
- Raw chat history (the tool responses already carry state)
- API key management (key creation/revocation is per-partner lifecycle, sensitive)
- create_* — strict create (errors if duplicate)
- create_or_update_* — upsert (used for orgs, reps that have stable partner-supplied ids)
- get_* — fetch one by id
- list_* — paginated collection
- update_* — partial update (PATCH semantics)
- archive_* — soft delete
- assign_* — async AI classification
- generate_* — async AI authoring
- set_*_manually — manual override of an AI assignment
- tag_* — apply a label / status
- ask_* / get_*_questions / clear_* — Scout chat verbs
The LLM is good at picking the right tool from these prefixes alone. If you see it pick the wrong one, file an issue — usually the description needs a better hint.