The Parlay MCP server exposes 63 tools, organized into 14 domains. Tools that incur AI cost or take noticeable time are flagged.
Cost notation: $0 = read or static, ¢ = under one cent per call, $$ = a few cents. Latency notation: fast = under 1s, medium = 1–10s, slow = 10–90s, very slow = up to 5 min.

## Reference data (6 tools)

Cheap reads, no AI cost. Use these to validate slugs before passing them to other tools.
| Tool | Cost | Latency | Notes |
| --- | --- | --- | --- |
| `list_personas` | $0 | fast | 5 sales personas (challenger, hard_worker, lone_wolf, problem_solver, relationship_builder) |
| `list_methodologies` | $0 | fast | SPIN, Sandler, Challenger, MEDDIC, etc. |
| `list_industries` | $0 | fast | Industry slugs for `create_or_update_org` |
| `list_environments` | $0 | fast | cold_call, video_conference, in_person, etc. |
| `list_sales_motions` | $0 | fast | inbound_sdr, outbound_ae, field_sales, etc. |
| `list_recording_types` | $0 | fast | discovery_call, demo, close_call, etc. |
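
A minimal connection-and-call sketch using the MCP Python SDK. The server launch command below is an assumption (substitute however you run the Parlay MCP server); the later sketches on this page assume the same connected `session`.

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical launch command; point this at your Parlay MCP server install.
PARAMS = StdioServerParameters(command="npx", args=["-y", "parlay-mcp"])

async def main() -> None:
    async with stdio_client(PARAMS) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # $0 / fast: fetch the valid persona slugs before using them elsewhere.
            result = await session.call_tool("list_personas", {})
            print(result.content)

asyncio.run(main())
```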

## Orgs (4 tools)

| Tool | Cost | Latency | Notes |
| --- | --- | --- | --- |
| `create_or_update_org` | $0 | fast | Upsert by org_id. Always call before creating reps (see the sketch after the Reps table) |
| `get_org` | $0 | fast | Throws org_not_found if missing |
| `list_orgs` | $0 | fast | Cursor-paginated, newest first |
| `archive_org` | $0 | fast | Soft delete. Confirm with user first |

## Reps (5 tools)

| Tool | Cost | Latency | Notes |
| --- | --- | --- | --- |
| `create_or_update_rep` | $0 | fast | Upsert by (org_id, rep_id). Email must be unique within an org |
| `get_rep` | $0 | fast | |
| `list_reps` | $0 | fast | |
| `get_rep_stats` | $0 | fast | Window-bucketed score aggregates + trend. Default 90-day window, weekly buckets |
| `archive_rep` | $0 | fast | Soft delete. Confirm with user first |
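
A sketch of the required org-then-rep ordering, assuming a connected `ClientSession` as above. Field names beyond org_id, rep_id, and email are illustrative, not confirmed by the catalog.

```python
from mcp import ClientSession

async def onboard_rep(session: ClientSession) -> None:
    # Upsert the org first; rep creation assumes the org already exists.
    await session.call_tool("create_or_update_org", {
        "org_id": "acme",
        "name": "Acme Corp",   # assumed field name
        "industry": "saas",    # slug from list_industries (assumed value)
    })
    await session.call_tool("create_or_update_rep", {
        "org_id": "acme",
        "rep_id": "rep-001",
        "email": "jo@acme.example",  # must be unique within the org
    })
```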

## Analyses (5 tools)

| Tool | Cost | Latency | Notes |
| --- | --- | --- | --- |
| `analyze_recording` | $$ real / $0 mock | slow real / fast mock | Auto-polls. Use mock://perfect-pitch etc. for testing. Set wait_for_completion=false to return job id immediately |
| `get_analysis` | $0 | fast | Poll a queued job, or re-read a finished one |
| `list_analyses` | $0 | fast | Filters: org_id, rep_id, status, time window, min_score |
| `archive_analysis` | $0 | fast | Default mode archive (reversible). Mode purge is irreversible — never call without explicit user confirmation |
| `rescore_analysis` | $$ | slow | Currently not_implemented — landing in a future release |
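
A fire-and-poll sketch using the free mock URL. The response field names (`analysis_id`, `status`) are assumptions about this server's output shape, not documented in the table.

```python
import asyncio
from mcp import ClientSession

async def analyze_without_blocking(session: ClientSession) -> None:
    # wait_for_completion=False returns a job id instead of auto-polling.
    job = await session.call_tool("analyze_recording", {
        "org_id": "acme",
        "rep_id": "rep-001",
        "recording_url": "mock://perfect-pitch",  # $0, fast
        "wait_for_completion": False,
    })
    analysis_id = job.structuredContent["analysis_id"]  # assumed shape
    while True:
        res = await session.call_tool("get_analysis", {"analysis_id": analysis_id})
        if res.structuredContent["status"] in ("completed", "failed"):  # assumed
            break
        await asyncio.sleep(5)
```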

## Rep intelligence (8 tools)

The `assign_*` and `generate_*` calls are async with internal polling. Default timeout: 3 min for the analysis_ids path, 10 min for the recording_urls path.
| Tool | Cost | Latency | Notes |
| --- | --- | --- | --- |
| `assign_rep_persona` | ¢ | slow | Meta-analyzes a batch of calls → assigns persona. Either recording_urls or analysis_ids, not both |
| `assign_rep_methodology` | ¢ | slow | Same shape as `assign_rep_persona`, returns primary + secondary methodology |
| `generate_rep_synthesis` | ¢ | slow | Coaching summary: strengths, gaps, action plan, score trends |
| `get_rep_persona` | $0 | fast | Latest assignment (manual or AI). Throws synthesis_not_found if never assigned |
| `get_rep_methodology` | $0 | fast | Returns { primary, secondary } |
| `get_latest_rep_synthesis` | $0 | fast | |
| `set_rep_persona_manually` | $0 | fast | Override AI assignment. Appends to history with source: "manual" |
| `set_rep_methodology_manually` | $0 | fast | Same, for primary + optional secondary |
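
A classification sketch; note the either/or constraint on inputs. Argument names mirror the table where possible, but the exact request shape is an assumption.

```python
from mcp import ClientSession

async def classify_rep(session: ClientSession) -> None:
    # Pass exactly one of analysis_ids or recording_urls, never both.
    await session.call_tool("assign_rep_persona", {
        "org_id": "acme",
        "rep_id": "rep-001",
        "analysis_ids": ["an_101", "an_102", "an_103"],  # illustrative ids
    })
    # Throws synthesis_not_found if no persona was ever assigned.
    persona = await session.call_tool("get_rep_persona", {
        "org_id": "acme",
        "rep_id": "rep-001",
    })
    print(persona.content)
```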

## Scout chat (3 tools)

Multi-turn coaching chat scoped to a single completed analysis. History is persisted server-side per analysis.
| Tool | Cost | Latency | Notes |
| --- | --- | --- | --- |
| `ask_scout_about_call` | ¢ | medium | One Q&A turn. Returns { response, model, tokens, cost_usd_cents } |
| `get_scout_welcome_questions` | ¢ | medium | 4 first-person starter questions tailored to the rep’s weakest pillars |
| `clear_scout_chat` | $0 | fast | Wipe persisted history for one analysis. Irreversible — confirm with user |
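
A single Scout turn as a sketch; the `question` field name is an assumption.

```python
from mcp import ClientSession

async def ask_scout(session: ClientSession) -> None:
    # One Q&A turn against a completed analysis; history persists server-side.
    answer = await session.call_tool("ask_scout_about_call", {
        "analysis_id": "an_101",
        "question": "Where did I lose control of the discovery call?",
    })
    print(answer.content)  # { response, model, tokens, cost_usd_cents }
```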

## Playbooks (5 tools)

| Tool | Cost | Latency | Notes |
| --- | --- | --- | --- |
| `create_playbook` | $0 | fast | Manual upload. Use `generate_playbook_draft` for AI synthesis instead |
| `get_playbook` | $0 | fast | |
| `list_playbooks` | $0 | fast | active_only=true filters to currently-active |
| `update_playbook` | $0 | fast | Content change snapshots prior version + bumps version number |
| `archive_playbook` | $0 | fast | Soft delete + sets is_active=false |

## Playbook source materials + AI drafting (4 tools)

| Tool | Cost | Latency | Notes |
| --- | --- | --- | --- |
| `upload_playbook_source` | $0 | fast | Up to 500,000 chars per upload. Title + content |
| `list_playbook_sources` | $0 | fast | |
| `generate_playbook_draft` | ¢ | slow | Async. Synthesizes 1+ source materials into a markdown playbook via Gemini 2.5 Flash |
| `publish_playbook_draft` | $0 | fast | Promotes a completed draft to an active playbook. Fires playbook.published webhook |
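
The upload, draft, publish flow as a sketch. The `source_ids` and `draft_id` names and the response shapes are assumptions; only the tool names and the 500,000-char limit come from the table.

```python
from mcp import ClientSession

async def draft_and_publish(session: ClientSession) -> None:
    # 1. Upload raw material: title + content, up to 500,000 chars.
    src = await session.call_tool("upload_playbook_source", {
        "org_id": "acme",
        "title": "2024 discovery call guide",
        "content": "...",  # your source text
    })
    # 2. Async AI synthesis of one or more sources into a markdown draft.
    draft = await session.call_tool("generate_playbook_draft", {
        "org_id": "acme",
        "source_ids": [src.structuredContent["source_id"]],  # assumed shape
    })
    # 3. Promote the completed draft; fires the playbook.published webhook.
    await session.call_tool("publish_playbook_draft", {
        "draft_id": draft.structuredContent["draft_id"],  # assumed shape
    })
```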

## Custom prompts (5 tools)

Org-level prompts injected into every analysis. Distinct from playbooks: prompts are many short, focused criteria, each scored separately.
| Tool | Cost | Latency | Notes |
| --- | --- | --- | --- |
| `create_custom_prompt` | $0 | fast | Up to 20,000 chars per prompt. prompt_type: scoring_criterion / evaluation_focus / disqualifier / other |
| `list_custom_prompts` | $0 | fast | Ordered by order_index ascending |
| `update_custom_prompt` | $0 | fast | |
| `reorder_custom_prompts` | $0 | fast | Atomic bulk reorder by ordered ids array |
| `archive_custom_prompt` | $0 | fast | Soft delete + sets is_active=false |
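
A reorder sketch. The `ordered_ids` name follows the table's "ordered ids array" note, and the list response shape is an assumption.

```python
from mcp import ClientSession

async def reverse_prompt_order(session: ClientSession) -> None:
    prompts = await session.call_tool("list_custom_prompts", {"org_id": "acme"})
    ids = [p["id"] for p in prompts.structuredContent["prompts"]]  # assumed shape
    # Atomic bulk reorder: send every prompt id in the desired order.
    await session.call_tool("reorder_custom_prompts", {
        "org_id": "acme",
        "ordered_ids": list(reversed(ids)),
    })
```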

## Dispositions (4 tools)

Tag the outcome (sold / lost / canceled / undispositioned) of an analysis. Lets org insights correlate scoring patterns to deal outcomes.
| Tool | Cost | Latency | Notes |
| --- | --- | --- | --- |
| `tag_disposition` | $0 | fast | Upsert: re-tagging an analysis overwrites the prior disposition |
| `get_disposition` | $0 | fast | |
| `list_dispositions` | $0 | fast | Org-scoped. Filters: disposition value, time window |
| `clear_disposition` | $0 | fast | Removes the disposition. Analysis itself untouched |
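
A tagging sketch; because `tag_disposition` is an upsert, correcting a mistake is just a second call.

```python
from mcp import ClientSession

async def tag_outcome(session: ClientSession) -> None:
    await session.call_tool("tag_disposition", {
        "org_id": "acme",
        "analysis_id": "an_101",
        "disposition": "sold",  # sold / lost / canceled / undispositioned
    })
    # Re-tagging overwrites; no need to clear_disposition first.
    await session.call_tool("tag_disposition", {
        "org_id": "acme",
        "analysis_id": "an_101",
        "disposition": "lost",
    })
```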

## Org insights + leaderboard (4 tools)

Org-level rollups across every rep + every analysis. The async `generate_org_insights` runs a Gemini meta-analysis; `get_leaderboard` is pure SQL with no AI cost.
| Tool | Cost | Latency | Notes |
| --- | --- | --- | --- |
| `generate_org_insights` | $$ | slow | Async. Aggregates the org’s completed analyses in a window, runs a Gemini meta-analysis, persists insights_snapshots row. Default window 30 days. Polls up to 5 minutes |
| `get_latest_org_insights` | $0 | fast | Most recent snapshot for an org. Throws synthesis_not_found if `generate_org_insights` has never run |
| `list_org_insights_history` | $0 | fast | Paginated history of snapshots (without full AI summary — fetch by id for that) |
| `get_leaderboard` | $0 | fast | Pure SQL. Ranks reps by score, volume, or improvement (first-half → second-half delta) |
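
A rollup sketch; the `sort_by` field and its values are assumptions, while the 30-day default window comes from the table.

```python
from mcp import ClientSession

async def org_rollups(session: ClientSession) -> None:
    # $$ / slow: Gemini meta-analysis over the default 30-day window.
    await session.call_tool("generate_org_insights", {"org_id": "acme"})
    latest = await session.call_tool("get_latest_org_insights", {"org_id": "acme"})
    # $0 / fast: pure SQL, safe to call freely. sort_by is an assumed name.
    board = await session.call_tool("get_leaderboard", {
        "org_id": "acme",
        "sort_by": "improvement",  # first-half to second-half delta
    })
    print(latest.content, board.content)
```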

## Webhooks (6 tools)

| Tool | Cost | Latency | Notes |
| --- | --- | --- | --- |
| `register_webhook` | $0 | fast | Returns signing_secret exactly once — store immediately. HMAC-SHA256 verification |
| `list_webhooks` | $0 | fast | Newest first. NEVER returns the signing_secret |
| `update_webhook` | $0 | fast | URL, events, paused, name, description. Use `rotate_webhook_secret` to change the secret |
| `test_webhook` | $0 | fast | Fires a synthetic webhook.test event at the registered URL |
| `rotate_webhook_secret` | $0 | fast | Returns new signing_secret exactly once. Old secret stops working immediately — update your endpoint first |
| `archive_webhook` | $0 | fast | Soft delete. No further deliveries attempted |
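
The table promises HMAC-SHA256 signing but not the wire format. A verification sketch, assuming the signature arrives hex-encoded in an `X-Parlay-Signature` header (both assumptions; check the webhooks guide for the real format):

```python
import hashlib
import hmac

def verify_webhook(body: bytes, signature_header: str, signing_secret: str) -> bool:
    """Verify a delivery against the signing_secret from register_webhook."""
    expected = hmac.new(signing_secret.encode(), body, hashlib.sha256).hexdigest()
    # compare_digest avoids leaking timing information to an attacker.
    return hmac.compare_digest(expected, signature_header)
```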

## Connectivity diagnostic (1 tool)

| Tool | Cost | Latency | Notes |
| --- | --- | --- | --- |
| `echo_ping` | $0 | fast | Returns { pong, echo, timestamp, base_url, key_environment }. Doesn’t call the API — purely for testing the MCP transport. Try this first if other tools feel broken |

## Documentation lookup (3 tools)

These let the LLM answer “how does X work” questions by pulling content directly from docs.goparlay.io. Cached in memory per session — no separate Mintlify MCP needed.
| Tool | Cost | Latency | Notes |
| --- | --- | --- | --- |
| `search_parlay_docs` | $0 | ~500ms first call, ~5ms cached | Keyword search across all docs pages. Returns top 5 with title, URL, slug, and excerpt |
| `read_parlay_doc` | $0 | ~200ms first per page, ~5ms cached | Fetch full markdown of a specific page (mcp/installation, guides/webhooks, etc.) |
| `list_parlay_docs` | $0 | ~200ms first, ~5ms cached | Return the page index with titles, URLs, descriptions |
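
A lookup sketch; the `query` and `slug` field names are assumptions, though guides/webhooks appears in the table as an example page id.

```python
from mcp import ClientSession

async def answer_from_docs(session: ClientSession) -> None:
    hits = await session.call_tool("search_parlay_docs", {"query": "webhook signature"})
    page = await session.call_tool("read_parlay_doc", {"slug": "guides/webhooks"})
    print(hits.content, page.content)
```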

## What’s not in the catalog

Intentionally excluded — these stay REST-only:
- Admin (`/v1/admin/*`) — creating partners, minting keys, suspending partners
- Internal infra (`/v1/internal/recover`) — crash recovery scanner
- Health / status / version endpoints
- Webhook delivery log + replay (debug-focused; access via REST)
- Raw chat history (the tool responses already carry state)
- API key management (key creation/revocation is per-partner lifecycle, sensitive)

## Tool-naming conventions

- `create_*` — strict create (errors if duplicate)
- `create_or_update_*` — upsert (used for orgs and reps, which have stable partner-supplied ids)
- `get_*` — fetch one by id
- `list_*` — paginated collection
- `update_*` — partial update (PATCH semantics)
- `archive_*` — soft delete
- `assign_*` — async AI classification
- `generate_*` — async AI authoring
- `set_*_manually` — manual override of an AI assignment
- `tag_*` — apply a label / status
- `ask_*` / `get_*_questions` / `clear_*` — Scout chat verbs
The LLM is good at picking the right tool from these prefixes alone. If you see it pick the wrong one, file an issue — usually the description needs a better hint.