The Parlay MCP server (@goparlay/mcp-server) is a Model Context Protocol wrapper around both the REST API AND this documentation site. It lets AI-native clients — Claude Desktop, Cursor, Windsurf, Claude Code, custom agents — call Parlay conversationally. Your LLM gains 63 tools in two halves: API actions (analyze recordings, manage orgs, generate insights) and documentation lookup (search_parlay_docs, read_parlay_doc).

What is MCP?

Model Context Protocol is an open standard from Anthropic for exposing tools to LLMs. Think of it as “USB for AI agents” — one protocol, many providers. A user drops a one-line config block into Claude Desktop, pastes their Parlay API key, and their model can now do this:
“Analyze https://my-cdn.com/call-123.mp3 for rep alex-smith in acme-corp. Once it’s done, ask Scout where Alex needs the most help, and tag the call as ‘sold’ for $45,000.”
The model calls analyze_recording → polls until complete → calls ask_scout_about_call → calls tag_disposition. All in one paragraph of natural language. No partner-side code.
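That one-line config block typically lands in the client's MCP settings file. As an illustrative sketch for Claude Desktop's `claude_desktop_config.json` (the `-y` flag and the placeholder key value are assumptions; the package name and `PARLAY_API_KEY` variable come from this page):

```json
{
  "mcpServers": {
    "parlay": {
      "command": "npx",
      "args": ["-y", "@goparlay/mcp-server"],
      "env": {
        "PARLAY_API_KEY": "<your-parlay-api-key>"
      }
    }
  }
}
```

Cursor, Windsurf, and the other clients use the same `command` / `args` / `env` shape in their own config locations; see the per-client install pages below.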

When to use MCP vs REST

Use MCP when…

  • You’re building an AI-agent UX (chat, autonomous workflows)
  • You’re a developer / analyst querying Parlay from your IDE
  • You want partners to drop Parlay into their own AI tools (Cursor, etc.)
  • You don’t want to hand-roll polling + retry logic

Stick with REST when…

  • You’re building a traditional web/mobile backend
  • You need full schema control and code-generated SDKs
  • You require streaming responses (MCP stdio is request/response)
  • You’re at extreme scale and want raw HTTPS
Most production integrations use REST. MCP is for the AI-agent surface area.

What you get

  • 63 curated tools grouped by domain (analyses, orgs, reps, rep intelligence, playbooks, prompts, dispositions, Scout, reference data)
  • Async polling built in: analyze_recording, assign_rep_persona, generate_playbook_draft, etc. all wait for the result and return it synchronously to the model
  • Stable error translation — parlay-api error codes (org_not_found, rate_limited, analysis_failed) map cleanly into MCP errors the LLM can recover from
  • Idempotency, auth, retries — handled internally, never the model’s concern
  • One env var to configure: PARLAY_API_KEY
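The built-in async polling can be pictured as a small loop the server runs on the model's behalf. This is an illustrative sketch, not Parlay's actual implementation; the `Status` shape and the `pollUntilComplete` name are hypothetical:

```typescript
// Illustrative only: a minimal version of the "submit, then poll until done"
// pattern the MCP server handles internally so the model never has to.
// The Status shape and function name are hypothetical, not Parlay's real API.
export type Status = {
  state: "pending" | "complete" | "failed";
  result?: unknown;
};

export async function pollUntilComplete(
  fetchStatus: () => Promise<Status>, // e.g. a GET against the job's status endpoint
  intervalMs = 2000,
  maxAttempts = 150,
): Promise<unknown> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const status = await fetchStatus();
    // Hand the finished result straight back as the tool's return value.
    if (status.state === "complete") return status.result;
    // Surfaced as a stable error code the LLM can recover from.
    if (status.state === "failed") throw new Error("analysis_failed");
    // Otherwise wait before the next poll.
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error("polling timed out");
}
```

From the model's point of view there is a single tool call that eventually returns the finished analysis, or a clean error it can react to.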

What’s not in MCP

By design, the following stay REST-only:
  • Admin operations — creating partners, minting keys, suspending partners. Too sensitive for chat
  • Internal infra — the recovery scanner, health checks, status pages
  • Key rotation — your key lifecycle is yours, not your AI agent’s
You can use both together: MCP for ad-hoc work, REST for production integrations.

Architecture

┌─────────────────────┐          stdio          ┌────────────────────────┐
│  Claude Desktop     │ ◀─────────────────────▶ │  @goparlay/mcp-server  │
│  (or Cursor / IDE)  │   JSON-RPC over pipe    │  (Bun process)         │
└─────────────────────┘                         └───────────┬────────────┘
                                                            │
                                                            │ HTTPS + Bearer
                                                            ▼
                                                ┌────────────────────────┐
                                                │   parlay-api           │
                                                │   (Cloud Run)          │
                                                └────────────────────────┘
The MCP server is a tiny, stateless wrapper. It runs on the user’s machine via npx, holds no data, and forwards every call to Parlay’s REST API. Same auth, same data, same SLAs as direct REST integration.
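Concretely, the "JSON-RPC over pipe" leg carries standard MCP messages. A `tools/call` request for the recording-analysis flow above might look like this (the argument names are illustrative assumptions; only the tool name `analyze_recording` appears on this page):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "analyze_recording",
    "arguments": {
      "recording_url": "https://my-cdn.com/call-123.mp3",
      "rep_id": "alex-smith",
      "org_id": "acme-corp"
    }
  }
}
```

The server translates this into the equivalent authenticated HTTPS calls against parlay-api and returns the result in the JSON-RPC response.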

Next steps

Install per-client

Claude Desktop, Cursor, Windsurf, Continue, Zed, Claude Code, custom SDK

Tool catalog

All 63 tools, grouped by domain

Examples

Paste-ready prompts that demonstrate real workflows