Each prompt below is one paragraph you can paste into Claude Desktop, Cursor, or any MCP-enabled client. Replace demo-acme / alex-smith with your own org / rep ids.

Verify connectivity

Use parlay tools to ping the server and tell me which environment I'm connected to.
Expected: echo_ping returns { pong: true, key_environment: "sandbox", ... } in under a second.
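If you're wiring this into a script, a minimal shape check against the expected response looks like the sketch below. The field names (`pong`, `key_environment`) are taken from the example above; anything else in the response is treated as informational.

```python
# Minimal shape check for the echo_ping response shown above.
# Field names come from the documented example, not a formal schema.
def check_ping(resp: dict) -> str:
    assert resp.get("pong") is True, "server did not echo pong"
    return resp.get("key_environment", "unknown")

env = check_ping({"pong": True, "key_environment": "sandbox"})
print(env)  # sandbox
```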

The hero workflow — analyze, classify, coach

For org "demo-acme" rep "alex-smith", submit mock://perfect-pitch and mock://average-pitch for analysis in parallel. Once both are done, assign Alex's sales persona using both calls. Then ask Scout what Alex's biggest improvement area is and give me a 4-sentence executive summary.
What it exercises: parallel async tool calls, persona meta-analysis, Scout multi-turn chat. ~60 seconds end to end. This is the demo to show stakeholders.

Coach a single rep

Generate a full coaching synthesis for rep alex-smith in org demo-acme using the most recent 5 analyses. Show me the double-down, top gaps, and this-week action plan.
What it exercises: generate_rep_synthesis with lookback_count=5 → returns synthesis with action plan citing specific calls.

Tag outcomes for win/loss analysis

For analysis <UUID>, tag it as "sold" with deal amount $45,000 and note "closed on second call". Then list every sold deal in demo-acme this quarter and sum the total pipeline.
What it exercises: tag_disposition, list_dispositions with date range. RevOps-flavored use case.
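The "sum the total pipeline" step is plain aggregation over whatever list_dispositions returns. A sketch, assuming hypothetical field names (`disposition`, `deal_amount`, `tagged_at`) that may not match the real schema:

```python
from datetime import date

# Hypothetical disposition records -- field names are assumptions,
# not the real list_dispositions response shape.
dispositions = [
    {"disposition": "sold", "deal_amount": 45000, "tagged_at": date(2025, 2, 3)},
    {"disposition": "lost", "deal_amount": 30000, "tagged_at": date(2025, 2, 10)},
    {"disposition": "sold", "deal_amount": 12000, "tagged_at": date(2025, 1, 15)},
]

def quarter_pipeline(records, year, quarter):
    """Sum deal_amount over 'sold' records tagged in the given quarter."""
    months = range(3 * (quarter - 1) + 1, 3 * quarter + 1)
    return sum(
        r["deal_amount"]
        for r in records
        if r["disposition"] == "sold"
        and r["tagged_at"].year == year
        and r["tagged_at"].month in months
    )

print(quarter_pipeline(dispositions, 2025, 1))  # 57000
```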

AI-generate a playbook from raw notes

For org "demo-acme", upload two playbook source materials:

1) Title "Top performer highlights" — content: "Our best reps ALWAYS confirm decision-maker authority before a demo. They reframe price objections as ROI questions: 'What would solving this be worth in the next 12 months?' They never send long follow-up emails — they book a 15-minute specific follow-up call instead."

2) Title "Closing patterns" — content: "Top reps use trial closes mid-call: 'If this worked perfectly, would it make sense to move forward?' They create urgency with deadlines tied to business outcomes, not artificial discounts. They always end calls with a specific next step booked on the calendar."

Then generate a playbook draft from both source materials. Show me the first 500 characters of the generated content, and publish it as the org's active playbook.
What it exercises: upload_playbook_source (×2), generate_playbook_draft (async, ~20s), publish_playbook_draft. The wow moment.

Configure custom scoring criteria

Create three custom prompts for demo-acme:
1) Title "Decision-maker confirmation", content "Did the rep confirm the name of the decision-maker before proposing a close?"
2) Title "Budget discovery", content "Did the rep ask about budget authority?"
3) Title "Next-step booking", content "Did the rep book a specific follow-up meeting before ending the call?"

All three should be of type scoring_criterion. Then list them in order.
What it exercises: create_custom_prompt (×3), list_custom_prompts. Sales-leader admin flow.

Reorder prompts after creation

List the custom prompts for demo-acme. Then reorder them so "Decision-maker confirmation" is first and "Budget discovery" is second.
What it exercises: list_custom_prompts, then reorder_custom_prompts with the right ids array. Tests the model's ability to chain identifiers across calls.
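The chaining step itself is simple once the list result is in hand: map titles to ids, then build the ordered ids array. The ids and result shape below are invented for illustration; only the pattern matters.

```python
# Hypothetical list_custom_prompts result -- ids are made up.
prompts = [
    {"id": "p-budget", "title": "Budget discovery"},
    {"id": "p-next", "title": "Next-step booking"},
    {"id": "p-dm", "title": "Decision-maker confirmation"},
]

# Build the ids array reorder_custom_prompts would expect, in the
# order requested in the prompt above.
desired_order = ["Decision-maker confirmation", "Budget discovery", "Next-step booking"]
by_title = {p["title"]: p["id"] for p in prompts}
ids = [by_title[t] for t in desired_order]
print(ids)  # ['p-dm', 'p-budget', 'p-next']
```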

Quick rep snapshot

Show me alex-smith's stats in demo-acme for the last 30 days — overall average, per-pillar scores, and the weekly trend. Then tell me which pillar is trending up the most.
What it exercises: get_rep_stats with custom window + interpretation by the model.
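"Trending up the most" is just the interpretation step the model performs on the weekly series. One way to sketch it, with invented pillar names and scores (largest first-to-last delta wins):

```python
# Toy weekly per-pillar averages -- names and numbers are invented.
weekly = {
    "discovery":  [61, 64, 70, 74],
    "objections": [55, 54, 57, 56],
    "closing":    [68, 66, 65, 63],
}

# "Trending up the most" here means the largest first-to-last delta.
deltas = {p: scores[-1] - scores[0] for p, scores in weekly.items()}
best = max(deltas, key=deltas.get)
print(best, deltas[best])  # discovery 13
```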

Manual persona override

Set alex-smith's persona to "challenger" in demo-acme with justification "veteran seller, AI-assigned hard_worker doesn't reflect 8 years of enterprise experience". Then fetch it back to confirm.
What it exercises: set_rep_persona_manually, get_rep_persona. Appends to history with source: "manual".

Scout deep-dive on a single call

For analysis <UUID>, get the welcome questions, then ask the first one and follow up with "what specifically should I have done differently in the moment?"
What it exercises: get_scout_welcome_questions, ask_scout_about_call (multi-turn — Scout remembers turn 1 in turn 2).
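The multi-turn behavior relies on the client threading conversation state across calls. A stub sketch of that pattern — `ask_scout`, the conversation id, and the reply format below are all illustrative, not the real tool:

```python
# Illustrative multi-turn state: the client passes the same
# conversation id on each turn so the server can thread context.
history: dict[str, list[str]] = {}

def ask_scout(conversation_id: str, question: str) -> str:
    turns = history.setdefault(conversation_id, [])
    turns.append(question)
    return f"turn {len(turns)}: answering with {len(turns) - 1} prior turn(s) in context"

cid = "conv-1"
print(ask_scout(cid, "What went well on this call?"))
print(ask_scout(cid, "what specifically should I have done differently in the moment?"))
```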

Audit configuration

Show me everything about demo-acme: org details, all reps, active playbook, all custom prompts, and the latest org insights.
What it exercises: get_org, list_reps, list_playbooks active_only=true, list_custom_prompts, get_latest_org_insights (Phase F). Tests parallel-call orchestration.
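Parallel-call orchestration for an audit like this amounts to fanning out independent calls and awaiting them together. A sketch with stubbed tool calls (the tool names mirror the list above, but no real server is involved):

```python
import asyncio

# Stub standing in for one MCP tool call -- the sleep simulates
# network latency; no real server is contacted.
async def call_tool(name: str) -> str:
    await asyncio.sleep(0.01)
    return f"{name}: ok"

async def audit_org(org: str) -> list[str]:
    tools = ["get_org", "list_reps", "list_playbooks",
             "list_custom_prompts", "get_latest_org_insights"]
    # gather runs all five concurrently: one round-trip's worth of
    # latency instead of five sequential ones.
    return await asyncio.gather(*(call_tool(t) for t in tools))

results = asyncio.run(audit_org("demo-acme"))
print(results[0])  # get_org: ok
```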

Generate an org-level insights snapshot

For org "demo-acme", generate an org-level insights snapshot covering the last 30 days. When it's done, summarize the executive_summary and tell me who the top performer is.
What it exercises: generate_org_insights (async, ~30–60s, ~$0.02–$0.05), then the model parses the snapshot. Sales-leader monthly-review use case.

Leaderboard — pure SQL, no AI cost

Show me the top 5 reps in demo-acme by score this month, then by improvement, then by volume. Tell me who appears on multiple lists.
What it exercises: get_leaderboard called three times with different sort_by values, then cross-referenced. No AI cost — pure SQL.
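The cross-referencing step is a simple multiset count over the three result lists. Rep ids below are invented; in practice each list comes from a get_leaderboard call with a different sort_by:

```python
from collections import Counter

# Invented rep ids standing in for three get_leaderboard results.
by_score       = ["alex-smith", "dana-lee", "sam-wu", "kim-park", "lee-chen"]
by_improvement = ["sam-wu", "alex-smith", "pat-roy", "kim-park", "jo-king"]
by_volume      = ["dana-lee", "sam-wu", "jo-king", "alex-smith", "lee-chen"]

# Reps appearing on more than one list.
counts = Counter(by_score + by_improvement + by_volume)
multi = sorted(rep for rep, n in counts.items() if n > 1)
print(multi)
```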

Set up + test a webhook

Register a webhook at https://my-app.com/parlay-webhook for analysis.completed and analysis.failed events. Show me the signing_secret. Then send a test event to confirm my endpoint receives it.
What it exercises: register_webhook (signing_secret returned ONCE), test_webhook. After this, rotate_webhook_secret is the only way to ever get a new secret.
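On the receiving end, the signing_secret is typically used for HMAC verification of each delivery. The sketch below assumes HMAC-SHA256 over the raw body with a hex-encoded signature — check the real Parlay webhook docs for the exact header name and scheme:

```python
import hashlib
import hmac

# Assumed scheme: hex HMAC-SHA256 of the raw request body.
def sign(payload: bytes, secret: str) -> str:
    return hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str, secret: str) -> bool:
    # compare_digest avoids timing side channels on the comparison.
    return hmac.compare_digest(sign(payload, secret), signature)

secret = "whsec_demo"  # stand-in for the signing_secret returned once
body = b'{"event":"analysis.completed"}'
sig = sign(body, secret)
print(verify(body, sig, secret))  # True
```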

Rotate a leaked webhook secret

The signing secret for webhook abc-123 was leaked. Rotate it and give me the new secret. Tell me what I need to do on my endpoint to avoid a verification gap.
What it exercises: rotate_webhook_secret. The model should also explain the verification gap: the old secret stops working the moment you rotate, so deploy the new secret to your endpoint immediately after rotating — or have the endpoint temporarily accept both secrets during the cutover.
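One way to shrink that gap on the receiving side is a verifier that accepts either the old or the new secret during the cutover, then drops the old one. Purely illustrative, same assumed hex HMAC-SHA256 scheme as above:

```python
import hashlib
import hmac

# Rotation-tolerant verifier: try each candidate secret in turn.
# Illustrative only -- keep the old secret in the list just long
# enough to finish deploying the new one.
def verify_any(payload: bytes, signature: str, secrets: list[str]) -> bool:
    for s in secrets:
        expected = hmac.new(s.encode(), payload, hashlib.sha256).hexdigest()
        if hmac.compare_digest(expected, signature):
            return True
    return False

old, new = "whsec_old", "whsec_new"
body = b'{"event":"analysis.failed"}'
sig_new = hmac.new(new.encode(), body, hashlib.sha256).hexdigest()
print(verify_any(body, sig_new, [old, new]))  # True
```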

Discoverability — “what can you do?”

What tools do you have available from the Parlay MCP server? Group them by category and tell me which ones cost money to run vs which are free.
The model will list all 60 tools grouped by domain, and (because every tool description has a cost callout) it’ll correctly tag the AI-cost ones.

Cleanup after demo

Archive the org demo-acme.
Soft-deletes the entire demo workspace. Reversible manually server-side.