
Documentation Index

Fetch the complete documentation index at: https://docs.goparlay.io/llms.txt

Use this file to discover all available pages before exploring further.

A sales rep who records their entire workday produces a 6–12 hour audio file containing 5–15 separate customer conversations interleaved with driving, internal calls, and gaps. The all-day-session endpoint detects those conversations automatically, scores each one with the standard six-pillar V5 analysis, and returns a clean rollup. What you send: one HTTPS URL. What you get back: N first-class Analysis rows with parent_session_id set — every existing analysis tool you’ve already integrated (get_analysis, webhooks, dashboards) Just Works on the children.

When to use it

| You have | You want | Use this |
|---|---|---|
| One 30-minute call | One score | POST /v1/analyses |
| One 8-hour day with 5 calls inside | 5 separate scores, one per conversation | POST /v1/all-day-sessions ← you are here |
| 5 separate URLs | 5 separate scores | Loop POST /v1/analyses |
Don’t reach for all-day if you already know the boundaries — submit each call individually. All-day is specifically for the “I have one long file, please find the conversations inside” case.

The 60-second mental model

You POST one URL ────► Server transcribes (Deepgram, ONCE)
                              │
                              ▼
                       Gemini detects conversation boundaries
                              │
                              ▼
                       For each segment with confidence ≥ 0.8:
                         └─► V5 analysis on sliced transcript
                              │
                              ▼
                       Returns: parent session + N child Analyses

Webhooks fire:
  • all_day_session.processing_started  (worker picked it up — "Parlay has it")
  • all_day_session.detected            (segments visible, before analyses run)
  • analysis.completed                  (existing event, fires per segment)
  • all_day_session.completed           (everything terminal — rollup ready)
Audio storage: we never re-host your file. The response carries start_seconds/end_seconds offsets — your UI uses HTTP Range requests on your original URL to play any segment.
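If you ever need to fetch a segment's bytes yourself rather than letting an `<audio>` element seek, the time offsets can be turned into a Range header. This is our own illustrative helper (not part of the Parlay API), and it assumes constant-bitrate audio; for VBR files, seek by time in the player instead.

```javascript
// Sketch: map a segment's start/end seconds to an HTTP Range header,
// assuming constant-bitrate audio (e.g. a 128 kbps MP3). Helper name and
// the CBR assumption are ours, not part of the API.
function rangeHeaderForSegment(startSeconds, endSeconds, bitrateKbps = 128) {
  const bytesPerSecond = (bitrateKbps * 1000) / 8; // 128 kbps → 16,000 B/s
  const startByte = Math.floor(startSeconds * bytesPerSecond);
  const endByte = Math.ceil(endSeconds * bytesPerSecond);
  return `bytes=${startByte}-${endByte}`;
}

console.log(rangeHeaderForSegment(1820, 4100)); // bytes=29120000-65600000
```

You would pass this as the `Range` request header when fetching the original URL; S3 and GCS both honor byte-range requests on pre-signed URLs.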

Submit a session

curl -X POST https://api.goparlay.io/v1/all-day-sessions \
  -H "Authorization: Bearer pk_live_YOUR_KEY" \
  -H "Idempotency-Key: $(uuidgen)" \
  -H "Content-Type: application/json" \
  -d '{
    "recording_url": "https://your-bucket.s3.amazonaws.com/2026-05-06-rep1.mp3?...",
    "org_id": "acme-corp",
    "rep_id": "alex-smith"
  }'
You get back a 202:
{
  "id": "f33705f0-5360-4434-8548-c280066da7e7",
  "status": "queued",
  "created_at": "2026-05-07T00:50:34.607922+00:00"
}
The session moves through statuses asynchronously: queued → transcribing → detecting → analyzing → completed.
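If you want to render that progression as a progress bar, the ordered status list is enough. The status names come from the pipeline above; the helper itself is our own convenience, not an API feature.

```javascript
// Sketch: the session lifecycle as an ordered list. "failed" is terminal
// but outside the happy path, so it maps to null here and should be
// rendered separately.
const STATUS_ORDER = ["queued", "transcribing", "detecting", "analyzing", "completed"];

function progressOf(status) {
  const i = STATUS_ORDER.indexOf(status);
  if (i === -1) return null;            // e.g. "failed"
  return (i + 1) / STATUS_ORDER.length; // 0.2 … 1.0
}

console.log(progressOf("detecting")); // 0.6
```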

Production keys only

Sandbox keys (pk_sandbox_…) cannot submit all-day sessions — they reject with invalid_recording_url. The reason: an all-day call burns real Deepgram + Gemini credits proportional to audio length, and sandbox is supposed to be free. Test with a single short recording via POST /v1/analyses using mock://* fixtures during development; switch to a pk_live_… key for real audio.
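Since the rejection only surfaces after a round trip, it can be worth guarding client-side. A minimal sketch, using the key prefixes described above (the helper name is ours):

```javascript
// Sketch: fail fast before submitting an all-day session with a sandbox
// key, rather than waiting for the API's invalid_recording_url rejection.
function assertLiveKey(apiKey) {
  if (apiKey.startsWith("pk_sandbox_")) {
    throw new Error(
      "All-day sessions require a pk_live_ key; sandbox keys reject with invalid_recording_url"
    );
  }
  return apiKey;
}

assertLiveKey("pk_live_abc123"); // passes through unchanged
try {
  assertLiveKey("pk_sandbox_abc123");
} catch (e) {
  console.log(e.message); // explains the pk_live_ requirement
}
```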

Wait for results

You have three patterns. Pick whichever matches your existing setup.

Pattern 1 — register a webhook

Register a webhook for the events you care about. Most partners already have analysis.completed wired in for standalone analyses — that handler runs unchanged for each segment, since segments are first-class Analyses with parent_session_id set.
curl -X POST https://api.goparlay.io/v1/webhooks \
  -H "Authorization: Bearer pk_live_YOUR_KEY" \
  -H "Idempotency-Key: $(uuidgen)" \
  -H "Content-Type: application/json" \
  -d '{
    "url": "https://your-app.com/parlay-webhook",
    "enabled_events": [
      "all_day_session.processing_started",
      "all_day_session.detected",
      "all_day_session.completed",
      "all_day_session.failed",
      "analysis.completed"
    ]
  }'
| Event | When it fires | Use it for |
|---|---|---|
| all_day_session.processing_started | Worker picks the session up off the queue | "Parlay has it" reassurance on backed-up days — fires before transcription starts, so even an 8-hour file doesn't sit in queued silence |
| all_day_session.detected | Boundaries found, before per-segment analysis starts | Show the partner "we found 5 calls" while analyses are still processing |
| analysis.completed | Each segment finishes (existing event, fires N times) | Existing per-call coaching workflows |
| all_day_session.completed | All segments terminal | Rollup view ("Alex's Tuesday at a glance") |
| all_day_session.failed | Unrecoverable error (no speech, transcription failed, etc.) | Surface a partner-friendly error + offer retry |
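A single handler can route all five event types. This is a sketch: the payload shape (`event.type`, `event.data`) and the `ui` callbacks are our assumptions for illustration; only the event type names come from the docs above.

```javascript
// Sketch: route the five event types to UI actions. Assumes the webhook
// body carries { type, data }; verify against your actual payloads.
function handleParlayEvent(event, ui) {
  switch (event.type) {
    case "all_day_session.processing_started":
      return ui.showBanner("Parlay has it — transcription starting");
    case "all_day_session.detected":
      return ui.showBanner(`Found ${event.data.segments_detected} conversations`);
    case "analysis.completed":
      // Fires once per segment; parent_session_id distinguishes
      // all-day children from standalone analyses.
      return event.data.parent_session_id
        ? ui.addSegmentScore(event.data)
        : ui.addStandaloneScore(event.data);
    case "all_day_session.completed":
      return ui.showRollup(event.data);
    case "all_day_session.failed":
      return ui.showError(event.data.error);
    default:
      return null; // ignore unknown events so future types don't break us
  }
}
```

The `analysis.completed` branch is the point of the design: an existing standalone handler keeps working, and the parent_session_id check is only needed if you want to treat segments differently.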

Pattern 2 — poll the session

curl https://api.goparlay.io/v1/all-day-sessions/$SESSION_ID \
  -H "Authorization: Bearer pk_live_YOUR_KEY"
Poll every 5–10 seconds until status === "completed". Typical end-to-end latency is 1.5 minutes for a 24-minute recording, 5–10 minutes for a full 8-hour day with 5 segments.
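The polling loop is a few lines. A sketch with the HTTP client and sleep injected so it stays testable; the 5-second interval and 15-minute cap are our suggestions, not API requirements.

```javascript
// Sketch: poll until the session reaches a terminal status.
// fetchSession(id) should GET /v1/all-day-sessions/:id and return the
// parsed JSON; sleep(ms) resolves after ms milliseconds.
async function waitForSession(sessionId, fetchSession, sleep, intervalMs = 5000, maxMs = 900000) {
  const deadline = Date.now() + maxMs;
  while (Date.now() < deadline) {
    const session = await fetchSession(sessionId);
    if (session.status === "completed" || session.status === "failed") return session;
    await sleep(intervalMs);
  }
  throw new Error(`Session ${sessionId} did not finish within ${maxMs} ms`);
}
```

With global fetch, `fetchSession` could be `id => fetch(url(id), { headers }).then(r => r.json())` and `sleep` the usual `ms => new Promise(r => setTimeout(r, ms))`.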

Pattern 3 — wait inline (the MCP tool does this for you)

If you’re calling from an AI agent, the submit_all_day_recording MCP tool waits up to 10 minutes for completion and returns the full session inline.

What you get back

Once status === "completed", the session response carries every detected segment plus their analysis_id forward-links:
{
  "id": "f33705f0-5360-4434-8548-c280066da7e7",
  "status": "completed",
  "duration_seconds": 1437,
  "segments_detected": 1,
  "segments_analyzed": 1,
  "segments_pending_review": 0,
  "segments_skipped": 0,
  "segments": [
    {
      "id": "21452f64-714e-4e59-a806-d8871e27ad36",
      "start_seconds": 0,
      "end_seconds": 1437,
      "confidence": 0.95,
      "customer_name": "Jackson",
      "is_sales_conversation": true,
      "detected_outcome": "ongoing",
      "summary": "James (rep) conducts a group product training and Q&A session for prospects Jackson, Keri, John, and Charles, focusing on search functionality, data export, and post-sale support, with discussions about the trial ending and signing an agreement.",
      "status": "analyzed",
      "analysis_id": "83171c80-6333-465b-b759-a67508909425",
      "analysis_url": "/v1/analyses/83171c80-6333-465b-b759-a67508909425",
      "created_at": "2026-05-07T00:51:20.391105+00:00",
      "analyzed_at": "2026-05-07T00:52:01.123Z"
    }
  ],
  "options": {},
  "metadata": {},
  "callback_url": null,
  "error": null,
  "created_at": "2026-05-07T00:50:34.607922+00:00",
  "started_at": "2026-05-07T00:50:34.700Z",
  "completed_at": "2026-05-07T00:52:01.487Z",
  "partner_org_id": "...",
  "partner_rep_id": "..."
}
Drill into any segment via GET /v1/analyses/:analysis_id to get the full canonical 6-pillar analysis (clarity, influence, objection, discovery, delivery, close + feedback_v5 + action_plan_v5). The shape is identical to a standalone analysis — same fields, same KPIs, same nesting. The only difference is parent_session_id is set.

Playing back a segment in your UI

This is the part that surprises new partners. You don’t ask Parlay for the segment audio — we don’t host it. You already have the original recording on your bucket. The session response gives you start_seconds and end_seconds, plus a ready-to-paste playback block on every segment that bundles both the browser snippet and an ffmpeg one-liner:
{
  "playback": {
    "recording_url": "https://your-bucket.s3.amazonaws.com/2026-05-06-rep1.mp3?...",
    "start_seconds": 1820,
    "end_seconds": 4100,
    "browser_seek_js": "const audio = document.querySelector('audio'); audio.currentTime = 1820; audio.play(); audio.addEventListener('timeupdate', () => { if (audio.currentTime >= 4100) audio.pause(); });",
    "ffmpeg_extract": "ffmpeg -ss 1820 -to 4100 -i \"https://your-bucket.s3.amazonaws.com/...\" -c copy segment.mp3"
  }
}
Browser playback — drop the browser_seek_js snippet into any page that has an <audio> element pointing at the original URL. Modern browsers issue HTTP Range requests automatically — only the relevant byte range downloads. Same behavior Spotify and Twitch use. No new file, no Parlay-hosted URL, no second download.

Offline export — the ffmpeg_extract command uses -c copy (stream-copy, no re-encoding), so it runs instantly and preserves the original quality. Pipe it into a script if you want to archive every conversation as a separate file:
# Archive every analyzed segment from a session as segment-0.mp3, segment-1.mp3, ...
# (strip the default "segment.mp3" output first, then append a numbered name,
# otherwise ffmpeg sees two output files and re-encodes the second)
curl -s https://api.goparlay.io/v1/all-day-sessions/$ID \
  -H "Authorization: Bearer $KEY" \
  | jq -r '.segments[].playback.ffmpeg_extract' \
  | sed 's/ segment\.mp3$//' \
  | awk '{print $0 " segment-" NR-1 ".mp3"}' \
  | sh
Pre-signed URL expiry — if your S3/GCS pre-signed URL has expired by the time the user clicks play (typical TTL: 1 hour), have your backend mint a fresh one on demand. The recording_url echoed back in the playback block is the same URL you submitted, so you can swap it for a freshly signed one whenever needed.

Confidence + auto-analyze threshold

Each detected segment carries a confidence score from 0.0 to 1.0. The default behavior:
| Confidence | Default action | Status |
|---|---|---|
| ≥ 0.8 | Auto-analyze with V5 | analyzed |
| 0.5 – 0.8 | Skip auto-analysis, surface for human review | needs_review |
| < 0.5 | Skip entirely (probably not a sales conversation) | skipped |
You can tune the threshold or turn auto-analysis off entirely:
{
  "recording_url": "...",
  "org_id": "acme",
  "rep_id": "alex",
  "options": {
    "min_confidence": 0.7,
    "auto_analyze": true
  }
}
Setting auto_analyze: false returns the detected segments without running V5 — useful when you want to filter or review before paying for analysis.
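The threshold table reduces to a small function. A sketch: the 0.8 default and 0.5 review floor come from the table above, but exactly how min_confidence interacts with the review floor is our assumption; verify against real sessions if you tune it.

```javascript
// Sketch: map a segment's confidence to the default action. minConfidence
// mirrors options.min_confidence; values at exactly the threshold
// auto-analyze, per the "≥ 0.8" row.
function segmentAction(confidence, minConfidence = 0.8) {
  if (confidence >= minConfidence) return "analyzed";
  if (confidence >= 0.5) return "needs_review";
  return "skipped";
}

console.log(segmentAction(0.95));      // analyzed
console.log(segmentAction(0.65));      // needs_review
console.log(segmentAction(0.3));       // skipped
console.log(segmentAction(0.75, 0.7)); // analyzed (lowered threshold)
```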

Resolving playbooks per segment

Per-segment analyses run through the standard playbook resolver, which means per-rep playbooks just work. If you’ve created a rep-level playbook for alex-smith (via POST /v1/orgs/:org_id/playbooks with rep_id set), every segment in this session uses Alex’s playbook automatically. No extra config needed at the session level. The full resolution order at analysis time:
  1. options.playbook_content (request literal)
  2. options.playbook_id (explicit id)
  3. Active rep playbook for rep_id
  4. Active org playbook for org_id
  5. None (default scoring)
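The five-step order above can be sketched as a function. Only the precedence order comes from the docs; the lookup-map shapes and return values are our own illustration.

```javascript
// Sketch: resolve which playbook an analysis uses, in precedence order.
// repPlaybooks / orgPlaybooks stand in for "active playbook" lookups.
function resolvePlaybook(options, repPlaybooks, orgPlaybooks, repId, orgId) {
  if (options.playbook_content) return { source: "literal", content: options.playbook_content };
  if (options.playbook_id) return { source: "explicit_id", id: options.playbook_id };
  if (repPlaybooks[repId]) return { source: "rep", id: repPlaybooks[repId] };
  if (orgPlaybooks[orgId]) return { source: "org", id: orgPlaybooks[orgId] };
  return { source: "default" }; // default scoring, no playbook
}

// A rep-level playbook wins over the org-level one:
console.log(
  resolvePlaybook({}, { "alex-smith": "pb_rep" }, { "acme-corp": "pb_org" }, "alex-smith", "acme-corp")
);
// { source: 'rep', id: 'pb_rep' }
```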

Pricing model

Two billable events per session, both itemized on your usage report:
| Event | Trigger | Approximate cost |
|---|---|---|
| all_day_transcription | One Deepgram call on the full input | ~$0.30 per hour of audio |
| all_day_detection | Single-shot Gemini boundary call | ~$0.05 per session |
| analysis (existing) | One per analyzed segment | ~$0.06 per segment |
Worked example — 8-hour day with 5 segments:
  • Deepgram: ~$2.40
  • Boundary detection: ~$0.05
  • 5 × analysis: ~$0.30
  • Total: ~$2.75
Compare to submitting the same full-day file via /v1/analyses five times, once per call: you get the same five analyses, but you pay Deepgram for the full recording on every submission instead of once. For an 8-hour day that works out to roughly $12.30 versus $2.75, making all-day sessions about 4× cheaper than the equivalent N standalone analyses.
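The worked example is straight arithmetic, using the approximate rates from the pricing table. Treat the figures as estimates, not a quote.

```javascript
// Sketch: estimate the cost of one all-day session from the table above.
function estimateSessionCost(audioHours, analyzedSegments) {
  const transcription = audioHours * 0.3;      // all_day_transcription
  const detection = 0.05;                      // all_day_detection, flat
  const analyses = analyzedSegments * 0.06;    // one V5 run per segment
  return +(transcription + detection + analyses).toFixed(2);
}

console.log(estimateSessionCost(8, 5)); // 2.75 — matches the worked example
```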

Limits + edge cases

| Limit | Value |
|---|---|
| Max audio length | 36 hours (Gemini boundary single-shot caps at ~15K utterances) |
| Max segments per session | No hard cap; typical days produce 5–15 |
| Concurrent V5 analyses per session | 5 |
| Audio formats | MP3, WAV, M4A, AAC, OGG, FLAC, OPUS, WEBM, MP4 |
| URL must be | HTTPS, no auth headers required, byte-range-fetchable |
If your URL requires authentication (e.g., S3 bucket policy denies public reads), use a pre-signed URL — that’s what S3 and GCS pre-signed URLs are for, and Parlay handles them transparently.

Submitting via MCP

For AI-agent workflows, the @goparlay/mcp-server package exposes four conversational tools:
submit_all_day_recording({
  recording_url: "https://your-bucket/full-day.mp3",
  org_id: "acme",
  rep_id: "alex",
})
// → returns the parent session + every detected segment with analysis_ids,
//   waits up to 10 minutes for completion

get_all_day_session({ session_id: "..." })
list_all_day_sessions({ rep_id: "alex" })
archive_all_day_session({ session_id: "..." })
The killer demo:
“My rep recorded their whole Tuesday — here’s the URL. Tell me what happened and which calls need follow-up.” The AI calls submit_all_day_recording, gets the segments, summarizes each (Acme won, Beta Corp ongoing, Charlie skipped low-signal, Delta pricing pushback), and surfaces just the high-signal ones for review. One prompt, one URL, the rep’s whole day analyzed.

What we deliberately do NOT do

  • No second transcription. We Deepgram the full file ONCE; per-segment analyses run on the sliced text. You don’t pay Deepgram per segment.
  • No re-hosted audio. No segment_audio_url field exists. Use HTTP Range requests on your original URL to play.
  • No human-review UI. Segments below the confidence threshold are surfaced via needs_review status — what your dashboard does with them is up to you.
  • No automatic merge across multi-day recordings. One session per recording; if a rep records two 4-hour days separately, that’s two sessions.

See also