# LeMUR — Run LLMs Over AssemblyAI Transcripts

> LeMUR runs Claude / GPT prompts over AssemblyAI transcripts already in context. Summaries, Q&A, action items, custom JSON extraction.

## Install

Save the content below to `.claude/skills/` or append to your `CLAUDE.md`:

## Quick Use

1. Transcribe with `aai.Transcriber().transcribe(...)`
2. Call `transcript.lemur.summarize / question / task / action_items`
3. Pick `final_model` per cost/quality tradeoff

---

## Intro

LeMUR (Leveraging Large Language Models to Understand Recognized Speech) is AssemblyAI's transcript-LLM bridge — once a transcript exists in your account, you can run Claude or GPT prompts against it without re-uploading or chunking.

Endpoints: summary, Q&A, action items, custom prompt.

Best for: meeting recap automation, call center QA, podcast show notes, any post-transcription analysis.

Works with: the assemblyai Python/Node SDK plus the LeMUR HTTP endpoints.

Setup time: 5 minutes after a transcript exists.

---

### Summary endpoint

```python
import assemblyai as aai

aai.settings.api_key = ASSEMBLYAI_KEY
transcript = aai.Transcriber().transcribe("call.mp3")

summary = transcript.lemur.summarize(
    final_model=aai.LemurModel.claude3_5_sonnet,
    context="This is a customer support call about a missed refund.",
    answer_format="3 bullet points",
)
print(summary.response)
```

### Custom prompt (most flexible)

```python
import json

prompt = '''
You are a call center QA analyst. Score this support call on:
- Empathy (0-10)
- Resolution clarity (0-10)
- Compliance: was the agent's name stated, was a case number provided?
Return strict JSON with these fields plus a 'notes' string under 200 words.
'''

result = transcript.lemur.task(
    prompt=prompt,
    final_model=aai.LemurModel.claude3_5_sonnet,
    temperature=0.0,
    max_output_size=600,
)
print(json.loads(result.response))
```

### Q&A endpoint (multi-question)

```python
qa = transcript.lemur.question(
    questions=[
        aai.LemurQuestion(question="What was the customer's main complaint?"),
        aai.LemurQuestion(question="Did the agent offer a refund? If yes, how much?"),
        aai.LemurQuestion(
            question="What's the recommended next action?",
            answer_format="one sentence",
        ),
    ],
    final_model=aai.LemurModel.claude3_5_sonnet,
)
for r in qa.response:
    print(r.question, "→", r.answer)
```

### Action items

```python
action_items = transcript.lemur.action_items(
    final_model=aai.LemurModel.claude3_5_sonnet,
    context="Internal product planning meeting.",
)
print(action_items.response)
```

### Available models

| Model | Best for |
|---|---|
| `claude3_5_sonnet` | Default — best quality, balanced cost |
| `claude3_haiku` | Cheap, fast for short summaries |
| `claude3_opus` | Top quality, slowest, highest cost |
| `default` | AssemblyAI-tuned fast model |

---

### FAQ

**Q: Why use LeMUR instead of feeding the transcript to Claude myself?**

A: Three reasons: (1) the transcript stays in AssemblyAI's secure data plane — no re-upload of potentially-PII content; (2) you skip the chunking and context-management plumbing; (3) it's one billing invoice. For one-off scripts, calling Claude directly is fine; for production analyze-every-call flows, LeMUR is simpler.

**Q: Can I run LeMUR on multiple transcripts at once?**

A: Yes — `aai.Lemur().task(transcript_ids=[id1, id2, id3], prompt=...)`. Useful for weekly call-portfolio analysis. 100 transcripts max per call.

**Q: Does LeMUR support tool calls?**

A: Not yet — LeMUR is text-in/text-out. For tool use, fetch the transcript, then pass it to your own Claude/OpenAI call with tools enabled.

---

## Source & Thanks

> Built by [AssemblyAI](https://github.com/AssemblyAI).
> LeMUR docs at [assemblyai.com/docs/lemur](https://assemblyai.com/docs/lemur).
>
> [AssemblyAI/assemblyai-python-sdk](https://github.com/AssemblyAI/assemblyai-python-sdk)

---

Source: https://tokrepo.com/en/workflows/lemur-run-llms-over-assemblyai-transcripts
Author: AssemblyAI
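
---

### Appendix: batching beyond 100 transcripts

The FAQ notes that a multi-transcript `aai.Lemur().task` call accepts at most 100 transcript IDs. A minimal sketch (the helper name and ID strings are illustrative, not part of the SDK) of splitting a larger backlog into compliant batches:

```python
def batch_transcript_ids(ids, max_per_call=100):
    """Split transcript IDs into chunks that respect LeMUR's
    100-transcripts-per-call limit."""
    return [ids[i:i + max_per_call] for i in range(0, len(ids), max_per_call)]

# 250 hypothetical transcript IDs -> batches of 100, 100, 50
ids = [f"transcript_{n}" for n in range(250)]
print([len(b) for b in batch_transcript_ids(ids)])  # [100, 100, 50]
```

Each batch would then go through its own `aai.Lemur().task(transcript_ids=batch, prompt=...)` call.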
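
---

### Appendix: tool-use workaround

For the tool-call workaround in the last FAQ answer, one way to package a fetched transcript for your own tools-enabled LLM call (a sketch only; the message shape is an assumption to adapt to your Claude/OpenAI client, and the actual API call and tool schema are up to you):

```python
def build_messages(transcript_text, instruction):
    """Wrap an AssemblyAI transcript in a chat-style user message for a
    downstream LLM call with tools enabled. Builds the payload only."""
    return [{
        "role": "user",
        "content": f"{instruction}\n\n<transcript>\n{transcript_text}\n</transcript>",
    }]

msgs = build_messages("Agent: Hi, how can I help? ...",
                      "File a ticket for each unresolved issue.")
print(msgs[0]["role"])  # user
```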