# Weave — Trace and Debug LLM Apps

> Weave adds tracing to LLM apps with `@weave.op`. Install `weave`, call `weave.init()`, then track inputs/outputs across API calls and validation steps.

## Quick Use

1. Install:

   ```bash
   pip install weave
   ```

2. Run:

   ```bash
   python -c "import weave; weave.init('tokrepo-demo'); print('weave init ok')"
   ```

3. Verify:
   - Wrap a function with `@weave.op`, run it once, and confirm a trace is recorded. The sketches after the Practical Notes below show one way to do this.

---

## Intro

Weave adds tracing to LLM apps with `@weave.op`: call `weave.init()` once, then track inputs and outputs across API calls and validation steps for debugging and replay.

- **Best for:** teams debugging agent workflows who need end-to-end traces across tool calls, validation ops, and model calls
- **Works with:** Python, the `weave.op` decorator, and integration with LLM API calls and custom validation functions (repo examples)
- **Setup time:** ~9 minutes

### Quantitative Notes

- Install: `pip install weave` (repo)
- Setup time: ~9 minutes
- GitHub stars (verified): see Source & Thanks

---

## Practical Notes

Weave is most valuable when you treat observability as a product feature. Add traces around every boundary: user input → prompt assembly → tool calls → validation → final output. Then use the trace tree to answer three questions: where did the latency come from, which step failed, and what data caused the failure? The first sketch below shows this layered tracing.

**Safety note:** Avoid logging secrets; sanitize prompts and tool args before tracing in production (see the second sketch below).
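To make the boundary-by-boundary idea concrete, here is a minimal runnable sketch of nested ops, one per boundary. Only `weave.init()` and the `@weave.op` decorator come from the quick start above; the function names (`run_agent`, `assemble_prompt`, `call_model`, `validate_output`) and the stubbed model call are illustrative assumptions, not code from the Weave repo.

```python
import weave

weave.init("tokrepo-demo")  # traces from this process are grouped under this project

@weave.op()
def assemble_prompt(user_input: str) -> str:
    # Boundary: user input -> prompt assembly.
    return f"Answer concisely: {user_input}"

@weave.op()
def call_model(prompt: str) -> str:
    # Boundary: model call. A real app would call an LLM API here;
    # this stub echoes the prompt so the sketch runs without credentials.
    return f"(stub answer for: {prompt})"

@weave.op()
def validate_output(text: str) -> str:
    # Boundary: validation. Raising here makes the failing step and the
    # offending data visible in the trace tree.
    if not text.strip():
        raise ValueError("empty model output")
    return text

@weave.op()
def run_agent(user_input: str) -> str:
    # Top-level op: the calls below appear as its children in the trace
    # tree, so latency and failures can be pinned to a specific step.
    return validate_output(call_model(assemble_prompt(user_input)))

if __name__ == "__main__":
    print(run_agent("What does @weave.op do?"))
```

Each op records its inputs, output, latency, and any raised exception, which is what lets the trace tree answer the three questions above.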
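For the safety note, one simple approach is to redact secrets before they cross any traced boundary, so they are never recorded. The `redact` helper and its key pattern below are hypothetical, not a Weave API; adapt them to whatever secrets your app actually handles.

```python
import re
import weave

weave.init("tokrepo-demo")

# Hypothetical helper (not part of Weave): scrub anything that looks like
# an API key before it reaches a traced function.
def redact(text: str) -> str:
    return re.sub(r"sk-[A-Za-z0-9]{8,}", "[REDACTED]", text)

@weave.op()
def call_tool(query: str) -> str:
    # Only the already-sanitized query is recorded as this op's input.
    return f"tool result for: {query}"

print(call_tool(redact("check key sk-abc123DEF456ghi in billing")))
```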
### FAQ

**Q: What should I trace first?**
A: Start with your top-level agent run function and the model-call wrapper; then add ops around tools and validators.

**Q: Does it only work with OpenAI?**
A: No. The README says you can trace any function, including calls to different providers and open-source models.

**Q: How do I keep traces useful?**
A: Log structured inputs, normalize outputs, and capture error cases; traces are only helpful when they show failures clearly.

---

## Source & Thanks

> GitHub: https://github.com/wandb/weave
> Owner avatar: https://avatars.githubusercontent.com/u/26401354?v=4
> License (SPDX): Apache-2.0
> GitHub stars (verified via `api.github.com/repos/wandb/weave`): 1,090

---

Source: https://tokrepo.com/en/workflows/weave-trace-and-debug-llm-apps
Author: Agent Toolkit