# DSPy Micro Agent — CLI + FastAPI + Evals

> evalops/dspy-micro-agent is a minimal agent runtime (CLI + FastAPI server + eval harness); verified at 73★, with `micro-agent ask` and `run_evals.py --n 50` documented in the README.

## Install

Copy the content below into your project:

## Quick Use

```bash
uv venv && source .venv/bin/activate
uv pip install -e .
cp .env.example .env  # set OPENAI_API_KEY or configure Ollama
micro-agent ask --question "What's 2*(3+5)?" --utc
python evals/run_evals.py --n 50
```

## Intro

evalops/dspy-micro-agent provides a minimal agent runtime: a CLI, a FastAPI server, and a small eval harness. Verified at 73★; the README documents `micro-agent ask` and `run_evals.py --n 50`.

**Best for:** Builders who want a small, readable Plan/Act loop with traces, plus quick evals before adding complexity

**Works with:** Python 3.10+, OpenAI or Ollama providers, CLI + FastAPI deployments

**Setup time:** 10-25 minutes

### Key facts (verified)

- GitHub: 73 stars · 6 forks · pushed 2026-04-25.
- License: MIT · owner avatar and repo URL verified via the GitHub API.
- README-backed entrypoint: `micro-agent ask --question "What's 2*(3+5)?" --utc`.

## Main

- Treat traces as a first-class artifact: the README stores JSONL traces under `traces/` and includes replay support.
- Use the eval harness early (`--n 50`) to guard against regressions when you add tools, change prompts, or switch providers.
- Start with the CLI, then graduate to the HTTP API when you need multi-client access or UI integrations.

### Source-backed notes

- The README quickstart shows `micro-agent ask ... --utc`, a FastAPI server launched via `uvicorn`, and evals via `python evals/run_evals.py --n 50`.
- The docs describe provider config for OpenAI and Ollama, plus tracing under `traces/.jsonl`.
- The README lists eval metrics such as success_rate, avg_latency_sec, and avg_cost_usd, and notes that usage/cost capture can be best-effort.

### FAQ

- **Is this a full framework?** No. The README frames it as a minimal runtime plus DSPy modules you can read end-to-end.
- **Can I run it without OpenAI?** Yes. The README includes an Ollama provider path configured via env vars.
- **How do I keep changes safe?** Use the built-in eval harness and store/replay traces to compare behavior over time.

## Source & Thanks

> Source: https://github.com/evalops/dspy-micro-agent
> License: MIT
> GitHub stars: 73 · forks: 6

---

Source: https://tokrepo.com/en/workflows/dspy-micro-agent-cli-fastapi-evals
Author: Agent Toolkit
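The eval metrics named in the notes above (success_rate, avg_latency_sec, avg_cost_usd) are simple aggregates over per-run records. A minimal sketch of that aggregation, assuming a hypothetical record shape with `success`, `latency_sec`, and `cost_usd` fields; the project's actual trace/eval schema is not documented here:

```python
import json
from io import StringIO

# Hypothetical per-run JSONL records mirroring the README's metric names.
# The field names are an assumption, not the project's documented schema.
sample_jsonl = StringIO(
    '{"success": true, "latency_sec": 1.2, "cost_usd": 0.003}\n'
    '{"success": false, "latency_sec": 2.0, "cost_usd": 0.004}\n'
)

def summarize(lines):
    """Aggregate per-run eval records into summary metrics."""
    runs = [json.loads(line) for line in lines]
    n = len(runs)
    return {
        "success_rate": sum(r["success"] for r in runs) / n,
        "avg_latency_sec": sum(r["latency_sec"] for r in runs) / n,
        # Cost capture is best-effort per the README, so tolerate a
        # missing field instead of failing the whole summary.
        "avg_cost_usd": sum(r.get("cost_usd", 0.0) for r in runs) / n,
    }

metrics = summarize(sample_jsonl)
print(metrics)
```

This kind of standalone aggregation is also a cheap way to diff two eval runs (before/after a prompt or provider change) without rerunning the harness.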