Practical Notes
Adopt PromptFlow as your team’s LLM dev loop: template a flow, store connections as named resources, and run interactive tests locally. Then add batch tests and evaluation runs so every prompt change is measurable. For agents, model each tool step as its own node so you can debug failures with clear inputs and outputs at every node.
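As a sketch of the "each tool step as a node" idea, a minimal `flow.dag.yaml` might wire a retrieval tool into an LLM node. The node names, file paths, and deployment name below are hypothetical placeholders, not part of any shipped example:

```yaml
inputs:
  question:
    type: string
outputs:
  answer:
    type: string
    reference: ${answer_node.output}
nodes:
- name: search_tool          # hypothetical Python tool node
  type: python
  source:
    type: code
    path: search_tool.py
  inputs:
    query: ${inputs.question}
- name: answer_node          # LLM node consuming the tool's output
  type: llm
  source:
    type: code
    path: answer.jinja2
  inputs:
    deployment_name: gpt-35-turbo   # assumption: an Azure OpenAI deployment
    context: ${search_tool.output}
    question: ${inputs.question}
```

Because each step is a named node, a failed run shows you exactly which node broke and what inputs it received.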
Safety note: Keep secrets in connections, never in YAML files committed to git; rotate keys regularly and restrict their scopes.
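One way to keep the key out of git is a connection spec whose secret field is a prompt placeholder, created once per machine with the CLI (e.g. `pf connection create --file openai_connection.yaml`). The file name is a hypothetical choice; the field layout is a sketch of the connection YAML shape:

```yaml
# openai_connection.yaml -- safe to commit: no secret value inside
name: open_ai_connection
type: open_ai
api_key: "<user-input>"   # prompted at create time, stored only in the local connection store
```

The committed file documents which connection the flow expects, while the key itself lives only in the connection store.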
FAQ
Q: Do I need Azure to use PromptFlow? A: No. The README includes both OpenAI and Azure OpenAI connection examples; you can start locally.
Q: Where should I use flows vs code? A: Use flows for repeatable LLM pipelines (prompt → tool → evaluator). Use code for custom logic and integrations.
Q: How do I keep quality high? A: Use batch tests + evaluation runs; treat prompts as versioned artifacts and gate releases on eval metrics.
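Gating a release on eval metrics can be as simple as a script that reads a batch run's JSONL output and compares an aggregate score to a threshold. A minimal sketch, assuming each eval record carries a numeric `accuracy` field (the field name and threshold are illustrative, not a PromptFlow convention):

```python
import json
from statistics import mean

def gate_release(eval_lines, metric="accuracy", threshold=0.9):
    """Return (passed, score) for a batch of JSONL eval records."""
    scores = [json.loads(line)[metric] for line in eval_lines]
    score = mean(scores)
    return score >= threshold, score

# Example: three eval records, as a batch evaluation run might emit them.
records = [
    '{"accuracy": 1.0}',
    '{"accuracy": 0.8}',
    '{"accuracy": 0.9}',
]
passed, score = gate_release(records)
print(passed, round(score, 2))  # True 0.9
```

Wiring this into CI (fail the build when `passed` is false) is what turns prompts into versioned, gated artifacts.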