Langflow — Visual AI Agent Builder with API
Langflow is a visual platform for building and deploying AI agents as APIs or MCP servers. 146K+ GitHub stars. Multi-agent orchestration, playground, observability. MIT.
What it is
Langflow is an open-source visual platform for building AI agents, RAG pipelines, and multi-agent systems. You design workflows by dragging and connecting blocks in a browser-based canvas -- each block represents an LLM call, a tool, a vector store, or a data transformation. Once built, workflows can be exported as REST APIs or MCP servers.
Langflow targets developers and teams who want to prototype AI workflows visually before committing to code. The playground lets you test and iterate on agents in real time, and the observability panel shows token usage, latency, and execution traces.
How it saves time or tokens
Langflow replaces the cycle of writing Python code, testing it in a notebook, and refactoring it for production. The visual canvas makes it easy to rearrange workflow steps, swap LLM providers, and add tools without rewriting code. Exporting a flow as an API turns your prototype into a callable service without a separate porting effort.
The built-in playground eliminates the need for separate testing scripts. You can send messages to your agent directly from the builder and see how each block processes the input.
How to use
- Install Langflow: `uv pip install langflow -U`
- Start the server: `uv run langflow run`
- Open `http://127.0.0.1:7860` in your browser, create a new flow, and start connecting blocks.
Example
# Install Langflow
uv pip install langflow -U
# Start the server
uv run langflow run
# Open http://127.0.0.1:7860
# Create a flow:
# 1. Add an 'OpenAI' LLM block
# 2. Connect a 'Prompt' block with your system message
# 3. Add a 'Chat Input' and 'Chat Output' block
# 4. Connect them: Input -> Prompt -> LLM -> Output
# 5. Click the playground button to test
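Once a flow like the one above is saved, it can be called over Langflow's REST API. A minimal sketch using only the standard library; the `/api/v1/run/{flow_id}` path and payload shape follow current Langflow docs but may vary by version, and the flow id here is a placeholder you would copy from your own flow's API pane:

```python
import json
import urllib.request

BASE_URL = "http://127.0.0.1:7860"  # default local Langflow server
FLOW_ID = "my-flow-id"              # hypothetical: replace with your flow's id

def build_run_request(message: str, flow_id: str = FLOW_ID) -> urllib.request.Request:
    """Build a POST request for Langflow's run endpoint."""
    payload = {
        "input_value": message,  # text delivered to the Chat Input block
        "input_type": "chat",
        "output_type": "chat",
    }
    return urllib.request.Request(
        f"{BASE_URL}/api/v1/run/{flow_id}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# With the server running, send the request and print the response:
# with urllib.request.urlopen(build_run_request("Hello!")) as resp:
#     print(json.load(resp))
```

The actual send is left commented out so the snippet stands alone; if your deployment requires an API key, add it as a header (e.g. `x-api-key`) per your Langflow version's docs.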
Related on TokRepo
- AI agent tools -- frameworks for building AI agents
- Automation tools -- workflow automation platforms
Common pitfalls
- Visual workflows can become hard to maintain at scale; consider breaking complex agents into sub-flows for readability.
- The default installation includes all provider integrations, which increases install size; use `langflow[minimal]` if you only need specific providers.
- API export generates a single endpoint; for production deployments with authentication and rate limiting, place a reverse proxy in front of the Langflow API.
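The reverse-proxy advice above can be sketched with nginx. This is a minimal, illustrative config assuming Langflow listens on its default `127.0.0.1:7860`; the hostname, shared-secret header, and rate limits are placeholders, not Langflow requirements:

```nginx
# Rate-limit zone: 10 requests/second per client IP (goes in the http block)
limit_req_zone $binary_remote_addr zone=langflow:10m rate=10r/s;

server {
    listen 443 ssl;
    server_name flows.example.com;   # hypothetical hostname
    # ssl_certificate / ssl_certificate_key omitted for brevity

    location /api/ {
        limit_req zone=langflow burst=20 nodelay;

        # Crude shared-secret check; replace with real auth in production
        if ($http_x_api_key != "change-me") { return 401; }

        proxy_pass http://127.0.0.1:7860;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```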
Frequently Asked Questions
Can Langflow workflows be deployed as APIs?
Yes. Every Langflow workflow can be exported as a REST API endpoint. You can also deploy workflows as MCP servers, making them accessible to AI agents that support the Model Context Protocol.
Which LLM providers does Langflow support?
Langflow supports OpenAI, Anthropic, Google, Azure, Ollama, and many other LLM providers. Each provider is represented as a block that you can drag into your workflow canvas.
Does Langflow support multi-agent systems?
Yes. Langflow supports multi-agent orchestration where multiple agents can collaborate, delegate tasks, and share context within a single workflow. You can connect agent blocks in sequence or parallel.
Is Langflow free to use?
Langflow is open-source under the MIT license and free to self-host. DataStax offers a managed cloud version with additional features, but the core platform is fully functional when self-hosted.
Can Langflow run with local models?
Yes. Langflow integrates with Ollama and other local LLM runners. You can build workflows that use entirely local models with no API calls to external providers.
Citations (3)
- Langflow GitHub — Langflow is a visual platform for building AI agents
- Langflow Documentation — Visual AI workflow builder with API export
- Anthropic MCP — Model Context Protocol for AI agent tool integration
Source & Thanks
Created by Langflow AI. Licensed under MIT. langflow-ai/langflow — 146,000+ GitHub stars