Workflows · Apr 1, 2026 · 2 min read

Flowise — Build AI Agents Visually

Flowise is a low-code platform for building AI agents and workflows through a drag-and-drop interface. With 51.3K+ GitHub stars, it offers LangChain integration, RAG, multi-agent workflows, and self-hosting, and is licensed under Apache 2.0.

TL;DR
Flowise is a low-code, self-hosted platform for building AI agents, chatbots, and RAG pipelines with a visual drag-and-drop interface and LangChain integration.
§01

What it is

Flowise is a low-code platform for building AI agents, chatbots, and RAG (retrieval-augmented generation) pipelines through a visual drag-and-drop interface. Built on LangChain, it provides pre-built nodes for LLMs, vector stores, document loaders, tools, and memory. You connect nodes visually to create agent workflows without writing code.

Flowise targets developers, product teams, and non-technical users who want to prototype and deploy AI applications quickly. It is self-hosted, giving you control over your data and models. The platform supports OpenAI, Anthropic, local models via Ollama, and dozens of other providers.

§02

How it saves time or tokens

Flowise eliminates the boilerplate of LangChain development. Instead of writing Python code to connect an LLM to a vector store to a document loader, you drag and drop nodes and draw connections. Built-in chat testing lets you iterate on agent behavior without deploying. The API endpoint generated for each flow enables instant integration with your application.
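As a sketch of that API integration: Flowise exposes each flow at a prediction endpoint of the form `/api/v1/prediction/<flow-id>`. The base URL and flow ID below are placeholders, and the helper uses only the standard library.

```python
import json
import urllib.request

def prediction_url(base_url: str, flow_id: str) -> str:
    """Build the prediction endpoint URL for a Flowise flow."""
    return f"{base_url.rstrip('/')}/api/v1/prediction/{flow_id}"

def ask(base_url: str, flow_id: str, question: str) -> dict:
    """POST a question to a flow and return the parsed JSON response."""
    req = urllib.request.Request(
        prediction_url(base_url, flow_id),
        data=json.dumps({"question": question}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Hypothetical usage (requires a running Flowise instance and a real flow ID):
# answer = ask("http://localhost:3000", "your-flow-id", "What does the document say?")
# print(answer["text"])
```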

§03

How to use

  1. Install and start Flowise: npm install -g flowise && npx flowise start.
  2. Open the visual builder at http://localhost:3000.
  3. Drag nodes onto the canvas, connect them, and test your agent via the built-in chat.
§04

Example

# Install and start
npm install -g flowise
npx flowise start

# Or with Docker (run from a directory containing a compose file,
# e.g. the docker/ directory of the Flowise repo)
docker compose up -d

# Open http://localhost:3000
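For the Docker route, a minimal compose file sketch (assuming the official flowiseai/flowise image from Docker Hub; the compose file shipped in the repo's docker/ directory is more complete):

```yaml
# docker-compose.yml — minimal sketch, not the official file
services:
  flowise:
    image: flowiseai/flowise
    ports:
      - "3000:3000"
    volumes:
      # Persist flows and credentials across container restarts
      - flowise_data:/root/.flowise
volumes:
  flowise_data:
```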

Building a RAG chatbot visually:

  1. Drag a 'Document Loader' node (PDF, web page, etc.)
  2. Connect to a 'Text Splitter' node
  3. Connect to a 'Vector Store' node (Pinecone, Chroma, etc.)
  4. Connect to an 'LLM' node (OpenAI, Anthropic, Ollama)
  5. Add a 'Retrieval QA Chain' node
  6. Test in the built-in chat panel
§05

Related on TokRepo

§06

Common pitfalls

  • Flowise is a visual wrapper around LangChain. Complex custom logic may require switching to code. The platform supports custom JavaScript nodes for advanced use cases.
  • Self-hosting means you manage updates, backups, and security. Keep Flowise updated and restrict access to the admin interface.
  • Node connections must match types (e.g., a vector store node expects embeddings input). Mismatched connections produce errors at runtime, not at design time.

Frequently Asked Questions

Is Flowise free?

Yes. Flowise is open source under the Apache 2.0 license. It is fully self-hosted with no paid tiers for the platform itself. You pay only for external API calls (LLM providers, vector databases).

What LLM providers does Flowise support?

Flowise supports OpenAI, Anthropic (Claude), Google (Gemini), Azure OpenAI, local models via Ollama, HuggingFace, and many others through LangChain's provider ecosystem.

Can Flowise build multi-agent systems?

Yes. Flowise supports multi-agent workflows where agents delegate to specialized sub-agents. You connect agent nodes to create hierarchical or collaborative agent architectures.

How do I deploy a Flowise chatbot to my website?

Flowise generates an API endpoint and an embeddable chat widget for each flow. Use the API for custom integrations or embed the chat widget directly in your website with a script tag.
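A minimal embed sketch, based on the flowise-embed package's `Chatbot.init` pattern (the flow ID and host below are placeholders; copy the real values from your flow's embed settings):

```html
<script type="module">
  // Load the embeddable chat widget and point it at your flow.
  import Chatbot from "https://cdn.jsdelivr.net/npm/flowise-embed/dist/web.js";
  Chatbot.init({
    chatflowId: "<your-flow-id>",     // placeholder
    apiHost: "http://localhost:3000", // your Flowise instance
  });
</script>
```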

Does Flowise support RAG?

Yes. RAG is one of Flowise's primary use cases. It provides nodes for document loading, text splitting, embedding generation, vector storage, and retrieval-augmented generation chains.


Source & Thanks

Created by FlowiseAI. Licensed under Apache 2.0. FlowiseAI/Flowise — 51,300+ GitHub stars
