AgentScope — Alibaba’s Flexible Multi-Agent Platform

AgentScope is Alibaba’s open-source multi-agent framework — message-passing primitives, distributed deployment, a Gradio-based Studio, and strong support for complex topologies like games and multi-role simulations.

Why AgentScope

AgentScope’s differentiator is topology flexibility and distributed deployment. Where CrewAI fixes the structure (sequential/hierarchical) and LangGraph makes you declare it, AgentScope gives you message-passing primitives (msghub, pipelines, reply hooks) and lets you assemble any topology — circular, star, mesh. For simulations (werewolf game, debate, multi-persona roleplay) it’s among the most expressive frameworks.

Distributed execution is another strength. Each agent can run in its own process or on its own host, communicating over RPC. This is useful for computationally expensive agents (e.g., ones running local models on GPUs) or for sandboxing untrusted agent code. Few other frameworks ship this out of the box.

The trade-off: the English docs and community are smaller than CrewAI’s or LangGraph’s. The project is strong in the Chinese LLM community (ModelScope ecosystem), and the docs are bilingual but lag in English examples. For teams comfortable reading GitHub source and a mix of English and Chinese docs, the expressiveness is worth it.

Quick Start — Two-Agent Debate

sequentialpipeline runs agents in order, passing each agent’s output as input to the next. For more complex topologies, use msghub (a shared broadcast hub) or IfElsePipeline for conditional routing. agentscope.init() reads a list of model configs: switching models is a config change, not a code change.

# pip install agentscope
import agentscope
from agentscope.agents import DialogAgent, UserAgent
from agentscope.message import Msg
from agentscope.pipelines import sequentialpipeline

# Initialize with a model config
agentscope.init(
    model_configs=[{
        "config_name": "gpt-4o-mini",
        "model_type": "openai_chat",
        "model_name": "gpt-4o-mini",
        "api_key": "sk-...",
    }],
)

bull = DialogAgent(
    name="Bull",
    sys_prompt="You argue AI agents will transform every industry within 3 years. Be concise and cite one concrete data point per turn.",
    model_config_name="gpt-4o-mini",
)
bear = DialogAgent(
    name="Bear",
    sys_prompt="You argue that AI agents are overhyped. Be concise and cite one concrete data point per turn.",
    model_config_name="gpt-4o-mini",
)

topic = Msg(name="moderator", content="Resolved: multi-agent systems will dominate enterprise software by 2028.", role="user")
# 3 rounds of Bull → Bear using a simple pipeline
for _ in range(3):
    topic = sequentialpipeline([bull, bear], x=topic)

print("Final:", topic.content)

Key Features

Pipelines + msghub

Pipelines (sequential, if-else, switch) chain agents; msghub is a pub/sub broker where any agent can broadcast or subscribe. Compose any topology without adopting a heavy graph abstraction.
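The pub/sub pattern behind msghub can be sketched in a few lines. This is not AgentScope’s actual API, just a simplified standalone illustration of the mental model: every participant in the hub observes every message broadcast into it, which is what makes asymmetric topologies like the werewolf game easy to express.

```python
# Simplified, standalone sketch of the broadcast pattern msghub implements.
# Hub and Participant are illustrative names, NOT agentscope classes.

class Hub:
    def __init__(self, participants):
        self.participants = participants

    def broadcast(self, sender, content):
        msg = {"name": sender, "content": content}
        # every participant except the sender observes the message
        for p in self.participants:
            if p.name != sender:
                p.observe(msg)

class Participant:
    def __init__(self, name):
        self.name = name
        self.seen = []          # message history, like an agent's memory

    def observe(self, msg):
        self.seen.append(msg)

agents = [Participant(n) for n in ("Wolf", "Villager", "Seer")]
hub = Hub(agents)
hub.broadcast("Wolf", "I vote to eliminate the Seer.")
```

In AgentScope the equivalent is a msghub context manager that auto-forwards each agent’s replies to the other participants; the point is that no graph declaration is needed, just membership in the hub.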

Distributed deployment

Each agent can run in its own process or remote host via RPC. Scales agents independently; isolates untrusted agent code. Rare feature in the multi-agent space.

AgentScope Studio

Gradio-based web UI for building and running agent workflows visually. Supports dragging agents, configuring prompts, and streaming outputs. Closer to AutoGen Studio than LangGraph Studio — for demos and no-code users.

Message-based primitives

Msg is the first-class object. Every agent exposes reply(Msg) -> Msg. This keeps the mental model small and makes it easy to write custom agents or integrate non-LLM services.
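The contract is small enough to sketch without the library. The classes below are standalone mimics (not the real agentscope.message.Msg or agent base class) showing why any object with a reply(Msg) -> Msg method, including a wrapper around a non-LLM service, can sit in a pipeline.

```python
# Standalone sketch of the reply(Msg) -> Msg contract.
# Msg here mimics the core fields of agentscope.message.Msg;
# WordCountAgent is a hypothetical non-LLM "agent".

from dataclasses import dataclass

@dataclass
class Msg:
    name: str
    content: str
    role: str = "assistant"

class WordCountAgent:
    """A non-LLM service exposed as an agent: replies with a word count."""
    def reply(self, x: Msg) -> Msg:
        n = len(x.content.split())
        return Msg(name="counter", content=f"{n} words")

def run_sequential(agents, x: Msg) -> Msg:
    # the essence of a sequential pipeline: thread the Msg through each agent
    for a in agents:
        x = a.reply(x)
    return x

out = run_sequential([WordCountAgent()], Msg("user", "multi agent systems", "user"))
print(out.content)  # -> "3 words"
```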

Rich built-in agents

DialogAgent, UserAgent, ReActAgent, DictDialogAgent, and others for common patterns. Plus a ModelScope model zoo for Chinese-first LLMs.

Game and simulation examples

Official examples include werewolf game, debate, and mystery escape simulations. These showcase topologies you cannot easily build in CrewAI/Swarm.

Comparison

| Framework  | Topology Flexibility    | Distributed                            | Community Size         | Best Fit                       |
|------------|-------------------------|----------------------------------------|------------------------|--------------------------------|
| AgentScope | Very high (any graph)   | Yes (RPC)                              | Growing, China-strong  | Simulations, custom topologies |
| CrewAI     | Medium (roles + tasks)  | No                                     | Very large             | Role pipelines                 |
| LangGraph  | High (explicit graph)   | No native; via external orchestration  | Very large             | Production control flow        |
| AutoGen    | High (conversation)     | v0.4 actor runtime                     | Large                  | Research, coding tasks         |

Use Cases

01. Multi-persona simulations

Debates, roleplay, and games where 3+ agents interact asymmetrically. AgentScope’s msghub and topology freedom fit these naturally.

02. Distributed agent teams

When agents need to live on different hosts (privacy, hardware, latency), AgentScope’s RPC model handles it without a separate orchestration layer.

03. ModelScope-centric stacks

Teams using Alibaba’s ModelScope (Qwen, DashScope) models get tight integration and official examples tuned for Chinese-first workflows.
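As a hedged sketch, a model_configs entry selecting DashScope’s Qwen backend might look like the fragment below. The field names follow AgentScope’s model-config schema as shown in the quick start; "qwen-max" and the environment-variable name are illustrative choices.

```python
# Illustrative model_configs entry for a DashScope-hosted Qwen model.
import os

qwen_config = {
    "config_name": "qwen",           # the name agents reference
    "model_type": "dashscope_chat",  # selects the DashScope chat wrapper
    "model_name": "qwen-max",        # assumed model; pick any Qwen variant
    "api_key": os.environ.get("DASHSCOPE_API_KEY", ""),
}
```

Agents then point at it with model_config_name="qwen"; swapping providers means editing this dict, not the agent code.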

Pricing & License

AgentScope: Apache 2.0 open source. Free. Backed by Alibaba Tongyi Lab.

Model cost: pay for whatever LLMs you plug in. Works with OpenAI, Anthropic, Gemini, DashScope (Qwen), Ollama, vLLM, LM Studio, and generic OpenAI-compatible endpoints.

Infra cost: none for single-process; distributed mode uses RPC over whatever transport you configure (gRPC, HTTP).

Frequently Asked Questions

AgentScope vs AutoGen?

Both are conversation/topology-oriented. AutoGen has a larger Western community and tighter Microsoft integration; AgentScope has stronger distributed support and simulation examples. Pick AutoGen for mainstream research; pick AgentScope for custom topologies or distributed agents.

Does AgentScope work with Qwen / DashScope models?

Yes — first-class. Alibaba Tongyi Lab builds it partly to showcase Qwen agent capabilities. Set model_type="dashscope_chat" in your model_configs entry.

Is AgentScope Studio production-ready?

Studio is primarily for demos and rapid prototyping. For production, define workflows in code and run them as services; Studio is the design-time companion.

How mature is distributed mode?

Stable since mid-2024 and actively used inside Alibaba. Expect some rough edges around deployment tooling compared to mature service frameworks; not a substitute for Kubernetes operators for serious multi-host deploys.

Can I use AgentScope with LangChain tools?

Indirectly. Wrap LangChain tools as Python functions and register them via AgentScope’s tool interface. There is no official adapter; the glue is a few lines per tool.
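The glue can be sketched as follows. DummySearch stands in for a real LangChain tool (anything exposing .name, .description, and .run(str)); the final registration step on the AgentScope side is omitted here because the tool-registration API varies by version.

```python
# Hedged sketch: adapting a LangChain-style tool into a plain Python
# function. DummySearch is a hypothetical stand-in for a LangChain BaseTool.

class DummySearch:
    name = "search"
    description = "Search the web."
    def run(self, query: str) -> str:
        return f"results for: {query}"

def as_plain_function(tool):
    """Wrap tool.run() as a plain callable, preserving name and docstring."""
    def fn(query: str) -> str:
        return tool.run(query)
    fn.__name__ = tool.name
    fn.__doc__ = tool.description
    return fn

search = as_plain_function(DummySearch())
print(search("agentscope"))  # -> "results for: agentscope"
```

The wrapped function keeps the tool’s name and description, which is what a tool-calling agent typically needs to expose it to the model.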
