# Agent Frameworks Across Languages
Spring AI for Java, LangChain4j, Candle for Rust, Mastra for TypeScript, FastHTML for Python, Axum for Rust APIs — agents in every stack.
## What's in this pack
This pack collects six agent frameworks that let teams stay in their existing language ecosystem instead of bolting a Python service onto a Java or Rust backend just for LLM features. The list spans four languages where serious agent work is happening beyond the Python-first mainstream: Java, Rust, TypeScript, and Python itself (with FastHTML for full-stack rendering).
| # | Framework | Language | Best for |
|---|---|---|---|
| 1 | Spring AI | Java | Spring Boot apps adding LLM features |
| 2 | LangChain4j | Java/Kotlin | community port of LangChain to JVM |
| 3 | Candle | Rust | local model inference |
| 4 | Mastra | TypeScript | full-stack TS agent + workflow framework |
| 5 | FastHTML | Python | server-rendered Python UIs for agents |
| 6 | Axum | Rust | high-throughput agent APIs |
The pack mirrors the reality that LLM features now ship in every backend, not just Python ones. A Spring Boot e-commerce backend doesn't want to spin up a separate FastAPI service for a chat feature — Spring AI lives in the same JVM. A Rust API gateway doesn't want a Python sidecar — Axum + Candle compile into one binary.
## Why language-native matters
The Python-first agent world (LangChain, LangGraph, CrewAI) is great if Python is already your primary stack. If you're a Java shop, you face three bad options without this pack:
- Run a Python sidecar service (extra deployment, separate observability, marshalling JSON between two processes).
- Use a SaaS LLM gateway (vendor lock-in, latency, fees).
- Hand-roll OpenAI HTTP calls inside your Spring controllers (no abstractions, no eval framework, no agent loop).
Spring AI and LangChain4j fix this. They give Java the same primitives — chat models, tool calling, prompt templates, vector stores, agent loops — that LangChain gives Python, with idiomatic Spring or Kotlin DSLs. Same for Rust (Candle for inference, Axum for the API surface) and TypeScript (Mastra for the framework).
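Those primitives all share one underlying shape: call the model, dispatch any tool it asks for, feed the result back, and stop when it returns a plain answer. A minimal sketch of that loop in plain Java, with the model stubbed out so no framework or API key is involved (the `clock` tool and the canned replies are illustrative, not any framework's API):

```java
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class AgentLoop {
    // What a model turn yields: either a tool request or a final answer.
    record ModelReply(String toolName, String toolArg, String answer) {}

    // Stub "model": asks for the clock tool once, then answers.
    static ModelReply model(List<String> transcript) {
        boolean toolAlreadyRan = transcript.stream().anyMatch(m -> m.startsWith("tool:"));
        return toolAlreadyRan
                ? new ModelReply(null, null, "It is 12:00.")
                : new ModelReply("clock", "now", null);
    }

    public static String run(String userMessage, Map<String, Function<String, String>> tools) {
        List<String> transcript = new java.util.ArrayList<>(List.of("user: " + userMessage));
        while (true) {
            ModelReply reply = model(transcript);
            if (reply.toolName() == null) return reply.answer();          // final answer: stop
            String result = tools.get(reply.toolName()).apply(reply.toolArg());
            transcript.add("tool: " + reply.toolName() + " -> " + result); // feed tool output back
        }
    }

    public static void main(String[] args) {
        Map<String, Function<String, String>> tools = Map.of("clock", arg -> "12:00");
        System.out.println(run("What time is it?", tools)); // prints "It is 12:00."
    }
}
```

Every framework in this pack wraps some version of this loop; what differs is how tools are declared and how the transcript is typed.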
The trade-off is feature lag. LangChain ships an integration to a new model on day one; Spring AI takes weeks; LangChain4j sometimes catches up faster than Spring AI because it's community-driven. Plan for "the model launched yesterday and only Python has it" being a real scenario.
## Install in one command
```shell
# Install the pack — drops a starter project for each framework
tokrepo install pack/agent-frameworks-multilang

# Or pick by language
tokrepo install spring-ai
tokrepo install langchain4j
tokrepo install mastra
tokrepo install candle
```
The TokRepo CLI handles each ecosystem's package manager — Maven for Spring AI and LangChain4j, Cargo for Candle and Axum, npm/pnpm for Mastra, uv for FastHTML. The starter projects all include a passing test that calls a real model so you can verify your API keys before writing logic.
## Common pitfalls
- Spring AI vs LangChain4j is not "official vs unofficial." Spring AI is from VMware/Broadcom; LangChain4j is community. Both are widely used. Choose by feature parity with the model you're using and ergonomic preference (Spring DSL vs LangChain semantics).
- Mastra is young. It's well-maintained but the API has shifted between minor releases. Pin a specific version and read release notes before upgrading. Production users typically wait two minor releases before upgrading.
- Candle is for inference, not orchestration. It runs models locally (Llama, Mistral, etc) and gives you embeddings — but it's not a full agent framework. Pair it with Axum for the API layer if you want a Rust-only stack.
- FastHTML is HTMX-based. It's not a SPA framework. If your team expects React or Vue, FastHTML will feel alien. It's perfect when you want server-rendered Python UIs that wire into agents directly without a separate frontend repo.
- Tool calling formats differ. OpenAI, Anthropic, and Gemini all have different JSON schemas for tool calls. Each framework abstracts this differently: Spring AI's `@Tool` annotation, LangChain4j's `@Tool` with reflection, Mastra's typed `createTool()`. Don't assume a tool definition ports across.
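To make the wire-format difference concrete, here is the same `get_weather` tool written out by hand in the OpenAI and Anthropic shapes. This is a sketch of the raw provider JSON (field names as documented by each provider), not output from any of these frameworks:

```java
public class ToolSchemas {
    // OpenAI wraps the tool in a "function" object and calls the schema "parameters".
    static final String OPENAI = """
        {"type": "function",
         "function": {"name": "get_weather",
                      "description": "Current weather for a city",
                      "parameters": {"type": "object",
                                     "properties": {"city": {"type": "string"}},
                                     "required": ["city"]}}}""";

    // Anthropic keeps the tool flat and calls the schema "input_schema".
    static final String ANTHROPIC = """
        {"name": "get_weather",
         "description": "Current weather for a city",
         "input_schema": {"type": "object",
                          "properties": {"city": {"type": "string"}},
                          "required": ["city"]}}""";

    public static void main(String[] args) {
        System.out.println(OPENAI.contains("\"parameters\""));      // true
        System.out.println(ANTHROPIC.contains("\"input_schema\"")); // true
    }
}
```

The frameworks' abstractions paper over exactly this kind of divergence, which is why a tool defined for one provider or framework cannot be assumed to round-trip through another.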
## When this pack alone isn't enough
This pack gives you the runtime in your language. You'll still need:
- An LLM provider. OpenAI, Anthropic, Bedrock, or self-hosted via Ollama. All six frameworks support multiple providers via configuration.
- Vector storage if you're doing RAG. See the Vector DB Showdown pack — most entries have native clients in Java, Rust, and TS.
- Eval pipeline. Promptfoo from the LLM Eval & Guardrails pack runs language-agnostic — point it at your endpoint regardless of backend.
For Python-only stacks where multi-language support isn't a concern, see Python Agent Frameworks. For a platform-agnostic, side-by-side comparison that doesn't commit to a language, see Multi-Agent Frameworks.
## Frequently asked questions
### Are these frameworks free?
All six are Apache 2.0 or MIT open-source, with no per-seat costs from the framework. You'll pay for the LLM API calls (OpenAI, Anthropic, etc.) regardless of language. Spring AI is from Broadcom but free; LangChain4j is community-driven and free; Mastra is venture-funded with a free open-source core plus an optional cloud offering. Candle, Axum, and FastHTML are pure OSS.
### How does Spring AI compare to LangChain4j?
Spring AI is the Spring-team-built canonical answer — tight integration with Spring Boot autoconfig, Spring Data, Spring Security. LangChain4j is community-driven and ports the LangChain conceptual model (chains, agents, callbacks) to Java with broader feature coverage. Pick Spring AI if you're a Spring shop and want canonical idioms; LangChain4j if you want the LangChain mental model in JVM with maximum integration breadth.
### Will any of these work with Claude Code or Cursor?
These are runtime frameworks, not editor integrations. The frameworks themselves run inside your service (Spring Boot, Axum binary, Mastra app). Your editor (Claude Code, Cursor) helps write the framework code. Cursor and Claude Code both have decent Java/Rust/TS support, with Spring AI specifically getting good autocomplete from JetBrains' Spring IDE plugin.
### How is this different from the Python Agent Frameworks pack?
Python Agent Frameworks is Python-only — five frameworks all running in CPython. This pack is the opposite: it surfaces the non-Python options. If you're already committed to Python, that pack is denser; if you're on Java/Rust/TS or picking the language, this pack is the right starting point. The two packs are complementary, not overlapping.
### What's the operational gotcha with Mastra?
Mastra has a strong opinion that workflows live alongside agents in the same project, with shared types. This is great for a clean codebase but tightly couples your workflow engine to your agent framework version. If you upgrade Mastra and a workflow breaks, you can't roll back the agent without rolling back the workflow. Plan a separate testing stage for Mastra version bumps.
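The simplest way to enforce that discipline is an exact version pin. A `package.json` fragment sketching this; `@mastra/core` is assumed as the core package name, and the version shown is a placeholder for whichever release you have actually verified. The point is the exact version with no `^` or `~` range, so upgrades only happen when you choose them:

```json
{
  "dependencies": {
    "@mastra/core": "0.10.6"
  }
}
```

With an exact pin, a Mastra bump becomes an explicit change you can route through the separate testing stage suggested above, rather than something a routine `npm install` pulls in silently.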