LangChain4j — LLM Integration for Java
LangChain4j integrates 20+ LLM providers and 30+ vector stores into Java apps. 11.4K+ stars. Unified API, RAG, MCP, Spring Boot. Apache 2.0.
What it is
LangChain4j is an open-source Java framework that brings LLM integration patterns to the Java ecosystem. It provides a unified API across 20+ LLM providers (OpenAI, Anthropic, Google, Ollama, and others) and 30+ vector store backends. The framework supports RAG pipelines, tool use, the Model Context Protocol (MCP), structured output, and Spring Boot auto-configuration.
The project targets Java and Kotlin developers building AI-powered enterprise applications who need the same capabilities that Python developers get from LangChain, but in a Java-native form factor.
How it saves time or tokens
LangChain4j eliminates the need to write provider-specific HTTP clients for each LLM API. Switching from OpenAI to Anthropic requires changing one configuration line, not rewriting API integration code. The declarative AI Services interface lets developers define AI capabilities as Java interfaces with annotations, reducing boilerplate to near-zero. RAG pipelines are assembled from composable components rather than built from scratch.
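For example, a provider swap touches only the model construction line. A minimal sketch, assuming the langchain4j-open-ai and langchain4j-anthropic modules are on the classpath (the model names are illustrative):

import dev.langchain4j.model.anthropic.AnthropicChatModel;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.openai.OpenAiChatModel;

// Both providers implement the same ChatModel interface,
// so downstream code (AI Services, RAG, tools) is unchanged.
ChatModel openAi = OpenAiChatModel.builder()
        .apiKey(System.getenv("OPENAI_API_KEY"))
        .modelName("gpt-4o-mini")
        .build();

ChatModel anthropic = AnthropicChatModel.builder()
        .apiKey(System.getenv("ANTHROPIC_API_KEY"))
        .modelName("claude-3-5-sonnet-20241022")
        .build();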
How to use
- Add the Maven dependencies (the core library plus a provider module, here OpenAI):
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j</artifactId>
    <version>1.0.0</version>
</dependency>
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j-open-ai</artifactId>
    <version>1.0.0</version>
</dependency>
- Create a chat model and send a message:
ChatModel model = OpenAiChatModel.builder()
        .apiKey("sk-...")
        .modelName("gpt-4o-mini")
        .build();
String answer = model.chat("Hello!");
System.out.println(answer);
- Define an AI Service interface for structured interaction:
interface Assistant {

    @SystemMessage("You are a helpful coding assistant.")
    String chat(String userMessage);
}
Assistant assistant = AiServices.create(Assistant.class, model);
String response = assistant.chat("Explain Java records.");
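AI Services can also return Java types directly, which is how the structured output mentioned above surfaces in practice. A minimal sketch, assuming the Person record and PersonExtractor interface are your own application code:

import dev.langchain4j.service.UserMessage;

// Hypothetical types: the declared return type drives the structured-output parsing
record Person(String name, int age) {}

interface PersonExtractor {
    @UserMessage("Extract a person from the following text: {{it}}")
    Person extract(String text);
}

PersonExtractor extractor = AiServices.create(PersonExtractor.class, model);
Person person = extractor.extract("John is 42 years old.");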
Example
// RAG pipeline with in-memory embedding store
// (reuses `model` and the `Assistant` interface defined above)
import dev.langchain4j.data.document.Document;
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.model.openai.OpenAiEmbeddingModel;
import dev.langchain4j.rag.content.retriever.ContentRetriever;
import dev.langchain4j.rag.content.retriever.EmbeddingStoreContentRetriever;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.store.embedding.EmbeddingStoreIngestor;
import dev.langchain4j.store.embedding.inmemory.InMemoryEmbeddingStore;

EmbeddingModel embeddingModel = OpenAiEmbeddingModel.builder()
        .apiKey("sk-...").modelName("text-embedding-3-small").build();
InMemoryEmbeddingStore<TextSegment> store = new InMemoryEmbeddingStore<>();

// Ingest documents: embed the text and store the vectors
Document doc = Document.from("LangChain4j supports 20+ LLM providers.");
EmbeddingStoreIngestor.builder()
        .embeddingModel(embeddingModel).embeddingStore(store).build()
        .ingest(doc);

// Query with RAG; the retriever must embed queries with the same model used for ingestion
ContentRetriever retriever = EmbeddingStoreContentRetriever.builder()
        .embeddingStore(store).embeddingModel(embeddingModel).build();
Assistant ragAssistant = AiServices.builder(Assistant.class)
        .chatModel(model)
        .contentRetriever(retriever)
        .build();
String answer = ragAssistant.chat("How many providers?");
Related on TokRepo
- RAG tools directory -- Retrieval-augmented generation frameworks and tools
- AI tools for coding -- Developer tools for AI application development
Common pitfalls
- Version 1.0.0 introduced breaking API changes from the 0.x series; migration guides are available in the documentation
- The AI Services interface uses runtime proxy generation; ensure your build tool does not strip reflection metadata
- Spring Boot auto-configuration requires the separate langchain4j-spring-boot-starter dependency, not just the core library
Frequently Asked Questions
How does LangChain4j relate to the Python LangChain project?
LangChain4j is inspired by Python LangChain but is a separate project built natively for Java. It follows Java conventions with interfaces, annotations, and dependency injection rather than Python's decorator and chain patterns. The feature set covers similar ground: LLM integration, RAG, tool use, and agents.
Which LLM providers are supported?
LangChain4j supports OpenAI, Anthropic, Google Vertex AI, Google Gemini, Azure OpenAI, Hugging Face, Ollama, Mistral, Cohere, and more. Each provider has a dedicated module. Switching providers requires changing the model instantiation line without modifying business logic.
Does LangChain4j work with Spring Boot?
Yes. The langchain4j-spring-boot-starter provides auto-configuration for chat models, embedding models, and vector stores. You configure LLM providers in application.yml and inject them as Spring beans. This integrates naturally with existing Spring Boot applications.
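A minimal injection sketch, assuming a provider starter is on the classpath with credentials set in application.yml (the controller and endpoint are illustrative):

import dev.langchain4j.model.chat.ChatModel;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

@RestController
class ChatController {

    // Auto-configured by the starter from application.yml properties
    private final ChatModel chatModel;

    ChatController(ChatModel chatModel) {
        this.chatModel = chatModel;
    }

    @GetMapping("/chat")
    String chat(@RequestParam String question) {
        return chatModel.chat(question);
    }
}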
Can I use local models through Ollama?
Yes. The langchain4j-ollama module connects to a local Ollama instance. Configure the base URL and model name, and LangChain4j routes requests to your local models with the same API as cloud providers.
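A minimal sketch, assuming Ollama is running on its default port with a model already pulled (the model name is an example):

import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.ollama.OllamaChatModel;

// Same ChatModel API as the cloud providers, pointed at localhost
ChatModel localModel = OllamaChatModel.builder()
        .baseUrl("http://localhost:11434")  // default Ollama endpoint
        .modelName("llama3")                // any model pulled with `ollama pull`
        .build();
String reply = localModel.chat("Hello from a local model!");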
Which vector stores are supported?
LangChain4j supports 30+ vector stores including Pinecone, Weaviate, Milvus, Qdrant, Chroma, pgvector, Elasticsearch, Redis, and an in-memory store for development. Each has a dedicated module with a consistent API.
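Because each module implements the same EmbeddingStore interface, swapping the in-memory store from the example above for a production backend is a construction-line change. A sketch assuming the langchain4j-pgvector module and a local Postgres instance (all connection details are placeholders):

import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.store.embedding.EmbeddingStore;
import dev.langchain4j.store.embedding.pgvector.PgVectorEmbeddingStore;

// Drop-in replacement for InMemoryEmbeddingStore in the RAG example above
EmbeddingStore<TextSegment> store = PgVectorEmbeddingStore.builder()
        .host("localhost")
        .port(5432)
        .database("rag")
        .user("postgres")
        .password("postgres")
        .table("embeddings")
        .dimension(1536)  // must match the embedding model's vector size
        .build();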
Citations (3)
- LangChain4j GitHub — LangChain4j integrates 20+ LLM providers and 30+ vector stores
- LangChain4j Documentation — AI Services interface for declarative LLM interaction in Java
- LangChain4j Spring Boot Guide — Spring Boot auto-configuration support
Source & Thanks
langchain4j/langchain4j — 11,400+ GitHub stars
Related Assets
Claude-Flow — Multi-Agent Orchestration for Claude Code
Layers swarm and hive-mind multi-agent orchestration on top of Claude Code with 64 specialized agents, SQLite memory, and parallel execution.
ccusage — Real-Time Token Cost Tracker for Claude Code
CLI that reads ~/.claude logs and breaks down Claude Code token spend by day, session, and project — pluggable into your statusline.
SuperClaude — Workflow Framework for Claude Code
Adds 16+ slash commands, 9 cognitive personas, and a smart flag system to Claude Code in one pipx install.