Skills · Apr 1, 2026 · 1 min read

LangChain4j — LLM Integration for Java

LangChain4j integrates 20+ LLM providers and 30+ vector stores into Java apps. 11.4K+ stars. Unified API, RAG, MCP, Spring Boot. Apache 2.0.

TL;DR
LangChain4j provides a unified Java API across 20+ LLM providers with RAG, tool use, and Spring Boot support.
§01

What it is

LangChain4j is an open-source Java framework that brings LLM integration patterns to the Java ecosystem. It provides a unified API across 20+ LLM providers (OpenAI, Anthropic, Google, Ollama, and others) and 30+ vector store backends. The framework supports RAG pipelines, tool use, MCP protocol, structured output, and Spring Boot auto-configuration.

The project targets Java and Kotlin developers building AI-powered enterprise applications who need the same capabilities that Python developers get from LangChain, but in a Java-native form factor.

§02

How it saves time or tokens

LangChain4j eliminates the need to write provider-specific HTTP clients for each LLM API. Switching from OpenAI to Anthropic requires changing one configuration line, not rewriting API integration code. The declarative AI Services interface lets developers define AI capabilities as Java interfaces with annotations, reducing boilerplate to near-zero. RAG pipelines are assembled from composable components rather than built from scratch.
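To make the one-line switch concrete, here is a minimal sketch of swapping OpenAI for Anthropic behind the shared ChatModel interface. It assumes the langchain4j-open-ai and langchain4j-anthropic modules are on the classpath; the model names are illustrative placeholders, not recommendations from this article.

```java
import dev.langchain4j.model.anthropic.AnthropicChatModel;
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.openai.OpenAiChatModel;

public class ProviderSwap {

    // Business logic depends only on the ChatModel interface
    static String ask(ChatModel model, String question) {
        return model.chat(question);
    }

    public static void main(String[] args) {
        // Provider A: OpenAI
        ChatModel openAi = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .build();

        // Provider B: Anthropic — only the construction line differs
        ChatModel anthropic = AnthropicChatModel.builder()
                .apiKey(System.getenv("ANTHROPIC_API_KEY"))
                .modelName("claude-sonnet-4-20250514") // placeholder model name
                .build();

        System.out.println(ask(anthropic, "Hello!"));
    }
}
```

Because `ask` is written against the interface, no call site changes when the provider does.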

§03

How to use

1. Add the Maven dependency:
<dependency>
    <groupId>dev.langchain4j</groupId>
    <artifactId>langchain4j</artifactId>
    <version>1.0.0</version>
</dependency>
2. Create a chat model and send a message (the 1.0 API uses the ChatModel interface and a builder rather than the removed 0.x withApiKey shortcut):

ChatModel model = OpenAiChatModel.builder()
        .apiKey(System.getenv("OPENAI_API_KEY"))
        .build();
String answer = model.chat("Hello!");
System.out.println(answer);
3. Define an AI Service interface for structured interaction:
interface Assistant {
    @SystemMessage("You are a helpful coding assistant.")
    String chat(String userMessage);
}

Assistant assistant = AiServices.create(Assistant.class, model);
String response = assistant.chat("Explain Java records.");
§04

Example

// RAG pipeline with in-memory embedding store
import dev.langchain4j.data.document.Document;
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.model.embedding.EmbeddingModel;
import dev.langchain4j.model.openai.OpenAiEmbeddingModel;
import dev.langchain4j.rag.content.retriever.ContentRetriever;
import dev.langchain4j.rag.content.retriever.EmbeddingStoreContentRetriever;
import dev.langchain4j.service.AiServices;
import dev.langchain4j.store.embedding.EmbeddingStoreIngestor;
import dev.langchain4j.store.embedding.inmemory.InMemoryEmbeddingStore;

EmbeddingModel embeddingModel = OpenAiEmbeddingModel.builder()
        .apiKey(System.getenv("OPENAI_API_KEY"))
        .build();
InMemoryEmbeddingStore<TextSegment> store = new InMemoryEmbeddingStore<>();

// Ingest documents: split, embed, and store
Document doc = Document.from("LangChain4j supports 20+ LLM providers.");
EmbeddingStoreIngestor.builder()
        .embeddingModel(embeddingModel)
        .embeddingStore(store)
        .build()
        .ingest(doc);

// Query with RAG: the retriever embeds the query and fetches relevant segments
ContentRetriever retriever = EmbeddingStoreContentRetriever.builder()
        .embeddingStore(store)
        .embeddingModel(embeddingModel)
        .build();
Assistant ragAssistant = AiServices.builder(Assistant.class)
        .chatModel(model)
        .contentRetriever(retriever)
        .build();
String answer = ragAssistant.chat("How many providers?");
§05

Common pitfalls

  • Version 1.0.0 introduced breaking API changes from the 0.x series; migration guides are available in the documentation
  • The AI Services interface uses runtime proxy generation; ensure your build tool does not strip reflection metadata
  • Spring Boot auto-configuration requires the separate langchain4j-spring-boot-starter dependency, not just the core library
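To make the first pitfall concrete, a before/after sketch of the kind of rename the 1.0 release introduced. The renames shown here are commonly cited examples; consult the official migration guide for the complete list.

```java
// 0.x style (removed in 1.0):
//
//   ChatLanguageModel model = OpenAiChatModel.withApiKey("sk-...");
//   String answer = model.generate("Hello!");

// 1.0 style: ChatLanguageModel became ChatModel, generate(String) became chat(String),
// and models are constructed via builders:
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.openai.OpenAiChatModel;

public class Migration {
    public static void main(String[] args) {
        ChatModel model = OpenAiChatModel.builder()
                .apiKey(System.getenv("OPENAI_API_KEY"))
                .build();
        String answer = model.chat("Hello!");
        System.out.println(answer);
    }
}
```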

Frequently Asked Questions

How does LangChain4j compare to LangChain in Python?

LangChain4j is inspired by Python LangChain but is a separate project built natively for Java. It follows Java conventions with interfaces, annotations, and dependency injection rather than Python's decorator and chain patterns. The feature set covers similar ground: LLM integration, RAG, tool use, and agents.

Which LLM providers does LangChain4j support?

LangChain4j supports OpenAI, Anthropic, Google Vertex AI, Google Gemini, Azure OpenAI, Hugging Face, Ollama, Mistral, Cohere, and more. Each provider has a dedicated module. Switching providers requires changing the model instantiation line without modifying business logic.

Does LangChain4j work with Spring Boot?

Yes. The langchain4j-spring-boot-starter provides auto-configuration for chat models, embedding models, and vector stores. You configure LLM providers in application.yml and inject them as Spring beans. This integrates naturally with existing Spring Boot applications.
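A minimal sketch of the Spring Boot setup described above. The application.yml property names follow the starter's `langchain4j.open-ai.chat-model.*` convention but should be verified against the starter documentation for your version; the service class and model name are illustrative.

```java
// application.yml (property names are assumptions — verify for your version):
//
//   langchain4j:
//     open-ai:
//       chat-model:
//         api-key: ${OPENAI_API_KEY}
//         model-name: gpt-4o-mini

import dev.langchain4j.model.chat.ChatModel;
import org.springframework.stereotype.Service;

@Service
public class SupportService {

    // The starter auto-configures a ChatModel bean from application.yml
    private final ChatModel chatModel;

    public SupportService(ChatModel chatModel) {
        this.chatModel = chatModel;
    }

    public String answer(String question) {
        return chatModel.chat(question);
    }
}
```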

Can I use LangChain4j with local LLMs via Ollama?

Yes. The langchain4j-ollama module connects to a local Ollama instance. Configure the base URL and model name, and LangChain4j routes requests to your local models with the same API as cloud providers.
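As a sketch of the Ollama setup, assuming the langchain4j-ollama module and a locally running Ollama instance (the model name is whatever you have pulled, e.g. via `ollama pull llama3`):

```java
import dev.langchain4j.model.chat.ChatModel;
import dev.langchain4j.model.ollama.OllamaChatModel;

public class LocalChat {
    public static void main(String[] args) {
        ChatModel model = OllamaChatModel.builder()
                .baseUrl("http://localhost:11434") // Ollama's default local endpoint
                .modelName("llama3")               // any locally pulled model
                .build();

        // Same ChatModel API as the cloud providers
        System.out.println(model.chat("Hello from a local model!"));
    }
}
```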

What vector stores are supported for RAG?

LangChain4j supports 30+ vector stores including Pinecone, Weaviate, Milvus, Qdrant, Chroma, pgvector, Elasticsearch, Redis, and an in-memory store for development. Each has a dedicated module with a consistent API.
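Because every backend implements the same EmbeddingStore interface, moving from the in-memory development store to a production store is a construction change. A hedged sketch assuming the langchain4j-pg-vector module; all connection values are placeholders:

```java
import dev.langchain4j.data.segment.TextSegment;
import dev.langchain4j.store.embedding.EmbeddingStore;
import dev.langchain4j.store.embedding.inmemory.InMemoryEmbeddingStore;
import dev.langchain4j.store.embedding.pgvector.PgVectorEmbeddingStore;

public class Stores {
    public static void main(String[] args) {
        // Development: everything in memory, nothing persisted
        EmbeddingStore<TextSegment> dev = new InMemoryEmbeddingStore<>();

        // Production: pgvector-backed store (connection details are placeholders)
        EmbeddingStore<TextSegment> prod = PgVectorEmbeddingStore.builder()
                .host("localhost")
                .port(5432)
                .database("rag")
                .user("postgres")
                .password("postgres")
                .table("embeddings")
                .dimension(1536) // must match the embedding model's vector size
                .build();
    }
}
```

Ingestion and retrieval code written against EmbeddingStore works with either store unchanged.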


Source & Thanks

langchain4j/langchain4j — 11,400+ GitHub stars

