Mar 29, 2026 · 1 min read

LlamaIndex — Data Framework for LLM Applications

Connect your data to large language models. The leading framework for RAG, document indexing, knowledge graphs, and structured data extraction.

Introduction

LlamaIndex (formerly GPT Index) is the go-to framework for connecting custom data to LLMs. Ingest data from 160+ sources, build indexes for fast retrieval, and create production-ready RAG pipelines.

Best for: Document Q&A, knowledge base search, data extraction, enterprise RAG
Works with: OpenAI, Anthropic, Google, Ollama, HuggingFace


Key Features

Data Connectors

Ingest from PDFs, databases, APIs, Notion, Slack, Google Drive, and 160+ sources:

from llama_index.core import SimpleDirectoryReader

# Load every supported file under ./data into Document objects
documents = SimpleDirectoryReader("./data").load_data()
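Conceptually, `load_data()` walks a directory and turns each file into a document object carrying text plus metadata. A stdlib-only sketch of that behavior (the `Document` dataclass and `load_directory` helper here are illustrative stand-ins, not LlamaIndex's actual classes):

```python
from dataclasses import dataclass, field
from pathlib import Path

@dataclass
class Document:
    # Illustrative stand-in for a LlamaIndex document: text plus metadata.
    text: str
    metadata: dict = field(default_factory=dict)

def load_directory(path: str) -> list[Document]:
    """Read every file under `path` into a Document, recording its filename."""
    docs = []
    for p in sorted(Path(path).rglob("*")):
        if p.is_file():
            docs.append(Document(text=p.read_text(encoding="utf-8"),
                                 metadata={"file_name": p.name}))
    return docs
```

The real reader additionally dispatches on file type (PDF parsing, spreadsheets, and so on), which is what the 160+ connectors provide.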

Indexing & Retrieval

from llama_index.core import VectorStoreIndex

# Embed the documents into a vector index, then query it in natural language
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What is the revenue trend?")
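Under the hood, the index embeds each document, embeds the query, and returns the closest matches for the LLM to answer from. A toy illustration of that retrieval step using word-count vectors and cosine similarity (real deployments use learned embeddings, not word counts):

```python
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words count vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], top_k: int = 1) -> list[str]:
    """Rank documents by similarity to the query and return the top_k."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]
```

In a RAG pipeline the retrieved passages are then stuffed into the LLM prompt as context; the query engine above handles both steps for you.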

Agents

Build data-aware agents that can query multiple data sources, use tools, and maintain conversation state.
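At its core, a data-aware agent routes each request to an appropriate tool and keeps conversation state between turns. A minimal stdlib-only sketch of that loop (the keyword-based router and tool names here are illustrative; LlamaIndex agents use the LLM itself to choose tools):

```python
from typing import Callable

class MiniAgent:
    """Toy agent: dispatches queries to registered tools and records history."""

    def __init__(self) -> None:
        self.tools: dict[str, Callable[[str], str]] = {}
        self.history: list[tuple[str, str]] = []

    def add_tool(self, keyword: str, fn: Callable[[str], str]) -> None:
        # Route any query mentioning `keyword` to `fn`.
        self.tools[keyword] = fn

    def chat(self, query: str) -> str:
        for keyword, fn in self.tools.items():
            if keyword in query.lower():
                answer = fn(query)
                break
        else:
            answer = "No matching data source."
        self.history.append((query, answer))  # conversation state
        return answer
```

A real agent replaces the keyword match with an LLM tool-selection step and can chain several tool calls per turn, but the dispatch-and-remember structure is the same.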


FAQ

Q: What is LlamaIndex? A: LlamaIndex is an open-source data framework for connecting your own data to large language models, covering RAG pipelines, document indexing, knowledge graphs, and structured data extraction.

Q: How do I install LlamaIndex? A: Install the starter package with `pip install llama-index`, which bundles the core library and the OpenAI integrations. Setup typically takes under 2 minutes.


Source and Acknowledgments

Created by LlamaIndex. Licensed under MIT. run-llama/llama_index — 38K+ GitHub stars

