Scripts · Mar 29, 2026 · 1 min read

LlamaIndex — Data Framework for LLM Applications

Connect your data to large language models. The leading framework for RAG, document indexing, knowledge graphs, and structured data extraction.

Introduction

LlamaIndex (formerly GPT Index) is the go-to framework for connecting custom data to LLMs. Ingest data from 160+ sources, build indexes for fast retrieval, and create production-ready RAG pipelines.

Best for: Document Q&A, knowledge base search, data extraction, enterprise RAG
Works with: OpenAI, Anthropic, Google, Ollama, HuggingFace


Key Features

Data Connectors

Ingest from PDFs, databases, APIs, Notion, Slack, Google Drive, and 160+ sources:

from llama_index.core import SimpleDirectoryReader

# Load every supported file in ./data into Document objects
documents = SimpleDirectoryReader("./data").load_data()

Indexing & Retrieval

from llama_index.core import VectorStoreIndex

# Embed the documents into a vector index, then query it in natural language
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
response = query_engine.query("What is the revenue trend?")

Agents

Build data-aware agents that can query multiple data sources, use tools, and maintain conversation state.


FAQ

Q: What is LlamaIndex? A: LlamaIndex is a data framework for connecting custom data sources to large language models, covering RAG pipelines, document indexing, knowledge graphs, and structured data extraction.

Q: How do I install LlamaIndex? A: Install the package from PyPI; most setups take under two minutes.


Source and acknowledgments

Created by LlamaIndex. Licensed under MIT. run-llama/llama_index — 38K+ GitHub stars
