# txtai — All-in-One Embeddings Database

> txtai is an all-in-one embeddings database for semantic search, LLM orchestration, and language model workflows. 10.4K+ GitHub stars. Vector search + SQL + RAG pipelines. Apache 2.0.

## Install

```bash
pip install txtai
```

## Quick Use

Semantic search in three lines:

```python
from txtai import Embeddings

# Build an in-memory index and run a semantic query
embeddings = Embeddings()
embeddings.index(["AI is transforming search", "Vector databases are fast", "Python is great for ML"])

results = embeddings.search("machine learning", 1)
print(results)  # e.g. [(2, 0.85)] — a list of (id, score) tuples
```

---

## Intro

txtai is an all-in-one embeddings database that combines semantic search, LLM orchestration, and language model workflows in a single library. With 10,400+ GitHub stars and an Apache 2.0 license, txtai provides vector search with SQL support, RAG pipelines, extractive QA, labeling, transcription, translation, summarization, and workflow automation. It supports local and cloud LLMs and can be deployed as an API server. txtai is designed to be the simplest way to build semantic search and AI-powered applications.

**Best for**: Developers who want semantic search + LLM pipelines in one lightweight library
**Works with**: Claude Code, OpenAI Codex, Cursor, Gemini CLI, Windsurf
**Features**: Vector search, SQL, RAG, QA, translation, summarization

---

## Key Features

- **Embeddings database**: Vector search with SQL query support
- **RAG pipelines**: Retrieval-augmented generation out of the box
- **LLM orchestration**: Chain multiple AI models in workflows
- **Extractive QA**: Answer questions from documents
- **Translation + summarization**: Built-in NLP pipelines
- **API server**: Deploy as a REST API with one command
- **Local + cloud**: Works with local models and cloud providers

---

### FAQ

**Q: What is txtai?**
A: txtai is an all-in-one embeddings database with 10.4K+ stars for semantic search, RAG, and LLM workflows. Vector search + SQL + NLP pipelines in one library. Apache 2.0.

**Q: How do I install txtai?**
A: Run `pip install txtai`. Create an `Embeddings()` instance, `index()` your data, and `search()` semantically.

---

## Source & Thanks

> Created by [NeuML](https://github.com/neuml). Licensed under Apache 2.0.
> [neuml/txtai](https://github.com/neuml/txtai) — 10,400+ GitHub stars

---

Source: https://tokrepo.com/en/workflows/b732febc-d945-4500-92c6-f90049c36c56
Author: Script Depot
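The "deploy as a REST API with one command" feature can be sketched with a YAML config plus uvicorn. The file name and model path are illustrative:

```yaml
# app.yml — embeddings index served over REST
embeddings:
  path: sentence-transformers/all-MiniLM-L6-v2
  content: true
```

```bash
CONFIG=app.yml uvicorn "txtai.api:app"
```

This exposes index and search operations over HTTP; see the txtai API documentation for the full set of endpoints and configuration options.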