Sentence Transformers — State-of-the-Art Embeddings
Sentence Transformers computes text embeddings for semantic search, similarity, and reranking. 18.5K+ GitHub stars. 15,000+ pre-trained models covering dense, sparse, and reranker architectures, many multilingual. Apache 2.0.
What it is
Sentence Transformers is a Python library for computing dense text embeddings using transformer models. It powers semantic search, text similarity, clustering, and reranking pipelines. The library provides access to over 15,000 pre-trained models on Hugging Face, covering dense embeddings, sparse embeddings, and cross-encoder rerankers across multiple languages.
It is built for ML engineers, search engineers, and developers building RAG pipelines, recommendation systems, or any application that needs to understand text meaning beyond keyword matching.
How it saves time or tokens
Sentence Transformers provides a two-line API for generating embeddings. Instead of writing custom model loading, tokenization, and pooling code, you call model.encode() and get vectors ready for cosine similarity or vector database insertion (pass normalize_embeddings=True if you need unit-length vectors). Pre-trained models eliminate the need for training from scratch.
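A minimal sketch of that call, assuming the all-MiniLM-L6-v2 checkpoint used elsewhere on this page:
from sentence_transformers import SentenceTransformer
model = SentenceTransformer('all-MiniLM-L6-v2')
# normalize_embeddings=True returns unit-length vectors,
# so a plain dot product equals cosine similarity
embeddings = model.encode(['first text', 'second text'],
                          normalize_embeddings=True)
print(embeddings[0] @ embeddings[1])  # cosine similarity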
How to use
- Install the library: pip install sentence-transformers.
- Load a pre-trained model and encode your texts.
- Use the resulting vectors for search, similarity scoring, or clustering.
Example
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim
# Load a pre-trained model
model = SentenceTransformer('all-MiniLM-L6-v2')
# Encode sentences
sentences = [
'How to deploy a Docker container',
'Docker container deployment guide',
'Best pizza recipes in New York'
]
embeddings = model.encode(sentences)
# Compute similarity
print(cos_sim(embeddings[0], embeddings[1])) # high similarity
print(cos_sim(embeddings[0], embeddings[2])) # low similarity
Related on TokRepo
- AI tools for RAG -- Retrieval-augmented generation tools and pipelines.
- AI tools for research -- Research and knowledge retrieval workflows.
Common pitfalls
- Model choice matters. all-MiniLM-L6-v2 is fast but less accurate than larger models like all-mpnet-base-v2. Benchmark on your data before committing.
- Embeddings from different models are not compatible. You cannot mix vectors from MiniLM with vectors from mpnet in the same index.
- Long texts get truncated to the model's max token length (typically 256 or 512 tokens). Chunk long documents before encoding; a chunking sketch follows this list.
- Cross-encoder rerankers are slow because they process query-document pairs individually. Use them only for reranking a short candidate list, not for initial retrieval.
- GPU acceleration requires PyTorch with CUDA. CPU inference works but is significantly slower for large batches.
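A minimal chunking sketch for the truncation pitfall above; the 200-word window, 50-word overlap, and report.txt path are illustrative placeholders, not library defaults:
from sentence_transformers import SentenceTransformer

def chunk_words(text, size=200, overlap=50):
    """Split text into overlapping word windows (illustrative sizes)."""
    words = text.split()
    step = size - overlap
    return [' '.join(words[i:i + size]) for i in range(0, len(words), step)]

model = SentenceTransformer('all-MiniLM-L6-v2')
long_document = open('report.txt').read()  # any document past the token limit
chunk_embeddings = model.encode(chunk_words(long_document))  # one vector per chunk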
Frequently Asked Questions
What is the difference between dense and sparse embeddings?
Dense embeddings represent text as fixed-length floating-point vectors (e.g., 384 or 768 dimensions). Sparse embeddings represent text as high-dimensional vectors with mostly zero values, similar to TF-IDF but learned. Dense embeddings capture semantic meaning; sparse embeddings excel at exact term matching.
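To make "dense" concrete, a quick inspection sketch (all-MiniLM-L6-v2 produces 384-dimensional vectors):
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('all-MiniLM-L6-v2')
vec = model.encode('neural search')
print(vec.shape)            # (384,): fixed-length dense vector
print(np.mean(vec == 0.0))  # ~0.0: nearly every dimension is non-zero,
                            # unlike a sparse, mostly-zero representation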
Which model should I use?
For English, all-MiniLM-L6-v2 offers a good speed-accuracy trade-off. For higher accuracy, use all-mpnet-base-v2. For multilingual search, use paraphrase-multilingual-MiniLM-L12-v2. The best choice depends on your latency and accuracy requirements.
Can I fine-tune a model on my own data?
Yes. The library provides training utilities for fine-tuning on your domain data. You need pairs of similar/dissimilar sentences. Fine-tuning on domain data typically improves retrieval quality by 5-15% compared to generic pre-trained models.
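A minimal fine-tuning sketch using the library's classic model.fit API (newer releases also ship a SentenceTransformerTrainer); the two example pairs are placeholders for your domain data:
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('all-MiniLM-L6-v2')

# Pairs of semantically similar sentences from your domain
train_examples = [
    InputExample(texts=['How do I reset my password?',
                        'Steps to recover account access']),
    InputExample(texts=['Invoice payment failed',
                        'Billing charge was declined']),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

# Treats the other in-batch pairs as negatives
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)],
          epochs=1, warmup_steps=100)
model.save('models/my-domain-model')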
How do I use Sentence Transformers with a vector database?
Encode your documents with model.encode(), then insert the resulting vectors into a vector database (Pinecone, Weaviate, Qdrant, Milvus). At query time, encode the query with the same model and perform a nearest-neighbor search against the stored vectors.
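Before committing to a specific database, the same flow can be sketched in memory with the library's util.semantic_search; the three-document corpus here is illustrative:
from sentence_transformers import SentenceTransformer
from sentence_transformers.util import semantic_search

model = SentenceTransformer('all-MiniLM-L6-v2')

corpus = ['Docker deployment guide', 'Pizza recipes', 'Kubernetes basics']
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

# At query time: encode with the SAME model, then nearest-neighbor search
query_embedding = model.encode('how to ship containers', convert_to_tensor=True)
hits = semantic_search(query_embedding, corpus_embeddings, top_k=2)[0]
for hit in hits:
    print(corpus[hit['corpus_id']], hit['score'])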
Does the library support reranking?
Yes. Cross-encoder models in the library score query-document pairs for relevance. Use a bi-encoder for initial retrieval (fast) and a cross-encoder for reranking the top results (accurate). This two-stage approach balances speed and quality.
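A minimal sketch of the reranking stage, assuming the public cross-encoder/ms-marco-MiniLM-L-6-v2 checkpoint and a short candidate list from the bi-encoder stage:
from sentence_transformers import CrossEncoder

reranker = CrossEncoder('cross-encoder/ms-marco-MiniLM-L-6-v2')

query = 'How to deploy a Docker container'
candidates = ['Docker container deployment guide',
              'Kubernetes pod scheduling',
              'Best pizza recipes in New York']

# Score each (query, document) pair, then sort by relevance
scores = reranker.predict([(query, doc) for doc in candidates])
ranked = sorted(zip(candidates, scores), key=lambda x: x[1], reverse=True)
print(ranked[0])  # most relevant document first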
Citations (3)
- Sentence Transformers GitHub — Sentence Transformers provides 15,000+ pre-trained embedding models
- Sentence Transformers Docs — Dense and sparse embedding support with cross-encoder rerankers
- Hugging Face — Pre-trained models hosted on Hugging Face Hub
Source & Thanks
Created by UKP Lab. Licensed under Apache 2.0. UKPLab/sentence-transformers — 18,500+ GitHub stars
Related Assets
NAPI-RS — Build Node.js Native Addons in Rust
Write high-performance Node.js native modules in Rust with automatic TypeScript type generation and cross-platform prebuilt binaries.
Mamba — Fast Cross-Platform Package Manager
A drop-in conda replacement written in C++ that resolves environments in seconds instead of minutes.
Plasmo — The Browser Extension Framework
Build, test, and publish browser extensions for Chrome, Firefox, and Edge using React or Vue with hot-reload and automatic manifest generation.