
Sentence Transformers — State-of-the-Art Embeddings

Sentence Transformers computes text embeddings for semantic search, similarity, and reranking. 18.5K+ GitHub stars, 15,000+ pre-trained models covering dense, sparse, and reranker architectures across many languages. Apache 2.0 licensed.

TL;DR
Sentence Transformers computes text embeddings for semantic search, similarity, and reranking with 15,000+ pre-trained models available.
§01

What it is

Sentence Transformers is a Python library for computing dense text embeddings using transformer models. It powers semantic search, text similarity, clustering, and reranking pipelines. The library provides access to over 15,000 pre-trained models on Hugging Face, covering dense embeddings, sparse embeddings, and cross-encoder rerankers across multiple languages.

It is built for ML engineers, search engineers, and developers building RAG pipelines, recommendation systems, or any application that needs to understand text meaning beyond keyword matching.

§02

How it saves time or tokens

Sentence Transformers provides a two-line API for generating embeddings. Instead of writing custom model loading, tokenization, and pooling code, you call model.encode() and get vectors ready for cosine similarity or vector database insertion (pass normalize_embeddings=True when you want unit-length vectors). Pre-trained models eliminate the need for training from scratch.
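
As a minimal sketch of that workflow (the sentences are placeholders), encoding with normalize_embeddings=True returns unit-length vectors, so a plain dot product equals cosine similarity:

from sentence_transformers import SentenceTransformer

# Load a pre-trained model and encode with unit-normalized output
model = SentenceTransformer('all-MiniLM-L6-v2')
embeddings = model.encode(
    ['first placeholder sentence', 'second placeholder sentence'],
    normalize_embeddings=True,
)
print(embeddings.shape)  # (2, 384) for this model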

§03

How to use

  1. Install the library: pip install sentence-transformers.
  2. Load a pre-trained model and encode your texts.
  3. Use the resulting vectors for search, similarity scoring, or clustering.
§04

Example

from sentence_transformers import SentenceTransformer
from sentence_transformers.util import cos_sim

# Load a pre-trained model
model = SentenceTransformer('all-MiniLM-L6-v2')

# Encode sentences
sentences = [
    'How to deploy a Docker container',
    'Docker container deployment guide',
    'Best pizza recipes in New York'
]
embeddings = model.encode(sentences)

# Compute similarity
print(cos_sim(embeddings[0], embeddings[1]))  # high similarity
print(cos_sim(embeddings[0], embeddings[2]))  # low similarity
§05

Common pitfalls

  • Model choice matters. all-MiniLM-L6-v2 is fast but less accurate than larger models like all-mpnet-base-v2. Benchmark on your data before committing.
  • Embeddings from different models are not compatible. You cannot mix vectors from MiniLM with vectors from mpnet in the same index.
  • Long texts get truncated to the model's max token length (typically 256 or 512 tokens). Chunk long documents before encoding; a chunking sketch follows this list.
  • Cross-encoder rerankers are slow because they process query-document pairs individually. Use them only for reranking a short candidate list, not for initial retrieval.
  • GPU acceleration requires PyTorch with CUDA. CPU inference works but is significantly slower for large batches.
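
Chunking strategies vary; the helper below is an illustrative sketch (chunk_text, long_document, max_words, and overlap are made-up names, not library API) that splits a document into overlapping word windows before encoding:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer('all-MiniLM-L6-v2')

def chunk_text(text, max_words=200, overlap=20):
    # Split a long document into overlapping word windows that stay
    # roughly under the model's token limit (model.max_seq_length).
    words = text.split()
    step = max_words - overlap
    return [' '.join(words[i:i + max_words]) for i in range(0, len(words), step)]

long_document = 'replace this with the full text of a multi-page document ...'
chunk_embeddings = model.encode(chunk_text(long_document))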

Frequently Asked Questions

What is the difference between dense and sparse embeddings?

Dense embeddings represent text as fixed-length floating-point vectors (e.g., 384 or 768 dimensions). Sparse embeddings represent text as high-dimensional vectors with mostly zero values, similar to TF-IDF but learned. Dense embeddings capture semantic meaning; sparse embeddings excel at exact term matching.

Which model should I use for semantic search?

For English, all-MiniLM-L6-v2 offers a good speed-accuracy trade-off. For higher accuracy, use all-mpnet-base-v2. For multilingual search, use paraphrase-multilingual-MiniLM-L12-v2. The best choice depends on your latency and accuracy requirements.

Can I fine-tune a Sentence Transformer model?

Yes. The library provides training utilities for fine-tuning on your domain data. You need pairs of similar/dissimilar sentences. Fine-tuning on domain data typically improves retrieval quality by 5-15% compared to generic pre-trained models.
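
As a rough sketch under those assumptions (the training pairs and the my-domain-model output path are placeholders), fine-tuning with the library's classic fit() API and MultipleNegativesRankingLoss looks like this:

from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer('all-MiniLM-L6-v2')

# Each InputExample pairs a query with a passage that should rank as similar
train_examples = [
    InputExample(texts=['how to deploy a container', 'Docker deployment guide']),
    InputExample(texts=['reset a forgotten password', 'account recovery steps']),
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=2)
train_loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(train_dataloader, train_loss)], epochs=1, warmup_steps=10)
model.save('my-domain-model')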

How do I use Sentence Transformers with a vector database?

Encode your documents with model.encode(), then insert the resulting vectors into a vector database (Pinecone, Weaviate, Qdrant, Milvus). At query time, encode the query with the same model and perform a nearest-neighbor search against the stored vectors.
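
Client APIs differ between vector databases, so the sketch below keeps everything in memory and uses the library's own util.semantic_search as a stand-in for the nearest-neighbor step; the documents list is a placeholder, and swapping in Pinecone, Weaviate, Qdrant, or Milvus only changes where the vectors are stored and queried:

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('all-MiniLM-L6-v2')

documents = [
    'Docker container deployment guide',
    'Kubernetes rolling update strategies',
    'Best pizza recipes in New York',
]
doc_embeddings = model.encode(documents, convert_to_tensor=True)

# Encode the query with the same model, then run nearest-neighbor search
query_embedding = model.encode('How to deploy a Docker container', convert_to_tensor=True)
hits = util.semantic_search(query_embedding, doc_embeddings, top_k=2)[0]
for hit in hits:
    print(documents[hit['corpus_id']], hit['score'])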

Does Sentence Transformers support reranking?

Yes. Cross-encoder models in the library score query-document pairs for relevance. Use a bi-encoder for initial retrieval (fast) and a cross-encoder for reranking the top results (accurate). This two-stage approach balances speed and quality.
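
A minimal sketch of the reranking stage, assuming cross-encoder/ms-marco-MiniLM-L-6-v2 (one commonly used public reranker checkpoint) and a placeholder candidate list standing in for your bi-encoder's top results:

from sentence_transformers import CrossEncoder

# Stage 2: rerank a short candidate list retrieved by a bi-encoder
reranker = CrossEncoder('cross-encoder/ms-marco-MiniLM-L-6-v2')

query = 'How to deploy a Docker container'
candidates = [
    'Docker container deployment guide',
    'Kubernetes rolling update strategies',
    'Best pizza recipes in New York',
]
scores = reranker.predict([(query, doc) for doc in candidates])
for doc, score in sorted(zip(candidates, scores), key=lambda pair: pair[1], reverse=True):
    print(f'{score:.3f}  {doc}')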


Source & Thanks

Created by UKP Lab. Licensed under Apache 2.0. UKPLab/sentence-transformers — 18,500+ GitHub stars
