Sentence Transformers

all-MiniLM-L6-v2

Tiny embedding model. Only 23MB. Perfect for on-device search.

0.023B parameters · BERT · apache-2.0 · 256-token context · 0.1 GB VRAM

About This Model

The all-MiniLM-L6-v2 model, developed by Sentence Transformers, is a lightweight BERT-based architecture designed for efficient feature extraction and embedding generation. With only 23 million parameters, this model is remarkably compact, making it an excellent choice for resource-constrained environments. It excels in generating high-quality sentence embeddings that can be used for a variety of natural language processing tasks, such as semantic similarity, clustering, and classification. The model's ability to handle sequences up to 256 tokens long ensures it can process a wide range of text inputs effectively.

Despite its small size, all-MiniLM-L6-v2 punches well above its weight. It offers a strong balance between computational efficiency and embedding quality, often outperforming larger models in tasks where fine-grained semantic understanding is crucial. This makes it particularly suitable for applications that require real-time processing, such as chatbots, search engines, and content recommendation systems. Users with limited hardware, such as edge devices or low-end GPUs, will find this model highly practical: it needs only about 0.1 GB of VRAM, so it runs smoothly on a wide range of devices without significant performance degradation.
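As a sketch of how such embeddings are typically used for semantic search: encode the documents and the query, rank documents by cosine similarity. In practice the vectors would come from the `sentence-transformers` package (shown in the comments, as an assumption about your setup); here toy 3-dimensional vectors stand in for the model's 384-dimensional output so the ranking logic runs standalone.

```python
import numpy as np

# In a real application the embeddings would come from the model, e.g.:
#   from sentence_transformers import SentenceTransformer
#   model = SentenceTransformer("all-MiniLM-L6-v2")
#   doc_vecs = model.encode(docs, normalize_embeddings=True)  # shape (n, 384)
# The toy vectors below stand in for those embeddings.

def cosine_sim(query: np.ndarray, docs: np.ndarray) -> np.ndarray:
    """Cosine similarity between one query vector and a matrix of doc vectors."""
    query = query / np.linalg.norm(query)
    docs = docs / np.linalg.norm(docs, axis=1, keepdims=True)
    return docs @ query

docs = ["how to reset a password", "best hiking trails", "login help"]
doc_vecs = np.array([[0.9, 0.1, 0.0],
                     [0.0, 0.1, 1.0],
                     [0.7, 0.6, 0.1]])
query_vec = np.array([0.9, 0.2, 0.0])  # stand-in embedding of "password recovery"

ranking = np.argsort(-cosine_sim(query_vec, doc_vecs))
print([docs[i] for i in ranking])  # most to least similar
```

Because `encode(..., normalize_embeddings=True)` returns unit-length vectors, the normalization inside `cosine_sim` becomes a no-op and the ranking reduces to a plain dot product, which is what makes this model fast enough for real-time search.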

Quantization Options

Quantization  Bits  File Size  VRAM Needed  RAM Needed  Quality
Q8_0          8     0.023 GB   0.1 GB       0.2 GB      92%
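The file sizes above follow directly from bits-per-parameter arithmetic. A quick back-of-the-envelope check, using the 23 M parameter count from this page (real GGUF files add small metadata overhead, so treat these as approximations):

```python
# Rough model-file size: parameters × bytes-per-weight.
PARAMS = 23e6  # all-MiniLM-L6-v2 parameter count

def approx_size_gb(bits_per_weight: float) -> float:
    """Approximate on-disk size in GB for a given quantization width."""
    return PARAMS * bits_per_weight / 8 / 1e9

print(f"Q8_0 (~8 bits): {approx_size_gb(8):.3f} GB")   # ~0.023 GB, matching the table
print(f"FP32 (32 bits): {approx_size_gb(32):.3f} GB")  # ~0.092 GB, under 0.1 GB
```

This is also why the VRAM figures barely change across precisions here: even the unquantized FP32 weights fit in under 0.1 GB.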

Frequently Asked Questions

How much VRAM do I need to run all-MiniLM-L6-v2?

all-MiniLM-L6-v2 requires about 0.1 GB of VRAM with Q8_0 quantization. Even at full precision (FP32), its 23 million parameters occupy roughly 0.09 GB, so 0.1 GB of VRAM is still sufficient.

What is the best quantization for all-MiniLM-L6-v2?

For a model this small, Q8_0 is the practical choice: it is near-lossless (92% quality in the table above) and the file is only 0.023 GB, so more aggressive quantizations such as Q4_K_M would save a negligible amount of VRAM while costing embedding quality.