Nomic AI

Nomic Embed Text v1.5

High-quality text embedding model. 137M params. Good for RAG and search.

0.137B parameters · nomic-bert · apache-2.0 · 8K context · 0.3 GB – 0.76 GB VRAM

About This Model

Nomic Embed Text v1.5 is a compact yet powerful model designed for generating high-quality embeddings from text inputs. With just 137 million parameters, this model leverages the nomic-bert architecture to produce dense vector representations that capture semantic meaning effectively. It stands out with its impressive context length of 8192 tokens, allowing it to handle longer documents and maintain contextual coherence, which is particularly useful for tasks like document similarity, clustering, and retrieval. The model is licensed under Apache 2.0, making it freely available for both research and commercial applications.
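For retrieval tasks like those mentioned above, the dense vectors the model produces are typically compared with cosine similarity to rank documents against a query. A minimal sketch in pure Python, using small toy vectors as stand-ins for real model outputs (the actual embeddings are much higher-dimensional):

```python
from math import sqrt

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional stand-ins for real embedding vectors.
query = [0.1, 0.9, 0.2, 0.0]
documents = {
    "doc_a": [0.1, 0.8, 0.3, 0.1],
    "doc_b": [0.9, 0.0, 0.1, 0.4],
}

# Rank documents by similarity to the query, highest first.
ranked = sorted(
    documents,
    key=lambda d: cosine_similarity(query, documents[d]),
    reverse=True,
)
print(ranked[0])  # doc_a points in nearly the same direction as the query
```

In a real RAG pipeline the vectors would come from the model itself, but the ranking step works the same way regardless of dimensionality.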

In its size class, Nomic Embed Text v1.5 punches well above its weight. Despite its relatively small parameter count, it delivers embeddings that rival those from larger models in quality and utility, which makes it an excellent choice for environments where computational resources are limited. The model is available in Q8_0 quantization and full-precision FP16, letting users trade a small amount of accuracy for lower memory usage. With a VRAM requirement of 0.3–0.76 GB, it can be deployed on a wide range of hardware, from low-end GPUs to more powerful systems. Users looking for a balance between performance and resource efficiency, such as developers working on embedded systems or those with budget constraints, will find this model particularly valuable.
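The listed file sizes follow roughly from the parameter count times bytes per weight. A back-of-the-envelope check (the small gaps versus the published sizes presumably come from file metadata and tensors stored at other precisions, which is an assumption here):

```python
# Approximate model file size: parameters * bytes per weight.
PARAMS = 137_000_000  # 137M parameters

def approx_size_gb(params, bits_per_weight):
    """Rough file-size estimate in GB for a given weight precision."""
    return params * bits_per_weight / 8 / 1e9

q8_size = approx_size_gb(PARAMS, 8)     # ~0.137 GB (listed: 0.139 GB)
fp16_size = approx_size_gb(PARAMS, 16)  # ~0.274 GB (listed: 0.255 GB)
print(f"Q8_0: {q8_size:.3f} GB, FP16: {fp16_size:.3f} GB")
```

The same arithmetic explains why Q8_0 needs roughly half the memory of FP16: each weight shrinks from 16 bits to 8.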


Quantization Options

Quantization | Bits | File Size | VRAM Needed | RAM Needed | Quality
Q8_0         | 8    | 0.139 GB  | 0.3 GB      | 0.5 GB     | 98%
FP16         | 16   | 0.255 GB  | 0.76 GB     | 1.25 GB    | 100%

Frequently Asked Questions

How much VRAM do I need to run Nomic Embed Text v1.5?

Nomic Embed Text v1.5 requires a minimum of 0.3 GB of VRAM with Q8_0 quantization. For full precision (FP16), you need 0.76 GB of VRAM.

What is the best quantization for Nomic Embed Text v1.5?

Q8_0 offers the best balance of quality and VRAM usage for this model, retaining about 98% of full-precision quality at roughly half the memory. Choose FP16 if you need full precision and have the VRAM to spare.