Stability AI / Apple

Stable Diffusion 2.1 Base (CoreML)

Smallest CoreML image generation model. Palettized for minimal size (1.14GB). Runs on any iPhone with 6GB RAM. Default image generation model.

0.86B parameters · unet-diffusion · creativeml-openrail-m · 1.56 GB VRAM

About This Model

Stable Diffusion 2.1 Base (CoreML) is a lightweight version of the popular text-to-image generation model developed by Stability AI, optimized for Apple devices through CoreML. With just 0.86 billion parameters, this model is designed to generate high-quality images from textual descriptions while maintaining efficient performance on local hardware. It excels in producing detailed and coherent images, making it a solid choice for users who need a balance between image quality and computational resources. Despite its smaller size, it manages to retain much of the creative potential and versatility of larger models, which is particularly impressive given its reduced parameter count.

Compared to other models in its size class, Stable Diffusion 2.1 Base (CoreML) punches well above its weight. It requires only 1.56 GB of VRAM, which makes it accessible on a wide range of Apple devices, including older Macs and iPads. That efficiency is a significant advantage for users who want to experiment with AI-generated art without investing in high-end hardware. The model suits hobbyists, artists, and developers looking for a powerful yet lightweight text-to-image solution; realistically, any Apple device that meets the VRAM requirement can run it, making it a versatile and user-friendly option for local deployment.
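For local generation, Apple's open-source ml-stable-diffusion project (github.com/apple/ml-stable-diffusion) ships a Python CLI for running CoreML Stable Diffusion checkpoints. A minimal sketch of driving that CLI from Python is below; the model directory `./sd21-base-coreml` is a hypothetical path, and actually executing the command requires macOS plus the downloaded CoreML model files.

```python
# Sketch: assemble an invocation of Apple's ml-stable-diffusion Python CLI.
# Assumes the package from https://github.com/apple/ml-stable-diffusion is
# installed; the model directory path here is hypothetical.
import subprocess


def build_generation_command(prompt: str, model_dir: str, out_dir: str,
                             seed: int = 93) -> list[str]:
    """Assemble the CLI call documented in the ml-stable-diffusion README."""
    return [
        "python", "-m", "python_coreml_stable_diffusion.pipeline",
        "--prompt", prompt,
        "-i", model_dir,          # directory holding the compiled CoreML models
        "-o", out_dir,            # where the generated image is written
        "--compute-unit", "ALL",  # let CoreML schedule CPU, GPU, and Neural Engine
        "--seed", str(seed),      # fixed seed for reproducible output
    ]


cmd = build_generation_command(
    "A cozy wooden cabin in snowy mountains at golden hour sunset",
    "./sd21-base-coreml", "./outputs",
)
# To actually generate (requires macOS and the downloaded model):
# subprocess.run(cmd, check=True)
```

`--compute-unit ALL` is the usual choice on Apple silicon, since it lets CoreML place the UNet on the Neural Engine where available.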


Quantization Options

Quantization        Bits  File Size  VRAM Needed  RAM Needed  Quality
CoreML-Palettized   6     1.063 GB   1.56 GB      2.06 GB     85%
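The requirements in the table can be checked programmatically. The sketch below hardcodes the table's values; the function name and structure are illustrative, not part of any published API.

```python
# Hypothetical helper: decide whether a device can run a given quantization,
# using the VRAM and RAM requirements from the table above.
QUANTIZATIONS = {
    "CoreML-Palettized": {"bits": 6, "file_gb": 1.063,
                          "vram_gb": 1.56, "ram_gb": 2.06},
}


def can_run(name: str, vram_gb: float, ram_gb: float) -> bool:
    """Return True if the device meets both the VRAM and RAM thresholds."""
    q = QUANTIZATIONS[name]
    return vram_gb >= q["vram_gb"] and ram_gb >= q["ram_gb"]


# An iPhone with 6 GB of unified memory clears both thresholds.
print(can_run("CoreML-Palettized", vram_gb=6.0, ram_gb=6.0))  # True
print(can_run("CoreML-Palettized", vram_gb=1.0, ram_gb=2.0))  # False
```

On Apple devices VRAM and RAM draw from the same unified memory pool, which is why a single 6 GB figure satisfies both checks.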

Example Prompts

"A cozy wooden cabin in snowy mountains at golden hour sunset"

"A friendly humanoid robot reading a book in a library"

"Gourmet sushi platter, professional food photography"

"Woman scientist in a modern lab, natural lighting"

"Snow leopard on mountain peak at dawn, golden rim light"

"Cyberpunk city at night, neon signs, rain reflections"

Frequently Asked Questions

How much VRAM do I need to run Stable Diffusion 2.1 Base (CoreML)?

Stable Diffusion 2.1 Base (CoreML) requires 1.56 GB of VRAM (and about 2.06 GB of RAM) with the CoreML-Palettized quantization, the only variant listed for this model.

What is the best quantization for Stable Diffusion 2.1 Base (CoreML)?

Only one quantization is available for this model: CoreML-Palettized (6-bit), which brings the file size down to 1.063 GB while retaining roughly 85% of full quality. GGUF-style options such as Q4_K_M and Q8_0 apply to other model formats, not to CoreML.