Runway
Stable Diffusion 1.5 (CoreML)
Classic image generation model. Pre-converted to CoreML for iOS/Mac. Downloads as zip, auto-extracts.
About This Model
Stable Diffusion 1.5 (CoreML) by Runway is a powerful text-to-image generation model optimized for local deployment, particularly on Apple devices. With 0.86 billion parameters, this model strikes a balance between performance and resource efficiency, making it suitable for generating high-quality images from textual descriptions. It excels in creating detailed and diverse visual content, from realistic landscapes to abstract art, and is known for its ability to maintain coherence and aesthetic quality across a wide range of prompts. The CoreML optimization ensures smooth operation on Apple's hardware, leveraging the efficiency of the M1 and newer chips.
Compared to other models in its size class, Stable Diffusion 1.5 (CoreML) punches well above its weight. It offers impressive results with relatively low VRAM requirements (2.5 GB), making it accessible to a broader range of users without the need for high-end graphics cards. This efficiency is particularly notable, as it allows for real-time or near-real-time image generation on consumer-grade hardware. Ideal for artists, designers, and hobbyists looking to create visual content locally, this model is best suited for those with Apple devices, especially Macs and iPads equipped with M1 or later processors. Its robust performance and ease of use make it a go-to choice for anyone seeking a balance between quality and computational resources.
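As noted above, the model ships as a zip archive that is extracted before use. A minimal sketch of that step with Python's standard library (paths and the function name are placeholders, not part of any real download tooling):

```python
import zipfile
from pathlib import Path

def extract_model(archive: Path, dest: Path) -> list[str]:
    """Unpack a downloaded model archive, mirroring the auto-extract step.
    Point `archive` at the downloaded zip and `dest` at the target folder."""
    dest.mkdir(parents=True, exist_ok=True)
    with zipfile.ZipFile(archive) as zf:
        zf.extractall(dest)          # CoreML bundles extract as directories
        return zf.namelist()         # list of files that were unpacked
```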
Check Your Hardware
See which quantizations of Stable Diffusion 1.5 (CoreML) your hardware can run.
Quantization Options
| Quantization | Bits | File Size | VRAM Needed | RAM Needed | Quality |
|---|---|---|---|---|---|
| CoreML-Palettized | 6 | 1.46 GB | 2.5 GB | 4 GB | 90% |
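As a rough illustration, the table's figures can be compared against a machine's specs. This is a sketch, not a real API; the function and argument names are hypothetical, and the numbers are taken directly from the table above:

```python
# Requirements for the CoreML-Palettized build, from the table above.
REQUIREMENTS = {"file_size_gb": 1.46, "vram_gb": 2.5, "ram_gb": 4.0}

def can_run(vram_gb: float, ram_gb: float, free_disk_gb: float) -> bool:
    """Rough fit check against the published requirements.
    On Apple Silicon, VRAM and RAM share one unified memory pool,
    so treat vram_gb as the memory available to the GPU."""
    return (vram_gb >= REQUIREMENTS["vram_gb"]
            and ram_gb >= REQUIREMENTS["ram_gb"]
            and free_disk_gb >= REQUIREMENTS["file_size_gb"])

print(can_run(vram_gb=8, ram_gb=16, free_disk_gb=50))  # typical M1 Mac
```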
Try It — Diffusion Generation Demo
Click "Generate" to watch how Stable Diffusion 1.5 creates an image from noise. Real outputs from RunThisModel.com.

"A cozy wooden cabin in snowy mountains at golden hour sunset"

"A friendly humanoid robot reading a book in a library"

"Gourmet sushi platter, professional food photography"

"Woman scientist in a modern lab, natural lighting"

"Snow leopard on mountain peak at dawn, golden rim light"

"Cyberpunk city at night, neon signs, rain reflections"
Animation simulates the diffusion denoising process at recorded generation speed. Actual generation requires GPU hardware or cloud service.
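The animation above mimics the core idea of diffusion: start from pure Gaussian noise and repeatedly subtract a predicted noise estimate until an image remains. The toy loop below illustrates that shape only; the real model replaces the trivial noise prediction here with a learned UNet, so this is intuition, not an implementation:

```python
import numpy as np

def toy_denoise(steps: int = 20, size: tuple = (8, 8), seed: int = 0):
    """Illustrative diffusion-style loop: begin with random noise and
    step it toward a target over `steps` iterations. The 'predicted
    noise' here is a stand-in for the UNet's learned estimate."""
    rng = np.random.default_rng(seed)
    target = np.full(size, 0.5)            # pretend "clean image"
    x = rng.normal(size=size)              # pure noise at t = T
    for t in range(steps, 0, -1):
        predicted_noise = x - target       # a real model would estimate this
        x = x - (1.0 / t) * predicted_noise  # small step toward the image
    return x
```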
Frequently Asked Questions
How much VRAM do I need to run Stable Diffusion 1.5 (CoreML)?
Stable Diffusion 1.5 (CoreML) needs a minimum of 2.5 GB of VRAM with the CoreML-Palettized quantization, the only variant listed above. On Apple Silicon this is drawn from unified memory, so plan for at least 4 GB of system RAM.
What is the best quantization for Stable Diffusion 1.5 (CoreML)?
The only quantization available here is CoreML-Palettized (6-bit), which retains roughly 90% of full-precision quality at a 1.46 GB download and 2.5 GB of VRAM. GGUF-style options such as Q4_K_M or Q8_0 apply to language models and are not offered for this CoreML build.