LG AI

EXAONE 3.5 2.4B

Compact model from LG. Optimized for Korean and English.

2.4B parameters · exaone · other · 32K context · 2.03 GB - 3.14 GB VRAM

About This Model

EXAONE 3.5 2.4B by LG AI is a compact language model designed for text generation. With 2.4 billion parameters, it balances performance and resource efficiency, making it suitable for applications such as content creation, chatbots, and natural language understanding. Its exaone architecture supports context lengths of up to 32,768 tokens, so it can generate coherent, contextually rich text over extended sequences; few models in its size class maintain context over passages this long.

Compared to other models with similar parameter counts, EXAONE 3.5 2.4B punches above its weight in terms of efficiency and performance. It requires only 2.0–3.1 GB of VRAM, making it accessible for deployment on a variety of hardware, including consumer-grade GPUs. The available quantizations (Q4_K_M, Q8_0) further enhance its efficiency, allowing for faster inference times and lower memory usage without significant loss in quality. This makes it an excellent choice for users who need powerful text generation capabilities but have limited computational resources. Ideal users include developers, researchers, and hobbyists looking to deploy a capable language model on mid-range hardware for projects ranging from simple chatbots to more complex text analysis tasks.

Check Your Hardware

See which quantizations of EXAONE 3.5 2.4B your hardware can run.
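The hardware check boils down to comparing your free VRAM against each quantization's requirement. A minimal sketch of that logic, using the VRAM figures from the table below (the function name is illustrative, not a real RunThisModel.com API):

```python
# VRAM requirements for EXAONE 3.5 2.4B, from the quantization table.
QUANT_VRAM_GB = {
    "Q4_K_M": 2.03,  # 4.5-bit, ~85% quality
    "Q8_0": 3.14,    # 8-bit, ~98% quality
}

def runnable_quants(available_vram_gb: float) -> list[str]:
    """Return the quantizations that fit entirely in the given VRAM."""
    return [name for name, needed in QUANT_VRAM_GB.items()
            if needed <= available_vram_gb]
```

For example, a 4 GB GPU fits both quantizations, a 3 GB GPU fits only Q4_K_M, and a 2 GB GPU fits neither fully in VRAM (it would need partial CPU offload).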

Quantization Options

Quantization | Bits | File Size | VRAM Needed | RAM Needed | Quality
Q4_K_M       | 4.5  | 1.532 GB  | 2.03 GB     | 2.53 GB    | 85%
Q8_0         | 8    | 2.644 GB  | 3.14 GB     | 3.64 GB    | 98%
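The file sizes above follow roughly from bits per weight: size ≈ parameters × bits ÷ 8. A back-of-the-envelope sketch (the 8.5 bpw figure for Q8_0 reflects llama.cpp's per-block scale factors; the remaining gap versus the table comes from tensors kept at higher precision plus metadata):

```python
def estimate_gguf_size_gb(params_billions: float, bits_per_weight: float) -> float:
    """Rough GGUF file-size estimate: weight bytes only, no overhead."""
    total_bytes = params_billions * 1e9 * bits_per_weight / 8
    return total_bytes / 1e9

# 2.4B parameters at 4.5 bpw -> ~1.35 GB (table: 1.532 GB)
q4 = estimate_gguf_size_gb(2.4, 4.5)
# 2.4B at ~8.5 bpw effective for Q8_0 -> ~2.55 GB (table: 2.644 GB)
q8 = estimate_gguf_size_gb(2.4, 8.5)
```

VRAM needed is higher than file size because the KV cache and activation buffers sit alongside the weights.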

See It In Action

Real model outputs generated via RunThisModel.com — watch responses stream in real time.

Outputs generated by real AI models via RunThisModel.com. Generation speed shown is from cloud inference. Local speeds vary by hardware — check your device.

Frequently Asked Questions

How much VRAM do I need to run EXAONE 3.5 2.4B?

EXAONE 3.5 2.4B requires a minimum of 2.03 GB of VRAM with Q4_K_M quantization. The higher-quality Q8_0 quantization needs 3.14 GB.

What is the best quantization for EXAONE 3.5 2.4B?

Q4_K_M offers the best balance of quality and VRAM usage. Q8_0 is near-lossless if you have enough VRAM.