LG AI

EXAONE 3.5 7.8B

7.8B model from LG. Strong bilingual Korean/English.

7.8B parameters · exaone · other · 32K context · 4.94 GB – 8.24 GB VRAM

About This Model

EXAONE 3.5 7.8B by LG AI is a 7.8-billion-parameter language model designed for text generation. Its 32,768-token context window lets it handle long-form content and maintain coherence over extensive passages. It generates high-quality text for a variety of applications, including creative writing, summarization, and conversational AI, and the large context window makes it particularly suitable for contextually rich tasks such as drafting detailed articles or sustaining long dialogues.

In its size class, EXAONE 3.5 7.8B holds its own, offering a balance between performance and efficiency. It is not the most lightweight model available, but its capabilities and context length make it a strong choice for users who need more than basic text generation. The model ships in Q4_K_M and Q8_0 quantizations, with VRAM requirements from 4.94 GB to 8.24 GB, so it runs comfortably on mid-to-high-end GPUs. Ideal users include content creators, developers building conversational agents, and researchers who need a powerful yet efficient model for text generation tasks.

Check Your Hardware

See which quantizations of EXAONE 3.5 7.8B your hardware can run.

Quantization Options

Quantization | Bits | File Size | VRAM Needed | RAM Needed | Quality
------------ | ---- | --------- | ----------- | ---------- | -------
Q4_K_M       | 4.5  | 4.43 GB   | 4.94 GB     | 5.44 GB    | 85%
Q8_0         | 8    | 7.74 GB   | 8.24 GB     | 8.74 GB    | 98%
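The table above reduces to a simple rule: pick the highest-quality quantization whose VRAM requirement fits your GPU. A minimal sketch of that check, using the figures from the table (the helper function `pick_quant` is hypothetical, not part of any tool):

```python
# Hypothetical helper: choose the best-quality EXAONE 3.5 7.8B quantization
# that fits in a given VRAM budget, using the figures from the table above.

QUANTS = [
    # (name, file size GB, VRAM needed GB, relative quality %), best quality first
    ("Q8_0", 7.74, 8.24, 98),
    ("Q4_K_M", 4.43, 4.94, 85),
]

def pick_quant(vram_gb: float):
    """Return the name of the best-quality quant that fits, or None."""
    for name, _size_gb, vram_needed_gb, _quality in QUANTS:
        if vram_gb >= vram_needed_gb:
            return name
    return None

print(pick_quant(10.0))  # prints "Q8_0"   (a 10 GB GPU fits the larger quant)
print(pick_quant(6.0))   # prints "Q4_K_M" (a 6 GB GPU fits only the smaller one)
print(pick_quant(4.0))   # prints "None"   (below the 4.94 GB minimum)
```

In practice you would also leave headroom for longer contexts, since the KV cache grows with the number of tokens in use.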

See It In Action

Real model outputs generated via RunThisModel.com — watch responses stream in real time.


Outputs are generated by real AI models via RunThisModel.com. Generation speed shown reflects cloud inference; local speeds vary by hardware.

Frequently Asked Questions

How much VRAM do I need to run EXAONE 3.5 7.8B?

EXAONE 3.5 7.8B requires a minimum of 4.94 GB of VRAM with Q4_K_M quantization. The higher-quality Q8_0 quantization needs 8.24 GB.
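These figures follow from simple arithmetic: a quantized file weighs roughly parameters × bits-per-weight ÷ 8, and the VRAM requirement adds runtime overhead (KV cache, buffers) on top. A rough sketch of that estimate (the roughly 0.5 GB gap to the table's VRAM figures is that overhead; the exact overhead depends on context length and runtime):

```python
# Back-of-envelope estimate of a quantized model's file size:
# parameters * bits-per-weight / 8 bytes, expressed in GB.

PARAMS = 7.8e9  # EXAONE 3.5 7.8B

def file_size_gb(bits_per_weight: float) -> float:
    return PARAMS * bits_per_weight / 8 / 1e9

print(round(file_size_gb(4.5), 2))  # prints 4.39 -- close to the 4.43 GB Q4_K_M file
print(round(file_size_gb(8.0), 2))  # prints 7.8  -- close to the 7.74 GB Q8_0 file
```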

What is the best quantization for EXAONE 3.5 7.8B?

Q4_K_M offers the best balance of quality and VRAM usage. Q8_0 is near-lossless if you have enough VRAM.