Meta

Llama 3.2 3B Instruct

Meta's compact 3B model designed for edge and mobile deployment.

3.2B parameters · llama · llama3.2 · 128K context · 2.38GB - 3.69GB VRAM

Check Your Hardware

See which quantizations of Llama 3.2 3B Instruct your hardware can run.
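A quick first step is checking how much GPU memory your machine actually reports. Below is a minimal sketch, assuming an NVIDIA GPU with the nvidia-smi tool on the PATH; the helper name is hypothetical and the script simply converts the reported MiB figure to GB for comparison with the table below.

```python
import subprocess

def total_vram_gb():
    # Ask nvidia-smi for the total memory (in MiB) of each installed GPU.
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=memory.total", "--format=csv,noheader,nounits"],
        text=True,
    )
    # One line per GPU; convert the first GPU's MiB value to GB.
    first_gpu_mib = float(out.strip().splitlines()[0])
    return first_gpu_mib * 1024 * 1024 / 1e9

if __name__ == "__main__":
    print(f"Detected VRAM: {total_vram_gb():.2f} GB")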

Quantization Options

Quantization  Bits  File Size  VRAM Needed  RAM Needed  Quality
Q4_K_M        4.5   1.881 GB   2.38 GB      2.88 GB     85%
Q5_K_M        5.5   2.163 GB   2.66 GB      3.16 GB     90%
Q8_0          8     3.187 GB   3.69 GB      4.19 GB     98%
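The VRAM and RAM columns track the file size closely: each VRAM figure in the table is the GGUF file size plus roughly 0.5 GB of headroom for the KV cache and runtime buffers, and the RAM figure adds another ~0.5 GB on top of that. The sketch below reproduces that estimate; the file sizes come from the table above, and the fixed 0.5 GB overhead is what the table appears to assume, not a measured value.

```python
# Rough sizing sketch: file sizes from the table above; the ~0.5 GB overhead
# is the headroom the table appears to assume, not a measured value.
QUANTS = {
    # name: (bits per weight, file size in GB)
    "Q4_K_M": (4.5, 1.881),
    "Q5_K_M": (5.5, 2.163),
    "Q8_0":   (8.0, 3.187),
}

def estimate_requirements(file_size_gb, overhead_gb=0.5):
    """Approximate VRAM/RAM needs as file size plus fixed headroom."""
    vram = file_size_gb + overhead_gb   # weights + KV cache / runtime buffers
    ram = vram + overhead_gb            # extra system-memory headroom
    return round(vram, 2), round(ram, 2)

for name, (bits, size_gb) in QUANTS.items():
    vram, ram = estimate_requirements(size_gb)
    print(f"{name}: ~{bits} bits/weight, {size_gb} GB file, "
          f"needs ~{vram} GB VRAM / ~{ram} GB RAM")
```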

See It In Action

Real model outputs generated via RunThisModel.com — watch responses stream in real time.


Outputs generated by real AI models via RunThisModel.com. Generation speed shown is from cloud inference. Local speeds vary by hardware — check your device.

Frequently Asked Questions

How much VRAM do I need to run Llama 3.2 3B Instruct?

Llama 3.2 3B Instruct requires 2.38GB of VRAM at minimum with Q4_K_M quantization. For the near-lossless Q8_0 quantization, you need 3.69GB of VRAM.
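If you want to turn that into a rule of thumb programmatically, here is a small sketch that picks the highest-quality quantization fitting a given VRAM budget. The requirements are taken from the table above; the function name is illustrative.

```python
# VRAM requirements (GB) per quantization, taken from the table above.
VRAM_NEEDED_GB = {"Q4_K_M": 2.38, "Q5_K_M": 2.66, "Q8_0": 3.69}

def best_fit_quant(available_vram_gb):
    """Return the highest-quality quantization that fits, or None if none do."""
    fitting = [(need, name) for name, need in VRAM_NEEDED_GB.items()
               if need <= available_vram_gb]
    return max(fitting)[1] if fitting else None

print(best_fit_quant(4.0))   # Q8_0
print(best_fit_quant(2.5))   # Q4_K_M
print(best_fit_quant(2.0))   # None (fall back to CPU or partial GPU offload)
```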

What is the best quantization for Llama 3.2 3B Instruct?

Q4_K_M offers the best balance of quality and VRAM usage. Q8_0 is near-lossless if you have enough VRAM.
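For a concrete way to load one of these quantizations locally, here is a minimal sketch using the llama-cpp-python bindings. The GGUF filename is a placeholder for whichever quantization you download, and the Q4_K_M choice simply follows the recommendation above.

```python
# Minimal sketch using the llama-cpp-python bindings (pip install llama-cpp-python).
from llama_cpp import Llama

llm = Llama(
    model_path="llama-3.2-3b-instruct-q4_k_m.gguf",  # placeholder filename
    n_ctx=8192,        # context window; the model supports up to 128K if memory allows
    n_gpu_layers=-1,   # offload all layers to the GPU; set 0 for CPU-only
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the benefits of 4-bit quantization."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```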