Meta

Llama 3.2 1B Instruct

Ultra-compact 1B model. Runs on virtually any device including smartphones.

1.24B parameters · llama · llama3.2 · 128K context · 1.25GB - 2.81GB VRAM

Check Your Hardware

See which quantizations of Llama 3.2 1B Instruct your hardware can run.

Quantization Options

| Quantization | Bits | File Size | VRAM Needed | RAM Needed | Quality |
|---|---|---|---|---|---|
| Q4_K_M | 4.5 | 0.752 GB | 1.25 GB | 1.75 GB | 85% |
| Q8_0 | 8 | 1.23 GB | 1.73 GB | 2.23 GB | 98% |
| FP16 | 16 | 2.309 GB | 2.81 GB | 3.31 GB | 100% |
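To see which of these fit on your hardware, compare your available VRAM against the table's requirements. A minimal sketch in Python, using the VRAM figures above (the function name and the example 2 GB value are illustrative; detecting actual VRAM is left to the reader):

```python
# Which quantizations of Llama 3.2 1B Instruct fit in a given amount of VRAM?
# Requirements (GB) are taken from the quantization table above.
VRAM_REQUIRED_GB = {
    "Q4_K_M": 1.25,
    "Q8_0": 1.73,
    "FP16": 2.81,
}

def runnable_quantizations(available_vram_gb: float) -> list[str]:
    """Return the quantizations whose VRAM requirement fits in the given budget."""
    return [q for q, need in VRAM_REQUIRED_GB.items() if need <= available_vram_gb]

if __name__ == "__main__":
    vram = 2.0  # example: a GPU with 2 GB of VRAM
    fits = runnable_quantizations(vram)
    print(f"With {vram} GB VRAM you can run: {', '.join(fits) or 'none (consider CPU/RAM offload)'}")
```

With 2 GB of VRAM, for example, this reports Q4_K_M and Q8_0 as runnable, while FP16 would need to spill into system RAM.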

See It In Action

Real model outputs generated via RunThisModel.com — watch responses stream in real time.


Outputs generated by real AI models via RunThisModel.com. Generation speed shown is from cloud inference. Local speeds vary by hardware — check your device.

Frequently Asked Questions

How much VRAM do I need to run Llama 3.2 1B Instruct?

Llama 3.2 1B Instruct requires 1.25GB VRAM minimum with Q4_K_M quantization. For full precision, you need 2.81GB VRAM.
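These figures roughly follow from parameter count times bits per weight, plus headroom for the KV cache and runtime buffers. A back-of-envelope sketch (the 0.5 GB overhead term is an assumption for illustration; real GGUF files also keep some tensors at higher precision, so actual sizes differ slightly):

```python
# Rough estimate of file size and VRAM for a 1.24B-parameter model.
# OVERHEAD_GB (KV cache, activations, runtime buffers) is an assumed value.
PARAMS = 1.24e9
OVERHEAD_GB = 0.5

def estimate_gb(bits_per_weight: float) -> tuple[float, float]:
    """Return (approx. file size, approx. VRAM needed) in GB."""
    file_gb = PARAMS * bits_per_weight / 8 / 1e9
    return file_gb, file_gb + OVERHEAD_GB

for name, bits in [("Q4_K_M", 4.5), ("Q8_0", 8.0), ("FP16", 16.0)]:
    file_gb, vram_gb = estimate_gb(bits)
    print(f"{name}: ~{file_gb:.2f} GB file, ~{vram_gb:.2f} GB VRAM")
```

The estimates land close to the table values (about 0.70 GB file and 1.20 GB VRAM for Q4_K_M, 1.24 GB and 1.74 GB for Q8_0).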

What is the best quantization for Llama 3.2 1B Instruct?

Q4_K_M offers the best balance of quality and VRAM usage. Q8_0 is near-lossless if you have enough VRAM.