SmolLM2 1.7B
A capable 1.7B-parameter model from HuggingFace, offering a good balance of quality and resource usage for mobile and other constrained devices.
1.7B parameters · smollm · apache-2.0 · 8K context · 1.48GB - 2.2GB VRAM
Quantization Options
| Quantization | Bits | File Size | VRAM Needed | RAM Needed | Quality |
|---|---|---|---|---|---|
| Q4_K_M | 4.5 | 0.983 GB | 1.48 GB | 1.98 GB | 85% |
| Q8_0 | 8 | 1.695 GB | 2.2 GB | 2.7 GB | 98% |
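As a quick illustration of how these quantized builds are typically run, here is a minimal sketch using huggingface_hub and llama-cpp-python. The repository ID and GGUF filename below are placeholders, not confirmed paths; check the model's Hugging Face page for the actual GGUF uploads.

```python
# Minimal sketch: run a quantized SmolLM2 1.7B GGUF with llama-cpp-python.
# NOTE: repo_id and filename are illustrative placeholders; verify the actual
# GGUF repository and file names on Hugging Face before running.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download the Q4_K_M build (~0.98 GB file, ~1.48 GB VRAM per the table above).
model_path = hf_hub_download(
    repo_id="HuggingFaceTB/SmolLM2-1.7B-Instruct-GGUF",  # placeholder repo id
    filename="smollm2-1.7b-instruct-q4_k_m.gguf",        # placeholder filename
)

# n_ctx=8192 matches the model's 8K context window; n_gpu_layers=-1 offloads
# all layers to the GPU.
llm = Llama(model_path=model_path, n_ctx=8192, n_gpu_layers=-1)

out = llm("Summarize why small language models matter:", max_tokens=128)
print(out["choices"][0]["text"])
```

With n_gpu_layers=0 the model runs entirely on the CPU, in which case the RAM column above is the relevant figure.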
See It In Action
Example outputs are generated by real AI models via RunThisModel.com and streamed in real time. The generation speed shown reflects cloud inference; local speeds will vary with your hardware.
Frequently Asked Questions
How much VRAM do I need to run SmolLM2 1.7B?
SmolLM2 1.7B requires about 1.48GB of VRAM at minimum with Q4_K_M quantization. The near-lossless Q8_0 quantization needs about 2.2GB of VRAM.
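The table's figures appear to follow a simple rule of thumb: GGUF file size plus roughly 0.5 GB for VRAM and roughly 1 GB for system RAM. The sketch below encodes that inferred estimate; it is an observation about the numbers above, not an official sizing formula.

```python
# Rough VRAM/RAM estimate inferred from the quantization table above.
# This is a rule-of-thumb sketch, not an official sizing formula.
def estimate_requirements(file_size_gb: float) -> tuple[float, float]:
    """Return (vram_gb, ram_gb) estimates for a given GGUF file size."""
    vram_gb = file_size_gb + 0.5  # weights in VRAM plus KV-cache/runtime overhead
    ram_gb = file_size_gb + 1.0   # extra headroom for CPU-side buffers
    return vram_gb, ram_gb

# Q4_K_M file is 0.983 GB -> roughly 1.48 GB VRAM and 1.98 GB RAM
# Q8_0 file is 1.695 GB   -> roughly 2.2 GB VRAM and 2.7 GB RAM
print(estimate_requirements(0.983))
print(estimate_requirements(1.695))
```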
What is the best quantization for SmolLM2 1.7B?
Q4_K_M offers the best balance of quality and VRAM usage. Q8_0 is near-lossless if you have enough VRAM.
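As a concrete version of that advice, here is a small illustrative helper that picks a quantization based on free VRAM, using the thresholds from the table above; the function name and structure are assumptions, not part of any official tooling.

```python
# Pick a quantization for SmolLM2 1.7B from the table above, based on free VRAM.
# Thresholds come straight from the table; this is an illustrative helper only.
def pick_quantization(free_vram_gb: float) -> str | None:
    if free_vram_gb >= 2.2:
        return "Q8_0"    # near-lossless (~98% quality), needs 2.2 GB VRAM
    if free_vram_gb >= 1.48:
        return "Q4_K_M"  # best quality/VRAM balance (~85% quality), needs 1.48 GB VRAM
    return None          # not enough VRAM; consider CPU inference or a smaller model

print(pick_quantization(4.0))  # -> "Q8_0"
print(pick_quantization(1.6))  # -> "Q4_K_M"
```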