AI Models

Browse 20+ AI models with detailed hardware requirements and compatibility information.

Featured

Meta

Llama 3.2 1B Instruct

Ultra-compact 1B model. Runs on virtually any device including smartphones.

1.24B params · llama · llama3.2 · 1.3GB+ VRAM
Featured

Meta

Llama 3.2 3B Instruct

Meta's compact 3B model designed for edge and mobile deployment.

3.2B params · llama · llama3.2 · 2.6GB+ VRAM
Featured

Microsoft

Phi-3.5 Mini 3.8B

Tiny but capable 3.8B model. Runs on almost any hardware including phones.

3.8B params · phi3 · mit · 3GB+ VRAM
Featured

Mistral AI

Mistral 7B Instruct v0.3

Efficient 7B model from Mistral AI with strong performance for its size.

7.3B params · mistral · apache-2.0 · 5GB+ VRAM
Featured

Alibaba

Qwen 2.5 7B Instruct

Efficient 7B model with strong coding and reasoning abilities. Great for local deployment.

7.6B params · qwen2 · apache-2.0 · 5.3GB+ VRAM
Featured

Meta

Llama 3.1 8B Instruct

Meta's 8B parameter instruction-tuned model. Great balance of performance and efficiency for local deployment.

8B params · llama · llama3.1 · 5.5GB+ VRAM
Featured

DeepSeek

DeepSeek R1 Distill 8B

Compact reasoning model distilled from DeepSeek R1. Strong step-by-step reasoning in a small package.

8B params · llama · mit · 5.5GB+ VRAM
Featured

Google

Gemma 2 9B Instruct

Google's efficient 9B model. Great performance-to-size ratio.

9.2B params · gemma2 · gemma · 6.2GB+ VRAM
Featured

Microsoft

Phi-4

Microsoft's 14B parameter model. Punches well above its weight class on reasoning benchmarks.

14B params · phi3 · mit · 9.5GB+ VRAM
Featured

Mistral AI

Codestral 22B

Mistral's dedicated code generation model. Supports 80+ programming languages.

22B params · mistral · mnpl · 14GB+ VRAM
Featured

Google

Gemma 2 27B Instruct

Google's 27B instruction-tuned model. Strong general performance with efficient architecture.

27B params · gemma2 · gemma · 17GB+ VRAM
Featured

Alibaba

Qwen 2.5 Coder 32B

Specialized coding model. One of the best open-source coding assistants available.

32B params · qwen2 · apache-2.0 · 20GB+ VRAM
Featured

DeepSeek

DeepSeek R1 Distill 32B

Reasoning-focused model distilled from DeepSeek R1. Excellent at complex problem-solving.

32B params · qwen2 · mit · 20GB+ VRAM
Featured

Mistral AI

Mixtral 8x7B Instruct

Mixture-of-experts model with 8 experts of 7B each (roughly 13B parameters active per token). Excellent quality at moderate resource requirements.

46.7B params · mixtral · apache-2.0 · 28GB+ VRAM
Featured

Meta

Llama 3.1 70B Instruct

Meta's flagship 70B parameter model. Excellent performance rivaling GPT-4 on many benchmarks.

70B params · llama · llama3.1 · 42GB+ VRAM
Featured

Meta

Llama 3.3 70B Instruct

Meta's latest 70B model, with improved reasoning and multilingual capabilities over Llama 3.1.

70B params · llama · llama3.3 · 42GB+ VRAM
Featured

Alibaba

Qwen 2.5 72B Instruct

Alibaba's top-tier 72B model. Excellent at coding, math, and multilingual tasks.

72B params · qwen2 · apache-2.0 · 44GB+ VRAM
Featured

Cohere

Command R+ 104B

Cohere's 104B parameter model. Excellent for RAG, tool use, and enterprise applications.

104B params · cohere · cc-by-nc-4.0 · 62GB+ VRAM
Featured

DeepSeek

DeepSeek V3

DeepSeek's latest flagship model with 685B total parameters (37B active via MoE). State-of-the-art performance.

685B params · deepseek · deepseek · 390GB+ VRAM

01.AI

Yi 1.5 34B Chat

Strong 34B model from 01.AI with good multilingual and reasoning performance.

34B params · yi · apache-2.0 · 21GB+ VRAM
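The VRAM figures in this catalog roughly track a 4-bit-quantized weight footprint plus runtime overhead (roughly 0.6 GB per billion parameters). A minimal back-of-envelope sketch of that estimate, assuming 4-bit weights and an overhead factor of 1.2 for KV cache, activations, and buffers (both numbers are assumptions fitted to the figures above, not part of the catalog):

```python
def estimate_vram_gb(params_billions: float,
                     bits_per_param: float = 4.0,
                     overhead: float = 1.2) -> float:
    """Rough VRAM estimate for running a quantized model locally.

    params_billions: total parameter count in billions.
    bits_per_param:  quantization width (4.0 ~= Q4 quantization).
    overhead:        multiplier covering KV cache, activations, buffers.
    """
    # 1B params at 8 bits is ~1 GB of weights, so scale by bits / 8.
    weight_gb = params_billions * bits_per_param / 8.0
    return round(weight_gb * overhead, 1)

# Sanity checks against the catalog's figures:
print(estimate_vram_gb(70))   # 42.0 -- matches the 70B entries (42GB+)
print(estimate_vram_gb(8))    # 4.8  -- catalog lists 5.5GB+ for the 8B entries
```

Note this only estimates total parameters loaded into memory; for MoE models like Mixtral 8x7B or DeepSeek V3, all experts must fit in VRAM even though only a fraction of parameters are active per token, which is why their requirements track total rather than active parameter counts.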