DeepSeek: R1 Distill Llama 70B

deepseek/deepseek-r1-distill-llama-70b

Created Jan 23, 2025 · 131,072-token context
$0.03/M input tokens · $0.11/M output tokens

DeepSeek R1 Distill Llama 70B is a distilled large language model based on Llama-3.3-70B-Instruct, fine-tuned on outputs from DeepSeek R1. The distillation transfers R1's reasoning ability to the smaller base model, yielding strong results across multiple benchmarks, including:

  • AIME 2024 pass@1: 70.0
  • MATH-500 pass@1: 94.5
  • CodeForces Rating: 1633

Fine-tuning on DeepSeek R1's outputs gives the model performance competitive with much larger frontier models.
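The model can be called through OpenRouter's OpenAI-compatible chat completions endpoint using the slug above. A minimal sketch, assuming the API key is supplied via the `OPENROUTER_API_KEY` environment variable (the key handling and prompt are illustrative, not part of this page):

```python
import json
import os
import urllib.request

def build_request(prompt: str) -> dict:
    # Chat completion payload in the OpenAI-compatible schema,
    # targeting the model slug listed on this page.
    return {
        "model": "deepseek/deepseek-r1-distill-llama-70b",
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_request("Prove that the square root of 2 is irrational.")

# Send the request only when a key is configured; otherwise the
# payload can still be inspected locally.
api_key = os.environ.get("OPENROUTER_API_KEY")
if api_key:
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])
```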

Recent activity on R1 Distill Llama 70B

Total usage per day on OpenRouter

  • Prompt: 6.01M
  • Reasoning: 3.82M
  • Completion: 694K

Prompt tokens measure input size. Reasoning tokens show internal thinking before a response. Completion tokens reflect total output length.
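Given the listed rates, per-request cost can be estimated from the token counts. A minimal sketch, assuming reasoning tokens are billed at the output rate alongside completion tokens (the example token counts are hypothetical):

```python
# Listed rates for this model: $0.03 per million input tokens,
# $0.11 per million output tokens.
INPUT_RATE = 0.03 / 1_000_000   # USD per prompt token
OUTPUT_RATE = 0.11 / 1_000_000  # USD per output token

def estimate_cost(prompt_tokens: int, reasoning_tokens: int,
                  completion_tokens: int) -> float:
    """Estimate USD cost; assumes reasoning tokens bill as output."""
    return (prompt_tokens * INPUT_RATE
            + (reasoning_tokens + completion_tokens) * OUTPUT_RATE)

# Example: 2,000 prompt tokens, 1,500 reasoning tokens,
# 500 completion tokens.
cost = estimate_cost(2_000, 1_500, 500)
print(f"${cost:.6f}")  # → $0.000280
```

For reasoning models like this one, the hidden thinking often dominates output cost, as the usage breakdown above (3.82M reasoning vs. 694K completion tokens) suggests.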