# DeepSeek-R1

- 1. Introduction
- 2. Model Summary
- 3. Model Downloads
  - DeepSeek-R1 Models
  - DeepSeek-R1-Distill Models
- 4. Evaluation Results
  - DeepSeek-R1-Evaluation
  - Distilled Model Evaluation
- 5. Chat Website & API Platform
- 6. How to Run Locally
  - DeepSeek-R1 Models
  - DeepSeek-R1-Distill Models
- 7. License
- 8. Citation
- 9. Contact

Homepage | Chat | Hugging Face | Discord | Wechat | Twitter | Code License | Model License | Paper Link

## 1. Introduction

We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrates remarkable performance on reasoning tasks. Through RL, numerous powerful and interesting reasoning behaviors emerged naturally in DeepSeek-R1-Zero. However, the model encounters challenges such as endless repetition, poor readability, and language mixing. To address these issues and further enhance reasoning performance, we introduce DeepSeek-R1, which incorporates cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1 across math, code, and reasoning tasks. To support the research community, we have open-sourced DeepSeek-R1-Zero, DeepSeek-R1, and six dense models distilled from DeepSeek-R1 based on Llama and Qwen. DeepSeek-R1-Distill-Qwen-32B outperforms OpenAI-o1-mini across various benchmarks, achieving new state-of-the-art results for dense models.

*(Benchmark results figure)*

## 2. Model Summary

**Post-Training: Large-Scale Reinforcement Learning on the Base Model**

- We directly apply reinforcement learning (RL) to the base model without relying on supervised fine-tuning (SFT) as a preliminary step. This approach allows the model to explore chain-of-thought (CoT) reasoning for solving complex problems, resulting in the development of DeepSeek-R1-Zero. DeepSeek-R1-Zero demonstrates capabilities such as self-verification, reflection, and generating long CoTs, marking a significant milestone for the research community. Notably, it is the first open research to validate that the reasoning capabilities of LLMs can be incentivized purely through RL, without the need for SFT. This breakthrough paves the way for future advancements in this area.
- We introduce our pipeline to develop DeepSeek-R1. The pipeline incorporates two RL stages, aimed at discovering improved reasoning patterns and aligning with human preferences, and two SFT stages, which serve as the seed for the model's reasoning and non-reasoning capabilities. We believe this pipeline will benefit the industry by creating better models.
**Distillation: Smaller Models Can Be Powerful Too**

- We demonstrate that the reasoning patterns of larger models can be distilled into smaller models, yielding better performance than the reasoning patterns discovered through RL on small models directly. The open-sourced DeepSeek-R1, as well as its API, will help the research community distill better smaller models in the future.
- Using the reasoning data generated by DeepSeek-R1, we fine-tuned several dense models that are widely used in the research community. The evaluation results demonstrate that the distilled smaller dense models perform exceptionally well on benchmarks. We open-source distilled 1.5B, 7B, 8B, 14B, 32B, and 70B checkpoints based on the Qwen2.5 and Llama3 series.

## 3. Model Downloads

### DeepSeek-R1 Models

| Model | #Total Params | #Activated Params | Context Length | Download |
| :--- | :--- | :--- | :--- | :--- |
| DeepSeek-R1-Zero | 671B | 37B | 128K | HuggingFace |
| DeepSeek-R1 | 671B | 37B | 128K | HuggingFace |

DeepSeek-R1-Zero and DeepSeek-R1 are trained based on DeepSeek-V3-Base. For more details regarding the model architecture, please refer to the DeepSeek-V3 repository.

### DeepSeek-R1-Distill Models

| Model | Base Model | Download |
| :--- | :--- | :--- |
| DeepSeek-R1-Distill-Qwen-1.5B | Qwen2.5-Math-1.5B | HuggingFace |
| DeepSeek-R1-Distill-Qwen-7B | Qwen2.5-Math-7B | HuggingFace |
| DeepSeek-R1-Distill-Llama-8B | Llama-3.1-8B | HuggingFace |
| DeepSeek-R1-Distill-Qwen-14B | Qwen2.5-14B | HuggingFace |
| DeepSeek-R1-Distill-Qwen-32B | Qwen2.5-32B | HuggingFace |
| DeepSeek-R1-Distill-Llama-70B | Llama-3.3-70B-Instruct | HuggingFace |

DeepSeek-R1-Distill models are fine-tuned from open-source base models using samples generated by DeepSeek-R1. We slightly changed their configs and tokenizers; please use our settings to run these models.

## 4. Evaluation Results

### DeepSeek-R1-Evaluation

For all our models, the maximum generation length is set to 32,768 tokens. For benchmarks requiring sampling, we use a temperature of 0.6, a top-p value of 0.95, and generate 64 responses per query to estimate pass@1.
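Under this protocol, pass@1 for a benchmark is the fraction of correct responses among the 64 samples for each query, averaged over all queries. A minimal sketch of that estimator (illustrative code, not the authors' evaluation harness):

```python
from statistics import mean

def pass_at_1(per_query_correct: list[list[bool]]) -> float:
    """Estimate pass@1: the fraction of sampled responses that are
    correct for each query, averaged across queries.

    per_query_correct[i][j] is True iff sample j for query i is correct
    (the card uses 64 samples per query).
    """
    return mean(mean(samples) for samples in per_query_correct)

# Toy usage with 4 samples per query instead of 64:
scores = [[True, True, False, True],    # query 1: 3/4 correct
          [False, False, True, False]]  # query 2: 1/4 correct
print(pass_at_1(scores))  # 0.5
```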
| Category | Benchmark (Metric) | Claude-3.5-Sonnet-1022 | GPT-4o-0513 | DeepSeek-V3 | OpenAI o1-mini | OpenAI o1-1217 | DeepSeek-R1 |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| | Architecture | - | - | MoE | - | - | MoE |
| | # Activated Params | - | - | 37B | - | - | 37B |
| | # Total Params | - | - | 671B | - | - | 671B |
| English | MMLU (Pass@1) | 88.3 | 87.2 | 88.5 | 85.2 | 91.8 | 90.8 |
| | MMLU-Redux (EM) | 88.9 | 88.0 | 89.1 | 86.7 | - | 92.9 |
| | MMLU-Pro (EM) | 78.0 | 72.6 | 75.9 | 80.3 | - | 84.0 |
| | DROP (3-shot F1) | 88.3 | 83.7 | 91.6 | 83.9 | 90.2 | 92.2 |
| | IF-Eval (Prompt Strict) | 86.5 | 84.3 | 86.1 | 84.8 | - | 83.3 |
| | GPQA-Diamond (Pass@1) | 65.0 | 49.9 | 59.1 | 60.0 | 75.7 | 71.5 |
| | SimpleQA (Correct) | 28.4 | 38.2 | 24.9 | 7.0 | 47.0 | 30.1 |
| | FRAMES (Acc.) | 72.5 | 80.5 | 73.3 | 76.9 | - | 82.5 |
| | AlpacaEval2.0 (LC-winrate) | 52.0 | 51.1 | 70.0 | 57.8 | - | 87.6 |
| | ArenaHard (GPT-4-1106) | 85.2 | 80.4 | 85.5 | 92.0 | - | 92.3 |
| Code | LiveCodeBench (Pass@1-COT) | 33.8 | 34.2 | - | 53.8 | 63.4 | 65.9 |
| | Codeforces (Percentile) | 20.3 | 23.6 | 58.7 | 93.4 | 96.6 | 96.3 |
| | Codeforces (Rating) | 717 | 759 | 1134 | 1820 | 2061 | 2029 |
| | SWE Verified (Resolved) | 50.8 | 38.8 | 42.0 | 41.6 | 48.9 | 49.2 |
| | Aider-Polyglot (Acc.) | 45.3 | 16.0 | 49.6 | 32.9 | 61.7 | 53.3 |
| Math | AIME 2024 (Pass@1) | 16.0 | 9.3 | 39.2 | 63.6 | 79.2 | 79.8 |
| | MATH-500 (Pass@1) | 78.3 | 74.6 | 90.2 | 90.0 | 96.4 | 97.3 |
| | CNMO 2024 (Pass@1) | 13.1 | 10.8 | 43.2 | 67.6 | - | 78.8 |
| Chinese | CLUEWSC (EM) | 85.4 | 87.9 | 90.9 | 89.9 | - | 92.8 |
| | C-Eval (EM) | 76.7 | 76.0 | 86.5 | 68.9 | - | 91.8 |
| | C-SimpleQA (Correct) | 55.4 | 58.7 | 68.0 | 40.3 | - | 63.7 |

### Distilled Model Evaluation

| Model | AIME 2024 pass@1 | AIME 2024 cons@64 | MATH-500 pass@1 | GPQA Diamond pass@1 | LiveCodeBench pass@1 | CodeForces rating |
| :--- | :--- | :--- | :--- | :--- | :--- | :--- |
| GPT-4o-0513 | 9.3 | 13.4 | 74.6 | 49.9 | 32.9 | 759 |
| Claude-3.5-Sonnet-1022 | 16.0 | 26.7 | 78.3 | 65.0 | 38.9 | 717 |
| o1-mini | 63.6 | 80.0 | 90.0 | 60.0 | 53.8 | 1820 |
| QwQ-32B-Preview | 44.0 | 60.0 | 90.6 | 54.5 | 41.9 | 1316 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.9 | 52.7 | 83.9 | 33.8 | 16.9 | 954 |
| DeepSeek-R1-Distill-Qwen-7B | 55.5 | 83.3 | 92.8 | 49.1 | 37.6 | 1189 |
| DeepSeek-R1-Distill-Qwen-14B | 69.7 | 80.0 | 93.9 | 59.1 | 53.1 | 1481 |
| DeepSeek-R1-Distill-Qwen-32B | 72.6 | 83.3 | 94.3 | 62.1 | 57.2 | 1691 |
| DeepSeek-R1-Distill-Llama-8B | 50.4 | 80.0 | 89.1 | 49.0 | 39.6 | 1205 |
| DeepSeek-R1-Distill-Llama-70B | 70.0 | 86.7 | 94.5 | 65.2 | 57.5 | 1633 |

## 5. Chat Website & API Platform

You can chat with DeepSeek-R1 on DeepSeek's official website, chat.deepseek.com, by switching on the "DeepThink" button.

We also provide an OpenAI-compatible API at the DeepSeek Platform: platform.deepseek.com.

## 6. How to Run Locally

### DeepSeek-R1 Models

Please visit the DeepSeek-V3 repo for more information about running DeepSeek-R1 locally.

### DeepSeek-R1-Distill Models

DeepSeek-R1-Distill models can be used in the same manner as Qwen or Llama models. For instance, you can easily start a service using vLLM:

```bash
vllm serve deepseek-ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 2 --max-model-len 32768 --enforce-eager
```

NOTE: We recommend setting an appropriate temperature (between 0.5 and 0.7) when running these models; otherwise you may encounter endless repetition or incoherent output. A minimal client sketch for querying such a server appears at the end of this card.

## 7. License

This code repository and the model weights are licensed under the MIT License. The DeepSeek-R1 series supports commercial use and allows any modifications and derivative works, including, but not limited to, distillation for training other LLMs. Please note that:

- DeepSeek-R1-Distill-Qwen-1.5B, DeepSeek-R1-Distill-Qwen-7B, DeepSeek-R1-Distill-Qwen-14B, and DeepSeek-R1-Distill-Qwen-32B are derived from the Qwen2.5 series, which is originally licensed under the Apache 2.0 License, and are fine-tuned with 800k samples curated with DeepSeek-R1.
- DeepSeek-R1-Distill-Llama-8B is derived from Llama3.1-8B-Base and is originally licensed under the llama3.1 license.
- DeepSeek-R1-Distill-Llama-70B is derived from Llama3.3-70B-Instruct and is originally licensed under the llama3.3 license.

## 8. Citation

## 9. Contact

If you have any questions, please raise an issue or contact us at service@deepseek.com.
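As referenced in Section 6, here is a minimal client sketch for querying a vLLM server started with the command above. It assumes vLLM's default OpenAI-compatible endpoint at http://localhost:8000/v1; the prompt and token budget are illustrative.

```python
from openai import OpenAI

# vLLM exposes an OpenAI-compatible API; the key is required by the
# client but ignored by a default local vLLM server.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    messages=[{"role": "user", "content": "What is 17 * 24? Explain briefly."}],
    temperature=0.6,  # within the recommended 0.5-0.7 range
    max_tokens=8192,  # leave room for the model's long chain of thought
)
print(response.choices[0].message.content)
```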