# Stable LM 2 12B

## Model Description

Stable LM 2 12B is a 12.1 billion parameter decoder-only language model pre-trained for two epochs on 2 trillion tokens of diverse multilingual and code data.

## Usage

Get started generating text with Stable LM 2 12B by using the following code snippet:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-2-12b")
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablelm-2-12b",
    torch_dtype="auto",
    trust_remote_code=True,
)
model.cuda()
inputs = tokenizer("The weather is always wonderful", return_tensors="pt").to(model.device)
tokens = model.generate(
    **inputs,
    max_new_tokens=64,
    temperature=0.70,
    top_p=0.95,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```

### Run with Flash Attention 2

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablelm-2-12b")
model = AutoModelForCausalLM.from_pretrained(
    "stabilityai/stablelm-2-12b",
    torch_dtype="auto",
    attn_implementation="flash_attention_2",
    trust_remote_code=True,
)
model.cuda()
inputs = tokenizer("The weather is always wonderful", return_tensors="pt").to(model.device)
tokens = model.generate(
    **inputs,
    max_new_tokens=64,
    temperature=0.70,
    top_p=0.95,
    do_sample=True,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```

## Model Details

* **Developed by**: Stability AI
* **Model type**: Stable LM 2 12B models are auto-regressive language models based on the transformer decoder architecture.
* **Language(s)**: English
* **Paper**: Stable LM 2 Technical Report
* **Library**: GPT-NeoX
* **License**: Stability AI Non-Commercial Research Community License. If you'd like to use this model for commercial products or purposes, please contact us here to learn more.
* **Contact**: For questions and comments about the model, please email `lm@stability.ai`

### Model Architecture

The model is a decoder-only transformer with the following architecture:

| Parameters     | Hidden Size | Layers | Heads | KV Heads | Sequence Length |
|----------------|-------------|--------|-------|----------|-----------------|
| 12,143,605,760 | 5120        | 40     | 32    | 8        | 4096            |

* **Position Embeddings**: Rotary Position Embeddings (Su et al., 2021) applied to the first 25% of head embedding dimensions for improved throughput following Black et al. (2022); a minimal sketch follows this list.
* **Parallel Layers**: Parallel attention and feed-forward residual layers with a single input LayerNorm (Wang, 2021).
* **Normalization**: LayerNorm (Ba et al., 2016) without biases. Furthermore, we apply per-head QK normalization (Dehghani et al., 2023; Wortsman et al., 2023).
* **Biases**: We remove all bias terms from the feed-forward networks and grouped-query self-attention layers.
* **Tokenizer**: We use Arcade100k, a BPE tokenizer extended from OpenAI's `tiktoken.cl100k_base`. We split digits into individual tokens following findings by Liu & Low (2023); see the short tokenizer check after this list.
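To make the partial-rotary scheme concrete, here is a minimal PyTorch sketch: only the first 25% of each head's dimensions are rotated, the rest pass through unchanged. This is illustrative only, not the model's actual implementation (that lives in the repository's custom modeling code); the helper names and the NeoX-style rotation convention are assumptions, and the shapes are chosen to match the table above (head dimension 5120 / 32 = 160).

```python
# Illustrative sketch of partial rotary position embeddings (NOT the model's
# actual code): rotate only the first `rotary_pct` fraction of each head.
import torch


def rotate_half(x: torch.Tensor) -> torch.Tensor:
    # Standard RoPE helper: (x1, x2) -> (-x2, x1) along the last dimension.
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)


def apply_partial_rotary(
    x: torch.Tensor,            # (batch, heads, seq_len, head_dim)
    positions: torch.Tensor,    # (seq_len,)
    rotary_pct: float = 0.25,   # fraction of head dims that get rotated
    base: float = 10000.0,
) -> torch.Tensor:
    head_dim = x.shape[-1]
    rot_dim = int(head_dim * rotary_pct)          # e.g. 160 * 0.25 = 40 dims
    x_rot, x_pass = x[..., :rot_dim], x[..., rot_dim:]

    # Frequencies for the rotated slice only.
    inv_freq = 1.0 / (base ** (torch.arange(0, rot_dim, 2, dtype=torch.float32) / rot_dim))
    freqs = torch.outer(positions.float(), inv_freq)   # (seq_len, rot_dim / 2)
    emb = torch.cat((freqs, freqs), dim=-1)            # (seq_len, rot_dim)
    cos, sin = emb.cos(), emb.sin()

    x_rot = x_rot * cos + rotate_half(x_rot) * sin
    # Unrotated dimensions are concatenated back untouched.
    return torch.cat((x_rot, x_pass), dim=-1)


# Example: 32 query heads, head_dim = 160, a short sequence of 8 positions.
q = torch.randn(1, 32, 8, 160)
q_pos = apply_partial_rotary(q, torch.arange(8))
print(q_pos.shape)  # torch.Size([1, 32, 8, 160])
```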
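The digit-splitting behaviour of Arcade100k can be inspected directly by loading the tokenizer that ships with the model repository (it requires `trust_remote_code`). This is just a quick check; the exact token strings it prints are not guaranteed.

```python
# Quick look at how the Arcade100k tokenizer segments digits.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "stabilityai/stablelm-2-12b", trust_remote_code=True
)

text = "The answer is 12345."
ids = tokenizer(text)["input_ids"]
# With digits split into individual tokens, "12345" should decompose into
# five single-digit tokens rather than one multi-digit token.
print([tokenizer.decode([i]) for i in ids])
```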
## Training

### Training Dataset

The dataset comprises a filtered mixture of open-source large-scale datasets available on the HuggingFace Hub: Falcon RefinedWeb extract (Penedo et al., 2023), RedPajama-Data (Together Computer, 2023) and The Pile (Gao et al., 2020), both without the Books3 subset, and StarCoder (Li et al., 2023). We further supplement our training with multilingual data from CulturaX (Nguyen et al., 2023) and, in particular, from its OSCAR corpora, as well as restructured data in the style of Yuan & Liu (2022).

* Given the large amount of web data, we recommend fine-tuning the base Stable LM 2 12B for your downstream tasks.

### Training Procedure

The model is pre-trained on the aforementioned datasets in `bfloat16` precision, optimized with AdamW, and trained using the Arcade100k tokenizer with a vocabulary size of 100,352. We outline the complete hyperparameter choices in the project's GitHub repository - config*.

### Training Infrastructure

* **Hardware**: Stable LM 2 12B was trained on the Stability AI cluster across 384 NVIDIA H100 GPUs (AWS P5 instances).
* **Software**: We use a fork of `gpt-neox` (EleutherAI, 2021), train under 2D parallelism (Data and Tensor Parallel) with ZeRO-1 (Rajbhandari et al., 2019), and rely on flash-attention as well as SwiGLU and Rotary Embedding kernels from FlashAttention-2 (Dao et al., 2023).

## Use and Limitations

### Intended Use

The model is intended to be used as a foundational base model for application-specific fine-tuning. Developers must evaluate and fine-tune the model for safe performance in downstream applications.

### Limitations and Bias

As a base model, this model may exhibit unreliable, unsafe, or other undesirable behaviors that must be corrected through evaluation and fine-tuning prior to deployment. The pre-training dataset may have contained offensive or inappropriate content, even after applying data cleansing filters, which can be reflected in model-generated text. We recommend that users exercise caution when using these models in production systems. Do not use the models if they are unsuitable for your application, or for any applications that may cause deliberate or unintentional harm to others.

## How to Cite

```bibtex
@article{bellagente2024stable,
  title={Stable LM 2 1.6B Technical Report},
  author={Bellagente, Marco and Tow, Jonathan and Mahan, Dakota and Phung, Duy and Zhuravinskyi, Maksym and Adithyan, Reshinth and Baicoianu, James and Brooks, Ben and Cooper, Nathan and Datta, Ashish and others},
  journal={arXiv preprint arXiv:2402.17834},
  year={2024}
}
```