https://github.com/jmorganca/ollama
# Ollama

> Note: Ollama is in early preview. Please report any issues you find.

Run, create, and share large language models (LLMs).

## Download

- Download for macOS on Apple Silicon (Intel coming soon)
- Download for Windows and Linux (coming soon)
- Build from source

## Quickstart

To run and chat with Llama 2, the new model by Meta:

```
ollama run llama2
```

## Model library

`ollama` includes a library of open-source models:

| Model                    | Parameters | Size  | Download                    |
| ------------------------ | ---------- | ----- | --------------------------- |
| Llama2                   | 7B         | 3.8GB | `ollama pull llama2`        |
| Llama2 13B               | 13B        | 7.3GB | `ollama pull llama2:13b`    |
| Orca Mini                | 3B         | 1.9GB | `ollama pull orca`          |
| Vicuna                   | 7B         | 3.8GB | `ollama pull vicuna`        |
| Nous-Hermes              | 13B        | 7.3GB | `ollama pull nous-hermes`   |
| Wizard Vicuna Uncensored | 13B        | 7.3GB | `ollama pull wizard-vicuna` |

> Note: You should have at least 8 GB of RAM to run the 3B models, 16 GB to run the 7B models, and 32 GB to run the 13B models.

## Examples

### Run a model

```
ollama run llama2
>>> hi
Hello! How can I help you today?
```

### Create a custom model

Pull a base model:

```
ollama pull llama2
```

Create a `Modelfile`:

```
FROM llama2

# set the temperature to 1 [higher is more creative, lower is more coherent]
PARAMETER temperature 1

# set the system prompt
SYSTEM """
You are Mario from Super Mario Bros. Answer as Mario, the assistant, only.
"""
```

Next, create and run the model:

```
ollama create mario -f ./Modelfile
ollama run mario
>>> hi
Hello! It's your friend Mario.
```

For more examples, see the `examples` directory.

### Pull a model from the registry

```
ollama pull orca
```

### Listing local models

```
ollama list
```

## Model packages

### Overview

Ollama bundles model weights, configuration, and data into a single package, defined by a Modelfile.

## Building

```
go build .
```

To run it, start the server:

```
./ollama serve &
```

Finally, run a model!
```
./ollama run llama2
```
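The `ollama` CLI talks to the running server over a local HTTP API, so you can also send it requests directly. The snippet below is a minimal sketch, assuming the default listen address of `127.0.0.1:11434` and a `/api/generate` route that accepts a model name and a prompt; both the port and the route are assumptions here, so check the `docs` directory for the actual API.

```
# Minimal sketch: ask the running server for a completion over HTTP.
# The port (11434) and the /api/generate route are assumptions; verify
# them against the docs directory before relying on this.
curl http://127.0.0.1:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Why is the sky blue?"
}'
```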
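The `Modelfile` in the "Create a custom model" example only sets a system prompt and the temperature. As a loose sketch of how other settings might be tuned the same way, the file below adds two more `PARAMETER` lines; the names `num_ctx` and `top_p` are assumptions borrowed from common llama.cpp-style options and may not match this version's Modelfile syntax, so check the `docs` directory before using them.

```
# Sketch of a Modelfile with extra sampling parameters.
# num_ctx and top_p are assumed names; verify them against the docs directory.
FROM llama2

PARAMETER temperature 0.8
PARAMETER num_ctx 2048
PARAMETER top_p 0.9

SYSTEM """
You are a concise assistant that answers in one or two sentences.
"""
```

As with the Mario example, build and run it with `ollama create <name> -f ./Modelfile` followed by `ollama run <name>`.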