Source: https://github.com/abi/secret-llama (abi/secret-llama, Apache-2.0 license, 840 stars, 32 forks)
# Secret Llama

Entirely-in-browser, fully private LLM chatbot supporting Llama 3, Mistral, and other open-source models. [secretllama.com](https://secretllama.com)

* Fully private: no conversation data ever leaves your computer
* Runs in the browser: no server needed and no install needed
* Works offline
* Easy-to-use interface on par with ChatGPT, but for open-source LLMs

Big thanks to webllm for providing the inference engine. Join us on Discord.

## System Requirements

To run this, you need a modern browser with support for WebGPU. According to caniuse, WebGPU is supported on:

* Google Chrome
* Microsoft Edge

It is also available in Firefox, but needs to be enabled manually through the `dom.webgpu.enabled` flag. Safari on macOS also has experimental support for WebGPU, which can be enabled through the WebGPU experimental feature.

In addition to WebGPU support, various models may have specific RAM requirements.

## Try it out

You can try it at [secretllama.com](https://secretllama.com).
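Since WebGPU availability varies by browser, it can be feature-detected before any model download starts. Below is a minimal sketch (a hypothetical helper, not code from this repo); the `nav` parameter stands in for the browser's `navigator` object so the check can also run outside a browser:

```typescript
// Hypothetical WebGPU feature-detection helper (not part of this repo).
// In a real page, call it as: await hasWebGPU(navigator)
type GPULike = { requestAdapter(): Promise<unknown | null> };

async function hasWebGPU(nav: { gpu?: GPULike }): Promise<boolean> {
  if (!nav.gpu) return false; // WebGPU API not exposed at all
  try {
    // requestAdapter() resolves to null when no suitable GPU is available
    const adapter = await nav.gpu.requestAdapter();
    return adapter !== null;
  } catch {
    return false; // treat adapter errors as "unsupported"
  }
}
```

In Chrome or Edge, `await hasWebGPU(navigator)` should resolve to `true` on supported hardware; in Firefox it stays `false` unless the `dom.webgpu.enabled` flag is set.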
## Compiling from source

To compile the React code yourself, download the repo and then run:

```
yarn
yarn build-and-preview
```

If you're looking to make changes, run the development environment with live reload:

```
yarn
yarn dev
```

## Supported models

| Model                               | Model Size |
| ----------------------------------- | ---------- |
| TinyLlama-1.1B-Chat-v0.4-q4f32_1-1k | 600MB      |
| Llama-3-8B-Instruct-q4f16_1         | 4.3GB      |
| Phi1.5-q4f16_1-1k                   | 1.2GB      |
| Mistral-7B-Instruct-v0.2-q4f16_1    | 4GB        |

## Looking for contributors

We would love contributions to improve the interface, support more models, speed up initial model loading time, and fix bugs.

## Other projects by the author

Check out screenshot to code and Pico, an AI-powered app builder.