https://github.com/nilsherzig/LLocalSearch

# LLocalSearch

License: Apache-2.0

## What it is

This is a completely locally running meta search engine using LLM Agents. The user can ask a question and the system will use a chain of LLMs to find the answer. The user can see the progress of the agents and the final answer. No OpenAI or Google API keys are needed.

Now with follow-up questions (see `demo.mp4` in the repository).

## Features

- Completely local (no need for API keys)
- Runs on "low-end" LLM hardware (the demo video uses a 7B model)
- Progress logs, allowing for a better understanding of the search process
- Follow-up questions
- Mobile-friendly interface
- Fast and easy to deploy with Docker Compose
- Web interface, allowing for easy access from any device
- Handcrafted UI with light and dark mode

## Status

This project is still in its very early days. Expect some bugs.

## How it works

Please see the `infra.drawio` diagram in the repository for the most up-to-date picture of the architecture.

## Self-hosting & Development

### Requirements

- A running Ollama server, reachable from the container
  - A GPU is not needed, but recommended
- Docker Compose
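Before bringing the stack up, it can help to confirm that your Ollama server is reachable at all. The check below is a minimal sketch assuming Ollama's default port 11434; `/api/tags` is Ollama's standard model-listing endpoint, but adjust the host to wherever your server actually runs:

```bash
# Replace localhost:11434 with your Ollama server's host:port.
# A JSON list of your pulled models means the server is reachable.
curl http://localhost:11434/api/tags
```

Note that from inside a container, `localhost` refers to the container itself; if Ollama runs on the Docker host, you will usually need `host.docker.internal` (Docker Desktop) or the host's LAN IP instead.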
### Run the latest release

Recommended if you don't intend to develop on this project.

```bash
git clone https://github.com/nilsherzig/LLocalSearch.git
cd ./LLocalSearch
# check the env vars inside the compose file and add your Ollama server's host:port
docker-compose up
```

You should now be able to open the web interface at http://localhost:3000. Nothing else is exposed by default.

### Run the current git version

Newer features, but potentially less stable.

```bash
git clone https://github.com/nilsherzig/LLocalSearch.git
cd ./LLocalSearch
# 1. Make sure to check the env vars inside `docker-compose.dev.yaml`.
# 2. Make sure you've really checked the dev compose file, not the normal one.
# 3. Build the containers and start the services:
make dev
# Both the frontend and the backend will hot-reload on code changes.
```

Now you should be able to access the frontend at http://localhost:3000.

If you don't have `make` installed, you can run the commands inside the Makefile manually; a rough equivalent is sketched below.
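The exact targets live in the repository's Makefile, so treat the following as an assumption rather than its verbatim contents; a `dev` target like this typically just wraps the dev compose file:

```bash
# Hypothetical expansion of `make dev` -- check the actual Makefile,
# since the real target may differ.
docker-compose -f docker-compose.dev.yaml up --build
```

Here `--build` rebuilds the images before starting the services, which matches the "build the containers and start the services" step above.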