[HN Gopher] Show HN: Reor - An AI note-taking app that runs mode...
___________________________________________________________________
Show HN: Reor - An AI note-taking app that runs models locally
Reor is an open-source AI note-taking app that runs models locally.
The four main things to know are:

1. Notes are connected automatically via vector search: you get
semantic search, and related notes are surfaced automatically.

2. You can do RAG Q&A on your notes using the local LLM of your
choice.

3. The embedding model, LLM, vector DB and files are all run or
stored locally.

4. Point it at a directory of markdown files (like an Obsidian
vault) and it works seamlessly alongside Obsidian.

Under the hood, Reor uses llama.cpp (via node-llama-cpp),
Transformers.js and LanceDB to power the local AI features. Reor was
built from the start to support local models. The future of knowledge
management involves using lots of AI to organize pieces of knowledge -
but crucially, that AI should run as much as possible privately and
locally. It's available for Mac, Windows and Linux on the project's
GitHub: https://github.com/reorproject/reor
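The retrieval side of that RAG flow can be sketched in a few lines.
This is an illustrative sketch only, not Reor's actual code (which
uses Transformers.js embeddings, LanceDB and node-llama-cpp); the
bag-of-words `embed` below is a stand-in for a real embedding model,
and `retrieve`/`build_prompt` are hypothetical helper names.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in for a real embedding model: a bag-of-words vector is
    # enough to illustrate the retrieval step, though unlike real
    # embeddings it only matches on shared tokens, not meaning.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, notes: list[str], k: int = 2) -> list[str]:
    # Rank all notes by similarity to the query, keep the top k.
    q = embed(query)
    ranked = sorted(notes, key=lambda n: cosine(q, embed(n)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, notes: list[str]) -> str:
    # Stuff the retrieved chunks into the local LLM's context.
    context = "\n---\n".join(retrieve(query, notes))
    return f"Answer using these notes:\n{context}\n\nQuestion: {query}"
```

With a real embedding model, `retrieve` matches on meaning rather
than shared words; the assembled prompt is then handed to the local
LLM for the Q&A answer.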
Author : samlhuillier
Score : 199 points
Date : 2024-02-14 17:00 UTC (5 hours ago)
| calebdre wrote:
| This is really cool! Something i've actually been thinking about
| for a while.
|
| Would you mind a pull request that spruces up the design a bit?
| samlhuillier wrote:
| Absolutely! Would love your help.
| calebdre wrote:
| is there a roadmap for features/improvements that you're
| wanting to make? what's your vision for the future of the
| app?
| frankcort wrote:
| Wow cool, can I import my One Note notebooks?!!??
| samlhuillier wrote:
| If you can convert your One Note notes to markdown files then
| yes. On startup, you'll be asked to choose your vault directory
| - which needs to be a directory full of markdown files.
| clscott wrote:
| You can use Obsidian to create markdown from One Note.
|
| https://help.obsidian.md/import/onenote
| fastball wrote:
| _Does_ the future of knowledge management involve using lots of
| AI to organize pieces of knowledge?
|
| I think "here be dragons", and that over-relying on AI to do all
| your organization for you will very possibly (probably?) cause
| you to become worse at thinking.
|
| No data to back this up because it is still early days in the
| proliferation of such tools, but historically making learning and
| thinking and "knowledge management" _more passive_ does not
| improve outcomes.
| samlhuillier wrote:
| I agree with this.
|
| In some cases, hard thinking and searching for things manually
| can really enhance understanding and build your knowledge.
|
| In other cases, particularly when ideating for example, you
| want to be given "inspiration" from other related ideas to
| build upon other ideas you've had previously.
|
| I think it's a mix of both - reaching for AI as and when you
| need it - but avoiding it intentionally at times as well.
| bhpm wrote:
| > I think "here be dragons", and that over-relying on AI to do
| all your organization for you will very possibly (probably?)
| cause you to become worse at thinking.
|
| Socrates said exactly this.
|
| But when they came to writing, Theuth said: "O King, here is
| something that, once learned, will make the Egyptians wiser and
| will improve their memory; I have discovered a potion for
| memory and for wisdom." Thamus, however, replied: "O most
| expert Theuth, one man can give birth to the elements of an
| art, but only another can judge how they can benefit or harm
| those who will use them. And now, since you are the father of
| writing, your affection for it has made you describe its
| effects as the opposite of what they really are. In fact, it
| will introduce forgetfulness into the soul of those who learn
| it: they will not practice using their memory because they will
| put their trust in writing, which is external and depends on
| signs that belong to others, instead of trying to remember from
| the inside, completely on their own. You have not discovered a
| potion for remembering, but for reminding; you provide your
| students with the appearance of wisdom, not with its reality.
| Your invention will enable them to hear many things without
| being properly taught, and they will imagine that they have
| come to know much while for the most part they will know
| nothing. And they will be difficult to get along with, since
| they will merely appear to be wise instead of really being so."
| OJFord wrote:
| > > I think "here be dragons", and that over-relying on AI
| [...]
|
| > Socrates said exactly this.
|
| I roughly recalled where you were going to go with that
| afterwards, but I couldn't help but 'spit take' at that given
| some of the quotes he does get credited with!
| davidy123 wrote:
| So if you only converse with LLMs (and never write or read
| anything), is the problem solved?
| ParetoOptimal wrote:
| I think you want to organize your own knowledge graph and then
| use the LLM to find novel connections or answer questions based
| upon it.
| wbogusz wrote:
| Great to see something like this actualized. I'm a huge fan of
| Obsidian and its graph based connections for note taking.
|
| I always see parallels drawn between Obsidian note structures
| and the whole "second brain" idea for personal knowledge
| management; it had seemed like a natural next step would be to
| implement note retrieval for intelligent references. Will have
| to check this out.
| dkarras wrote:
| I had been researching stuff related to this for some time.
| Interesting project! Why not an Obsidian plugin to tap into the
| ecosystem?
| cfcfcf wrote:
| Seconded. I like this idea but wouldn't want to trade the
| Obsidian UI. Would love to see something like this as a plugin.
| samlhuillier wrote:
| Two reasons:
|
| 1. The libraries I used to run models locally didn't work
| inside a plugin.
|
| 2. I believe AI is a fairly big paradigm shift that requires
| new software.
| SamBam wrote:
| So if I point this at my existing Obsidian library, what happens?
| Does this add to existing files, or add new files, to store the
| output of things generated by the AI? Does the chunking of the
| files only happen within the vector database? What if I later
| edit my files in Obsidian and only open Reor afterwards -- does
| the full chunking happen every time, or can it notice that only
| a few new files exist?
|
| Just wondering what the interaction might be for someone who uses
| Obsidian but might turn to this occasionally.
| samlhuillier wrote:
| It maps the filesystem 1:1 - basically the same thing Obsidian
| does when you open a vault. You can create new files with Reor,
| create directories and edit existing files. Chunking happens
| only in the vector DB, and everything is synced automatically,
| so you shouldn't notice anything if you reopen Reor after using
| Obsidian.
|
| In short, yes it'd work seamlessly if you wanted to use it
| occasionally.
| humbleferret wrote:
| Great job!
|
| I played around with this on a couple of small knowledge bases
| using an open Hermes model I had downloaded. The "related notes"
| feature didn't provide much value in my experience; often the
| link was so weak it was nonsensical. The Q&A mode was
| surprisingly helpful for querying notes and providing overviews,
| but asking anything specific typically just produced unhelpful
| or false answers. I'm sure this could be improved with a better
| model etc.
|
| As a concept, I strongly support the development of private,
| locally-run knowledge management tools. Ideally, these solutions
| should prioritise user data privacy and interoperability,
| allowing users to easily export and migrate their notes if a new
| service better fits their needs. Or better yet, be completely
| local, but have functionality for 'plugins' so a user can import
| their own models or combine plugins. A bit like how Obsidian[1]
| allows user-created plugins to enable similar functionality to
| Reor, such as the Obsidian-LLM[2] plugin.
|
| [1] https://obsidian.md/ [2]
| https://github.com/zatevakhin/obsidian-local-llm
| samlhuillier wrote:
| Thank you for your feedback!
|
| I'm working hard on improving the chunking to improve the
| related-notes section. RAG is fairly naive right now, with lots
| of improvements coming in the next few weeks.
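For reference, "naive" chunking of this kind usually means a
fixed-size split with overlap, roughly like the sketch below (an
illustration of the general technique, not Reor's actual chunker):

```python
def chunk(text: str, size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character windows with overlap,
    so content cut at one boundary still appears intact in the
    neighbouring chunk."""
    if size <= overlap:
        raise ValueError("size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + size])
        # Advance by less than the window size to create the overlap.
        start += size - overlap
    return chunks
```

Less naive schemes split on markdown headings or sentence boundaries
instead of raw character counts, which tends to produce more coherent
related-notes matches.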
| CrypticShift wrote:
| Some suggestions :
|
| - Create multiple independent "vaults" (like Obsidian).
|
| - Append links to related notes, so you can use (Obsidian's)
| graph view to map the AI connections.
|
| - "Minimize" the UI to just the chat window.
|
| - Read other formats (mainly pdfs).
|
| - Integrate with browser history/bookmarks (maybe just a script
| to manually import them as markdown ?)
|
| Thanks for Reor !
| samlhuillier wrote:
| Thanks for your feedback!
|
| - Multiple vaults is in fact in a PR right now:
| https://github.com/reorproject/reor/pull/28
|
| - Manual linking is coming.
|
| - Minimizing the UI to chat is interesting. Right now I guess
| you can drag chat to cover anything - but yes perhaps a toggle
| between two modes could be interesting.
|
| - Reading other formats is also in the pipeline. I just need to
| sort out the editor itself to support it. Perhaps PDFs would be
| embedded into the vector DB but not accessible to the editor.
|
| - Integrating with browser history and bookmarks is a big
| feature. Things like web clipping and bringing in context from
| different places are interesting...
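The browser-bookmarks suggestion above could indeed start as a small
standalone script. A rough sketch, assuming a Netscape-format
bookmarks HTML export (the file most browsers produce from "export
bookmarks"); the class and function names are hypothetical and none
of this is Reor code:

```python
from html.parser import HTMLParser

class BookmarkParser(HTMLParser):
    """Collect (title, url) pairs from a Netscape bookmarks export."""

    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None

    def handle_starttag(self, tag, attrs):
        # HTMLParser lowercases tag and attribute names for us.
        if tag == "a":
            self._href = dict(attrs).get("href")

    def handle_data(self, data):
        # The text immediately inside an <a> tag is the bookmark title.
        if self._href:
            self.links.append((data.strip(), self._href))
            self._href = None

def bookmarks_to_markdown(html: str) -> str:
    """Render each bookmark as a markdown list-item link."""
    parser = BookmarkParser()
    parser.feed(html)
    return "\n".join(f"- [{title}]({url})" for title, url in parser.links)
```

Writing the result into the vault as a markdown file would then let
the existing indexing pick the bookmarks up like any other note.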
| mcbetz wrote:
| Interesting project, wishing you all the best!
|
| If you are using Obsidian, Smart Connections v2 (1) also
| supports local embeddings and shows related notes based on
| semantic similarity.
|
| It's not super great on bi/multi-lingual vaults (DE + EN in my
| case), but it's improving rapidly and might soon support
| embedding models that cater for these cases as well.
|
| (1) https://github.com/brianpetro/obsidian-smart-connections
| gavmor wrote:
| Seems cool, but didn't utilize my GPU? At any rate, definitely a
| futuristic POC, and prototype for the way I see desktop software
| going in the next few years.
| samlhuillier wrote:
| Yes unfortunately not implemented yet. Will be coming soon
| though :)
| kepano wrote:
| This is a good reminder of why storing Obsidian notes as
| individual Markdown files is much more useful than stuffing those
| notes in a database and having Markdown as an export format. The
| direct manipulation of files allows multiple apps to coexist and
| do useful things on top of the same files.
| samlhuillier wrote:
| Absolutely! Really respect the work you folks are doing.
| michaelmior wrote:
| It's very possible to have multiple apps coexisting using a
| database. Although I'll certainly concede that it's probably a
| lot easier with just a bunch of Markdown files.
| kepano wrote:
| Sure, it's possible, but whichever app owns the database
| ultimately controls the data, the schema, etc. The file
| system provides a neutral database that all apps can
| cooperate within.
| toddmorey wrote:
| Yes it was one of the best product decisions y'all made. Been
| so useful to have direct access to the files and options on how
| my data is processed and backed up.
| Ringz wrote:
| That was the reason why I gave up Joplin very quickly. The last
| Joplin thread, here on Hacker News, has also shown once again
| that some still do not understand why "But Joplin can export
| Markdown from the database!" is not the same as simple, flat
| Markdown files.
| traverseda wrote:
| Yeah, that's also why I dropped it. Got too complicated when
| I wanted to start linking my notes into my work timesheets.
| erybodyknows wrote:
| May I ask what you switched to? Running into the same issue.
| xenodium wrote:
| I've got an iOS journaling app in beta. It's offline: no
| sign-in, no lock-in, nothing social, etc. It saves to plain
| text and syncs to your desktop if needed.
|
| https://xenodium.com/an-ios-journaling-app-powered-by-org-pl...
| bemusedthrow75 wrote:
| I think I struggle to see any application of LLMs for my notes
| that wouldn't, in practice, be just as easily implemented as a
| search facility.
|
| My main challenge with my notes (that I've been collecting for
| about 15 years) is remembering to consult them before I google.
|
| I suppose a unified interface to both my notes via LLM and
| internet search would help, but then I get that with my Apple
| Notes and the Mac's systemwide search, if I remember to use it.
| traverseda wrote:
| It's not the application of LLMs for your notes, it's the
| application of your notes for an LLM. Like if you're running a
| custom code-generation LLM, it could refer back to parts of
| your notes using retrieval-augmented generation to get some more
| context on the work you're having it do.
|
| But yes, a good application is probably a ways away. Still, LLM
| vector embeddings make a good search engine pretty easy to
| implement, especially if you're working with small sets of well
| curated data where exact keyword matching might not work great.
|
| Like if you search for "happy" you could get your happiest
| journal entries, even if none of them explicitly mention the
| word happy.
| davidy123 wrote:
| Super interesting project. I like its focus. Wondering if the
| author looked into CozoDB, or other databases that combine vector
| + graph/triples. Since probably neuro-symbolic is the best path.
| https://docs.cozodb.org/en/latest/releases/v0.6.html talks about
| this idea.
| samlhuillier wrote:
| Interesting. Thanks for sharing - I'll take a look!
| toddmorey wrote:
| "crucially, that AI should run as much as possible privately &
| locally"
|
| Just wanted to say thank you so much for this perspective and
| fighting the good fight.
| samlhuillier wrote:
| Thank you!
| nerdjon wrote:
| I have been looking for a while for a better way to take notes;
| what I was using worked fine, but it tended to end up being a
| black hole.
|
| I just downloaded this, I realize that it is still a new tool.
| But I think a critical feature needs to be context. The ability
| to have completely separate contexts of notes, maybe even
| completely different databases.
|
| That way, notes that sound similar to an LLM but are
| contextually different don't get brought up. I figured that is
| what "new directory" did, but it does not appear that way.
|
| So are there any plans to implement a database switcher? I
| can't find a way to change where it is right now.
|
| But from some quick tests importing notes, it does seem very
| promising, and I really like where you are taking it. It is
| just conflating notes that should be in distinct contexts.
|
| Edit: I see this is already in PR! Awesome.
| mrtesthah wrote:
| It doesn't seem to view my plain text notes. What file formats
| are currently supported, if plain text is not?
| erickf1 wrote:
| I like the idea. Unfortunately, I could not get it to work on
| Linux: making a note caused a crash, searching notes crashed,
| and LLM chat caused a crash. Hope to see it working some time.
| rcarmo wrote:
| I did my usual test for these things - I tossed in the Markdown
| source for my site, which has 20 years of notes
| (https://taoofmac.com/static/graph).
|
| Surprisingly, indexing sort of worked. But since I have an
| index.md per folder (so that media is grouped with text for every
| note), the editor got confused, and clicking on links always
| took me to a blank screen.
|
| Also, pretty much every question gives an error message that says
| "Error: The default context shift strategy did not return a
| history that fits the context size", likely because there is too
| much context...
|
| Edit: fixed most of it by using a Mistral instruct model. But
| the editor does not know what front matter is (neither when
| editing nor in previews, where front matter renders as huge
| heading blocks).
| lenerdenator wrote:
| How would this run on, say, a M2 Pro MBP with 32GB RAM?
| alsetmusic wrote:
| That should be more than enough. I've been running Ollama on an
| M1 Max with 64GB of RAM without issue.
| 2024throwaway wrote:
| Running with a Local LLM on a Mac M1, this completely locked up
| my system for minutes. I tried to let it run, because the
| progress bar was ticking every now and then, but after 10 minutes
| I gave up and killed it.
| tarruda wrote:
| I wouldn't recommend it unless you have at least 16 GB of RAM
| (though possibly more may be needed depending on which model is
| used).
| 2024throwaway wrote:
| I do have 16 GB of RAM.
| haswell wrote:
| Literally yesterday I spun up a project with the intent to build
| something exactly like this for Obsidian.
|
| Excited to see something already far more realized, and I'm
| looking forward to trying this out.
|
| I've been working on a larger than small writing project using
| Obsidian, and my ultimate goal is to have conversations with the
| corpus of what I've written, and to use this to hone ideas and
| experiment with new ways of exploring the content.
|
| Not sure if local LLMs are powerful enough yet to enable
| meaningful/reliable outcomes, but this is the kind of stuff that
| really excites me about the future of this tech.
___________________________________________________________________
(page generated 2024-02-14 23:00 UTC)