[HN Gopher] Show HN: Core - open source memory graph for LLMs - ...
___________________________________________________________________
Show HN: Core - open source memory graph for LLMs - shareable, user
owned
I keep running into the same problem: each AI app "remembers" me in
its own silo. ChatGPT knows my project details, Cursor forgets them,
Claude starts from zero... so I end up re-explaining myself dozens of
times a day across these apps.

The deeper problem:

1. Not portable - context is vendor-locked; nothing travels across
   tools.
2. Not relational - most memory systems store only the latest fact
   ("sticky notes") with no history or provenance.
3. Not yours - your AI memory is sensitive first-party data, yet you
   have no control over where it lives or how it's queried.

Demo video: https://youtu.be/iANZ32dnK60
Repo: https://github.com/RedPlanetHQ/core

What we built - CORE (Context Oriented Relational Engine):

- An open source, shareable knowledge graph (your memory vault) that
  lets any LLM (ChatGPT, Cursor, Claude, SOL, etc.) share and query
  the same persistent context.
- Temporal + relational: every fact gets a full version history (who,
  when, why), and nothing is wiped out when you change it - just
  timestamped and retired.
- Local-first or hosted: run it offline in Docker, or use our hosted
  instance. You choose which memories sync and which stay private.

Try it:
- Hosted free tier (HN launch): https://core.heysol.ai
- Docs: https://docs.heysol.ai/core/overview
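
For a rough idea of the data model, here is a minimal sketch of a
temporal fact store in Python. Class, field, and method names are
illustrative only, not CORE's actual schema:

    # Sketch: facts are never deleted - only timestamped, retired,
    # and superseded. Names are illustrative, not CORE's schema.
    from dataclasses import dataclass
    from datetime import datetime, timezone
    from typing import Optional

    @dataclass
    class Statement:
        subject: str
        predicate: str
        obj: str
        source: str                          # provenance: who said it
        valid_from: datetime
        valid_to: Optional[datetime] = None  # None => currently believed

    class MemoryGraph:
        def __init__(self):
            self.statements = []

        def assert_fact(self, subject, predicate, obj, source):
            now = datetime.now(timezone.utc)
            # Retire (never delete) the current value of this fact.
            for s in self.statements:
                if (s.subject, s.predicate) == (subject, predicate) \
                        and s.valid_to is None:
                    s.valid_to = now
            self.statements.append(
                Statement(subject, predicate, obj, source, now))

        def history(self, subject, predicate):
            # Full version history: every value, with provenance.
            return [s for s in self.statements
                    if (s.subject, s.predicate) == (subject, predicate)]

    g = MemoryGraph()
    g.assert_fact("user", "works_on", "CORE", source="chat 2025-06-30")
    g.assert_fact("user", "works_on", "SOL", source="chat 2025-07-01")
    # history() returns both statements; the first is retired, not gone.
    print(g.history("user", "works_on"))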
Author : Manik_agg
Score : 41 points
Date : 2025-07-01 16:24 UTC (6 hours ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| funnym0nk3y wrote:
| I don't see the advantage over a simple text file accessible by
| MCP. Could you elaborate?
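|
| (Something like this sketch is the baseline I mean - a plain file
| behind a couple of MCP tools, using FastMCP from the official
| `mcp` Python SDK; the tool names are made up:)
|
|     # memory_server.py - a text file exposed over MCP.
|     from pathlib import Path
|     from mcp.server.fastmcp import FastMCP
|
|     MEMORY = Path("memory.txt")
|     mcp = FastMCP("memory")
|
|     @mcp.tool()
|     def read_memory() -> str:
|         """Return everything remembered so far."""
|         return MEMORY.read_text() if MEMORY.exists() else ""
|
|     @mcp.tool()
|     def append_memory(note: str) -> str:
|         """Append one note to the shared memory file."""
|         with MEMORY.open("a") as f:
|             f.write(note + "\n")
|         return "ok"
|
|     if __name__ == "__main__":
|         mcp.run()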
| jadbox wrote:
| I don't know this project, but there are probably simplicity/
| performance benefits to using a proxy over MCP, as in theory
| there is less overhead.
| _joel wrote:
| It certainly looks interesting. How does this differ from a
| plan.md?
| adamkochanowicz wrote:
| For those asking how this is different from a simple text-based
| memory archive, I think that is answered here:
|
| --- Unlike most memory systems - which act like basic sticky
| notes, only showing what's true right now - C.O.R.E is built as
| a dynamic, living temporal knowledge graph:
|
| Every fact is a first-class "Statement" with full history, not
| just a static edge between entities. Each statement includes what
| was said, who said it, when it happened, and why it matters. You
| get full transparency: you can always trace the source, see what
| changed, and explore why the system "believes" something. ---
| ramoz wrote:
| I'm not sure the graph offers any clear advantage in the
| demonstrated use case.
|
| It's overhead in coding.
|
| The source is the doc. Raw text is as much of a fact as an
| abstracted data structure derived from that text (which is done
| by an external LLM - provenance seems to break here, btw: what
| other context is used to support that transcription, and why is
| it more reliable than a doc within the actual codebase?).
| sutterbomb wrote:
| How would you say you compare to Graphiti from Zep?
| ianbicking wrote:
| I've been building a memory system myself, so I have some
| thoughts...
|
| Why use a knowledge graph/triples? I have not been able to come
| up with any use for the predicate or reason to make these
| associations. Simple flat statements seem entirely sufficient and
| more accurate to the source material.
|
| ... OK, looking a little more, I'm guessing it is a way to see
| when a memory should be updated; you can match on the first two
| items of the triple. In a sense you are normalizing the input
| and hoping that reveals an update or duplicate memory.
|
| I would be curious how well this works in practice. I've spent a
| fair amount of effort trying to merge and deduplicate memories in
| a more ad hoc way, generally using the LLM for this process
| (giving it a new memory and a list of old memories). It would
| feel much more deterministic and understandable to do this in a
| structured way. On the other hand I'm not sure how stable these
| triples would be. Would they all end up attached to the user? And
| will the predicate be helpful to establish meaningful
| relationships, or could the memories simply be attached to an
| entity?
|
| For instance I could list a bunch of facts related to my house:
| the address, which room I sleep in, upcoming and past repairs,
| observations on the yard, etc. Many (but not all) of these could
| be best represented as one "about my house" memory, with all the
| details embedded in one string of natural language text. It would
| be great to structure repairs... but how will that work? (my
| house, needs repair, attic bath)? Or (my house, has room, attic
| bathroom) and (attic bathroom, needs repair, bath)? Will the
| system pick one somewhat arbitrarily then, being able to see that
| past memory, replicate its structure?
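|
| Concretely, the two encodings and the (subject, predicate) prefix
| match I'm describing, as a toy sketch:
|
|     # Two ways to encode the same repair fact - which does it pick?
|     flat   = [("my house", "needs repair", "attic bath")]
|     nested = [("my house", "has room", "attic bathroom"),
|               ("attic bathroom", "needs repair", "bath")]
|
|     def update_candidates(new_fact, store):
|         # Normalizing to triples lets you match on the first two
|         # items to find a statement the new fact might supersede.
|         subj, pred, _ = new_fact
|         return [t for t in store if t[:2] == (subj, pred)]
|
|     print(update_candidates(
|         ("attic bathroom", "needs repair", "plumbing"), nested))
|     # -> [('attic bathroom', 'needs repair', 'bath')]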
|
| Another representation that occurs to me for detecting duplicates
| and updates is simply "is related to entities". This creates a
| flatter database where there's less ambiguity in how memories are
| represented.
|
| Anyway, that's one area that stuck out to me. It wasn't clear to
| me where the schema for memories is in the codebase; I think
| that would be very useful for understanding the system.
| lukev wrote:
| So, this is cool and a per-user memory is obviously relevant for
| effective LLM use. And major props for the temporal focus.
|
| However, keeping a tight, constrained context turns out to
| actually be pretty important for correct LLM results
| (https://www.dbreunig.com/2025/06/22/how-contexts-fail-and-ho...).
|
| Do you have a take on how we reconcile the tension between these
| objectives? How to make sure the model has access to relevant
| info, while explicitly excluding irrelevant or confounding
| factors from the context?
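|
| The usual compromise is retrieve-then-filter: score candidate
| memories against the task and only inject the top few above a
| threshold. A toy sketch (token-overlap scoring standing in for a
| real embedding model):
|
|     import re
|
|     def tokens(s: str) -> set:
|         return set(re.findall(r"[a-z0-9']+", s.lower()))
|
|     def score(a: str, b: str) -> float:
|         ta, tb = tokens(a), tokens(b)
|         return len(ta & tb) / len(ta | tb) if ta | tb else 0.0
|
|     def select_context(query, memories, k=3, min_score=0.1):
|         # Rank all memories, keep at most k, drop weak matches.
|         ranked = sorted(memories, key=lambda m: score(query, m),
|                         reverse=True)
|         return [m for m in ranked[:k] if score(query, m) >= min_score]
|
|     memories = [
|         "User's main project is CORE, a temporal memory graph.",
|         "User prefers tabs over spaces.",
|         "User's attic bathroom needs a plumbing repair.",
|     ]
|     print(select_context("summarize progress on the memory graph",
|                          memories))
|     # -> only the CORE memory survives the filter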
| smcleod wrote:
| > Local Setup > Prerequisites > OpenAI API Key
|
| This does not seem to be local and additionally appears to be
| tied to one SaaS LLM provider?
| _joel wrote:
| You can run OpenAI compatible servers locally. vLLM, ollama,
| LMStudio and others.
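|
| e.g. with Ollama serving its OpenAI-compatible endpoint, the
| stock client only needs a different base URL (assumes `ollama
| serve` is running and a model has been pulled):
|
|     from openai import OpenAI
|
|     client = OpenAI(
|         base_url="http://localhost:11434/v1",  # local Ollama endpoint
|         api_key="ollama",  # required by the client, ignored locally
|     )
|     resp = client.chat.completions.create(
|         model="llama3",  # whichever local model you've pulled
|         messages=[{"role": "user",
|                    "content": "What do you remember about me?"}],
|     )
|     print(resp.choices[0].message.content)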
| sparacha wrote:
| https://news.ycombinator.com/item?id=44436031
| Manik_agg wrote:
| Hey, we are actively working on improving support for Llama
| models. At the moment CORE does not produce optimal results with
| Llama-based models, but we are making progress to ensure better
| compatibility and output in the near future.
|
| Also, we first built CORE internally for our main project, SOL,
| an AI personal assistant. Along the journey of building a better
| memory for our assistant we realised its importance, and we are
| of the opinion that memory should not be vendor-locked. It
| should be pluggable and belong to the user. Hence we built it as
| a separate service.
| khaledh wrote:
| I love how we have come full circle. Anybody remember the
| "semantic web" (RDF-based knowledge graph)? It didn't take off
| because building and maintaining such a graph requires extensive
| knowledge-engineering work and tools. Fast forward a couple of
| decades and we have LLMs, which are basically auto-complete on
| steroids based on general knowledge, with the downside that they
| don't "remember" any facts unless you spoon-feed them the right
| context. We're now back to: "let's encode context knowledge as a
| graph and plug it into LLMs". Fun times :)
| mt_ wrote:
| Why do open source projects not version-control their CLAUDE.md?
___________________________________________________________________
(page generated 2025-07-01 23:00 UTC)