[HN Gopher] SymbolicAI: A neuro-symbolic perspective on LLMs
___________________________________________________________________
SymbolicAI: A neuro-symbolic perspective on LLMs
Author : futurisold
Score : 56 points
Date : 2025-06-27 18:49 UTC (4 hours ago)
(HTM) web link (github.com)
(TXT) w3m dump (github.com)
| sram1337 wrote:
| This is the voodoo that excites me.
|
| Examples I found interesting:
|
| Semantic map lambdas:
|
|     S = Symbol(['apple', 'banana', 'cherry', 'cat', 'dog'])
|     print(S.map('convert all fruits to vegetables'))
|     # => ['carrot', 'broccoli', 'spinach', 'cat', 'dog']
|
| Comparison parameterized by context:
|
|     # Contextual greeting comparison
|     greeting = Symbol('Hello, good morning!')
|     similar_greeting = 'Hi there, good day!'
|
|     # Compare with specific greeting context
|     result = greeting.equals(similar_greeting, context='greeting context')
|     print(result)  # => True
|
|     # Compare with different contexts for nuanced evaluation
|     formal_greeting = Symbol('Good morning, sir.')
|     casual_greeting = 'Hey, what\'s up?'
|
|     # Context-aware politeness comparison
|     politeness_comparison = formal_greeting.equals(casual_greeting,
|                                                    context='politeness level')
|     print(politeness_comparison)  # => False
|
| Bitwise ops:
|
|     # Semantic logical conjunction - combining facts and rules
|     horn_rule = Symbol('The horn only sounds on Sundays.', semantic=True)
|     observation = Symbol('I hear the horn.')
|     conclusion = horn_rule & observation  # => Logical inference
|
| `interpret()` seems powerful.
|
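| A minimal sketch of how it might be invoked (assuming the
| Symbol API shown above; the exact signature of `interpret()` is
| an assumption, check the README):
|
|     from symai import Symbol
|
|     expr = Symbol('Solve for x: 2x + 6 = 10')
|     print(expr.interpret())  # => 'x = 2'
|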
| OP, what inspired you to make this? Where are you applying it?
| What has been your favorite use case so far?
| futurisold wrote:
| That's gonna be a very, very long answer. What's funny is that
| not much changed since late 2022, when the project started; the
| models just got better, but we had a good chunk of the
| primitives since GPT-3.
|
| What's more recent is the DbC (design by contract) contribution,
| which I think is unique. It has literally solved everything
| agent-related I've thrown at it -- especially because I can
| chain contracts together and the guardrails propagate nicely.
|
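| In spirit, a contract wraps the model call in validated input
| and output. A generic plain-Python sketch of the idea (not the
| library's actual API; the post linked at the end has the real
| thing):
|
|     from typing import Callable
|
|     def contract(pre: Callable[[str], bool], post: Callable[[str], bool]):
|         # reject bad inputs up front; stop bad outputs before they propagate
|         def wrap(fn: Callable[[str], str]) -> Callable[[str], str]:
|             def guarded(x: str) -> str:
|                 assert pre(x), 'precondition failed'
|                 out = fn(x)
|                 assert post(out), 'postcondition failed'
|                 return out
|             return guarded
|         return wrap
|
|     @contract(pre=lambda q: bool(q.strip()), post=lambda a: len(a) > 0)
|     def research(query: str) -> str:
|         return 'model output'  # stand-in for the actual LLM call
|
| Chaining guarded steps is then ordinary composition: the output
| of one contract is the already-validated input of the next.
|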
| I've built most of the custom tools myself. For instance, not
| only was Perplexity rendered useless by OpenAI's web search,
| but OpenAI's web search itself is not good enough compared to
| what you can build and customize yourself. To this end, I've
| built my own deep research agent. Here's a thread with some
| results from the first day it was working:
| https://x.com/futurisold/status/1931751644233945216
|
| I'm also running a company, and we've built an e2e document
| generation pipeline from contracts alone (3 contracts chained
| together in this case). Here's an output (sorry about the PDF
| rendering; that's not what we serve, just something I quickly
| hacked together for local dev):
| https://drive.google.com/file/d/1Va7ALq_N-fTYeumKhH4jSxsTrWD...
|
| This was the input:
|
| ---
|
| Prompt:
|
| > I want the files to be analyzed and I am interested in
| finding patterns; feel free to make suggestions as well. I want
| to understand how different providers use their system prompts,
| therefore things like: what kind of tags do they use - are they
| XML, markdown, etc, are they prone toward sycophancy or trying
| to manipulate the user, are they using tools and if so how,
| etc. I want the tech report to deconstruct and synthesize and
| compare the information, find interesting patterns that would
| be hard to spot.
|
| Generated instructions:
|
| (a) Query: Conduct a comparative analysis of system prompts
| across major AI providers (OpenAI, Google, Anthropic, xAI,
| etc.) to identify structural patterns, linguistic frameworks,
| and operational constraints that shape AI behavior and
| responses.
|
| (b) Specific Questions:
|
| 1. What syntactic structures and formatting conventions (XML,
| markdown, JSON, etc.) are employed across different AI system
| prompts, and how do these technical choices reflect different
| approaches to model instruction?
|
| 2. To what extent do system prompts encode instructions for
| deference, agreeability, or user manipulation, and how do these
| psychological frameworks vary between commercial and research-
| focused models?
|
| 3. How do AI providers implement and constrain tool usage in
| their system prompts, and what patterns emerge in permission
| structures, capability boundaries, and function calling
| conventions?
|
| 4. What ethical guardrails and content moderation approaches
| appear consistently across system prompts, and how do
| implementation details reveal different risk tolerance levels
| between major AI labs?
|
| 5. What unique architectural elements in specific providers'
| system prompts reveal distinctive engineering approaches to
| model alignment, and how might these design choices influence
| downstream user experiences?
|
| ---
|
| Contracts were introduced in March in this post:
| https://futurisold.github.io/2025-03-01-dbc/
|
| They evolved a lot since then, but the foundation and
| motivation didn't change.
| futurisold wrote:
| Btw, besides the prompt, the other input to the technical
| report (the gdrive link) was this repo:
| https://github.com/elder-plinius/CL4R1T4S/tree/main
| futurisold wrote:
| One last comment here on contracts; an excerpt from the linked
| post that I think is extremely relevant for LLMs. Maybe it
| triggers an interesting discussion here:
|
| "The scope of contracts extends beyond basic validation. One
| key observation is that a contract is considered fulfilled if
| both the LLM's input and output are successfully validated
| against their specifications. This leads to a deep
| implication: if two different agents satisfy the same
| contract, they are functionally equivalent, at least with
| respect to that specific contract.
|
| This concept of functional equivalence through contracts
| opens up promising opportunities. In principle, you could
| replace one LLM with another, or even substitute an LLM with
| a rule-based system, and as long as both satisfy the same
| contract, your application should continue functioning
| correctly. This creates a level of abstraction that shields
| higher-level components from the implementation details of
| underlying models."
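|
| A toy illustration of that equivalence (not the library's
| mechanism), reusing the horn example from upthread:
|
|     from datetime import date
|
|     def fulfils(fn, samples) -> bool:
|         # the 'contract': ISO date in, 'horn' / 'no horn' out
|         return all(fn(s) in {'horn', 'no horn'} for s in samples)
|
|     def rule_based(iso: str) -> str:
|         return 'horn' if date.fromisoformat(iso).weekday() == 6 else 'no horn'
|
|     def llm_based(iso: str) -> str:
|         # stand-in for a model constrained to the same output space
|         return rule_based(iso)
|
|     samples = ['2025-06-29', '2025-06-30']
|     assert fulfils(rule_based, samples) and fulfils(llm_based, samples)
|     # both fulfil the contract, so either can back the application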
| robertkrahn01 wrote:
| Probably linking the paper and the examples notebook here makes
| sense, as they are pretty self-explanatory:
|
| https://github.com/ExtensityAI/symbolicai/blob/main/examples...
|
| https://arxiv.org/pdf/2402.00854
| futurisold wrote:
| Wanted to do just that, thank you
| futurisold wrote:
| I didn't expect this -- I was supposed to be sleeping now, but I
| guess I'll chat with whoever jumps in! Good thing I've got some
| white nights experience.
| b0a04gl wrote:
| this works like functional programming where every symbol is a
| pure value and operations compose into clean, traceable flows.
| when you hit an ambiguous step, the model steps in. just like IO
| in FP, the generative call is treated as a scoped side effect.
| this means your reasoning graph stays deterministic by default
| and only defers to the model when needed. crazy demo though,
| love it
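|
| in code the analogy might look like this (a toy sketch, not the
| library's internals): pure steps compose, and the model call
| enters only as an injected effect:
|
|     from typing import Callable
|
|     def dedupe(xs: list[str]) -> list[str]:
|         return sorted(set(xs))  # pure, deterministic step
|
|     def classify(xs: list[str], oracle: Callable[[str], str]) -> dict[str, str]:
|         # the generative call is scoped to `oracle`; the rest stays pure
|         return {x: oracle(x) for x in xs}
|
|     result = classify(dedupe(['apple', 'cat', 'apple']),
|                       oracle=lambda x: 'fruit' if x == 'apple' else 'animal')
|     # => {'apple': 'fruit', 'cat': 'animal'}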
| futurisold wrote:
| Yes, pretty much. We wanted it to be functional from the start.
| Even at a low level everything's functional (the modules are
| even called functional.py/core.py). We're using decorators
| everywhere. This helped a lot with refactoring, extending the
| framework, containing bugs, etc.
| nbardy wrote:
| I love the symbol LLM first approaches.
|
| I built a version of this a few years ago as a LISP
|
| https://github.com/nbardy/SynesthesiaLisp
| futurisold wrote:
| Very nice, bookmarked for later. Interestingly enough, we share
| the same timeline: ~2 years ago is when a lot of interesting
| work spawned, as many people started to tinker.
| jaehong747 wrote:
| great job! it reminds me of genaiscript.
| https://microsoft.github.io/genaiscript/
|
|     // read files
|     const file = await workspace.readText("data.txt");
|
|     // include the file content in the prompt in a context-friendly way
|     def("DATA", file);
|
|     // the task
|     $`Analyze DATA and extract data in JSON in data.json.`;
| futurisold wrote:
| Thank you! I'm not familiar with that project, will take a look
___________________________________________________________________
(page generated 2025-06-27 23:00 UTC)