[HN Gopher] Declarative Programming with AI/LLMs
___________________________________________________________________
Declarative Programming with AI/LLMs
Author : Edmond
Score : 75 points
Date : 2024-09-15 14:54 UTC (8 hours ago)
(HTM) web link (blog.codesolvent.com)
(TXT) w3m dump (blog.codesolvent.com)
| fny wrote:
| DSLs are not dead.
|
| I have had the opposite experience. For complex tasks, LLMs fail
| in subtle ways that require inspection of their output:
| essentially, the "declarative" to "imperative" translation is
| bug-ridden.
|
| My trick has been to create DSLs (I call them primitives) that
| are loaded as context before I make declarative incantations to
| the LLM.
|
| These micro-languages reduce the error space dramatically and
| allow for more user-friendly and high-level interactions with the
| LLM.
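|
| A minimal sketch of the pattern in Python (the primitive
| definitions and the prompt wording are invented for
| illustration, not an actual primitives spec):
|
|     # Hypothetical primitives, loaded as context before any
|     # declarative request is made.
|     PRIMITIVES = """
|     clean(table)         -> drop null rows, trim strings
|     pivot(table, by)     -> one row per unique value of `by`
|     report(table, path)  -> write a formatted summary to `path`
|     """
|
|     def build_prompt(request: str) -> str:
|         # Constrain the model to composing the primitives
|         # above, which shrinks the error space.
|         return ("You may ONLY combine these primitives:\n"
|                 + PRIMITIVES
|                 + "\nExpress this task as a composition "
|                 + "of them:\n" + request)
|
|     print(build_prompt("Summarize sales by region into s.txt"))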
| tomrod wrote:
| What a bright approach to this!
| dartos wrote:
| What do these DSLs look like, if you don't mind sharing?
| orochimaaru wrote:
| Bingo!!! I use this approach for data science tasks today:
| create a very specific DSL that has mathematics, set theory,
| etc. as context, and set up your data science exploration
| using the DSL as input. It's been fairly decent so far. It
| works for two specific reasons:
|
| 1. I have a fairly specific DSL that is expressive enough for
| the task at hand and easily available (math notation has been
| around for centuries now, and algorithms for at least half a
| century).
|
| 2. I use Apache Spark, so everything around parallelizing and
| synchronizing I don't handle in the code myself (not most of
| the time).
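|
| A toy version of that split, sketched in Python (the pipeline
| syntax and the data are invented; Spark does the parallel
| work):
|
|     from pyspark.sql import SparkSession
|
|     spark = SparkSession.builder.appName("dsl").getOrCreate()
|
|     def run_pipeline(df, program: str):
|         """Interpret a tiny 'filter |> groupby |> count' DSL."""
|         for step in (s.strip() for s in program.split("|>")):
|             op, _, arg = step.partition(" ")
|             if op == "filter":
|                 df = df.filter(arg)       # SQL-style predicate
|             elif op == "groupby":
|                 df = df.groupBy(arg)
|             elif op == "count":
|                 df = df.count()
|             else:
|                 raise ValueError(f"unknown op: {op}")
|         return df
|
|     df = spark.createDataFrame(
|         [(70, "west"), (30, "east"), (80, "west")],
|         ["age", "region"])
|     result = run_pipeline(
|         df, "filter age > 65 |> groupby region |> count")
|     result.show()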
| arslnjmn wrote:
| This is a fascinating topic and something I'm looking into
| these days, specifically removing the need for data scientists
| to write Spark code. It would be great if you could share more
| details about the DSL. The DSL also sounds interesting in its
| own right!
| vharuck wrote:
| TFA covers this (I think, it got real jargony at times):
|
| >Declarative processing of configurations generated via AI is a
| way to ground the AI, this requires a lot of work since you
| don't just offload requests to an AI but rather your processing
| logic serves as a guardrail to ensure what's being done makes
| sense. In order for AI to be used in applications that require
| reliability, this work will need to be done.
|
| When I was playing around with AI for data analysis last year,
| the best results for me came from something like this but more
| on the imperative side: RAG with snippets of R code. My first
| attempt was taking snippets directly from existing scripts and
| adding comments to explain the purpose and caveats. That didn't
| work well until I put in a lot of effort replacing the "common
| knowledge" parts with variables or functions. For example, no
| more `sex = 1`, but `sex = "male"`. Common tasks across scripts
| were refactored into one or a few functions with a couple
| parameters, and then placed in a package. The threshold for
| applying the DRY principle was lowered.
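|
| A sketch of the retrieval half of that setup (the snippet
| bodies are R, as above; the host code is Python; the store and
| scoring are simplified stand-ins for a real embedding index):
|
|     # Each snippet pairs code with the plain-language intent
|     # it serves, "common knowledge" made explicit in the names.
|     SNIPPETS = [
|         {"doc": "count patients by sex and age group",
|          "code": "df %>% count(sex, age_group)"},
|         {"doc": "suppress small cells before publishing",
|          "code": "df %>% mutate(n = ifelse(n < 10, NA, n))"},
|     ]
|
|     def retrieve(query: str, k: int = 1):
|         # Rank snippets by naive keyword overlap with the query.
|         words = set(query.lower().split())
|         return sorted(
|             SNIPPETS,
|             key=lambda s: -len(words & set(s["doc"].split())),
|         )[:k]
|
|     for s in retrieve("count patients by sex"):
|         print(s["code"])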
|
| In the end, I decided a custom solution wasn't worth the
| effort. The data had identifying details of people, so any
| generated code would have to be checked and run by analysts who
| already had access to the data. But the process of refactoring
| stuff into descriptively-named objects was such a big benefit
| on its own that the AI code wasn't adding enough on top to
| justify the effort. Again,
| this was using a custom system made by a total ML noob (myself)
| with GPT 3.5. The execs banned usage of LLMs until they could
| deal with the policy and privacy concerns, so I don't know
| what's possible these days.
| pknerd wrote:
| Please talk more or write a blog post about this approach.
| arslnjmn wrote:
| This is a very interesting use case of LLMs and something I'm
| looking into these days. I would appreciate it if you could
| share more details on the challenges you ran into using DSLs
| with LLMs and how you solved them.
| weeksie wrote:
| Software specification documents will, as they say, rise in
| status. The kind of specification outlined in this article
| misses the mark: why would we use anything but actual natural
| language? That said, there will be real returns to structuring
| specifications so that they are actionable and easy to navigate
| to achieve context (of both kinds).
| qsort wrote:
| > why would we use anything but actual natural language?
|
| Because natural language is not a good tool to describe
| computational processes.
|
| Which one would you rather write:
|
| (a) p(x) = x^2 + 5x + 4
|
| (b) Let us consider the object in the ring of univariate
| polynomials with complex coefficients defined by the square of
| the variable, plus five times the variable plus four.
|
| Every scientific discipline moves _away_ from natural language
| as soon as possible. Low-code, no-code, and the like have been
| a dismal failure precisely for this reason. Why would I move
| _back_ to natural language if I can effectively express my
| problem in a formal language that I can manipulate at will?
| handfuloflight wrote:
| > Every scientific discipline moves away from natural
| language as soon as possible.
|
| Have you seen a scientific paper that only had mathematics?
|
| Natural language is still necessary for scaffolding,
| exposition, contextualization.
| skydhash wrote:
| Mathematics is not the only formal language. Every profession
| soon invents its own jargon because natural language is too
| ambiguous. For some, that's enough; but science requires more
| formalism.
|
| Boole's Laws of Thought and Church's The Calculi of Lambda-
| Conversion are mostly about how to be so precise that the
| description of the problem equates to its solution. But formal
| languages have their own issues.
| __loam wrote:
| I strongly believe that using these systems as a programming
| interface is a very bad pattern. They are the ultimate leaky
| abstraction.
| skydhash wrote:
| Both imperative and declarative programming require an
| understanding of the domain you're trying to code a solution
| in. Once you understand the model, the DSL makes a lot more
| sense. I strongly believe that people who are hoping for these
| no-code tools don't care to understand the domain, or why its
| formal representation as a DSL is necessary. What makes natural
| language great is the ability for humans to create a shared
| model of understanding that aims to eliminate ambiguity. And
| even then, there are issues. Formalism is what solves these
| issues, not piling on more random factors.
| quantadev wrote:
| It's true that no-code tools have mostly not been that
| successful in the past (except in very limited circumstances),
| because eventually you run into cases where it would've been
| easier to just write some code than to finagle the no-code
| constructs into doing something they weren't really designed
| to support. Often the most compact way to specify something is
| actually just the Python code itself, for example.
| frozenlettuce wrote:
| I'm experimenting with something like that, to allow creating a
| web API from some descriptions in markdown.
| https://github.com/lfarroco/verbo-lang
| frozenlettuce wrote:
| The initial idea was a general-purpose language, but obviously
| the scope for that would be too big. I think that having
| "natural language frameworks" for some application types can
| work: REST APIs, CLI apps, React components... If you have a
| set architecture, like the Elm architecture, centered around
| events being fired that update some state, that could lead to
| some standards. One feature that I intend to add is an
| "interview" with the AI: you write the spec, then it reads it
| and gives you some questions/issues, like "this point is too
| vague" or "what do you want to update here? a or b?". That
| would help ensure that the prompt/spec itself is improvable
| with AI.
|
| People say that a microservice is something that "fits in your
| head". Once the logic gets too complex, you should separate it
| into another service. Maybe in the future the phrase will be
| "what fits in the AI's context". That would be a good litmus
| test: if a piece of software is hard for an AI, maybe we should
| simplify it - because it is probably too hard for the average
| human as well.
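|
| That interview could be a loop like this sketch in Python
| (ask_model is a stand-in for a real LLM call; here it returns
| canned questions so the control flow is visible):
|
|     def ask_model(spec: str) -> list[str]:
|         # Stand-in: critique the spec and return open
|         # questions, or [] once the spec looks unambiguous.
|         if "update" in spec and "field a" not in spec:
|             return ["what do you want to update here? a or b?"]
|         return []
|
|     spec = "On click, update the record."
|     while questions := ask_model(spec):
|         for q in questions:
|             print("AI:", q)
|         spec += " Update field a."   # the user's answer
|
|     print("final spec:", spec)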
| ingigauti wrote:
| I've been developing a programming language that does this;
| the repo can be found here: https://github.com/plangHQ
|
| Here is a code example:
|
|     ReadFileAndUpdate
|     - read file.txt in %content%
|     - set %content.updated% as %now%
|     - write %content% to file.txt
|
| I call this intent-based programming. There isn't a strict
| syntax (there are a few rules): the developer creates a Goal
| (think function) and writes steps (starting with "-") to solve
| the goal.
|
| I've been using it for clients with very good results. Over
| the nine months I've been building with it, the experience has
| shown that far less code needs to be written, and you see the
| project from a different perspective.
| quantadev wrote:
| I like what you're doing there. It does seem like we might
| need some new kind of language to interface with LLMs: a sort
| of language of prompt engineering that's a bit more specific
| than raw English but also more powerful than pure templating
| systems.
| ingigauti wrote:
| Yeah, I don't believe LLMs will be able to code fully; they
| are analog, trying to do something digital where everything
| needs to be 100% correct.
|
| Plang being an analog language, I see the LLM able to code so
| much more, and it never has syntax, library, or other build
| errors.
| quantadev wrote:
| But we also have to admit that LLMs may become (or maybe
| OpenAI o1 already is) smart enough that they can not only
| write the code to solve some task but understand the task
| well enough to write even better unit tests than humans ever
| could. Once AI starts writing unit tests (even internally)
| for everything it spits out, we can probably say humans will
| at that point be truly obsolete for writing apps. However,
| even then, the LLM output will still need to be computer
| code, rather than having the LLMs "interpret" English all
| the time to "run" apps.
| skydhash wrote:
| Ever heard of the halting problem [0]? Every time I hear
| these claims, it sounds like someone saying that we can
| travel in time as soon as we invent a faster-than-light
| vessel, or better, Doctor Who's TARDIS. There's a whole set
| of theorems saying that a formal system (which computers
| are) can't be completely automated, as there are classes of
| problems it can't solve. For anything the LLMs do, you can
| write better-performing software, except for the task they
| are best suited for: translation between natural languages.
| And that only because it's a pain to write all the rules by
| hand.
|
| [0]: https://en.wikipedia.org/wiki/Halting_problem
| quantadev wrote:
| LLMs are already doing genuine reasoning (and no, I don't
| mean consciousness or qualia), and they have been since
| GPT-3.5.
|
| They can already take descriptions of tasks and write
| computer programs to do those tasks, because they have a
| genuine understanding of the tasks (again no qualia
| implied).
|
| I never said there are no limits to what LLMs can do, or
| no limits to what logic can prove, or even no limits to
| what humans can understand. Everything has limits.
|
| EDIT: And before you accuse me of saying LLMs can
| understand _all_ tasks, go back and re-read the post a
| second time, so you don't make that mistake again.
| JTyQZSnP3cQGa8B wrote:
| It looks like Python without parentheses, but instead of using
| the REPL you use a black box that costs money for every line.
|
| I kind of miss when NFTs were all the rage; at least that
| was fun.
|
| Also, please show us real code that it generates.
| ingigauti wrote:
| Check this out for your answer:
| https://github.com/PLangHQ/plang/blob/main/Documentation/blo...
| nyrikki wrote:
| The ironic part is that the COBOL in that example is
| completely wrong and non-functional.
|
| Note: WS-MENU-ITEM(1) OF WS-LABEL
|
| when the actual syntax, though similar to natural language,
| is: elementary-var IN|OF group-var
|
| It is a good example if you want to prove to people that they
| are a bad idea.
|
| I prefer to use these tools as a form of red/green/refactor
| workflow, where I don't use them in the refactor step or the test
| case step.
| hmottestad wrote:
| Another thing they are generally good at is writing Javadocs
| and comments. GPT-4 manages to return my code, intact, complete
| with Javadocs for all the methods I ask it for.
| jiggawatts wrote:
| I'm picturing the docs it would generate for the code I see
| at government agencies written by $500/month outsourcers.
|
| "This function processes logins using a complex and unclear
| logic. Exceptions are not thrown and instead most failures
| are represent using a success code that sets the current
| logged in user to either null or a user object with a null or
| empty string as the name."
| agentultra wrote:
| > Not only does AI eliminate the need for developing a custom
| DSL, as your DSL now is just plain language
|
| Plain language is not precise enough.
| yu3zhou4 wrote:
| That's a fascinating direction to explore. It turns out that
| translating instructions into AI/ML tasks and wrapping them in
| Python code is easy to build [0]. It starts with an LLM
| deciding what type of task should be performed; then there's a
| search over the Hugging Face catalog, inference on a model
| (picked by heuristics) via Inference Endpoints, and parsing of
| the output into the most relevant Python type [1].
|
| [0] https://github.com/jmaczan/text-to-ml
|
| [1] Page four -
| https://pagedout.institute/download/PagedOut_004_beta1.pdf#p...
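|
| The core of that flow fits in a few lines. A sketch assuming
| huggingface_hub's list_models and InferenceClient (check the
| current signatures), with the LLM's task choice hard-coded:
|
|     from huggingface_hub import InferenceClient, list_models
|
|     # In text-to-ml an LLM picks the task from the
|     # instruction; hard-coded here to keep the sketch short.
|     task = "summarization"
|
|     # Heuristic pick: most-downloaded model for the task.
|     model = next(iter(list_models(task=task, sort="downloads",
|                                   direction=-1, limit=1)))
|
|     client = InferenceClient(model=model.id)
|     print(client.summarization("Long article text here..."))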
| probably_wrong wrote:
| I can't tell whether you're being honest or not. 10 years ago
| this was a literal joke [1] that people would tell and
| implement as "look at this terrible idea". Nowadays I can't
| tell anymore.
|
| [1] https://gkoberger.github.io/stacksort/
| yu3zhou4 wrote:
| I coded this proof of concept to show how easy it is to have
| AI mingled with regular Python code. I didn't know it was
| considered a terrible idea xd
| eigenspace wrote:
| It's amazing how, despite the fact that LLMs can be really
| useful and transformative for certain things, people like this
| insist on trying to convince you that they're useful and
| transformative for something they're simply shit at doing.
| peterweyand38 wrote:
| I've solved this problem and come up with a root key for all
| language but it doesn't matter because I'm being poisoned and
| stalked in San Francisco.
|
| People are too stupid to be helped.
| meiraleal wrote:
| > People are too stupid to be helped.
|
| We can see that. Need help?
| peterweyand38 wrote:
| Funny. I'm being poisoned and harassed to the point where it
| doesn't matter. It's called gaslighting. Where's the last
| homeless person you did this to until they complained all the
| time? I remember them posting on hackernews. What happened to
| them? Are they dead?
|
| I need a place to live where I can sleep and eat without
| being poisoned. I was gassed with drugs by the public, and
| city officials coordinated to have it happen. I'm being
| poisoned so I have constant headaches and then having
| strangers follow me around and sniff at me and scream in my
| ear.
|
| Can you find me a safe place to live and clean food to eat? I
| don't trust the shelter I'm in or city officials. Is that too
| stupid to be helped? Or are you powerless to help someone?
|
| I spend the entirety of my day emailing as many people as I
| possibly can to warn them away from coming to San Francisco.
|
| That's all of academia. Anyone powerful and famous I can
| think of. Anyone with any influence. Over every email
| platform I can think of while rotating addresses.
|
| And provide pictures of abandoned buildings. And people
| stalking me in public. People here are sick.
|
| You live in a ruins and you hurt and poison people you don't
| like with vigilante mobs and drugs. And so everyone that can
| leaves. Try and go to a doctor with an illness and see if you
| can "no really actually" your way into competent medical
| care.
|
| Want I should post the picture of people dumping all their
| fentanyl in the sewer again with the embossed logo of the
| city of San Francisco on the grate? That's "funny" too.
|
| I wouldn't be so cruel and mean were I not being poisoned and
| in constant pain.
|
| (Edit - cute trick about making it so that what I type has
| errors in it when I post it so I have to go back and edit it.
| Happens in my emails too because my phone is bugged. And then
| when I find all the errors and correct them some homeless guy
| grunts or some asshole in public "no actually reallys".
| Christ you're all so fucking ignorant and evil. Oh look I
| said Christ by all means send the religious loonies after me
| now. I wonder if the guy who cleans the fentanyl out of your
| water supply cares that he can't go to the doctors because
| they're all sick. But that's cool. You're good at programming
| a phone.)
| meiraleal wrote:
| Are you poor and broke? Do you have family?
|
| > cute trick about making it so that what I type has errors
| in it when I post it so I have to go back and edit it.
|
| It looks like you are in the middle of a psychotic break,
| please seek real help, not what you think would fix the
| problems you described.
| bytebach wrote:
| I recently had a consulting gig (medical informatics) that
| required English declarative -> imperative code. Direct code
| generation by the LLM turned out to be buggy, so I added an
| intermediate DSL implemented in Prolog! The prompt described
| the Prolog predicates the LLM had to work with, their
| semantics, and the declarative goal. The resulting (highly
| accurate and bug-free) Prolog code was then executed to
| generate the conventional imperative (Groovy) code, which was
| then executed dynamically. In some hand-wavy way, the logical
| constraints of using Prolog as an intermediate DSL seemed to
| keep the LLM on the straight and narrow.
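|
| One cheap guardrail in that kind of setup is validating the
| generated Prolog against the granted predicate vocabulary
| before running it. A sketch in Python, with invented predicate
| names:
|
|     import re
|
|     # The only predicates the prompt granted the LLM.
|     ALLOWED = {"patient", "diagnosis", "findall"}
|
|     def undeclared(prolog_src: str) -> set:
|         # Names used like predicates but never declared.
|         used = set(re.findall(r"\b([a-z]\w*)\(", prolog_src))
|         return used - ALLOWED
|
|     generated = ("findall(Id, (patient(Id, Age), Age > 65, "
|                  "diagnosis(Id, dx_e11)), Ids).")
|     bad = undeclared(generated)
|     if bad:
|         raise ValueError(f"undeclared predicates: {bad}")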
| danielvaughn wrote:
| A while ago I created a very simple AI tool that lets you write
| any kind of pseudocode you like, and then choose a language to
| convert it into. I didn't do much with it, but I like that style
| better because at least you can verify and correct the output.
|
| For instance:
|
|     // pseudocode input
|     fizzBuzz(count)
|       for each i in count
|         if divisible by 3, print 'fizz'
|         if divisible by 5, print 'buzz'
|         if both, print 'fizz buzz'
|
|     // rust output
|     fn fizz_buzz(count: i32) {
|         for i in 1..=count {
|             match (i % 3, i % 5) {
|                 (0, 0) => println!("fizz buzz"),
|                 (0, _) => println!("fizz"),
|                 (_, 0) => println!("buzz"),
|                 _ => println!("{}", i),
|             }
|         }
|     }
| alexpetros wrote:
| This is not declarative programming, it's codegen. Codegen has
| its place, but it does not have the most important property of
| declarative interfaces: that the implementation can improve
| without altering the code.
|
| As others have pointed out, natural language is often
| insufficient to describe precisely the operations that you want.
| Declarative programming solves this with specialized syntax; AI
| codegen solves this by guessing at what you left out, and then
| giving you specific imperative code that may or may not do what
| you want. Personally, I'll be investing my time and resources
| into the former.
| RodgerTheGreat wrote:
| On the other hand, if you use an LLM to generate code, all you
| have to do is change models, or adjust the model's temperature,
| or simply prompt the model a second time and you can expect the
| result to be teeming with a fresh batch of new flaws and
| surprising failure modes. An endless supply of debugging
| without the inconvenience of having to write a program to begin
| with!
| Elfener wrote:
| My opinion on this whole "using LLMs as a programming language"
| thing is described nicely by this comic:
| https://www.commitstrip.com/en/2016/08/25/a-very-comprehensi...
|
| > Do you know the industry term for a project specification that
| is comprehensive and precise enough to generate a program?
|
| > Code, it's called code.
| TeMPOraL wrote:
| That comic, while funny, is making a pretty weak argument (and
| basically reverses that other joke that the highest-level/sixth
| generation programming language is a graduate CS student).
|
| _Obviously_ machines need code to execute. But humans don't
| need to write every line of it. Transitioning to using (future)
| LLMs as a programming language is transitioning from the role
| of a programmer to the role of a customer (or at least PM).
| Conversely, as a programmer (or technical manager), if your job
| is to extract a precise spec from a customer who doesn't
| know what they want, yours is exactly the job "LLMs as a
| programming language" are going to replace.
| malkarouri wrote:
| The point is that explaining the requirements to an LLM in a
| precise manner is literally coding the problem in a higher-
| level language; the LLM is acting as a compiler for that
| precise description in English.
|
| I actually am sympathetic to your point about the value of
| LLMs in programming, but more from the perspective that LLMs
| can help us do the precise description gradually and
| interactively, in a much better way than a dumb REPL.
| fullstackchris wrote:
| While interesting, this still can't account for domain
| expertise and system design decisions: you can't assume every
| character, line, function, or method typed is just "correct"
| and exactly what you'll need. There are thousands of ways to
| do both the wrong and the right thing in software.
|
| The real problem always comes back to the fact that the LLM
| can't just make code appear out of nowhere; it needs _your_
| prompt (or at least code in the context window) to know what
| code to write. If you can't exactly describe the requirements,
| or (what is increasingly happening) don't _know_ the actual
| technical descriptions for what you are trying to accomplish,
| it's kind of like having a giant hammer with no nail to hit.
| I'm worried about a future where we program ourselves into a
| circle, all programs starting to look the same simply because
| the original "hardcore" or "forgotten" patterns and strategies
| of software design "just don't need to be taught anymore". In
| other words, people getting things to work but having no idea
| how they work. Yes, I get the whole "most people don't know
| how cars work but use them", but a software engineer not
| really knowing how the actual source code works? It feels
| strange and probably ultimately the wrong direction.
|
| I also think the entire idea of a fully automated feature
| build/test/deploy AI system is just impossible... the
| complexity of such a landscape is far too large to automate
| with some sort of token generator. AGI could, of course, but
| LLMs are so far from AGI that it's laughable.
___________________________________________________________________
(page generated 2024-09-15 23:01 UTC)