[HN Gopher] Can logic programming be liberated from predicates a...
       ___________________________________________________________________
        
       Can logic programming be liberated from predicates and
       backtracking? [pdf]
        
       Author : matt_d
       Score  : 167 points
        Date   : 2024-10-12 04:58 UTC (1 day ago)
        
 (HTM) web link (www-ps.informatik.uni-kiel.de)
 (TXT) w3m dump (www-ps.informatik.uni-kiel.de)
        
       | xelxebar wrote:
        | _Abstract_. Logic programming has a long history. The
        | representative of logic programming in practice, the language
        | Prolog, was introduced more than 50 years ago. The main
        | features of Prolog are still present today: a Prolog program is a
        | set of predicate definitions executed by resolution steps with a
        | backtracking search strategy. The use of backtracking was
        | justified by efficiency reasons when Prolog was invented.
        | However, its incompleteness destroys the elegant connection of
        | logic programming and the underlying Horn clause logic and
        | causes difficulties to teach logic programming. Moreover, the
        | restriction to predicates hinders an adequate modeling of real
        | world problems, which are often functions from input to output
        | data, and leads to unnecessarily inefficient executions. In
        | this paper we show a way to overcome these problems. By
        | transforming predicates and goals into functions and nested
        | expressions, one can evaluate them with a demand-driven strategy
        | which might reduce the number of computation steps and avoid
        | infinite search spaces. Replacing backtracking by complete search
        | strategies with new implementation techniques closes the gap
        | between the theory and practice of logic programming. In this
        | way, we can keep the ideas of logic programming in future
        | programming systems.
        
         | bradley13 wrote:
         | I didn't read past the abstract, but it sounds like they are
         | just transforming logic-based programs into function-based
         | programs. But: if I wanted functional programming, I wouldn't
         | be writing in Prolog.
         | 
          | What would be interesting would be to replace depth-first
         | search while remaining in the world of predicates and Horn
         | clauses.
        
           | usgroup wrote:
           | functional logic programming is not equivalent to functional
           | programming.
        
           | harperlee wrote:
            | Naively, writing only one logic relation of n parameters is
            | about equivalent to writing 2^n functions (for each
            | parameter, decide whether you give it as input or not). So
            | there clearly is value there.
           | 
           | I say naively because on one hand you might not need all
           | versions of the function, but on the other one you can also
           | provide partial values, so it's not either input or output.
        
           | YeGoblynQueenne wrote:
           | >> What would be interesting, would be to replace depth-first
           | search while remaining in the world of predicates and Horn
           | clauses.
           | 
           | For that you want tabled Prolog, or in other words Prolog
           | executed by SLG-Resolution. The paradigmatic implementation
           | is XSB Prolog:
           | 
           | https://xsb.com/xsb-prolog/
           | 
           | SWI-Prolog also supports tabling but I think the XSB
           | implementation is more mature.
        
         | cerved wrote:
         | isn't backtracking a complete search strategy?
        
           | desdenova wrote:
           | That phrase was badly written.
           | 
           | Backtracking is a complete search of the problem-space.
           | 
           | What is incomplete is the Horn-SAT problem space, which is a
           | subset of SAT, that can be solved in polynomial time, and is
           | what Prolog is based on.
           | 
           | A complete logic system would have to solve SAT, which is NP-
           | complete.
           | 
           | At least that's what I understood they meant by that.
        
             | YeGoblynQueenne wrote:
             | Yeah, it's confusing. The article is referring to the
             | incompleteness of Prolog implemented using Depth First
             | Search. That's what the author means by "backtracking". I
             | know this because I know "backtracking" is used in the
             | logic programming community to stand for DFS, but if you
             | don't know the jargon you'd be right to be confused. You
             | can kind of, er, glean, that meaning in the article if you
             | see how they refer to a "fixed" search strategy, and also
             | notice that "backtracking" is not normally a search
             | strategy since it can't search on its own. "Backtracking"
             | is really "DFS with backtracking".
             | 
             | The article is pointing out that Prolog with backtracking
             | DFS is incomplete _with respect to the completeness of SLD-
             | Resolution_. To clarify, SLD-Resolution is complete for
             | refutation, or with subsumption. Prolog is an
             | implementation of SLD-Resolution using DFS with
             | backtracking. DFS is incomplete in the sense that it gets
             | stuck in infinite loops when an SLD tree (the structure
             | searched by DFS in Prolog) has cycles, especially left-
             | recursive cycles. The article gives an example of a program
             | that loops forever when executed with DFS with
             | backtracking, in ordinary Prolog.
             | 
             | SLD-Resolution's completeness does not violate the Church-
             | Turing thesis, so it's semi-decidable: SLD-trees may have
             | infinite branches. To be honest I don't know about the
             | equivalence with Horn-SAT, but Resolution, restricted to
             | definite clauses, i.e. SLD-Resolution, _is_ complete (by
             | refutation and subsumption, as I say above, and respecting
             | some structural constraints to do with the sharing of
             | variables in heads and bodies of clauses). We got several
             | different proofs of its completeness so I think we can
              | trust it's true.
             | 
              | Edit: where does this knowledge about Horn-SAT come from?
              | Do you have references? Gimme gimme gimme.
        
           | usgroup wrote:
           | Depth first search is not complete if branches can be
           | infinitely deep. Therefore if you're in the wrong infinite
           | branch the search will never finish.
           | 
           | Breadth first search is complete even if the branches are
           | infinitely deep. In the sense that, if there is a solution it
           | will find it eventually.
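
A small Python sketch (mine, not from the thread) makes the asymmetry concrete: take the implicit infinite binary tree where node n has children 2n and 2n+1. A DFS that always takes the first child descends 1, 2, 4, 8, ... forever, while BFS reaches any fixed node after finitely many steps, because every node sits at a finite depth with finitely many nodes above it.

```python
from collections import deque

def children(n):
    # Implicit infinite binary tree: node n has children 2n and 2n+1.
    return [2 * n, 2 * n + 1]

def bfs(start, goal):
    """Breadth-first search over the implicit tree.  Complete here
    because the branching factor is finite: any reachable goal is
    dequeued after finitely many steps."""
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return node
        queue.extend(children(node))

# A DFS that always descends into the first child (1 -> 2 -> 4 -> 8 ...)
# would never reach node 3, even though it sits at depth 1.
print(bfs(1, 3))   # -> 3
print(bfs(1, 13))  # -> 13
```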
        
             | desdenova wrote:
             | In practice, though, with BFS you'd run out of memory
             | instead of never finding a solution.
             | 
             | Also, there shouldn't be many situations where you'd be
             | able to produce infinite branches in a prolog program.
             | Recursions must have a base case, just like in any other
             | language.
        
               | YeGoblynQueenne wrote:
               | This has to do with the ordering of search: searching a
               | proof tree (an SLD tree, in SLD-Resolution) with DFS, as
               | in Prolog, can get stuck when there are cycles in the
               | tree. That's especially the case with left-recursion. The
               | article gives an example of a left-recursive program that
               | loops if you execute it with Prolog, but note that it
               | doesn't loop if you change the order of the clauses.
               | 
                | This version of the program, taken from the article,
                | loops (I mean it enters an infinite recursion):
                | 
                |     last([_H|T],E) :- last(T,E).
                |     last([E],E).
                | 
                |     ?- last(Ls,3).
                |     % Loops
                | 
                | This one doesn't:
                | 
                |     last([E],E).
                |     last([_H|T],E) :- last(T,E).
                | 
                |     ?- last(Ls,3).
                |     Ls = [3] ;
                |     Ls = [_,3] ;
                |     Ls = [_,_,3] ;
                |     Ls = [_,_,_,3] ;
                |     Ls = [_,_,_,_,3] ;
                |     Ls = [_,_,_,_,_,3] .
                |     % And so on forever
               | 
               | To save you some squinting, that's the same program with
               | the base-case moved before the inductive case, so that
               | execution "hits" the base case when it can terminate.
               | That's half of what the article is kvetching about: that
               | in Prolog, you have to take into account the execution
               | strategy of logic programs and can't just reason about
               | the logical consequences of a program, you also have to
               | think of the imperative meaning of the program's
                | structure. It's an old complaint about Prolog, as old as
               | Prolog itself.
        
               | agumonkey wrote:
               | IIRC Markus Triska showed a trick (with a nickname i
               | forgot) to constrain the search space by embedded a
               | variable length into the top level goal.
        
               | YeGoblynQueenne wrote:
               | I think what you mean is that he adds an argument that
               | counts the times a goal is resolved with, thus limiting
               | the depth of resolution? That works, but you need to give
               | a magic number as a resolution depth limit, and if the
               | number is too small then your program fails to find a
               | proof that it normally should be able to find. It's not a
               | perfect solution.
        
             | xelxebar wrote:
             | Hrm. I guess the converse applies if nodes can have
             | infinite children. That said, even if your tree is
             | infinitely wide and deep, we're only dealing with countable
             | children, right? Thus a complete traversal has to exist,
             | right?
             | 
              | For example, each node has a unique path to the root, so
              | write <n1, n2, ..., nk> where each ni is the sibling
              | ordinal of the node at depth i in that path, i.e. it's the
              | ni-th sibling of the n(i-1)st node. Raising the i-th prime
              | to the power ni and taking the product gives each node a
              | unique integer label. Traverse nodes in label order and
              | voila?
             | 
             | However, that all assumes we know the tree beforehand,
             | which doesn't make sense for generic call trees. Do we just
             | smash headfirst into Rice on this when trying to traverse
             | in complete generality?
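
For what it's worth, the prime-labelling idea does give unique labels if the sibling ordinals are 1-based, so every exponent is at least 1 and unique factorisation applies. A Python sketch (mine, not from the thread; `godel_label` is a made-up illustrative name):

```python
def primes(k):
    """First k primes by trial division (fine for short paths)."""
    ps = []
    n = 2
    while len(ps) < k:
        if all(n % p for p in ps):
            ps.append(n)
        n += 1
    return ps

def godel_label(path):
    """Label a tree node by its root path <n1, ..., nk> (1-based
    sibling ordinals) as p1**n1 * ... * pk**nk.  Unique factorisation
    makes this injective, but the labels are sparse: visiting nodes
    'in label order' still means deciding, for each integer, whether
    it is a label at all."""
    label = 1
    for p, n in zip(primes(len(path)), path):
        label *= p ** n
    return label

print(godel_label((1,)))    # -> 2
print(godel_label((2, 1)))  # -> 12  (2**2 * 3**1)
```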
        
               | usgroup wrote:
                | No, breadth-first search is still complete given an
                | infinite branching factor (i.e. a node with infinite
               | children). "Completeness" is not about finishing in
               | finite time, it also applies to completing in infinite
               | time.
               | 
               | Breadth first search would visit every node breadth
               | first, so given infinite time, the solution would
               | eventually be visited.
               | 
               | Meanwhile, say a branch had a cycle in it, even given
               | infinite time, a naive depth first search would be
               | trapped there, and the solution would never be found.
        
               | LegionMammal978 wrote:
               | Suppose you have a node with two children A and B, each
               | of which has infinitely many children. If you performed
               | an ordinary BFS, you could get trapped in A's children
               | forever, before ever reaching any of B's children.
               | 
               | Or, suppose that a node has infinitely many children, but
               | the first child has its own child. A BFS would get stuck
               | going through all the first-level children and never
               | reach the second-level child.
               | 
               | A BFS-like approach _could_ work for completeness, but
                | you'd have to put lower-level children on the same
               | footing as newly-discovered higher-level children. E.g.,
               | by breaking up each list of children into additional
               | nodes so that it has branching factor 2 (and possibly
               | infinite depth).
        
               | usgroup wrote:
               | Countable infinity does not work like that: two countable
               | infinities are not more than one countable infinity. I
               | think it falls into the "not even wrong" category of
               | statements.
               | 
               | The Wikipedia article is fairly useful:
               | https://en.wikipedia.org/wiki/Countable_set
        
               | LegionMammal978 wrote:
               | Yes, if you put two (or three, or countably many)
               | countable sets together, you obtain a set that is also
               | countable. The problem is, we want to _explicitly
               | describe_ a bijection between the combined set and the
               | natural numbers, so that each element is visited at some
               | time. Constructing such a bijection between the natural
               | numbers and a countably-infinite tree is perfectly
                | possible, but it's less trivial than just DFS or BFS.
               | 
               | If we're throwing around Wikipedia articles, I'd suggest
               | a look at https://en.wikipedia.org/wiki/Order_type. Even
               | if your set is countable, it's possible to iterate
               | through its elements so that some are never reached, not
               | after any length of time.
               | 
               | For instance, suppose I say, "I'm going to search through
               | all positive odd numbers in order, then I'm going to
               | search through all positive even numbers in order." (This
                | has order type ω·2.) Then I'll never ever reach the
               | number 2, since I'll be counting through odd numbers
               | forever.
               | 
               | That's why it's important to order the elements in your
               | search strategy so that each one is reached in a finite
                | time. (This corresponds to having order type ω, the order
               | type of the natural numbers.)
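
The two orderings can be sketched in Python (my own illustration, not from the thread; `odds_then_evens` and `index_of` are made-up names):

```python
import itertools

def odds_then_evens():
    """Order type omega*2: all positive odds first, then all evens.
    The number 2 is in this sequence, but never at a finite index."""
    yield from itertools.count(1, 2)  # 1, 3, 5, ... forever
    yield from itertools.count(2, 2)  # never actually reached

def naturals():
    """Order type omega: every positive integer appears at a finite
    index."""
    yield from itertools.count(1)

def index_of(gen, target, limit=1000):
    """Position of `target` within the first `limit` elements of the
    sequence, or None if it does not show up by then."""
    for i, x in enumerate(itertools.islice(gen, limit)):
        if x == target:
            return i
    return None

print(index_of(odds_then_evens(), 2))  # -> None, no matter the limit
print(index_of(naturals(), 2))         # -> 1
```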
        
               | thethirdone wrote:
               | > "Completeness" is not about finishing in finite time,
               | it also applies to completing in infinite time.
               | 
               | Can you point to a book or article where the definition
               | of completeness allows infinite time? Every time I have
               | encountered it, it is defined as finding a solution if
               | there is one in finite time.
               | 
               | > No breadth first search is still complete given an
               | infinite branching factor (i.e. a node with infinite
               | children).
               | 
               | In my understanding, DFS is complete for finite depth
               | tree and BFS is complete for finite branching trees, but
               | neither is complete for infinitely branching infinitely
               | deep trees.
               | 
               | You would need an algorithm that iteratively deepens
               | while exploring more children to be complete for the
               | infinite x infinite trees. This is possible, but it is a
               | little tricky to explain.
               | 
               | For a proof that BFS is not complete if it must find any
               | particular node in finite time: Imagine there is a tree
               | starting with node A that has children B_n for all n and
               | each B_n has a single child C_n. BFS searching for C_1
               | would have to explore all of B_n before it could find it
               | so it would take infinite time before BFS would find C_1.
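
The "iteratively deepens while exploring more children" idea can be sketched as a dovetailing enumeration in Python (my own illustration, not from the thread): at stage k, emit every path of length at most k whose sibling indices are all below k, so both depth and breadth grow together and any fixed node is reached after finitely many steps.

```python
from itertools import product

def dovetail_paths():
    """Enumerate all finite paths over the naturals fairly: at stage
    k, emit every not-yet-seen path of length <= k with all entries
    < k.  Branching and depth are both unbounded, yet any fixed path
    shows up by the stage that covers its length and largest entry."""
    seen = set()
    k = 0
    while True:
        k += 1
        for depth in range(k + 1):
            for path in product(range(k), repeat=depth):
                if path not in seen:
                    seen.add(path)
                    yield path

def steps_until(target):
    """How many paths are emitted before `target` appears."""
    for i, path in enumerate(dovetail_paths()):
        if path == target:
            return i

# C_1 from the example above: A's second child B_1 (index 1), then
# its only child (index 0) -- found after finitely many steps.
print(steps_until((1, 0)))  # -> 5
```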
        
             | agumonkey wrote:
              | Reminds me that Warren gave a talk about Prolog term
              | domains to study resolution over infinite branches.
        
       | xelxebar wrote:
       | Man, lately, I feel like this stuff has been following me around.
       | I'd really like to deep-dive into logic programming and related
       | paradigms. Just recently came across Answer Set Programming[0]
       | (via Potassco's clingo[1]), and it has made me realize just how
       | ignorant I am of the design space that's being explored here.
       | 
        | More personally, I recently spent enough time with first Scheme
        | and then APL that the paradigms clicked for me, and that
        | dramatically changed my entire outlook on work. For whatever
        | reason, I feel like breaking down my ingrained technical
        | paradigms has allowed me to integrate and strengthen my soft
        | skills.
       | 
       | Plus, mind-expanding experiences are just plain fun. Looking for
       | more of that juice!
       | 
       | [0]:https://en.wikipedia.org/wiki/Answer_set_programming
       | 
       | [1]:https://potassco.org/
        
         | gnulinux wrote:
         | I strongly recommend checking Souffle programming language.
         | It's a dialect of Datalog that can output bulk CSV data that
         | can be easily imported into other databases (like Duckdb or
         | Excel etc). It creates an extremely intuitive framework for
         | logical programming. I.e. you can visualize logical programming
         | as each relation being a giant table of elements, "or"
         | operation being akin to SQL `union all`, "and" operation being
         | akin to SQL `join`, "not" operation being akin to `outer join
         | ... where joined isnull` etc...
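
The relations-as-tables analogy can be sketched in Python (my own example, not actual Souffle syntax): a relation is a set of tuples, conjunction behaves like a join on the shared variable, and the classic ancestor rules become a naive fixpoint.

```python
# Relations as sets of tuples; "and" behaves like a SQL join on the
# shared variable, "or" like a union with the base facts.
parent = {("alice", "bob"), ("bob", "carol")}

def transitive_closure(edges):
    """Naive Datalog-style fixpoint for:
       ancestor(X, Y) :- parent(X, Y).
       ancestor(X, Z) :- parent(X, Y), ancestor(Y, Z)."""
    ancestor = set(edges)          # first rule: union with base facts
    while True:
        derived = {(x, z)          # second rule: join on the shared Y
                   for (x, y) in edges
                   for (y2, z) in ancestor
                   if y == y2}
        new = derived - ancestor
        if not new:
            return ancestor        # fixpoint reached
        ancestor |= new

print(sorted(transitive_closure(parent)))
# -> [('alice', 'bob'), ('alice', 'carol'), ('bob', 'carol')]
```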
        
           | PaulHoule wrote:
           | The tragedy of RDF and OWL is that people don't perceive the
           | connection between logic and databases.
        
         | szundi wrote:
         | Could have been an LSD trip description
        
           | YeGoblynQueenne wrote:
            | No, it's _SLD_-Resolution :P
        
             | 082349872349872 wrote:
             | "Susie in the Lye with Diamonds"                 Picture
             | yourself as the goal of a problem       With cut-driven
             | search and definite clause       Somebody calls you, you
             | answer quite "no"-ly       A girl with Kowalskified eyes...
        
         | MIA_Alive wrote:
          | I'm literally using ASP and Clingo to do logic programming for
          | school. And you're telling me it became relevant to you in
          | your work??
        
           | sterlind wrote:
           | I use ASP at work! I used it as the core of a powerful code
           | generator: I modeled the type system I wanted to implement,
           | some base operations and derivation rules, and had it
           | synthesize implementations for every possible operator
           | between every possible pair of types. I run clasp and it
           | dumps out thousands of lines of C# implementing a simple
           | symbolic matrix linear algebra library. It's one of the most
           | beautiful things I've made, imo.
        
       | tempodox wrote:
       | The Curry language (https://www.curry-language.org) does look
       | interesting. Does anybody have practical experience with it?
        
         | mdaniel wrote:
         | while I was trying to track down the license for it[1], I found
         | https://git.ps.informatik.uni-kiel.de/curry/curry2go (also
         | BSD-3) which says "A compiler and run-time system to compile
         | and run Curry programs as Go programs"
         | 
          | 1: maybe this is it? it does include _a lot_ of curry-named
          | submodules; anyway, BSD-3-Clause
          | https://git.ps.informatik.uni-kiel.de/curry/pakcs/-/blob/v3....
        
       | HeralFacker wrote:
       | Yes, it's called functional programming and it's been around for
       | a while
        
       | woolion wrote:
        | This is a reference to "Can programming be liberated from the
        | von Neumann style?" from 1977. It argues for functional
        | programming, making the point that the imperative style is more
        | common for efficiency reasons, as the programming model is close
        | to the computer architecture. It aims to be a general thought
        | framework, inviting readers to take a step back from some
        | notions that have been (hastily?) accepted in the programming
        | world.
       | 
        | It makes an analogous point: Prolog (and logic programming
        | languages in general) has been strongly influenced by the
        | resolution algorithm. In practice that means that if you write a
        | non-trivial program and performance is not right, you'll need to
        | understand the execution model and adapt to it, mainly with the
        | pruning (cut) operator (!). So while the promise is to "declare"
        | values and not think about the implementation details, you're
        | railroaded into thinking in a very specific way.
       | 
        | I personally found it frustrating that good solutions were
        | essentially unworkable because of this, in comparison with
        | either imperative or functional paradigms, which are
        | significantly more flexible. As a result, Prolog-style
        | programming feels limited to the small problems for which it is
        | readily a good fit, to be integrated into a general program
        | using a general-purpose language. I may be wrong on this, but of
        | the 50 people who learned Prolog around the same time as me,
        | none kept up with it. Meanwhile, other niche languages like
        | OCaml, Haskell and Scheme had good success.
       | 
       | Rethinking the language foundation could remove these barriers to
       | give the language a broader user base.
        
         | PaulHoule wrote:
         | The way you write imperative programs in Prolog by exploiting
         | the search order, using cuts, etc. seems clever when you see it
         | in school and do a few assignments for a comparative
         | programming languages class (the only 3 credit CS course I
         | took) but it is painfully awkward if you have to do very much
         | of it.
        
           | YeGoblynQueenne wrote:
           | It isn't. I do most of my programming in Prolog, I write
           | oodles of it daily, and it's not a problem. You learn to
           | think that way easily.
           | 
           | The argument is basically that Prolog is not 100% declarative
           | and that if we jump through a few hoops, and translate it all
           | to functional notation, we can make it "more declarative".
           | But let's instead compare the incomplete declarativeness of
           | Prolog to a fully-imperative, zero-declarative language like
            | Python or C#. We'll find, I believe, that most programmers
           | perfectly fine programming completely non-declaratively and
           | don't have any trouble writing very complex programs in it,
           | and that "OMG my language is not purely declarative" is the
           | least of their problems. I hear some old, wizened nerds even
           | manage to program in C where you actually can drop to the
           | hardware level and push bits around registers entirely by
           | hand O.o
        
             | Pearledlang wrote:
             | Really interesting to hear!
             | 
             | I was quite hooked to Prolog in a previous life. Then the
             | limiting factor was the tooling, for really practical use.
             | 
             | Could you tell a bit about your Prolog environment?
        
               | agumonkey wrote:
               | seconded
        
               | YeGoblynQueenne wrote:
               | I use the SWI-Prolog IDE:
               | 
               | https://www.swi-prolog.org/PceEmacs.html
               | 
               | I suppose it is a bit spartan, but it has a ton of
               | functionality that I find indispensable [1]. For example,
               | when I place my cursor on a variable in the editor it
               | highlights all the variables that unify with it in the
               | same clause, and it will highlight singleton variables in
               | a different colour so you can catch errors caused by
               | typos easily. It is also aware of things like imported
               | predicates, undefined or dynamic predicates, multi-file
               | predicates etc, and will highlight them accordingly. It
               | has some (limited) auto-complete and code-expansion etc.
               | and a status line that gives you very good information
               | about the kind of term you have your cursor on - where
               | it's defined if it's a predicate, its name and arity and
               | how many clauses etc, basically much of the information
               | you can get from querying the program database with
               | various program inspection built-ins. Some of my
               | colleagues use VS Code to write Prolog instead and I keep
               | shaking my head and grumbling about the errors they keep
               | making that they wouldn't if they used the SWI-Prolog IDE
               | instead. Kids, these days. In my youth, we wrote all our
               | Prolog on Notepad! And compiled on Dos!
               | 
               | (nope. never)
               | 
               | SWI also has a graphical debugger, which however I never
               | use:
               | 
                | https://www.swi-prolog.org/pldoc/doc/_SWI_/xpce/prolog/lib/g...
               | 
               | I know some people swear by its name but I prefer the
               | textual debugger. Again this one looks a little spartan
               | :)
               | 
               | More goodies in SWI: cross-referencer:
               | 
               | https://www.swi-prolog.org/gxref.md
               | 
                | And profiler:
                | 
                |     1 ?- profile(between(1,10,_)).
                |     =====================================================
                |     Total time: 0.000 seconds
                |     =====================================================
                |     Predicate    Box Entries =    Calls+Redos     Time
                |     =====================================================
                |     between/3              1 =        1+0         0.0%
                |     true.
               | 
               | And lots and tons of facilities for debugging and error
               | handling, unit testing, all sorts of libraries, a package
               | manager etc.
               | 
               | ________________________
               | 
               | [1] You can customise the IDE colours too. There's a dark
               | theme:
               | 
               | https://www.swi-prolog.org/pldoc/man?section=theme
               | 
               | There's some screenshots here:
               | 
                | https://swi-prolog.discourse.group/t/questions-about-ide-the...
        
               | nextos wrote:
                | Do you use Prolog in academia, or have you moved to
                | industry?
        
             | upghost wrote:
              | Serious question, how do you deal with typos in functors?
              | And are your techniques specific to the implementation of
              | Prolog you use? Recently I had a maddening experience
              | chasing this down:
              | 
              |     result(World0, move(robot(R), Dir), World) :-
              |         dissoc(World0, at(robot(R), X0), World1),
              |         direction_modifier(Dir, Modifier),
              |         X #= X0 + Modifier,
              |         conj(World1, at(robot(R), X), World).
              |     result(World0, drop_rock(robot(R), Place), World) :-
              |         dissoc(World0, capacity(Place, Capacity0), World1),
              |         dissoc(World1, carring_rock(robot(R)), World2),
              |         Capacity #= Capacity0 + 1,
              |         conj(World2, capacity(Place, Capacity), World).
              |     result(World0, pickup_rock(robot(R), Place), World) :-
              |         dissoc(World0, capacity(Place, Capacity0), World1),
              |         Capacity #= Capacity0 - 1,
              |         conj(World1, capacity(Place, Capacity), World2),
              |         conj(World2, carrying_rock(robot(R)), World).
             | 
             | See if you can spot the bug.
             | 
             | ...
             | 
             | ...
             | 
             | ...                 carrying_rock vs carring_rock
             | 
              | Because the typo was in a functor (not a predicate or a
              | singleton variable) there was no IDE or language support;
              | Prolog assumed that's what I wanted and reported the wrong
              | answer. Of course the snippet I showed was part of a
              | larger example. In other languages it would've taken me 5
              | minutes to bisect the program or debug and find the error,
              | but it took me 3-4 hours. I ended up needing to write a
              | lot of error-correcting code, basically a half-assed type
              | system, and that code ended up being more substantial than
              | the actual program logic. Is this common? Am I "doing it
              | wrong"?
             | 
             | Right now this seems to have all the downsides of
             | programming exclusively with "magic strings", and I haven't
             | been able to find any cure for it or even seen this problem
             | discussed elsewhere.
             | 
             | *Edit:*
             | 
             | I even rewrote it for SICStus and downloaded their IDE and
             | taught myself Eclipse just to use their IDE plugin, and
              | found that setting breakpoints didn't help, because,
              | naturally, since the functor is in the predicate
              | signature, the predicate is never stepped into in the
              | first place!
             | 
             | I could linearize the arguments and include them in the
             | body but this destroys the indexing and can put me into
             | "defaulty representation" territory.
        
               | YeGoblynQueenne wrote:
               | >> Right now this seems to have all the downsides of
               | programming exclusively with "magic strings", and I
               | haven't been able to find any cure for it or even seen
               | this problem discussed elsewhere.
               | 
               | The cure is to not try to program with "magic strings".
               | You don't need to, and if you really want to, then you
               | should try to understand what exactly it is that you're
               | doing, and do it right.
               | 
               | Specifically, what you call "magic strings" are what we
               | call "functions" in First Order Logic, and that Prolog
               | calls compound terms, like carring_rock(robot(R)) which
               | is in fact two compound terms, nested. Prolog lets you
               | nest compound terms infinitely and if you really want to
               | hurt yourself, one great way to do it is to nest terms
               | everywhere.
               | 
               | The alternative is to understand that when you use
               | compound terms as arguments to predicates (or other
               | terms) you are really _typing_ those arguments. Above,
               | "carring_rock(_)" is the type of the second argument of
               | result/3, and "robot(_)" is the type of the single
               | argument of carring_rock/1. The catch is, of course, that
               | Prolog is not a typed language and it doesn't care if you
               | want to hurt yourself. So if you need to have types like
               | that, then you should write some code to explicitly
               | handle types and do some type checking. For instance,
               | right out the top of my head:
               | result_type(T):-
               |         T =.. [carring_rock,R]
               |         ,robot_type(R)
               |         ,!.
               | result_type(T):-
               |         throw('Unknown result type':T).
               | 
               | robot_type(T):-
               |         T =.. [robot,R]
               |         % ... further typing of R
               |         ,!.
               | robot_type(T):-
               |         throw('Unknown robot type':T).
               | 
               | Note that this is not the right way to throw errors, but
               | I can't go into that now.
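               | 
               | A hypothetical sketch of how checks like these get called
               | at predicate entry (result/3 is the predicate from your
               | snippet; apply_result/3 is a made-up name purely for
               | illustration):

```prolog
% Sketch only: validate the typed argument before doing any work,
% so a mistyped functor raises an error instead of silently failing.
result(State, Result, NewState) :-
    result_type(Result),                  % throws on an unknown functor
    apply_result(State, Result, NewState).
```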
               | 
               | A simpler thing I'd advise, but that's going into style
               | territory, is to keep variable names as short as
               | possible. One reason it's hard to spot the mistake in the
               | code above is that the source code is all cluttered with
               | long variable names and the nesting of compound terms
               | makes it even more cluttered. An IDE with good syntax
               | highlighting can also help. Perversely, I find that there
               | are many people who code in Prolog either without syntax
               | highlighting at all or in IDEs that are not aware of
               | common results of typos, like singleton variables. The
               | SWI-Prolog IDE is good for this and Sweep for Emacs has a
               | good reputation also (although I haven't tried it):
               | 
               | https://eshelyaron.com/sweep.html
               | 
               | Edit: it just occurred to me that the project I'm working
               | on currently, in my post-doc, involves quite a bit of
               | typing using compound terms as arguments, like you do
               | above. I've opted for a program synthesis approach where
               | the predicates I need with typing are automatically
               | generated from a specification where I define the types
               | of the predicates' arguments. Doing the same thing by
               | hand is probably my number two recommendation of how not
               | to code in Prolog. Number one is "the dynamic database is
               | evil", but does anyone listen to me? Never.
               | 
               | Edit 2: Covington et al's _Coding Guidelines for Prolog_
               | make the same point:
               | 
               |     5.13 Develop your own ad hoc run-time type and mode
               |     checking system.
               | 
               |     Many problems during development (especially if the
               |     program is large and/or there are several developers
               |     involved) are caused by passing incorrect arguments.
               |     Even if the documentation is there to explain, for
               |     each predicate, which arguments are expected on entry
               |     and on successful exit, they can be, and all too
               |     often they are, overlooked or ignored. Moreover, when
               |     a "wrong" argument is passed, erratic behavior can
               |     manifest itself far from where the mistake was made
               |     (and of course, following Murphy's laws, at the most
               |     inconvenient time).
               | 
               |     In order to significantly mitigate such problems, do
               |     take the time to write your own predicates for
               |     checking the legality of arguments on entry to and on
               |     exit from your procedures. In the production version,
               |     the goals you added for these checks can be compiled
               |     away using goal_expansion/2.
               | 
               | https://arxiv.org/abs/0911.2899
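               | 
               | A minimal sketch of that goal_expansion/2 trick, assuming
               | the result_type/1 check from above and a user-defined
               | 'production' flag set by the build (both of which are my
               | assumptions, not something from the guidelines):

```prolog
% Compile the run-time type checks away in production builds.
% Assumes the build script does:  :- set_prolog_flag(production, true).
goal_expansion(result_type(_), true) :-
    current_prolog_flag(production, true).
```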
               | 
               | >> I could linearize the arguments and include them in
               | the body but this destroys the indexing and can put me
               | into "defaulty representation" territory.
               | 
               | Sorry, I don't know what either of those are: "linearize
               | the arguments" and "defaulty representation".
               | 
               | Prolog only has first-argument indexing normally,
               | although some implementations let you change that and use
               | your own indexing scheme. Is that what you did?
        
               | foobiekr wrote:
               | so so so so so so so so so much this. I have tried to
               | make prolog a part of various systems on and off for two
               | decades and the ergonomics and basic practical shit like
               | this is why it never works.
               | 
               | the answer is always: do a lot of manual stuff that the
               | language should do for you. I can, but I can't get a team
               | to.
        
               | YeGoblynQueenne wrote:
               | Then don't use Prolog. It's not mandatory.
               | 
               | For the record I never have problems like that and I'm
               | sure I'm not special. Well, not in _that_ way. This all
               | comes under the heading of  "learn what works". You have
               | to do that with any language.
               | 
               | Edit: as a slightly less flippant answer (sorry) Prolog
               | doesn't "do a lot of manual stuff that the language
               | should do for you" because the Prolog community doesn't
               | think the language should do those things for you and I
               | agree. Take for instance inheritance and composition,
               | like in object orientation. There's no reason Prolog
               | should do that for you. If it did, it would shoehorn you
               | into a certain programming paradigm that can feel like a
               | straitjacket when you don't need it, but you absolutely
               | need to use it because that's what the "language does for
               | you". Prolog instead gives you the tools to do all the
               | things you need, when you need them. It's more work, for
               | sure, but it's also more freedom. I spent most of my
               | career in the industry working with very rigidly OOP
               | languages and that's one reason why I escaped into
               | academia, where I can program in Prolog (see what I did
               | there?) all day without having to declare a class
               | property if I don't want to. And if I really wanted to, I
               | could go all the way and write something like Logtalk:
               | 
               | https://logtalk.org/
               | 
               | Another example of something that Prolog doesn't do for
               | you, and that you don't always need, but can always do
               | yourself if you need it, is typing; like I point out in
               | the sibling comment. Why _should_ Prolog do that for you?
               | Are we going to have the whole argument about strongly
               | typed vs. weakly typed languages all over again? I think
               | not. If you want types in Prolog, you can roll your own.
               | Now try to write, say, C# code without types, when you
               | really don't need the bother.
        
             | skybrian wrote:
             | I agree that not being fully declarative is okay. But lots
             | of pure functions are written in languages like Python and
             | C#. "Fully-imperative, zero-declarative" seems like a bit
             | of an exaggeration?
             | 
             | (I'm generally of the opinion that both laziness and
             | backtracking are bad defaults for general-purpose
             | programming, but they're handy when you need them.)
        
         | tannhaeuser wrote:
         | It might be a reference to that 1977 paper in name, but unlike
         | that Backus paper using math to make its point, this reads like
         | a shallow ad for using the Curry language. The central (and
         | only) point is merely an _example_ of rewriting a Prolog
         | predicate for appending lists into Curry but without even the
         | claim of generality. The rewritten Curry functions however
         | trivially fix determinacy and variables to input and output
         | roles when the entire point of logic variables in Prolog is
         | that they're working either when bound to a value upon
         | entering a predicate call, or can get bound to as many values
         | nondeterministically as needed until the procedure terminates
         | successfully (and then even more on backtracking over failed
         | subsequent goals). The paper also glosses over the concept of
         | unification here. I sure hope the referenced detail papers come
         | with more substance.
         | 
         | The paper title doesn't even make sense? So he wants to
         | "liberate" Logic Programming from predicates? Predicate Logic
         | _is_ First-Order Logic ... ie what YeGoblynQueenne says in
         | another comment.
         | 
         | From a mere practical PoV, who is the author even addressing?
         | Prolog, from the get go, was introduced with NLP, planning
         | problems, and similar large combinatorial search spaces in
         | mind and is used for the convenience it brings to these
         | applications. That FP could theoretically be more broadly used
         | is completely beside the point; Prolog's focus is its
         | strength.
        
         | gatane wrote:
         | At Uni, we had to implement the same project on different
         | paradigms/languages.
         | 
         | We had to do goddamn Paint on Prolog. Yup.
        
       | amboo7 wrote:
       | https://icfp24.sigplan.org/home/minikanren-2024#event-overvi...
       | is related. There has been work on turning slow bidirectional
       | code into faster functional code (in 2023, reachable from this
       | url).
        
       | 2-3-7-43-1807 wrote:
       | Is this about the problem that Prolog tends to ping-pong
       | infinitely between leaves? I wrote a fully declarative solitaire
       | game solver and remember this being a big issue, forcing me to
       | memoize past states of the backtracking and basically exclude
       | them by another predicate. This is obviously slow. I thought, why
       | not at least have a plugin to avoid trivial cases where
       | backtracking gets stuck switching between A and B. Or a plugin
       | for a stochastic traversal of the solution tree.
        
         | usgroup wrote:
         | There are. Tabling (available in most mature implementations)
         | helps when recalculation of the same states is a problem.
         | Meanwhile, a custom search strategy is always an option to
         | implement directly in Prolog. You'll see this in many Advent of
         | Code solutions in Prolog when it is applied to path finding
         | puzzles, in which depth first search is rarely a workable
         | solution.
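         | 
         | A small sketch of the tabling case (directive syntax as in
         | SWI-Prolog/XSB; my example, not from the thread): memoizing
         | answers turns the naive doubly recursive Fibonacci from
         | exponential to linear time, the same effect as excluding
         | already-computed states by hand:

```prolog
:- table fib/2.     % memoize all computed answers for fib/2

fib(0, 0).
fib(1, 1).
fib(N, F) :-
    N > 1,
    N1 is N - 1, N2 is N - 2,
    fib(N1, F1), fib(N2, F2),
    F is F1 + F2.
```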
        
       | jkbyc wrote:
       | There's an insightful critique of the paper on Reddit:
       | https://www.reddit.com/r/ProgrammingLanguages/comments/1g1su...
       | ...agree that it's weird the paper doesn't mention constraint
       | logic programming, but it's perhaps pointing at it implicitly by
       | saying "Replacing backtracking by complete search strategies"
        
         | YeGoblynQueenne wrote:
         | That's a good critique.
        
       | rebanevapustus wrote:
       | Datalog does not use backtracking, and is getting increasingly
       | popular.
       | 
       | See:
       | 
       | - The fastest non-incremental embedded Datalog engine:
       |   https://github.com/s-arash/ascent
       | - The state-of-the-art non-embedded and non-incremental Datalog
       |   engine: https://github.com/knowsys/nemo
       | - A Python library that contains an embedded incremental Datalog
       |   engine: https://github.com/brurucy/pydbsp
       | - A Rust library that provides an embedded incremental Datalog
       |   engine over property graphs:
       |   https://github.com/brurucy/materialized-view
        
         | lapinot wrote:
         | Datalog is a nice query language but it is far more limited
         | than prolog or general purpose logic programming.
        
           | convolvatron wrote:
           | there is a really interesting space between queries and
           | prolog which includes mundane junk like encoding and
           | rendering and data formatting changes that benefits in
           | evaluation from having less power but maintains the lovely
           | expressibility and analysis that we get from 'real' logic
           | programming.
           | 
           | there's lots of exploring left to do
        
       | ristos wrote:
       | There already is a pretty major effort in the Prolog community
       | to build everything as much as possible around pure,
       | monotonic prolog, and to provide a means to support multiple
       | search strategies depending on the best fit for the problem. CLP
       | libraries are also pretty common and the go-to for representing
       | algebraic expressions relationally and declaratively.
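       | 
       | For instance (a sketch in SWI-Prolog's library(clpfd) syntax,
       | my example rather than something from this thread), a relational
       | list length that runs in both directions because #=/2 constrains
       | instead of evaluating:

```prolog
:- use_module(library(clpfd)).  % bundled with SWI-Prolog and SICStus

% len(List, N): N is the length of List, usable in either direction.
len([], 0).
len([_|T], N) :- N #> 0, N #= M + 1, len(T, M).
```

       | Here ?- len([a,b,c], N). binds N = 3, and ?- len(Ls, 2). binds
       | Ls = [_,_], where plain is/2 arithmetic would throw an
       | instantiation error in the second direction.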
       | 
       | I wouldn't say that the logic or relational way of describing
       | effects is a bad thing either. By design it allows for multiple
       | return values (foo/1, foo/2, ...), so you can build higher-level
       | predicates that return multiple resources, which is pretty common
       | for many programs. It makes concatenative (compositional) style
       | programming really straightforward, especially for more complex
       | interweaving, which also ends up being quite common. Many prolog
       | implementations also support shift/reset, so that you can easily
       | build things like conditions and restarts, algebraic effects,
       | and/or debugging facilities on top. Prolog is also homoiconic in
       | a unique way compared to lisp, and it's quite nice because the
       | pattern matching is so powerful. Prolog really is one of the best
       | languages I ever learned, I wish it was more popular. I think
       | prolog implementations need a better C FFI interop and a nicer
       | library ecosystem. Trealla has a good C FFI.
       | 
       | I think logic programming is the future, and a lot of these
       | problems with prolog are fixable. If it's not going to be prolog,
       | it'll probably be something like kanren and datalog within a lisp
       | like scheme or clojure(script).
       | 
       | This is a great resource for getting a good feel of prolog:
       | https://www.youtube.com/@ThePowerOfProlog/videos
        
         | YeGoblynQueenne wrote:
         | SWI-Prolog has a robust and very mature foreign interface to
         | C++:
         | 
         | https://www.swi-prolog.org/pldoc/doc_for?object=section(%27p...
        
       | PaulHoule wrote:
       | Reminded me of the "rules and schemes" concept I was thinking of
       | about 10 years ago that would separate "pure logical" rules from
       | the stuff it takes to turn them into a program that runs by
       | either forward chaining (production rules) or backwards chaining
       | (like prolog). I got so far as writing a macro compiler that
       | would let you write a base set of rules and rewrite them for
       | different purposes such as transforming document A into document
       | B or going in the opposite direction.
       | 
       | I like how they let you write functions to control the search
       | order because boy that is essential.
        
       | YeGoblynQueenne wrote:
       | [Note I'm sick and tired; literally. I may not be firing on all
       | cylinders in the following.]
       | 
       | The article is fudging things with its use of "backtracking" as a
       | stand-in for backtracking _Depth First Search_ (DFS) - the latter
       | is the search strategy that is "fixed" in Prolog. And it's not
       | really fixed. Prolog programs can also be executed by "tabling"
       | a.k.a. SLG-Resolution, which basically replaces DFS with Breadth-
       | First Search modulo memoization. Tabling avoids an important
       | source of "incompleteness" in Prolog, that of non-terminating
       | left-recursions.
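       | 
       | The classic illustration (a sketch of mine, not from the
       | article): a left-recursive transitive closure that loops under
       | plain DFS but terminates under tabling:

```prolog
:- table path/2.    % without this directive, ?- path(a, X). loops

path(X, Z) :- path(X, Y), edge(Y, Z).   % left-recursive clause first
path(X, Z) :- edge(X, Z).

edge(a, b).
edge(b, c).
```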
       | 
       | To clarify, that is what makes Prolog "incomplete": that
       | executing Prolog programs by DFS makes Prolog loop infinitely
       | when encountering some left-recursions. The article gives the
       | example of a last/2 predicate:
       | 
       |     last([_H|T],E) :- last(T,E).
       |     last([E],E).
       | 
       | This does indeed loop. But this one doesn't:
       | 
       |     last([E],E).
       |     last([_H|T],E) :- last(T,E).
       | 
       |     ?- last(Ls,3).
       |     Ls = [3] ;
       |     Ls = [_,3] ;
       |     Ls = [_,_,3] ;
       |     Ls = [_,_,_,3] ;
       |     Ls = [_,_,_,_,3] ;
       |     Ls = [_,_,_,_,_,3] .
       | 
       | And that's what the article is pointing out with the allusion to
       | "a higher, declarative programming style which frees the
       | programmer from thinking about low-level control details". With
       | such a "higher declarative programming style" the programmer does
       | not have to think about program structure or execution strategy
       | and can write whatever, however, and still get results!
       | 
       | The problem with that argument, which is as old as Prolog and
       | possibly even older than that, is that it's an argument from
       | taste, or, indeed, "style", to use the article's terminology.
       | More specifically, for every non-terminating Prolog program that
       | can be written, we can _most likely_ write a Prolog program that
       | does terminate, and that is (success-set) equivalent to the non-
       | terminating program, by keeping in mind the structure of the
       | search space for proofs traversed by ordinary Prolog with DFS, or
       | tabling. And possibly with a few cuts here and there (oh, the
       | humanity!). The article is simply arguing that it is  "more
       | declarative" to not have to do that. But, why is "more
       | declarative" better? It's style all the way down.
       | 
       | As to the second axis of the argument, the horror of predicates
       | that do not neatly map to "many" real world problems that are
       | better represented as functions, asking whether we can liberate
       | logic programming from predicates is like asking whether we can
       | liberate the First Order Predicate Calculus from predicates, or,
       | equivalently, make ice cream without the ice or the cream. Sure
       | we can. The question is again: why? I don't see a clear
       | justification for that. In particular, setting implementation
       | details aside (as the article does on this point) SLD-Resolution
       | is already sound and refutation-complete, or complete with
       | subsumption. That is the best that can ever be achieved, and the
       | article doesn't seem to claim anything else (or the ghosts of
       | Alonzo Church and Alan Turing would be very, very sad). So this,
       | too, seems to be a matter of taste: functions are more stylish
       | than predicates. M'kay.
       | 
       | In fact, if I may blaspheme a little, this is what Church did
       | with his Lambda calculus: he turned everything into typed
       | functions to extend First Order Logic (FOL) into a higher order.
       | And in the process completely destroyed the simple elegance of
       | FOL. Why?
        
       | carapace wrote:
       | I'm not an expert, and I've only just skimmed the paper, but it
       | seems to me that this is a kind of _partial evaluation_ for
       | Prolog, eh? (Perhaps  "partial resolution" is more technically
       | correct?)
       | 
       | I think some Prolog systems do something like this already as an
       | optimization, but I could be totally off on that.
        
       ___________________________________________________________________
       (page generated 2024-10-13 22:01 UTC)