[HN Gopher] Defense of Lisp macros: The automotive field as a ca...
       ___________________________________________________________________
        
       Defense of Lisp macros: The automotive field as a case in point
        
       Author : molteanu
       Score  : 120 points
       Date   : 2024-07-25 09:18 UTC (13 hours ago)
        
 (HTM) web link (mihaiolteanu.me)
 (TXT) w3m dump (mihaiolteanu.me)
        
       | pfdietz wrote:
       | > "I don't think this is truly realized, but all this information
       | written down in word sheets, requirements, knowledge passed on in
       | meetings, little tricks and short hacks solved in video calls,
       | information passed in emails and instant messaging is information
       | that is part of the project, that is put down in some kind of a
       | language, a non-formal one like English or tables or boxes or
       | scribbles or laughs and jokes. The more a project has of these,
       | the harder it is to talk about and verify the actual state of the
       | project as it is obviously impossible to evaluate a non-formal
       | language like you would a formal programming language and thus it
       | becomes that much harder to explore and play with, test and
       | explore the project."
       | 
       | I've thought all this sloppy side information would be a good
       | target for the currently in vogue AI techniques.
        
         | molteanu wrote:
          | That would only mean we'd add even more tools on top of the
          | tools that currently sit on top of C (in this particular
          | case) to fix our 'sloppy' ways, no?
        
           | pfdietz wrote:
           | Yes, but the hope would be a more general tool rather than a
           | raft of specialized tools.
        
       | FrustratedMonky wrote:
       | So how would we go about switching the entire software industry
        | to use LISP more? I've been struggling with this idea for a
        | while. It seems that the best languages don't get adopted.
       | 
        | The only consistent explanation I've seen is that it is about
        | 'easy'. The other languages have tools to make them easy, easy
        | IDEs; the languages 'solve' one 'thing', and using them for
        | that 'one thing' is easier than building your own in LISP.
       | 
        | You can do anything with LISP, sure, but there is a learning
        | curve, a lot of 'ways of thinking' to adapt the brain to in
        | order to solve problems.
       | 
        | Personally, I do wish we could somehow re-vamp the CS Education
        | system to focus on LISP and ML-family languages, and train more
        | for the thinking process, not just how to connect up some Java
        | libraries.
        
         | adamddev1 wrote:
          | There was a great episode of Type Theory for All with Conal
          | Elliott as a guest where he decried MIT's decision to switch
          | to teaching Python instead of Scheme for introductory
          | programming.
         | He makes a very good case for how CS education should be aiming
         | to pursue the greatest truths and not just equipping people
         | with whatever employers want.
        
           | anta40 wrote:
           | A podcast about type theory and other programming language
            | design topics? Oh well, auto-subscribe :)
        
         | dartos wrote:
         | Imo the issue is undergrad cs programs have an identity crisis.
         | 
          | Are they the entry into the world of computer science? Should
          | they teach set theory, computer organization, language
          | theory, etc.?
         | 
         | OR
         | 
          | are they trying to prepare the new wave of workers? Should
          | they teach protocols like HTTP, industry tools like Java and
          | Python, and good test practices?
         | 
         | A well rounded engineer should have a grasp on it all, but with
         | only 4 years, what will attract more funding and more students?
        
           | VyseofArcadia wrote:
           | I've been advocating for years that CS departments need to
           | bifurcate. Actual CS classes, the theory-heavy kind, need to
           | be reabsorbed into math departments. People who want to study
           | computer science academically would be better served by
            | getting a math degree. The "how to write software" courses
            | need to be absorbed into engineering departments, and maybe
            | the extra discipline gained from actual engineers teaching
            | these courses can start turning software engineers from
            | computer programmers with an inflated job title into actual
            | engineers.
           | 
           | Students can then make a choice. Do I get a degree in
           | computer science and be better prepared for academia,
           | essentially being a mathematician who specializes in
           | computation? Or do I get a degree in computer engineering and
           | be better prepared to write reliable software in industry?
           | 
           | Of course this distinction exists today, and some
           | universities do offer separate CS and CE degrees, but in
           | practice it seems more often to be smashed together into a
           | catch-all CS degree that may or may not be more theory or
           | practice focused depending on the program.
        
             | molteanu wrote:
             | Just to add another angle to this: of course you can have
              | CS classes and all that good stuff, but would businesses
              | only employ these kinds of graduates? Or would they
              | spread even thinner to grab market share or increase
              | profits and hire non-experts as a result, for whom "easy"
              | and intuitive tools have to be developed and employed? I
              | mean, I see this problem with abstraction, maths,
              | compilers, Lisp, etc., you know, the fundamental stuff.
              | That is, the deeper you go, the harder it becomes to find
              | people willing or able to dive deep. So eventually you
              | run out of manpower, and then what? Use these "intuitive"
              | tools, probably.
        
               | VyseofArcadia wrote:
               | I mean, this is already a thing today. Programmers who
               | can hack together a CRUD app in <insert popular web dev
               | stack> are a dime a dozen. People who know more about
               | compilers than "oh yeah I got a C in that class" are
               | pretty hard to find.
               | 
               | At some point businesses who need "deep experts" have to
               | hire non-experts and invest in training them. This is
               | what I have seen in practice. You don't use the
               | "intuitive" tools, you hire people who are willing to
               | learn and teach them the other tools.
        
             | scott_s wrote:
             | First, "computer engineering" is already a name for an
             | established discipline. And regarding academic computer
              | scientists, a significant number of them are on the
             | "systems" side; it would be inaccurate to call their work a
             | specialization of math.
        
           | blacklion wrote:
            | IMHO, it is the University-level vs. college (vocational
            | college) difference. Industry should not require
            | University-level training from workers. Academia should.
            | 
            | A plumber or electrician doesn't need a university degree,
            | only professional training, unless they are designing a
            | whole sewer system or a power grid for a city.
            | 
            | So, yes, bifurcate CS into science and trade. And fight
            | requirements for Bachelor's/Master's degrees in job
            | offerings.
        
             | fn-mote wrote:
              | > Industry should not require University-level training
              | > from workers.
              | 
              | > Plumber or electrician don't need university degree,
              | > only professional training [...]
             | 
             | Oh come on, this is ridiculous.
             | 
             | Sure _you_ might be able to do good programming without a
             | college degree.
             | 
             | The typical student? I give them very little chance.
             | 
             | Here's what I think you would see:
             | 
             | * Students trapped in small skill-set jobs, struggling to
             | branch out.
             | 
             | * Poor ability to manage complexity. The larger the system,
             | the worse an ad-hoc job is going to be.
             | 
              | * Lack of awareness of how much tools can help. "Make
              | impossible states unrepresentable"? Write a DSL with a
              | type system so another department stops making so many
              | errors?
             | 
             | * Even more incompetence. Had to learn to recognize an
             | O(N^2) algorithm on the job, but you didn't even know what
             | asymptotic complexity was? People are going to end up
             | believing ridiculous cargo cult things like "two nested
             | loops = slow".
             | 
             | * Less rigorous thinking. Do you think they're even going
             | to be able to pick up recursion? Have you watched a
             | beginner try to understand something like mergesort or
             | quicksort? Imagine that they never had a class where they
             | programmed recursively. Is depth first search taught on the
             | job? Sure, but... how deeply.
             | 
             | * Even less innovation. Who is going to be making the
             | decisions about how to manage state? Would you ever see
             | ideas like Effect?
             | 
             | I'm not saying that a university education is a cure-all,
             | or that it is essential to the jobs that many of us do. I
             | AM saying that if you look at the level of complexity of
             | the work we do, it's obvious (to me) that there is
             | something to study and something to gain from the study.
        
           | PaulHoule wrote:
           | 20 years ago I found CS students at my Uni usually didn't
           | know how to do version control, use an issue tracker, etc.
           | Today they all use Github.
           | 
           | Remember computer science professors and grad students get
           | ahead in their careers by writing papers not by writing
           | programs. You do find some great programmers, but you also
           | find a lot of awful code. There was the time that a well-
           | known ML professor emailed me a C program which crashed
           | before it got into main() and it was because the program
            | allocated a 4GB array that it never used. He could get away
            | with it because he had a 64-bit machine, but I was still on
            | 32 bits.
           | 
           | Early in grad school for physics I had a job developing Java
           | applets for education and got invited to a CS conference in
           | Syracuse where I did a live demo. None of the computer
           | scientists had a working demo and I was told I was very brave
           | to have one.
        
           | ykonstant wrote:
           | When I was a student in Crete, the nearby CSD was a gorgeous
           | example of doing both competently. They had rigorous theory
           | courses (like mandatory DS, Algos, logic, compilers etc) as
           | well as hardcore applied stuff.
           | 
            | Side-knowledge like source control, Unix systems and
            | utilities, and editors/IDEs was supposed to be picked up by
            | students themselves with the help of lab TAs, because
           | assignments were to be delivered over ssh through specific
           | channels etc. Sometimes, the quirky teacher would not give
           | precise instructions for certain projects, but tell the
           | students to find the documentation in the departmental
           | network directories for the class and decrypt them with their
           | ID numbers. So the students would go on to crawl the unix and
           | windows nodes for the relevant info.
           | 
           | A "good" (7.5+/10) graduate from the CSD of the University of
            | Crete could do rigorous algorithmic analysis, hack systems
            | stuff in C, and work in Verilog. OOP was hot at the time, so
           | it also meant students were expected to produce significant
           | amounts of code in Java and C++. Specializations abounded:
           | networks and OSs, theory, arithmetic and analysis, hardware
           | (including VLSI etc. at the undergraduate level, with labs).
           | I won't even go into the graduate stuff.
           | 
           | And although this curriculum was quite demanding and onerous
           | (the number of projects in each class was quite crazy), it
           | was just something students dealt with. Heck, this was not
           | even the most famous CS school in Greece: the elite hackers
           | mostly went to the National Technical University of Athens.
           | 
           | I am not sure what the situation is now, but at least at the
           | time, graduates were ready both for academic and serious
            | industry careers. It is a 4-year curriculum, though many if
            | not most students went on for 5 or 6 years. And of course,
            | it was free.
        
           | FredPret wrote:
           | In my traditional-engineering education, we spent a ton of
           | time on the basic science and theory, with a super broad
           | overview of actual methods used in practice.
           | 
           | The expectation was that you graduate and you're then
           | equipped to go to a company and start learning to be an
           | engineer that does some specific thing, and you're just
           | barely useful enough to justify getting paid.
        
         | ryan-duve wrote:
          | > The only consistent explanation I've seen is that it is
          | about 'easy'. The other languages have tools to make them
          | easy, easy IDEs; the languages 'solve' one 'thing', and using
          | them for that 'one thing' is easier than building your own in
          | LISP.
         | 
         | I have thought something similar, perhaps with a bit more
         | emphasis on network effects and initial luck of popularity, but
         | the same idea. Then about a week ago, I was one of the lucky
         | 10,000[0] that learned about The Lisp Curse[1]. It's as old as
         | the hills, but I hadn't come across anything like it before. I
         | think it better explains the reason "the best languages don't
         | get adopted" than the above.
         | 
          | The TL;DR is that with unconstrained coding power comes
          | dispersion: effort is diluted across many equally optimal
          | candidate solutions instead of consolidating on a single
          | (arbitrary) one.
         | 
         | [0] https://xkcd.com/1053/
         | 
         | [1] http://winestockwebdesign.com/Essays/Lisp_Curse.html
        
           | FrustratedMonky wrote:
           | Guess today I'm one of the lucky 10,000 to read this essay.
        
             | Jach wrote:
             | I'd say you were unlucky, because it's a rather terrible
             | essay and doesn't actually get the diagnosis correct at
             | all; indeed if some of its claims were true, you'd think
             | they would apply just as well to other more popular
              | languages, or rule out existing Lisp systems comprising
              | millions of lines of code built by large teams. The author
             | never even participated in Lisp, and is ignorant of most of
             | Lisp history. I wish it'd stop being circulated.
        
         | PaulHoule wrote:
         | I'm going to argue that Lisp already won.
         | 
         | That is, other programming languages have adopted many of the
         | features of Lisp that made Lisp special such as garbage
         | collection (Rustifarians are learning the hard way that garbage
         | collection is the most important feature for building programs
         | out of reusable modules), facile data structures (like the
         | scalar, list, dict trinity), higher order functions, dynamic
         | typing, REPL, etc.
         | 
          | People struggled to specify programming languages up until
          | 1990 or so. Some standards were successful, such as FORTRAN,
          | but COBOL was a hot mess that people filed lawsuits over,
          | PL/I was a failure, etc. C was a clear example of "worse is
          | better" with
         | some kind of topological defect in the design such that there's
         | a circularity in the K&R book that makes it confusing if you
         | read it all the way through. Ada was a heroic attempt to write
         | a great language spec but people didn't want it.
         | 
         | I see the Common Lisp spec as the first modern language spec
         | written by adults which inspired the Java spec and the Python
         | spec and pretty much all languages developed afterwards.
         | Pedants will consistently deny that the spec is influenced by
         | the implementation but that's absolutely silly: modern
         | specifications are successful because somebody thinks through
         | questions like "How do we make a Lisp that's going to perform
         | well on the upcoming generation of 32 bit processors?"
         | 
         | In 1980 you had a choice of Lisp, BASIC, PASCAL, FORTRAN,
         | FORTH, etc. C wound up taking PL/I's place. The gap between
         | (say) Python and Lisp is much smaller than the gap between C
         | and Lisp. I wouldn't feel that I could do the macro-heavy stuff
         | in
         | 
         | https://www.amazon.com/Lisp-Advanced-Techniques-Common/dp/01...
         | 
         | in Python but I could write most of the examples in
         | 
         | https://www.amazon.com/Paradigms-Artificial-Intelligence-Pro...
         | 
         | pretty easily.
        
           | munch117 wrote:
           | Lisp is the original dynamic language; for decades more or
           | less the only one. Dynamic languages have been incredibly
           | successful.
           | 
           | The code-as-data, self-similar aspect of Lisp, which is at
           | the core of Lisp's macro features, hasn't reached the same
           | popularity.
           | 
           | I would say that Lisp has both won and lost.
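            | 
            | The code-as-data point can be made concrete with a minimal
            | Common Lisp sketch (TWICE is an illustrative macro, not a
            | standard one):

```lisp
;; A form is just a list: quoted, it is data; EVAL runs it as code.
(defparameter *form* '(+ 1 2 3))  ; a list whose first element is the symbol +
(first *form*)                    ; => +
(eval *form*)                     ; => 6

;; Macros exploit this: they receive forms as data and return new forms.
(defmacro twice (form)
  `(progn ,form ,form))

(macroexpand-1 '(twice (print :hi))) ; => (PROGN (PRINT :HI) (PRINT :HI))
```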
        
             | PaulHoule wrote:
             | As much as it sucked, BASIC had an important place as a
              | dynamic language as early as 1964 at Dartmouth. GOTO wasn't
             | all that different from a JMP in assembly.
             | 
             | BASIC was mature as a teaching language that you could run
             | on a minicomputer by the early-1970s (see
             | https://en.wikipedia.org/wiki/RSTS/E) and it came to
             | dominate the market because there were implementations like
             | Tiny BASIC and Microsoft BASIC that would run in machines
             | with 4K of RAM.
             | 
             | There was endless handwringing at the time that we were
             | exposing beginners to a language that would teach them
             | terrible habits but the alternatives weren't great: it was
              | a struggle to fit a PASCAL (or FORTRAN or COBOL or ...)
              | compiler into a 64k address space (often using virtual
              | machine techniques like UCSD Pascal, which led to
              | terrible performance). FORTH was a reasonable alternative
              | but never gained appeal beyond enthusiasts.
             | 
             | There was a lot of hope among pedagogues that we'd switch
             | to LOGO which was more LISP-like in many ways and you could
             | buy LOGO interpreters for everything from the TI-99/4A and
             | TRS-80 Color Computer to the Apple ][. There was also uLISP
             | which was available on some architectures but again wasn't
             | that popular. For serious coding, assembly language was
             | popular.
             | 
             | In the larger computer space there were a lot of languages
             | like APL and SNOBOL early on that were dynamic too.
        
               | mst wrote:
               | I cut my teeth on BBC BASIC with the occasional inline
               | arm2 assembly block on an Acorn Archimedes A310.
               | 
               | It had its limitations, but it damn well worked and you
               | could have total control of the machine if you needed it.
               | 
               | (also the Acorn Archimedes manual was about 40% "how to
               | use the gui" and 60% a complete introduction and
               | reference for BBC BASIC, which definitely helped; I had
               | to buy a book to get an explanation of the ASM side of
               | things but, I mean, fair enough)
               | 
               | Then again the second time I fell in love with a
               | programming language was perl5 so I am perhaps an outlier
               | here.
        
           | whartung wrote:
           | Similarly, Lisp has lost.
           | 
           | It's clear to me that S-expr are not popular.
           | 
           | Other languages and environments have captured much of what
           | Lisp has offered forever, but not S-expr, and, less so, not
           | macros.
           | 
           | Any argument one has against Lisp is answered by modern
           | implementations. But, specifically, by Clojure. Modern, fast,
           | "works with everything", "lots of libraries", the JVM "lifts
           | all boats", including Clojure.
           | 
            | But despite all that, it's still S-expr based, and it's
            | still a niche "geek" language. A popular one, for its
            | space, but niche. It's not mainstream.
           | 
           | Folks have been pivoting away from S-expr all the way back to
           | Dylan.
           | 
            | I'm a Lisp guy, I like Lisp, and by Lisp I mean Common
            | Lisp. But I like CLOS more than macros, specifically
            | multimethod dispatch. I'd rather have CLOS in a language
            | than macros.
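            | 
            | A minimal sketch of the multimethod dispatch being praised
            | here (the class names are purely illustrative):

```lisp
;; CLOS picks the method by the classes of ALL arguments, not just the first.
(defclass ship () ())
(defclass asteroid () ())

(defgeneric collide (a b))
(defmethod collide ((a ship) (b asteroid)) :ship-hits-rock)
(defmethod collide ((a asteroid) (b ship)) :rock-hits-ship)
(defmethod collide ((a ship) (b ship)) :bounce)

(collide (make-instance 'ship) (make-instance 'asteroid)) ; => :SHIP-HITS-ROCK
```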
           | 
            | I don't hate S-expr, but I think their value diminishes
            | rather quickly if you don't have macros. And most languages
            | have structured static data now, which was the other plus
            | of S-expr.
           | 
            | I don't use many macros in my Lisp code. Mostly convenience
            | macros (like (with-<some-scope> scope <body>)), things like
            | that. Real obvious boilerplate stuff. Everyone else would
            | just wrap a lambda, but that's not so much the Common Lisp
            | way.
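            | 
            | A sketch of that kind of convenience macro, with
            | hypothetical ACQUIRE/RELEASE standing in for whatever
            | setup/teardown the scope needs:

```lisp
;; The macro hides the UNWIND-PROTECT boilerplate at each use site.
(defmacro with-lock-held ((lock) &body body)
  `(progn
     (acquire ,lock)                      ; hypothetical setup
     (unwind-protect (progn ,@body)
       (release ,lock))))                 ; hypothetical teardown

;; The "wrap a lambda" alternative most other languages would use:
(defun call-with-lock-held (lock thunk)
  (acquire lock)
  (unwind-protect (funcall thunk)
    (release lock)))
```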
           | 
           | In my Not Lisp work, I don't really miss macros.
           | 
           | Anyway, S-expr hinder adoption. It's had all the time in the
           | world to "break through", and it hasn't. "Wisdom of the
           | crowds" says nay.
        
             | PaulHoule wrote:
              | What did win is the scalar/list/dict trinity, which I
              | first saw clearly articulated in Perl but which is core
              | to dynamic languages like Python and Javascript, is in
              | the stdlib, and is used heavily in almost every static
              | language except for the one that puts the C in Cthulhu.
        
               | dudinax wrote:
               | And a lot of lisps have clunky dicts.
        
             | kazinator wrote:
             | This is kind of backwards. Languages which imitate
             | everything from the Lisp family except S-exprs and macros
             | are, because of that, other languages. Those that have
             | S-exprs and macros are identified as in the Lisp family.
             | 
             | The appearance of new languages like this has not stopped.
        
           | AnimalMuppet wrote:
           | > C was a clear example of "worse is better" with some kind
           | of topological defect in the design such that there's a
           | circularity in the K&R book that makes it confusing if you
           | read it all the way through.
           | 
           | Whaaaaat???
           | 
              | I read the K&R book (rev 1) all the way through. The only
              | thing I found confusing (without a compiler to experiment
              | with) was argc and argv. Other than that, I found it very
              | clear.
           | 
           | What, specifically, are you referring to?
        
             | jjtheblunt wrote:
             | oh how i wish i had kept my rev1 K&R rather than donate it
             | to my hometown library 20+ years ago.
        
           | jjtheblunt wrote:
           | I really like that argument.
           | 
            | I've often thought Sun and Solaris also won, since so much
            | of Linux is open source reimaginings of what Solaris had in
            | the mid-late 90s, essentially a few years' head start on
            | Linux (which I used in the early 90s and still do, but
            | along with Solaris back then, and NeXTstep).
        
             | mst wrote:
             | 20 years back I remember chatting to a sysadmin over beer
             | and the conversation included "Linux mostly does either
             | whatever Solaris does or whatever BSD does, you just have
             | to check which it is before trying to write anything."
             | 
             | (this was not a complaint, we were both veterans of the
             | "How Many UNICES?!?!?!" era)
        
           | _dain_ wrote:
           | _> C was a clear example of "worse is better" with some kind
           | of topological defect in the design such that there's a
           | circularity in the K&R book that makes it confusing if you
           | read it all the way through._
           | 
           | Aww you can't just leave us hanging like this. What's the
           | paradox?
        
         | samatman wrote:
         | > _So how would we go about switching the entire software
         | industry to use LISP more?_
         | 
         | There are two answers to this, depending on how broadly you
         | interpret the "Lisp" part of the question. The fundamentalist,
         | and worse, answer is simply: you don't. The objections to Lisp
          | are not mere surface-level syntactic quibbles; they run
          | deeper than that. But also, the syntax is a barrier to
          | adoption. Some people think in S-expressions quite naturally;
          | I personally find them pleasant if a bit strange. But the
          | majority simply do not. Let's not downplay the actual success
          | of Common Lisp, Scheme, and particularly Clojure; all of them
          | are actively used. But you've got your work cut out for you
          | trying to get broader adoption for any of those: teams that
          | want to be using them already are.
         | 
         | The better, more liberal answer, is: use Julia! The language is
         | consciously placed in the Lisp family in the broader sense, it
         | resembles Dylan more than any other language. Critically, it
         | has first-class macros, which one might argue are less
         | _eloquent_ than macros in Common Lisp, but which are equally
         | _expressive_.
         | 
         | It also has multiple dispatch, and is a JIT-compiled LLVM
         | language, meaning that type-stable Julia code is as fast as a
         | dynamic language can be, genuinely competitive with C++ for
         | CPU-bound numeric computation.
         | 
         | Perhaps most important, it's simply a pleasure to work with.
         | The language is typecast into a role in scientific computing,
         | and the library ecosystem does reflect that, but the language
         | itself is well suited to general server-side programming, and
         | library support is a chicken-and-egg thing: if more people
         | start choosing Julia over, say, Go, then it will naturally grow
         | more libraries for those applications over time.
        
           | ykonstant wrote:
           | >The objections to Lisp are not mere surface-level syntactic
           | quibbles, it's deeper than that.
           | 
           | I am interested in the specifics; if you have the time to
           | write details I'd love that, otherwise I welcome some links!
        
             | BoingBoomTschak wrote:
              | My very rough, wet-finger take:
             | 
              | * Scheme is just too fragmented and short on selling
              | points to seduce the current industry; its macros are
              | unintuitive, continuations are hard to reason about
             | (https://okmij.org/ftp/continuations/against-callcc.html),
             | small stdlib, no concept of static typing, etc...
             | 
             | * CL is old, unfashionable and full of scary warts
             | (function names, eq/eql/equal/equalp, etc...), and its
             | typing story is also pretty janky even if better than the
             | others (no recursive deftype meaning you can't statically
             | type lists/trees, no parametric deftype, the only one
             | you'll get is array). Few people value having such a solid
             | ANSI standard when it doesn't include modern stuff like
             | iterators/extensible sequences, regexps or threads/atomics.
             | 
             | * Clojure is the most likely to succeed, but the word Java
             | scares a lot of people (for both good and bad reasons) in
             | my experience. As does the "FP means immutable data
             | structs" ML cult. And its current state for static typing
             | is pretty dire, from what I understand.
             | 
             | All of these also don't work that well with VSCode and
             | other popular IDEs (unless stuff like Alive and Calva has
             | gotten way better and less dead than I remember, but even
             | then, SLIME isn't the same as LSP). So, basically, that and
             | static typing having a huge mindshare in the programming
             | world.
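              | 
              | For readers who haven't met the equality warts firsthand,
              | a quick tour of the four standard predicates:

```lisp
(eq 'foo 'foo)                  ; => T   (same interned symbol/object)
(eql 3 3)                       ; => T   (same number, same type)
(eql 3 3.0)                     ; => NIL (integer vs. float)
(equal (list 1 2) (list 1 2))   ; => T   (structural, case-sensitive)
(equal "Foo" "foo")             ; => NIL
(equalp "Foo" "foo")            ; => T   (case-insensitive)
(equalp 3 3.0)                  ; => T   (compares numbers with =)
```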
        
               | ykonstant wrote:
               | Thank you so much for the info! I am mostly interested in
               | CL.
               | 
               | >Few people value having such a solid ANSI standard when
               | it doesn't include modern stuff like iterators/extensible
               | sequences, regexps or threads/atomics.
               | 
               | For everyone: are there plans to include such topics in
               | the Standard, or are there canonical extensions that
               | people are using?
        
               | tmtvl wrote:
                | There is little chance the Standard is gonna get
                | updated again, especially since the CDR (Common Lisp
                | Document Repository) kinda stopped. That said, there
                | are a few libraries that are kinda considered to be the
                | default go-tos for various needs:
               | 
               | CL-PPCRE for regular expressions.
               | 
               | Bordeaux-threads for threads.
               | 
               | Alexandria for various miscellaneous stuff.
               | 
               | Trivia for pattern matching.
               | 
               | CFFI for, well, FFI.
               | 
               | And Series for functional iterative sequence-like data
               | structures.
               | 
               | There's also ASDF and UIOP, but I'm not sure whether or
               | not they're part of the Standard.
        
               | BoingBoomTschak wrote:
               | Too bad cl-ppcre is extremely slow in my experience, but
               | that's what we have. ASDF and UIOP are the same as the
               | rest, de facto standards, but not ANSI nor CLtL.
               | 
               | I would also add iterate to the "must haves". All noobs
               | should constantly look at the CLHS (available for dash,
               | zeal and emacs too!), https://lispcookbook.github.io/cl-
               | cookbook/ and https://github.com/CodyReichert/awesome-cl,
               | anyway.
               | 
               | Truly, someone could tie himself to SBCL and its
               | extensions and get a much more modern environment, but I
               | think it's worth targeting ECL and CCL in addition.
        
               | dfox wrote:
                | ASDF is one of the symptoms of what is wrong with CL.
                | You have this thing that is not really sure whether it is
                | image-based or source-based and has a bunch of global
                | state while reading the source. The solution to that is
                | ASDF, which is a ridiculously hairy and complex thing.
        
               | nsm wrote:
                | I agree with your comment for the most part (particularly
                | the IDE situation).
               | 
                | I do want to call out that while Racket started off from
                | Scheme (and still maintains compatibility with a bunch of
                | it), it should be considered a different platform at this
                | point, and it solves a bunch of the problems you called
                | out with Scheme: macros are apparently much better,
                | continuations are usually delimited, it has user-mode
                | threads and generators (so you very rarely need to reach
                | for raw continuations), a very nice concurrency system,
                | and a large stdlib, and Typed Racket adds static typing.
               | 
               | The DrRacket/SLIME experience is great for smaller
               | projects. I do agree that the language server needs more
               | love. However I still think it gets a lot of stuff right,
               | and is much faster than Python/Ruby due to being powered
               | by Chez Scheme.
        
               | Capricorn2481 wrote:
               | Calva is way better than Alive, currently. It pretty much
               | does everything you need.
        
             | screcth wrote:
             | Lisp is too powerful and flexible. That's an advantage and
             | a disadvantage.
             | 
              | On the one hand, you can use that power to write elegant
              | programs with great abstractions. On the other hand, it's
             | really really easy to create unmaintainable messes full of
             | ad-hoc macros that no-one, not even you next month, will be
             | able to understand.
             | 
             | Even if the program is well-written, it can be a really
             | steep learning curve for junior hires unless it's very well
             | documented and there's support from the original writers
             | available.
             | 
             | Compare it to other languages that limit your ability to
             | express yourself while writing code.
             | 
             | The truth is that most programs are not really interesting
             | and having a code base that can be hacked-on easily by any
             | random programmer that you can hire is a really valuable
             | proposition from the business point of view, and using less
             | powerful languages is a good way to prevent non-experts
             | from making a mess.
        
         | Jach wrote:
         | Not calling it LISP would help. And though you get used to it
         | and even kind of appreciate it sometimes, the default behavior
         | of most Common Lisp implementations to SHOUT its symbols at you
         | is rather unfortunate.
         | 
         | Go back to 2005 and look at how 'easy' Python, Perl, PHP, Ruby,
         | JavaScript, C++, and Java were. Look at them today. Why has
         | their popularity hierarchy not remained the same? How about vs.
         | Lisp then and now (or even Clojure later)? You'll find answers
         | that explain Lisp's current popularity (which isn't all that
         | bad) better than anything to do with initial or eventual
         | easiness.
        
       | roenxi wrote:
       | In my experience (Clojure) macro-heavy libraries tend to be
       | powerful but brittle. A lisp programmer can do things in a
       | library that non-lisp programmers can't realistically attempt
        | (Typed Clojure, for example). But the trade-offs are very real,
        | though hard to put a finger on.
       | 
       | There are several Clojure libraries that are cool and completely
       | reliant on macros to be implemented. Eventually, good Clojure
       | programmers seem to stop using them because the trade-offs are
       | worse error messages or uncomfortable edge-case behaviours. The
       | base language is just so good that it doesn't really matter, but
       | I've had several experiences where the theoretically-great
       | library had to be abandoned because of how it interacts with
       | debugging tools and techniques.
       | 
        | It isn't the macros' fault; they just hint that a programmer is
       | about to do something that, when it fails, fails in a way that is
       | designed in to the system and can't be easily worked around.
       | Macros are basically for creating new syntax constructs - great
       | but when the syntax has bugs, the programmer has a problem. And
       | the community tooling probably won't understand it.
        
         | PaulHoule wrote:
         | For a few years I've tried some crazy experiments towards
         | metaprogramming in Java
         | 
         | https://github.com/paulhoule/ferocity
         | 
         | my plan there was to write ferocity0 in Java that is able to
         | stub out the standard library and then write ferocity1 in
         | ferocity0 (and maybe a ferocity2 in ferocity1) so that I don't
         | have to write repetitive code to stub out all the operators,
         | the eight primitive data types, etc. I worked out a way to
         | write out Java code in a form that looks like S-expressions
         | 
         | https://github.com/paulhoule/ferocity/blob/main/ferocity0/sr...
         | 
          | which bulks up the code even more than ordinary Java, so to
          | make up for it I want to use "macros" aggressively, which
          | might get the code size reasonable, though I'm sure people
          | would struggle to understand it.
         | 
         | I switched to other projects but yesterday I was working on
         | some Java that had a lot of boilerplate and was thinking it
          | ought to be possible to do something with compile-time
          | annotations along the lines of
          | 
          |     @CompileTime
          |     static void generate(LiveClass that) {
          |         that.addMethod("methodName", ... arguments ..., ... body ...);
          |     }
         | 
          | and even write a function like that which might see a class
          | annotation
          | 
          |     @Boilerplate("argument")
          | 
          | and add
          | 
          |     static final String PARAM_ARGUMENT = "${argument}";
          | 
          | or something like that.
        
           | RaftPeople wrote:
           | > _I want to use "macros" aggressively which might get the
           | code size reasonable but I'm sure people would struggle to
           | understand it._
           | 
           | A long time ago I wrote a macro language for the environment
           | I was working in and had grand plans to simplify the dev
           | process quite a bit.
           | 
            | It did work from one perspective: I was able to generate
            | code much, much faster for the use cases I was targeting.
            | But the downside was that it was too abstract to follow
            | easily.
           | 
           | It required simulating the macro system in the mind when
           | looking at code to figure out whether it was generating an
           | apple or an orange. I realized there is a limit to the
           | usability of extremely generalized abstraction.
           | 
           | EDIT: I just remembered my additional thought back then was
           | that a person that is really an expert in the macro language
           | and the macros created could support+maintain the macro based
           | code generation system.
           | 
           | So the dev wouldn't be expected to be the maintainer of the
           | underlying macro system, they would just be using the systems
           | templates+macros to generate code, which would give them
           | significant power+speed. But it's also then a one-off
           | language that nobody else knows and that the dev can't
            | transfer to the next job.
        
         | zelphirkalt wrote:
         | > Eventually, good Clojure programmers seem to stop using them
         | because the trade-offs are worse error messages or
         | uncomfortable edge-case behaviours.
         | 
         | I am not so familiar with Clojure, but I am familiar with
          | Scheme. The thing is not that a good programmer stops using
          | macros completely, but that a good programmer knows when they
          | have to reach for a macro in order to make something
          | syntactically nicer. A few examples I've written:
         | 
         | pipeline/threading macro: It needs to be a macro in order to
         | avoid having to write (lambda ...) all the time.
         | 
          | inventing new define forms: To define things at the module or
          | top level without needing set! or similar. I used this to make
          | a (define-route ...) for communicating with the Docker engine;
          | define-route would define a procedure whose name depends on
          | the route it is for.
         | 
          | writing a timing macro: This makes for the cleanest syntax,
          | with nothing but (time expr1 expr2 expr3 ...).
         | 
         | Perhaps the last example is the least necessary.
         | 
         | Many things can be solved by using higher-order functions
         | instead.
        
           | hellcow wrote:
           | My canonical use-case for macros in Scheme is writing unit
           | tests. If you want to see the unevaluated expression that
           | caused the test failure, you'll need a macro.
            | (define-syntax test
            |   (syntax-rules ()
            |     ((_ test-expr expected-expr)
            |      (let ((tv (call-with-values (lambda () test-expr) list))
            |            (ev (call-with-values (lambda () expected-expr) list)))
            |        (if (not (equal? tv ev))
            |            (printf "\nTest failed: ~s\nWanted:      ~s\nGot:         ~s\n"
            |                    'test-expr ev tv))))))
        
             | zelphirkalt wrote:
             | I've got another similar use-case: Function contracts.
             | There you also probably want to see the unevaluated form of
             | the assertion on failure.
        
             | neilv wrote:
             | Also, with `syntax-case` or `syntax-parse`, you can have
             | the IDE know the syntax for the test case that failed:
             | 
             | https://www.neilvandyke.org/racket/overeasy/
        
             | _dain_ wrote:
             | Python's pytest framework achieves this without macros. As
             | I understand it, it disassembles the test function bytecode
             | and inspects the AST nodes that have assertions in them.
        
               | zelphirkalt wrote:
               | Inspecting the AST ... That kind of sounds like what
               | macros do. Just that pytest is probably forced to do it
               | in way less elegant ways, due to not having a macro
               | system. I mean, if it has to disassemble things, then it
               | has already lost, basically, considering how much its
               | introspection is lauded sometimes.
        
         | gus_massa wrote:
         | In racket you can define simple macros with define-syntax-rule
          | but in case of an error you get a horrible, unintelligible
         | message that shows part of the expanded code.
         | 
          | But the recommendation is to use syntax-parse, which is
          | almost a DSL for writing macros; for example, you can specify
          | that a part is an identifier like x instead of an arbitrary
          | expression like (+ x 1). It takes more time to use
          | syntax-parse, but when someone uses the macro they get a nice
          | error at expansion time.
         | 
         | (And syntax-parse is implemented in racket using functions and
         | macros. A big part of the racket features are implemented in
         | racket itself.)
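          | 
          | A tiny, hypothetical illustration of the difference (my-let1
          | is a made-up form, not part of Racket):
          | 
          |     (require (for-syntax syntax/parse))
          | 
          |     ;; The var:id annotation makes syntax-parse insist that
          |     ;; the binding position is an identifier, so misuse fails
          |     ;; at expansion time with a readable message.
          |     (define-syntax (my-let1 stx)
          |       (syntax-parse stx
          |         [(_ var:id rhs:expr body:expr)
          |          #'((lambda (var) body) rhs)]))
          | 
          |     ;; (my-let1 x 5 (+ x 1))  => 6
          |     ;; (my-let1 (+ x 1) 5 x)  => "expected identifier" error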
        
         | mst wrote:
         | Macros are something where usage tends to go
         | 
          | 1) OMG confusing -> almost never use them
          | 
          | 2) OMG exciting -> use them faaar too much
          | 
          | 3) Huh. Powerful, but so powerful you should use only when
          | required. Cool.
         | 
         | Quite a few things turn out to be like that skill-progression-
         | wise and at this point I've just accepted it as human nature.
        
       | VyseofArcadia wrote:
       | I was a bit baffled the first time I was introduced to Simulink
       | Coder. I get the value proposition: simulate your model so that
       | you have confidence it's doing the right thing, and then
       | essentially just run that model as code instead of implementing
       | it yourself and possibly introducing mistakes. What I didn't
       | understand, as a software guy, not an engineering guy, was why on
       | earth you'd want a _graphical_ programming language to do that.
       | Surely it would be easier to just write your model in a
       | traditional text-based language. Hell, the same company even
       | makes their own language (MATLAB) that would be perfect for the
       | task.
       | 
       | I did a little digging, and it turns out I had it backwards. It's
       | not that Simulink invented a new graphical programming language
       | to do this stuff. Control systems engineers have been studying
       | and documenting systems using Simulink-style block diagrams since
       | at least the 1960s. Simulink just took something that used to be
       | a paper and chalkboard modeling tool and made it executable on a
       | computer.
        
         | jocaal wrote:
         | The people using matlab are typically experts in things other
         | than programming. Matlab is for people designing antennas or
         | control systems, where the math is more important than the
         | code.
        
           | VyseofArcadia wrote:
           | Most definitely, but my point was to my programmer-brain,
           | nearly any text-based modeling language would be better than
           | some sort of awkward graphical diagram. And since the same
           | company makes MATLAB, why not?
           | 
           | It turns out non-programmers actually like graphical tools.
           | Go figure.
        
             | molteanu wrote:
             | I've linked this thread from HN in the article about visual
             | programming, if you find it useful or insightful,
             | 
             | https://news.ycombinator.com/item?id=14484244
        
             | jocaal wrote:
             | > It turns out non-programmers actually like graphical
             | tools. Go figure.
             | 
             | I'd rather say that visual tools are better abstractions
             | than what text based tools can provide for those specific
             | scenarios. Some problems are easier solved with diagrams.
        
               | agumonkey wrote:
                | text is bad with cross-references and impact... every
                | time your problem grows too many of them, you end up
                | charting a diagram
        
             | buescher wrote:
             | Programmers just don't like switching between
             | representations - it's the same reason that "printf
             | debugging" will still be widespread after all of us are
             | gone.
        
               | archgoon wrote:
               | Printf debugging will exist for as long as access to logs
               | is easier than direct access to memory.
        
               | buescher wrote:
               | Sure, that's just one use case. And logs can be useful
               | for history as well as current state. But I think it's
               | the mode shift to "do I set a breakpoint here, and then
               | inspect the program state" as opposed to just continuing
               | to sling code to print the state out.
        
               | chowells wrote:
               | What's funny is that I learned to debug with breakpoints,
               | stepping, and inspectors. I use printf debugging now
               | because the more experience you get, the easier and
               | faster it becomes. There's a point where it's just a
               | faster tool to validate your mental model than watching
               | values churn in memory.
        
               | dfox wrote:
               | Another thing is the ability to actually have a
                | breakpoint in the program. There are a lot of distributed
                | systems (especially embedded ones) where just stopping the
               | program and inspecting the state will cause the rest of
               | the system to enter some kind of failure state.
        
               | mst wrote:
               | I use printf debugging locally as well. Interactive
               | debuggers and I just don't get on, and my test cases are
               | built to start fast enough that it works out for me.
               | Usually I have a 'debug_stderr' function of some sort
               | that serialises + writes its argument and then returns it
               | so I can stick it in the middle of expressions.
               | 
               | (I do make sure that anybody learning from me understands
               | that debuggers are not evil, they're just not my thing,
               | and that shouldn't stop everybody else at least trying to
               | learn to use them)
        
               | User23 wrote:
               | Printf debugging is actually a very primitive case of
               | predicate reasoning. Hopefully in the not too distant
               | future instead of using prints to check our assumptions
               | about the code we will instead statically check them as
               | SMT solvable assertions.
        
             | FredPret wrote:
             | I once wrote a linear programming optimal solver for a
             | business problem.
             | 
             | But because it was all equations and logical reasoning, it
             | was hard to understand, and boring to non-enthusiasts.
             | 
             | So my manager had me make a visual simulation instead,
             | which we could then present to his managers.
             | 
             | This isn't as stupid as it sounds - we humans have huge
             | visual processing networks. It makes sense to tap into that
             | when interfacing with other humans.
             | 
             | I think to learn programming or math, you actually end up
             | learning to use this visual wiring to think about written
             | symbols. But this process of rewiring - while valuable - is
             | very hard and slow, and only a tiny fraction of humans have
             | gone through it.
        
               | crdrost wrote:
               | Related to the visual simulations, one of Alan Kay's
               | favorite demos over the years has been to show folks
               | Sketchpad,
               | 
               | https://www.youtube.com/watch?v=495nCzxM9PI
               | 
               | partly as a "this is what we were doing in 1962, you are
               | now as far removed from that demo as that demo was
               | removed from the year 1900 -- do you really feel like
               | you've made the analogous/concomitant progress in
               | programming?" ... and one of his points there was that a
               | lot of programming acts as a "pop music" where you are
               | trying to replicate the latest new sounds and marginally
               | iterate on those, rather than study the classics and the
               | greats and the legacy of awesome things that had been
               | done in the past.
        
               | llm_trw wrote:
               | I'm not sure what you're trying to say here.
               | 
               | That specialist interfaces make you productive?
               | 
               | Sketchpad was just a cad tool, one of the first ones to
               | be sure, but still a design tool.
               | 
               | We have substantially better ones today:
               | https://www.youtube.com/watch?v=CIa4LpqI2EI
               | 
               | So in 62 years have we improved the state of the art in
               | design by as much as between 1900 to 1962?
               | 
               | Yes. I'd say we have and more.
        
               | zogrodea wrote:
               | I'm just putting this Alan Kay question (from Stack
               | Overflow) here because of relevance.
               | 
                | In that question, he's concerned not with implementation
                | or how good the execution of an idea is (which is
               | certainly one type of progress), but in genuinely new
               | ideas.
               | 
               | I don't think I personally am qualified to say yes or no.
               | There are new data structures since then for example, but
               | those tend to be improvements over existing ideas rather
               | than "fundamental new ideas" which I understand him
               | (perhaps wrongly) to be asking for.
               | 
               | https://stackoverflow.com/questions/432922/significant-
               | new-i...
        
               | agumonkey wrote:
               | this is awfully manual, does autocad have parametric
                | topology tools?
               | 
               | programs like houdini are more reactive and mathematical
               | (no need to create width/volume by hand and trim
               | intersections by hand), i think mech engineering tools
               | (memory fail here) have options like this
        
           | molteanu wrote:
           | Well, software developers in the embedded field of automotive
           | use Matlab and Simulink quite extensively. I haven't worked
           | with it myself but the "excuse" for using it was that "we
           | have complicated state machines that would be difficult to
           | write directly in C".
        
             | pantsforbirds wrote:
             | Matlab also solves a ton of common engineering problems
             | with efficient BLAS wrappers. And it is (or at least used
             | to be) pretty trivial to drop in a CUDA kernel for hotter
             | code paths.
             | 
             | It's still great for simulation work imo.
        
             | buescher wrote:
             | You would think about a lot of the things that they do,
             | "how hard can it be?". Take a look at the _simplified_
             | example statechart here (scroll down) for a power window
             | control and ponder all the ways an embedded programmer
             | could struggle to get the behavior right in plain
             | procedural C code as well as how difficult it would be to
             | get a typical team to document their design intent.
             | 
             | https://www.mathworks.com/help/simulink/ug/power-window-
             | exam...
             | 
             | Where statechart design is appropriate, it is a very
             | powerful tool.
        
               | mst wrote:
               | Those do strike me as elucidating the power in suitable
               | contexts very nicely.
               | 
                | For those following along at home, click
                | https://uk.mathworks.com/help/simulink/ug/powerwindow_02.png
                | first and then realise that's a top-level chart with
                | per-case sub-charts (and then follow buescher's link if
                | you want to
               | see those) ... and that at the very least this is
               | probably the Correct approach to developing and
               | documenting a design for something like that.
               | 
               | Thank you kindly for the Informative Example.
        
               | buescher wrote:
               | You're welcome, and yeah, that is the statechart. It's
               | actually at the low zoomed-in level. That's the
               | controller at the heart of the example. A lot of the
               | diagrams in the article are higher-level, not sub-charts.
               | I am not a 100% advocate for the BDUF-flavored model-
               | driven-design approach in the whole document at that
               | link, though I can understand why some industries would
               | take it.
               | 
               | Remember, too, this is undoubtedly a simplified example.
               | Just off the cuff I'd expect a real-world window motor
               | controller in 2024 to have at least open-loop ramp-up and
               | ramp-down in speed, if not closed-loop speed control. I
               | also would expect there'd be more to their safety
               | requirements.
        
               | mst wrote:
               | > A lot of the diagrams in the article are higher-level,
               | not sub-charts.
               | 
               | I must have failed to understand even more things than I
               | thought I did. I'll have to read through again when more
               | awake if I want to fix that, I suspect.
               | 
                | I think if faced with that class of problem (I am not an
                | automotive developer, just spitballing) I would probably
               | try using some sort of (no, not YAML) config format to
               | express the various things.
               | 
               | But that's a 'how my brain works' and I am aware of that
               | - for database designs, I've both ensured there was a
               | reasonable diagram generated when the point of truth was
               | a .sql file, and happily accepted an ERD as the point of
               | truth once I figured out how to get a .sql file out of it
               | to review myself.
        
             | jocaal wrote:
             | What they didn't tell you is that these state-machines are
             | differential equations as well. Again, the math is more
             | important than the code. If you want to look more into it,
             | just search for state variable control.
        
         | buescher wrote:
         | Bingo. Additionally, control theory block diagrams are pretty
         | rigorous, with algebraic rules for manipulating them you'll
         | find in a controls textbook. You can pretty much enter a block
         | diagram from Apollo documentation into Simulink and "run" it,
         | in much the same way you could run a spice simulation on a
         | schematic from that era.
        
           | tonyarkles wrote:
           | Yeah, there's a really interesting phenomenon that I've
           | observed that goes with that. If what you're describing was
           | the main way people used it, I'd be very satisfied with that
           | and have actually been chewing on ways to potentially bring
           | those kinds of concepts into the more "mainstream"
           | programming world (I'm not going to go way off on that
            | tangent right now).
           | 
           | What I've seen many times though in my career that awkwardly
           | spans EE and CS is that people forget that you can still...
           | write equations instead of doing everything in Simulink. As
           | an example I was looking at a simplified model of an aircraft
           | a couple months ago.
           | 
           | One of the things it needed to compute was the density of air
           | at a given altitude. This is a somewhat complicated equation
           | that is a function of altitude, temperature (which can be
           | modelled as a function of altitude), and some other stuff.
           | These are, despite being a bit complicated, straightforward
           | equations. The person who had made this model didn't write
           | the equations, though, they instead made a Simulink sub-model
           | for it with a bunch of addition, multiplication, and division
           | blocks in a ratsnest.
           | 
           | I think the Simulink approach should be used when it brings
           | clarity to the problem, and should be abandoned when it
           | obscures the problem. In this case it took a ton of mental
           | energy to reassemble all of the blocks back into a simple
           | closed-form equation and then further re-arranging to verify
           | that it matched the textbook equation.
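            | 
            | (For reference: the closed form in question is presumably
            | something like the standard-atmosphere troposphere model,
            | which fits in two lines,
            | 
            |     T(h)   = T0 - L*h
            |     rho(h) = rho0 * (1 - L*h/T0)^(g*M/(R*L) - 1)
            | 
            | with lapse rate L, sea-level temperature T0 and density
            | rho0, gravitational acceleration g, molar mass M and gas
            | constant R: exactly the kind of expression that turns into
            | a ratsnest of add/multiply/divide blocks.)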
        
             | buescher wrote:
             | I had an intern do something similar with an
             | electromechanical machine. I'd all but written the
             | equations out for him but he found drawing it out in
             | Simulink in the most fundamental, primitive blocks to help
              | him understand what was going on. I don't get it either.
             | 
             | A related phenomenon seems to be people who don't want to
             | "wire up" schematics but attach named nets to every
             | component and then draw boxes around them.
        
               | tonyarkles wrote:
               | > people who don't want to "wire up" schematics but
               | attach named nets to every component and then draw boxes
               | around them
               | 
               | Lol there are a few things that I am highly intolerant of
               | and this is one of them. The only place where I'm
               | generally ok with that approach is at the input
               | connectors, the microcontroller, and the output
               | connectors. Same philosophy as I said before though: "if
               | it brings clarity draw it as a wired up schematic; if it
               | turns into a ratsnest use net labels to jump somewhere
               | else". Having every component in a separate box with net
               | labels will generally obscure what's going on and just
               | turns it into an easter egg hunt.
        
               | buescher wrote:
               | Also, part of the style seems to be to contort any actual
               | schematic to fit in the box. I like to ask "if you
               | wouldn't put it on its own page so you could draw it
               | clearly, why did you draw a box around it?".
        
               | foldr wrote:
               | I don't really get this point of view. Take an I2C bus,
               | for example. Isn't it easier to read the schematic if all
               | the components on the bus have two pins connected to
               | wires labeled 'SDA' and 'SCL'?
        
               | mst wrote:
               | I think using the drawing it out process to understand
               | initially is ... not how my brain would do it, but seems
               | like a perfectly valid approach.
               | 
               | But once you -do- understand it's time to stick that in a
               | reference file out of the way and write it again properly
               | for actual use.
        
             | mighmi wrote:
             | Where can I read about your project or hear your tangent?
        
               | tonyarkles wrote:
               | I'm hoping to at a minimum get my thoughts written out
               | while I'm flying this weekend!
               | 
               | The quick gist is that Jupyter is frustrating for me
               | because it's so easy to inadvertently end up with a
               | cell higher up in your document that uses a value
               | computed in a cell further down. It's all just one
               | global namespace.
               | 
               | In the Julia world with Pluto they get around this by
               | restricting what can go into cells a little bit (e.g. you
               | have to do some gymnastics if you're going to try to
               | assign more than one variable in a single cell); by doing
               | it this way they can do much better dependency analysis
               | and determine a valid ordering of your cells. It's just
               | fine to move a bunch of calculation code to the
               | end/appendix of your document and use the values higher
               | up.
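The dependency-ordering idea can be sketched in plain Python (a toy illustration of the concept, not Pluto's actual algorithm): parse each cell for the names it defines and the names it uses, then topologically sort the cells so every definition precedes its uses.

```python
import ast
from graphlib import TopologicalSorter

def cell_deps(src):
    """Return (defined, used) variable names for one cell's source."""
    defined, used = set(), set()
    for node in ast.walk(ast.parse(src)):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                defined.add(node.id)
            else:
                used.add(node.id)
    return defined, used - defined

def order_cells(cells):
    """Topologically order cells so definitions precede uses."""
    info = {i: cell_deps(src) for i, src in enumerate(cells)}
    providers = {name: i for i, (d, _) in info.items() for name in d}
    ts = TopologicalSorter()
    for i, (_, used) in info.items():
        # A cell depends on whichever cell defines each name it uses.
        ts.add(i, *(providers[n] for n in used if n in providers))
    return [cells[i] for i in ts.static_order()]

cells = ["y = x + 1", "x = 2", "z = x * y"]
print(order_cells(cells))  # ['x = 2', 'y = x + 1', 'z = x * y']
```

A real tool would also have to reject cells that assign the same name twice, which is roughly the restriction Pluto imposes.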
               | 
               | The idea I've been chewing on comes somewhat from using
               | Obsidian and their whole infinite canvas idea. It seems
               | like using ideas from Pluto around dependency analysis
               | and being able to also determine whether the contents of
               | a given cell are pure or not (i.e. whether they do IO or
               | other syscalls, or will provide outputs that are solely a
               | function of their inputs) should be able to make it
               | easier to do something... notebook-like that benefits
               | from cached computations while also having arbitrary
               | computation graphs and kind of an infinite canvas of data
               | analysis. Thinking like a circuit simulator, it should be
               | possible to connect a "scope" onto the lines between
               | cells to easily make plots on the fly to visualize what's
               | happening.
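The purity side can likewise be roughed out with the ast module. This is only a denylist heuristic (the listed names are illustrative, and sound purity analysis is far harder than this), but it shows the shape of the check that would gate caching:

```python
import ast

# Names whose presence marks a cell as impure. Purely illustrative;
# a real analysis would also handle attribute calls like requests.get.
IMPURE_CALLS = {"open", "print", "input", "exec", "eval"}

def looks_pure(src):
    """Heuristic: a cell is 'pure enough to cache' if it imports
    nothing and calls none of the denylisted names."""
    for node in ast.walk(ast.parse(src)):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            return False
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in IMPURE_CALLS:
                return False
    return True

print(looks_pure("y = x ** 2"))               # True
print(looks_pure("data = open('f').read()"))  # False
```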
               | 
               | Anyway, that's the quick brain dump. It's not well-formed
               | at all yet. And honestly I would be delighted if someone
               | reads this and steals the idea and builds it themselves
               | so that I can just use an off-the-shelf tool that doesn't
               | frustrate me as much as Jupyter does :)
        
               | sterlind wrote:
               | it sounds like the ideal solution would be something
               | functional (so you have a computation graph), pure (so
               | you can cache results) and lazy (so order of expressions
               | doesn't matter.) why not Haskell? or even a pure/lazy
               | subset/variant of Julia, if you want to ditch the baggage
               | of Haskell's type bondage?
               | 
               | you could ditch explicit cells entirely, and implement
               | your "scope" by selecting a (sub)expression and spying on
               | the inputs/outputs.
        
               | tonyarkles wrote:
               | I've thought about that and have written some fun Haskell
               | code in the past but... the other goal is to actually
               | have users :D. I've also considered Lisp, Scheme, and
               | friends to have really easily parseable ASTs.
               | 
               | I jest a bit, but there's a very rich ecosystem of really
               | useful data analysis libraries with Python that do
               | somewhat exist in other ecosystems (R, Julia, etc) but
               | aren't nearly as... I would use the word polished, but a
               | lot of the Python libraries have sharp edges as well.
               | Well-trodden might be a better word. My experience with
               | doing heavy data analysis with Python and Julia is that
               | both of them are often going to require some Googling to
               | understand a weird pattern to accomplish something
               | effectively but there's a much higher probability that
               | you're going to find the answer quickly with Python.
               | 
               | I also don't really want to reinvent the universe on the
               | first go.
               | 
               | It has occurred to me that it might be possible to do
               | this in a style similar to org-mode though where it
               | actually doesn't care what the underlying language is and
               | you could just weave a bunch of languages together. Rust
               | code interfacing with some hardware, C++ doing the Kalman
               | filter, Python (via geopandas) doing geospatial
               | computation, and R (via ggplot2) rendering the output.
               | There's a data marshalling issue there of course, which
               | I've also not spent too many cycles thinking about yet :)
               | 
               | Edit: I did copy and paste your comment into my notebook
               | for chewing on while I'm travelling this weekend. Thanks
               | for riffing with me!
        
             | singleshot_ wrote:
             | One cool thing about, for instance, density altitude
             | calculations or runway length calculations, is that you can
             | break the parts of the algorithm down graphically so pilots
             | can trace through data points to get an answer without even
             | having a calculator. See many pilot operating handbooks for
             | examples.
        
         | dugmartin wrote:
         | I used to work on GE's programmable logic controller product on
         | the realtime OS side. PLCs are primarily programmed in ladder
         | logic (https://en.wikipedia.org/wiki/Ladder_logic) and we had a
         | full graphical UI to do that. IIRC (it's been nearly 30 years)
         | we also had a parser that parsed text based ladder logic
         | diagrams to drive the OS tests.
        
           | buescher wrote:
           | Ladder logic is sort of a similar case right? Supposedly it
           | derives from notation for documenting relay racks in factory
           | automation that could very well date to the 1800s. And it's a
           | pretty brilliant implementation - you don't need a degreed
           | engineer or C programmer to set up a fairly sophisticated
           | _concurrent_ system with it.
        
             | dugmartin wrote:
             | Yes it is very similar and as you say the notation lets you
             | define really complex concurrent logic.
        
             | dfox wrote:
             | The idea behind that is neat. And well, the conversion from
             | the ladder diagram to PLC bytecode is intentionally so
             | trivial that it can be done by hand if the ladder diagram
             | is drawn in certain way (which is the way the first PLCs
             | were programmed).
             | 
             | Then there is the reality: every single one of the Simatic
             | projects I have seen was done by "PLC programmers" and had
             | this peculiar feature of using the abstractions the wrong
             | way around. Straightforward logic written as Instruction
             | Lists (ie. assembly that produces the bytecode) and things
             | that were straightforward sequential programming built from
             | ridiculously complex schematic diagrams.
        
           | brabel wrote:
           | Any Ladder logic editor would have two views: the block view
           | and the text view. I used to work on Siemens PLCs. I started
           | using only the block view, but over time migrated to only
           | using the text view. Now, I understand that the text view is
           | basically a kind of Assembler. I liked it because I was able
           | to think and visualize things in my head much better using
           | text. But for simple flows, and people not versed in
           | programming, the only option, really, is the block view.
        
         | taeric wrote:
         | This gets to a fairly stark difference in how people model
         | things. It is very common to model things in ways that are
         | close to what is physically being described. Is why circuit
         | simulators often let you show where and how the circuits are
         | connected. Knowing what can run at the same time becomes
         | very clear, as does knowing what can and cannot be reduced
         | to each other. If they are separate things in the model,
         | then they remain separate.
         | 
         | Contrast this with symbolic code. Especially with the advent of
         | optimizing compilers, you get very little to no reliable
         | intuition on the order things operate. Or, even, how they
         | operate.
         | 
         | Now, this is even crazier when you consider how people used to
         | model things mathematically. It really drives home why people
         | used to be far more interested in polynomials and such. Those
         | were the bread and butter of mathematical modelling for a long
         | time. Nowadays, we have moved far more strongly into a
         | different form of metaphor for our modeling.
        
         | rightbyte wrote:
         | Simulink is a really nice way to do some control problems in.
         | 
         | Like, the 'flow' is really clear. Adding a feedback, saturation
         | or integral or whatever is clear to the eye.
         | 
         | Stateflow charts are also a really nice way to do
         | state machines, which are used a lot in automotive.
         | 
         | Simulink is terrible in many ways too, but in some perverted
         | way I like it for the specific use case.
        
           | phkahler wrote:
           | >> Simulink is a really nice way to do some control problems
           | in.
           | 
           | Agreed. But it needs to stop there. BTW I never understood
           | why the tool needs to know what processor the code is for. I
           | asked them to provide a C99 target so they don't need to
           | define their own data types in generated code. Does that
           | exist yet? Or do they still define their own UINT_16 types
           | depending on the target?
        
         | GabeIsko wrote:
         | This is kind of the mixed feeling I have about this blog post.
         | There is a recognition that we need universal tools that are
         | designed with honest intentions to be effective in general
         | problem domains. But determining the boundaries of those
         | domains is ultimately a political exercise and not a technical
         | one.
         | 
         | It becomes presumptuous to assume that Lisp is the be-all
         | and end-all of car networking woes. Even if you accept that
         | a tool to
         | manage and organize computational tasks using a more functional
         | paradigm would be helpful, that doesn't mean that a somewhat
         | archaic, syntactically maligned language is the answer.
         | 
         | ROS tackles this somewhat yeah? But it's still very much a
         | research project, despite its use in some parts of industry.
         | The most significant development for computation in vehicles in
         | recent times is still the automotive Linux efforts, so we are
         | not really even in the ballpark to discuss specific languages.
        
       | BaculumMeumEst wrote:
       | Macros are one of the widely touted superpowers of lisp. And yet
       | in Clojure, which is the most widely used dialect today,
       | it is idiomatic to avoid macros as much as possible. None of the
       | claimed advantages really hold up under scrutiny.
        
         | rscho wrote:
         | > idiomatic to avoid macros as much as possible.
         | 
         | Same in all lisps. The advantages of having macros vary on a
         | case-by-case basis, but one thing is quite sure: the number of
         | programmers collaborating on a project is inversely
         | proportional to how much you should use macros. For a solo dev,
         | especially when doing science-y stuff, they're invaluable.
        
           | slaymaker1907 wrote:
           | That's only true of macros created for the purpose of
           | reducing boilerplate code. However, the best macros actually
           | reduce errors and improve code quality by enforcing
           | invariants more than can be done without macros.
        
         | bjoli wrote:
         | Many people use macros as a poor man's inliner, to generate
         | code in a way that adds no benefits over a good compiler.
         | 
         | You rarely actually need syntactic abstractions, but when you
         | do you need macros.
        
         | wtetzner wrote:
         | > it is idiomatic to avoid macros as much as possible.
         | 
         | Of course, don't use a macro if you don't need one. But when
         | you do hit a case where you need one, it's better than
         | using/writing external tooling to solve the problem.
         | 
         | > None of the claimed advantages really hold up under scrutiny.
         | 
         | This doesn't follow.
        
       | codr7 wrote:
       | Macros are more abstract, one meta level up, hence errors are
       | more difficult to relate to code and reason about. They are also
       | more powerful, so errors can have dramatic consequences.
       | 
       | Any kind of code generation setup will have the same
       | characteristics.
       | 
       | Sloppy macros clashing with local names can lead to pretty long
       | debug sessions, but it's the first thing you learn to avoid.
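The name-clash hazard is the classic capture problem; a minimal Common Lisp sketch (a standard textbook example, not code from this thread):

```lisp
;; A sloppy macro: TMP is a fixed symbol, so it can capture a
;; variable of the same name at the call site.
(defmacro bad-swap (a b)
  `(let ((tmp ,a))
     (setf ,a ,b)
     (setf ,b tmp)))

;; (bad-swap tmp x) silently misbehaves: the macro's TMP shadows the
;; caller's. The standard fix is GENSYM, which makes a fresh,
;; uncapturable symbol at expansion time:
(defmacro safe-swap (a b)
  (let ((tmp (gensym)))
    `(let ((,tmp ,a))
       (setf ,a ,b)
       (setf ,b ,tmp))))
```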
       | 
       | And since you're basically inventing your own ad hoc
       | programming language to some extent, the effort spent on error
       | handling tends not to live up to those requirements.
       | 
       | All of that being said, I wouldn't trade them for anything, and I
       | haven't seen any convincing alternatives to Lisp for the full
       | experience.
        
         | kazinator wrote:
         | > _one meta level up_
         | 
         | Which level is that? Is _cond_ higher than _if_? Which one is
         | the macro, again?
        
       | codr7 wrote:
       | On that subject, I once made a serious effort at explaining why
       | Lisp macros are so useful:
       | 
       | https://github.com/codr7/whirlisp
        
       | pfdietz wrote:
       | Without Lisp-like macros, you need preprocessors or code
       | generators. Something like Yacc/Bison is easily done with
       | suitable macros.
       | 
       | In Common Lisp, macros also enable a kind of Aspect Oriented
       | Programming. That's because one can dynamically modify macro
       | expansion using the macroexpand hook. It's a way to modify a code
       | base without changing the source files, which has all sorts of
       | uses.
       | 
       | http://clhs.lisp.se/Body/v_mexp_h.htm#STmacroexpand-hookST
       | 
       | One can implement things natively in Common Lisp, like code
       | coverage tools, that require separate tooling in other languages.
        
         | kazinator wrote:
         | > _one can dynamically modify macro expansion using the
         | macroexpand hook_
         | 
         | That's an incredibly bad idea, and it's not dynamic except in
         | purely interpreted Lisps that expand the same piece of code
         | each time it is executed.
         | 
         | The hook is absolutely global; whatever you put in there must
         | not break anything.
         | 
         | From the spec itself: "For this reason, it is frequently best
         | to confine its uses to debugging situations."
        
           | pfdietz wrote:
           | One would bind the macroexpand hook variable around the call
           | to COMPILE or COMPILE-FILE for that code. The binding goes
           | away after that. It wouldn't be global at all.
        
         | jstanley wrote:
         | > It's a way to modify a code base without changing the source
         | files
         | 
         | Is another way of saying your source files might not do what
         | they say! You're not selling it to me.
        
           | pfdietz wrote:
           | A use case is code coverage: you can instrument your code by
           | compiling it in a dynamic environment where the hook causes
           | extra coverage recording code to be inserted. You'd like to
           | be able to do this without changing the source code itself.
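A minimal sketch of that pattern (RECORD-COVERAGE is a hypothetical recording function; real coverage tools are more selective about which forms they wrap):

```lisp
;; The hook receives the macro function, the form, and the environment.
;; It performs the normal expansion, then wraps the result so the
;; original form is recorded whenever the expanded code runs.
(defun coverage-hook (expander form env)
  `(progn (record-coverage ',form)        ; hypothetical recorder
          ,(funcall expander form env)))

;; Bound dynamically around compilation only; the global value of
;; *MACROEXPAND-HOOK* is untouched afterwards.
(let ((*macroexpand-hook* #'coverage-hook))
  (compile-file "my-code.lisp"))
```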
        
           | diffxx wrote:
           | https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref.
           | ..
        
           | mst wrote:
           | Wrapping the existing code in hooks of some sort to enable
           | injecting profiling or whatever is really quite nice.
           | 
           | A lot of javascript hot reloading implementations rely on
           | using Babel to do that as part of a build step so they output
           | the wrapped code and your app loads that instead (and it's
           | not that dissimilar to compilers having a debug mode where
           | they output very different code).
           | 
           | I would certainly side-eye you for using the relevant common
           | lisp hooks in production without a very convincing reason,
           | but for many development time tasks it's basically the
           | civilised version of widely used techniques.
        
         | jjtheblunt wrote:
         | > Something like Yacc/Bison is easily done with suitable
         | macros.
         | 
         | Is this true? I may have misunderstood what macros can do in
         | terms of remembering state, while applying. Looking it up again
         | to see what i overlooked...
        
           | pfdietz wrote:
           | You can write a macro that expands a grammar definition into
           | the appropriate code. Anything a preprocessor can do, a macro
           | can do like that.
        
             | jjtheblunt wrote:
             | But it's the reverse direction that Bison/Yacc does, as it
             | writes code to recognize essentially expanded grammar
             | rules/macros, collapsing existing code into an expandable
             | grammar rule/macro that would correspond.
             | 
             | Thinking...if there's an obvious equivalence
        
               | pfdietz wrote:
               | Um, what? Bison/yacc generate compilable C from a grammar
               | file. A CL macro would expand to Common Lisp starting
               | from a grammar description. It's not the reverse at all.
        
         | dfox wrote:
         | The common definition of "Aspect oriented programming" is more
         | or less "just bolt whatever CLOS method combinators already can
         | do onto another language". That is powerful concept, but maybe
         | somewhat orthogonal to macros, unless you want to do that with
         | macros, which you certainly can.
        
           | pfdietz wrote:
           | That's certainly _a_ form of AOP, but the hook mechanism here
           | would enable arbitrary expansion time code transformations,
           | not just injections at method calls.
        
       | cryptonector wrote:
       | > Replacing Lisp's beautiful parentheses with dozens of special
       | tools and languages, none powerful enough to conquer the whole
       | software landscape, leads to fragmentation and extra effort from
       | everyone, vendors and developers alike. The automotive field is a
       | case in point.
       | 
       | This completely ignores Haskell.
        
       ___________________________________________________________________
       (page generated 2024-07-25 23:03 UTC)