[HN Gopher] I don't want to go to Chel-C
___________________________________________________________________
I don't want to go to Chel-C
Author : contrapunctus
Score : 92 points
Date : 2022-06-11 15:31 UTC (7 hours ago)
(HTM) web link (applied-langua.ge)
(TXT) w3m dump (applied-langua.ge)
| entaloneralie wrote:
| Hi all,
|
| My name is Devine, and I'm to blame for the uxn disaster.. I'm
| getting this link thrown at me from all sides right now, so I
| figured I might as well chip in.
|
| I'm a bit more on the design-y side of things, and all these
| fancy words to talk about computers and retro computing are a
| bit beyond me, and reference an era of computing which I didn't
| really have a chance to experience first hand.
|
| But let's talk about "the uxn machine has quite the opposite
| effect, due to inefficient implementations and a poorly designed
| virtual machine, which does not lend itself to writing an
| efficient implementation easily."
|
| This is meaningful to me and I'd love to have your opinion on
| this. Before setting on the journey to build uxn, I looked around
| at the options that were out there, that would solve our specific
| problems, and I'd like to know what you would recommend.
|
| Obviously, I take it that the author is not advocating that we
| simply return to Electron, and I take it they understand that re-
| compiling C applications after each change is not viable with our
| setup, that Rust is many times too slow for us to make any use of
| it, and that they understand that, given our use of Plan 9, C is
| not a good candidate for writing cross-compatible (libdraw)
| applications anyway.
|
| So, what should we have done differently?
|
| I'm genuinely looking for suggestions. Even if a suggestion
| might not be compatible with our specific position, many folks
| come to us asking for existing solutions in that space, and I
| would like to furnish our answers with things I might not have
| tried.
|
| Here are some directions we tried prior to Uxn:
|
| https://wiki.xxiivv.com/site/devlog.html
|
| Ah, one last thing, this is how you fib the first 65k shorts in
| uxntal:
|
| https://git.sr.ht/~rabbits/uxn/blob/main/projects/examples/e...
| mwcampbell wrote:
| I think a VM for a small, but highly abstract, language like
| Scheme might address the objections of the author(s) of this
| article. You might like Chibi-Scheme:
| https://github.com/ashinn/chibi-scheme
|
| Having said that, IMO, if you're having fun with uxn and its
| retro 8-bit aesthetic, by all means keep going with that.
| entaloneralie wrote:
| I did use Chibi! I've implemented a few lisps, but I always
| found their bytecode implementation too convoluted and slow,
| so I went with a stack machine for this particular project. I
| might, at some point, implement a proper Scheme in uxntal.
|
| It's neat, but I don't remember seeing a graphical API for
| it; I'll have a look :)
| mwcampbell wrote:
| I'm not aware of one. I was thinking that you could roll
| your own, just as your Varvara computer defines a graphics
| device on top of uxn. You'd still gain the benefits of
| using an existing language.
| csande17 wrote:
| I am not very familiar with Uxn, but the one thing from the
| article that struck me as an actual problem was that the memory
| containing the program's code can be modified at runtime. The
| downsides of that can heavily outweigh the benefits in many
| cases; it turns "write a uxn -> native code compiler" from a
| weekend project to a nearly impossible task, for example. This
| is probably relatively easy to fix, assuming uxn programs do
| not regularly use self-modifying code. (The article proposes an
| elaborate scheme involving alias analysis and bounds checks,
| but something like "store executable code and writable data in
| two separate 64KB regions" would work just as well.)
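| A minimal sketch of that split, in C (hypothetical struct and
| field names, not anything Uxn actually specifies):
|
|     #include <stdint.h>
|
|     /* Code and data live in separate 64 KB regions, so a store
|        can never rewrite an instruction. */
|     typedef struct {
|         uint8_t  code[0x10000];  /* read-only after loading */
|         uint8_t  data[0x10000];  /* freely writable by programs */
|         uint16_t pc;             /* program counter, indexes code[] */
|     } vm_t;
|
|     /* Stores only ever touch data[], so a translator may assume
|        code[] never changes at runtime. */
|     static inline void vm_store(vm_t *vm, uint16_t addr, uint8_t v) {
|         vm->data[addr] = v;
|     }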
|
| The article suggests that Uxn programs can write to arbitrary
| locations on the filesystem. If that is the case, it seems like
| it would be really easy to change that in the interpreter. Then
| Uxn's security model would be essentially identical to how the
| article describes WebAssembly: the interpreter serves as a
| "sandbox" that prevents programs from touching anything outside
| their own virtual memory. This is a good security model and it
| likely makes a lot of sense for Uxn.
|
| Otherwise, the article seems to be more bluster than substance.
| Uxn is probably not going to be the fastest way to compute the
| Fibonacci sequence, nor the most secure way to protect a
| telecommunications network from foreign cyber-attacks, but it
| doesn't need to be either of those things to be useful and
| valuable as a way to write the kind of UI-focused personal
| computer software you want to write.
| tumult wrote:
| The second (or first, maybe) most popular Uxn emulator,
| Uxn32, has sandboxed filesystem access, and has had it since
| the beginning. The author of the original article doesn't
| know what they're talking about.
| entaloneralie wrote:
| I use Uxn's self-modification powers quite a bit, mostly for
| routines that run thousands of times per frame, so I don't
| have to pull literals from the stack; I can just write myself
| a literal in the future and have it available. I wonder, what
| about this makes a native code compiler difficult: is it
| because most programs protect against this sort of behavior?
| Or that programs are stored in read-only memory?
|
| Some of the emulators sandbox the file device; it is likely to
| be the way that all the emulators will work eventually.
| csande17 wrote:
| > I wonder, what about this makes a native code compiler
| difficult: is it because most programs protect against this
| sort of behavior? Or that programs are stored in read-only
| memory?
|
| Say you're trying to "translate" a Uxn program into x86
| code. The easiest way to do this is by going through each
| of the instructions in the program one by one, then
| converting it to an equivalent sequence of x86
| instructions. (There is a little complexity around handling
| jumps/branches, but it's not too bad.)
|
| But if the Uxn program is allowed to change its own code at
| runtime, that kind of conversion won't work. The Uxn
| program can't change the x86 code because it doesn't know
| what x86 code looks like--it only knows about Uxn code.
| There are some ways around this, but either they're really
| slow (eg by switching back to an interpreter when the Uxn
| code has been modified) or much more complex (eg a JIT
| compiler) or don't work all the time (due to the halting
| problem).
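|
| To make the shape of that concrete, here is a toy sketch in C (a
| made-up three-opcode stack machine, not real Uxn or x86), which
| "translates" by emitting equivalent C source instead of machine
| code:
|
|     #include <stdio.h>
|
|     /* Hypothetical opcodes, for illustration only. */
|     enum { OP_HALT = 0x00, OP_LIT = 0x01, OP_ADD = 0x02 };
|
|     /* Walk the program front to back and emit an equivalent
|        statement per instruction. This is only sound if prog[]
|        still means the same thing when the output later runs. */
|     void translate(const unsigned char *prog, unsigned len) {
|         printf("int s[256]; int sp = 0;\n");
|         for (unsigned pc = 0; pc < len; ) {
|             switch (prog[pc++]) {
|             case OP_LIT:
|                 printf("s[sp++] = %d;\n", prog[pc++]);
|                 break;
|             case OP_ADD:
|                 printf("sp--; s[sp-1] += s[sp];\n");
|                 break;
|             case OP_HALT:
|                 printf("return 0;\n");
|                 return;
|             }
|         }
|     }
|
| If the program can overwrite prog[] while running (say, patching
| the byte after an OP_LIT, as described upthread), the emitted
| code is stale the moment that store happens, which is exactly
| where the "fall back to an interpreter or build a JIT" complexity
| comes from.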
| gerikson wrote:
| Upvote for Elvis Costello reference alone.
| filippp wrote:
| There's also a The Smiths reference right at the beginning.
| tomcam wrote:
| Don't worry, that's still forgivable
| stephc_int13 wrote:
| This is beyond stupid.
|
| There is no such thing as "safe language".
|
| The author has clearly never seen what real safety-critical
| code looks like.
|
| When safety/robustness really is critical, there are ways to
| achieve it with high confidence, and it can be done in any
| language as long as the team doing it is qualified and applying
| stringent rules.
|
| Of course this is time consuming and costly but luckily we've
| known how to do it for decades.
|
| The kind of rules I am talking about is more stringent than most
| people who have never worked in those fields realize.
|
| In the most extreme case I've seen (aerospace code) the rules
| were: no dynamic memory allocation, no loops, no compiler. Yeah,
| pure hand-rolled assembly. Not for speed, for safety and
| predictability.
| [deleted]
| Avshalom wrote:
| _For example, the specification framework for the C language by
| Andronick et al8 does not support "references to local
| variables, goto statements, expressions with uncontrolled side-
| effects, switch statements using fall-through, unions, floating
| point arithmetic, or calls to function pointers". If someone
| does not want to use a language which provides for bounds
| checks, due to a lack of some sort of "capability", then we
| cannot imagine that they will want to use any subset of C that
| can be formally modelled!_
|
| That said, this isn't an essay about safety; it's about the
| emotional appeal of a sort of false simplicity that some
| programmers are prone to falling for, and about the inherent
| inability of a couple of projects (standing, by synecdoche, for
| a whole shit ton of other projects) to live up to the promises
| of that mirage.
| stephc_int13 wrote:
| I think it's about the emotional appeal of a sort of false
| safety that some programmers are prone to falling for.
| avgcorrection wrote:
| This comment is beyond stupid.
|
| This is an article about programming in general. Ok? It's not
| about your particular niche. It's about software world outside
| the Department of Software Reliability where Code Coverage is
| defined as ten staff members being required to read all lines
| of the code base in order for it to qualify for Rigorous
| Software For Critical Functions Standard #45687A45.
|
| More seriously though: not all problems can be solved without
| dynamic memory allocation and without loops. And, for all
| practical purposes, without a compiler. So they need other ways
| to skin the cat.
| stephc_int13 wrote:
| What I am saying is that when safety is required, there are
| known techniques to achieve it, but this is mostly about
| people and their skills, much more than tools.
|
| I am upset reading ignorant people talking about computer
| security without any real knowledge about it.
|
| This guy doesn't know what he is talking about.
|
| The fact that you can tag some part of the code as [unsafe]
| does not make the rest of the program better or "safe"; this
| is magical thinking at best.
| benreesman wrote:
| Quality is costly in time and money, but to your point, it's
| not mysterious how to achieve it.
|
| I write slow, half-assed Python scripts every single day,
| because "good enough" is in fact good enough for some things.
|
| But over-extrapolating the "good enough" mindset to everything
| is lazy and makes computers less fun.
| chubot wrote:
| I tend to agree (although this article feels a bit like it's
| "picking on" some whimsical projects that aren't making grand
| claims ... or are they?)
|
| A term I like to use for this phenomenon is "Inner Platform
| Effect".
|
| https://en.wikipedia.org/wiki/Inner-platform_effect
|
| The prime example is that in the '90s, Java thought they were
| going to make Windows irrelevant (a pile of poorly debugged
| device drivers, or something like that).
|
| But now Java is just a Windows/Unix process.
|
| Same with all these "alternative computing stacks" -- in the end
| they will almost certainly be just Unix processes.
|
| The only situation I can think of where they wouldn't be is a
| revolution in hardware, like the IBM PC producing MS-DOS, etc.
| And maybe not even then, because we already had the mobile
| revolution starting ~15 years ago, and both platforms are based
| on Unix (iOS/Android).
|
| ----
|
| I do think people should carefully consider whether they want to
| create an "inner platform" or make the platforms we ACTUALLY USE
| better.
|
| That is the purpose of https://www.oilshell.org/.
|
| Evolving working systems is harder than creating clean slates. So
| clean slates are fun for that reason: you get to play God without
| much consequence.
|
| Sometimes clean slates introduce new ideas, and that's the best
| case. But a common pattern is that they are better along certain
| dimensions (the "pet peeves" of their creator -- small size,
| etc.), but they are WORSE than their predecessors along all the
| other dimensions!
|
| So that is the logic of Oil being compatible and painstakingly
| cleaning up bash -- to not be worse than the state of the art
| along any dimension! I've experimented with writing a clean-slate
| shell in the past, but there are a surprising number of
| tradeoffs, and a surprising number of things that Unix shell got
| right.
|
| -----
|
| But Oil can ALSO be viewed as a clean slate, which is sometimes
| hard to understand.
| https://www.oilshell.org/release/latest/doc/oil-language-tou...
|
| I just drafted this doc, and surprisingly little breaks even in
| the non compatible mode:
|
| _What Breaks When You Upgrade to Oil_
| https://www.oilshell.org/preview/doc/upgrade-breakage.html
|
| So I would like to see more systems take a DUAL view --
| compatible, but also envision the ideal end state, and not just
| pile on hacks (e.g. Linux cgroups and Docker are a prime example
| of this.)
|
| You will learn the wisdom of the system that way, and the work
| will be more impactful. For example, it forced me to put into
| words what Unix (and the web) got right:
| https://www.oilshell.org/blog/2022/03/backlog-arch.html
| travisgriggs wrote:
| Buried in the lengthy conclusion at the end:
|
| > Simplicity needs to be pervasive through the system, and
| further in the context it resides in, for one to benefit from it
|
| This should have been the opening text of the essay.
| cyber_kinetist wrote:
| But isn't this what Jon would actually agree with? (Design the
| right abstractions for the right hardware?)
| travisgriggs wrote:
| Maybe? One man's abstraction is another man's bureaucracy?
| Stampo00 wrote:
| I agree with the overall sentiment. There's a loud minority of
| developers out there who are nostalgic for something they never
| actually experienced. It's a reaction to the explosion of
| complexity of computers and the increasing depth of the stack of
| software we must depend on to get anything done. It makes us
| vulnerable because we must depend on leaky abstractions since
| there's too much software to fully understand all of it. And
| sometimes there are bugs and it only takes being burned once or
| twice before you become suspicious of any software you haven't
| (or couldn't have) written yourself. But starting from scratch
| kills your productivity, maybe for years!
|
| I'm truly sympathetic. The simplicity of the microcomputer era
| has a certain romance to it, especially if you never lived
| through the reality of it. There's a sense of freedom to the
| model of computing where only your program is running and the
| entirety of the machine is your own little playground. So I get
| why people make stuff like PICO-8 and uxn.
|
| I agree with the criticism of Jon Blow's rhetoric, even though
| the tone is harsher than strictly necessary. Blow's criticisms
| and proposed solutions seem overly opinionated to me, and he's
| throwing out the baby with the bath water. He describes things
| like compile-time and runtime hygienic macros as if they were a
| new invention, even though they have existed in Lisp since
| before he was born.
|
| However, I think targeting uxn is unfair. Uxn is better viewed
| as an experiment in minimalism and portability. I think of it
| more as an art project.
|
| It's unfair because the author is comparing mainstream languages
| that benefit from countless users' input and innovations over the
| course of 60 years to a niche system with a small handful of
| developers that has existed for maybe 2 years or so. That's a
| strawman if I ever heard one.
| tialaramex wrote:
| That Jonathan Blow talk is awful. Repeatedly Blow stands up
| what is barely even a sketch of a straw man argument for a
| position and then just declares he's proven himself right and
| moves on. I can hardly see why even Jonathan himself would
| believe any of this, let alone how it could convince others.
|
| And at the end it's basically nostalgia, which is a really old
| trap, so old that the warnings are well posted for many
| centuries. If you're stepping into that trap, as Blow seemingly
| has, you have specifically ignored warnings telling you what's
| about to happen, and you should - I think - be judged
| accordingly.
| mwcampbell wrote:
| Assuming you're talking about the "collapse of civilization"
| talk, I think the primary appeal is that he's confirming the
| feeling many of us have that software, particularly user-
| facing software on personal computers, is going downhill. And
| I use that metaphor deliberately, because it reminds us that
| things get worse by default, unless we put in the work to
| counteract the relevant natural forces and make things
| better.
|
| Whether he has any real solutions to the decay of modern
| software is, of course, another question. It makes intuitive
| sense that, since previous generations were able to develop
| efficient and pleasant-to-use software for much more
| constrained computers than the ones we now have, we can gain
| something by looking back to the past. But those previous
| generations of software also lacked things that we're no
| longer willing to give up -- security, accessibility,
| internationalization, etc. That doesn't mean we have to
| settle for the excesses of Electron, React, etc. But it does
| mean that, at least for software that needs qualities like
| the ones I listed above, we can't just go back to the ways
| software used to be developed for older computers. So, I
| think you're right about the danger of nostalgia.
| zozbot234 wrote:
| > There's a sense of freedom to the model of computing where
| only your program is running and the entirety of the machine is
| your own little playground.
|
| These days, we call that little playground a 'sandbox'. But I
| think OP's point is that sandboxes can be a lot more efficient
| than what they see w/ uxn. It's not exactly a secret that the
| whole FORTH model does not easily scale to larger systems:
| that's why other languages exist!
| Animats wrote:
| _" There's a loud minority of developers out there who are
| nostalgic for something they never actually experienced."_
|
| Having actually experienced it, all the way back to writing
| business applications in mainframe assembler, I am not
| nostalgic for it. Today I write mostly in Rust.
| dgb23 wrote:
| I'm from the modern web dev generation so to speak, but just
| yesterday had an amazing conversation with my dad, who did
| what you describe.
|
| He explained to me what a "patch" actually means, or meant
| back then. He was talking about debugging and implementing
| loaders for new hardware and such, then he mentioned a
| "patching area". I asked wtf that means, and apparently the
| release cycles back then were so slow that a binary left some
| dedicated space for you to directly hack in a patch. He would
| then change the machine code directly to jump to the patched
| area so he could fix a bug or extend the behavior.
|
| Contrast this to the arguably wasteful CI/CD setups we have
| now.
| mwcampbell wrote:
| > Contrast this to the arguably wasteful CI/CD setups we
| have now.
|
| To clarify, are you arguing that modern build and release
| processes should patch binaries in-place, or something
| else?
| benreesman wrote:
| As a kind of middle-ground: CI/CD burns our abundant
| cycles out of band. That's a good place to spend
| extravagantly.
|
| I'm all for burning more cycles on tests and static
| analysis and formal verification etc. _before_ the
| software goes on the user's machine.
|
| But we all live with "good enough" on _our_ machines every
| day. I think there's a general consensus that too much
| software spends _my_ CPU cycles like a drunken sailor.
| dgb23 wrote:
| I'm not arguing. Just observing the difference. Different
| times, different needs and practices.
|
| For example back then it was common to understand the
| whole machine code of a binary in total. We're talking no
| abstraction, no runtimes. Portability and virtual memory
| were luxuries.
|
| I definitely think CI/CD could be less wasteful, but I
| don't necessarily think we should manually patch binaries
| in place.
| mwcampbell wrote:
| Likewise, as a child born in the early 80s, my family's first
| computer was an Apple IIGS, and I routinely used Apple IIe
| computers in school from 1st through 7th grade. I wrote Apple
| II assembler during that time (albeit heavily relying on
| routines copied from books and magazines). And while
| occasionally fooling around with an Apple II emulator with an
| Echo speech synthesizer, or even just listening to one [1],
| makes me nostalgic for my childhood, I don't miss the
| experience of actually using or programming those computers.
| Things are so much better now, even (or perhaps especially)
| for disabled folks like me.
|
| [1]: https://mwcampbell.us/a11y-history/echo.mp3 This file
| has a real Echo followed by an emulator. Unfortunately, I
| forgot where I got it, so I can't provide proper attribution.
| zozbot234 wrote:
| Rust can also be used to write business applications that
| will compile cleanly to mainframe assembler (at least if your
| mainframe is 64-bit and runs Linux).
| dmitriid wrote:
| There were other moments in time between mainframes and
| today.
|
| For example, when I scroll a two-page document in Google Docs,
| my CPU usage on an M1 Mac spikes to 20%. And that's for an app
| with overall functionality that is probably less than that of
| Word 95.
| dmitriid wrote:
| > It's a reaction to the explosion of complexity of computers
| and the increasing depth of the stack of software we must
| depend on to get anything done.
|
| People say this with a straight face, and I don't know if it's
| an elaborate joke of some kind or not.
|
| We're building a tower of babel that requires supercomputers to
| barely run, and somehow end up defending it.
| Avshalom wrote:
| In most English translations the tower of babel was struck
| down by god because it represented the ability of humans to
| challenge the power of god. If we are indeed building a tower
| of babel, that's cool; it means that "nothing that they
| propose to do will now be impossible for them." [1]
|
| The comment you're replying to is not defending the explosion
| of complexity but pointing out that we resent it precisely
| because we find ourselves dependent on it. The article is
| pointing out that we tend to take bad or counterproductive
| paths when we try to free ourselves from that complexity
| though.
|
| [1] https://www.bible.com/bible/2016/GEN.11.NRSV
| dgb23 wrote:
| > He describes things like compile time and runtime hygienic
| macros like they're a new invention that hasn't existed in Lisp
| since before he was born.
|
| I suspect that this is where some of the inspiration comes
| from, because he mentioned being a fan of Scheme at a young
| age. At the same time he wants strong static typing, fine
| grained control and power (non restrictive).
| yellowapple wrote:
| The "analysis" of uxn is unproductive to the point of being anti-
| productive.
|
| In particular:
|
| > The most significant performance issue is that all uxn
| implementations we have found use naive interpretation. A typical
| estimation is that an interpreter is between one and two
| magnitudes slower than a good compiler. We instead propose the
| use of dynamic translation to generate native code, which should
| eliminate the overhead associated with interpreting instructions.
|
| Okay, go for it. Literally nothing is stopping you from
| implementing an optimizing compiler for uxn bytecode.
|
| Meanwhile, zero mention in this "analysis" of program size, or
| memory consumption, or the fact that uxn implementations exist
| even for microcontrollers. Interpreters are slow, but they're
| also straightforward to implement and can be pretty dang
| compact, and the interpreted bytecode can be much smaller than
| its translated-to-native-machine-code equivalent.
|
| (Interpreters also don't need to be all that slow; I guess this
| guy's never heard of Forth? Or SQLite?)
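|
| For a sense of scale, the core of a naive stack-machine
| interpreter really is just a fetch/dispatch loop. A toy sketch
| in C (made-up opcodes, nothing to do with uxn's actual
| encoding):
|
|     #include <stdint.h>
|
|     enum { OP_HALT, OP_LIT, OP_ADD };   /* toy opcodes */
|
|     /* Fetch a byte, dispatch on it, repeat: the whole engine. */
|     int run(const uint8_t *prog) {
|         uint8_t stack[256];
|         int sp = 0;
|         for (uint16_t pc = 0;;) {
|             switch (prog[pc++]) {
|             case OP_LIT:
|                 stack[sp++] = prog[pc++];
|                 break;
|             case OP_ADD:
|                 sp--;
|                 stack[sp - 1] = (uint8_t)(stack[sp - 1] + stack[sp]);
|                 break;
|             case OP_HALT:
|                 return sp ? stack[sp - 1] : 0;
|             }
|         }
|     }
|
| Add threaded dispatch or computed gotos (the classic Forth and
| CPython tricks) and the gap to native code shrinks further,
| without giving up the small footprint.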
|
| > Writing a decent big-integer implementation ourselves requires
| some design which we would rather not perform, if possible.
| Instead, we can use a library for simulated 32-bit arithmetic.
| [...] The resulting program takes 1.95 seconds to run in the
| "official" uxn implementation, or 390 times slower than native
| code! This amount of overhead greatly restricts what sort of
| computations can be done at an acceptable speed on modern
| computers; and using older computers would be out of the
| question.
|
| Well yeah, no shit, Sherlock. Doing 32-bit arithmetic on an 8-bit
| machine is gonna be slow. Go try that on some Z80 or 6502 or
| whatever and get back to us.
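|
| For anyone who hasn't had to do it, a rough C sketch of what
| "simulated 32-bit arithmetic" means on an 8-bit machine (the
| limb layout here is hypothetical):
|
|     #include <stdint.h>
|
|     /* A 32-bit value stored as four 8-bit limbs, little-endian,
|        the way an 8-bit machine has to hold it. */
|     typedef struct { uint8_t b[4]; } u32x8;
|
|     /* One 32-bit add costs four 8-bit adds plus carry handling,
|        and that's before multiplication or division show up. */
|     u32x8 add32(u32x8 x, u32x8 y) {
|         u32x8 r;
|         unsigned carry = 0;
|         for (int i = 0; i < 4; i++) {
|             unsigned t = (unsigned)x.b[i] + y.b[i] + carry;
|             r.b[i] = (uint8_t)t;
|             carry = t >> 8;
|         }
|         return r;
|     }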
|
| And no, using older computers ain't "out of the question". There
| are literally uxn ports to DOS, to the Raspberry Pi Pico, to the
| goddamn _Game Boy Advance_, to all sorts of tiny constrained
| systems. The author would know this if they had done even the
| slightest bit of investigation before deciding to shit all over
| uxn and the folks making it.
|
| > The uxn system could also be a sandbox, and prevent damage
| outside of its memory, but a filesystem device is specified for
| the platform, and is often implemented, and so a uxn program can
| destroy information outside its virtual machine. Any isolation
| would have to be performed outside of the virtual machine; thus
| running a random uxn program is as insecure as running a random
| native executable.
|
| Absolutely nothing in the uxn spec suggests that the "filesystem"
| in question should be the host's filesystem; it could very well
| be some loopback device or a sandbox or what have you. If
| security/isolation is desired, then that's pretty trivial to
| implement in a way that a tiny bytecode stack machine would have
| a _very_ hard time escaping. Either the author is incapable of
| actually reading the documentation of the thing being analyzed,
| or they are being blatantly intellectually dishonest.
|
| ----
|
| Like, I don't know what the author's beef is - maybe Rek and
| Devine ran over the author's dog with their sailboat, or maybe
| the Bytecode Interpreter Mafia smashed the author's kneecaps with
| hammers - but the article reads more like bone-pickery than some
| objective analysis.
| tumult wrote:
| Uxn32, a full Uxn implementation for Win32, already implements
| this kind of filesystem sandbox, and has for months. I don't
| think the author did any research, and instead focused on
| research-y aesthetics.
| twic wrote:
| As an aside, Tony Hoare is a national treasure.
| mhh__ wrote:
| My personal opinion of this is that you can have high and low
| level code written in the same language at the expense of
| requiring more skill from the programmer.
|
| You can write a pure-functional RPN calculator in a handful of
| lines of D, you can also write D's entire implementation in D
| including the "low-level" aspects.
|
| I also find the distinction between low- and high-level code in
| languages that are not built around some theoretically abstract
| model of execution (e.g. the lambda calculus) to be rather
| pointless and mostly useful only for making poor arguments.
|
| I'm also sceptical of simplicity as an unqualified idea. I have
| read "simple" code with no abstractions that is extremely hard
| to read, and I have read "simple" code that can be considered as
| such because the abstractions are actually abstractions:
| "Abstraction is the removal of irrelevant detail".
| hsn915 wrote:
| The basic premise of this article seems to be "writing good
| programs without heavy runtime checks is physically impossible".
|
| I'd like to ask the author: how does he think his computer works?
| The hardware is rather complex and it has to work near-perfectly
| all the time. It should be impossible according to his premise,
| but it's clearly happening.
| [deleted]
| moonchild wrote:
| Hardware goes through extensive formal verification as well as
| testing (incidentally, it tends to be heavily instrumented, cf
| 'runtime checks', while it is being tested), and has its design
| frozen months before it goes into mass production. If you
| developed software the same way, you might see similar results.
| Most people do not develop software this way.
| TazeTSchnitzel wrote:
| And despite all that effort, hardware is far from free of
| bugs! Lots of broken features ship and have to be disabled,
| or the software has to do horrible workarounds. These fixes
| are hidden inside CPU microcode, OS kernels, GPU drivers.
| uxp100 wrote:
| Yeah, formal verification of software is sort of a niche
| topic; formal verification of hardware is, like, one of the
| primary tools. Shit, even something like Built-In Self-Test,
| which is common in hardware, I've seen only rarely in software,
| and never comprehensively. (I'm also not sure comprehensive
| BIST is the right way to think about reliability for software;
| it's probably only very specific things that actually want
| that.)
| mhh__ wrote:
| Hardware is also much simpler than software, conceptually at
| least.
|
| You might think of a modern CPU as a black-box that goes
| out and finds (the lack of) dependencies in the instruction
| stream to exploit them, but this all has to be condensed
| into a logical circuit that is bounded in memory
| (registers), can be pipelined, and verified.
|
| And even then, with hardware you are typically verifying
| things like "the pipeline never locks up entirely" or "the
| cache never gives back memory from the wrong physical
| address", basic things like that, whereas these same kinds of
| invariants in software are rarely profitable to try and
| verify.
| rurban wrote:
| Reads like a typical mid-'80s manifesto for safer languages
| until we arrive at gcc-11. There was no gcc-11 then.
| Sphax wrote:
| I don't use gcc, curious to know what gcc 11 adds.
| jcelerier wrote:
| Cool static analysis features:
| https://developers.redhat.com/blog/2021/01/28/static-
| analysi...
| mhh__ wrote:
| Unless you build the program around the static analysis
| you're going to have a bad time.
|
| David Malcolm has done a really good job with the analyser
| but it can't catch everything by virtue of the C type
| system making it possible to write code that can't be
| guaranteed either by construction or by making opaque
| boundaries.
| spicyusername wrote:
| > In a DevGAMM presentation, Jon Blow managed to convince himself
| and other game developers
|
| Setting the content aside, when an author starts off their piece
| with this kind of tone I'm immediately turned off.
|
| There's no need to be so antagonistic. Jonathan Blow "said",
| "claimed", "discussed", or any of the infinite other neutral and
| non-insulting ways to refer to another person's work.
| mhh__ wrote:
| Isn't this basically exactly the same tone that the arguments the
| article is trying to refute use anyway?
| avgcorrection wrote:
| Blow is not any less confrontational and opinionated himself so
| I'm fine with it.
| ezy wrote:
| This is, of course, an argument against a strawman. I wish the
| author had not mentioned certain folks by name, because the
| analysis is interesting on its own, and makes some good points
| about apparent simplicity. However, by mentioning a specific
| person, and then restating that person's opinion poorly, it does
| a disservice to the material and basically drags the whole
| thing down.
|
| My understanding of Jon Blow's argument is _not_ that he is
| against certain classes of "safe" languages, or even formal
| verification. It is that software, self-evidently, does not work
| well -- or at least not as well as it should. And a big reason
| for that is indeed layers of unnecessary complexity that allow
| people to pretend they are being thoughtful, but serve no useful
| purpose in the end. The meta-reason being that there is a
| distinct lack of care in the industry -- that the kind of
| meticulousness one would associate with something like formal
| verification (or more visibly, UI design and performance) _isn 't
| present_ in most software. It is, in fact, this kind of care and
| dedication that he is arguing _for_.
|
| His language is an attempt to express that. That said, I'm not so
| sure it will be successful at it. I have some reservations that
| are sort of similar to those of the authors of this piece -- but
| I do appreciate how it makes an attempt, and I think it _is_
| successful in certain parts, which I hope others borrow from (and
| I think some already have).
| benreesman wrote:
| I endorse your much more thoughtful and well-argued post than
| my knee-jerk response down in the gray-colored section below.
|
| Jon isn't right about everything: he criticizes LSP in the
| cited talk, and I think the jury is in that we're living in a
| golden age of rich language support (largely thanks to the huge
| success of VSCode). I think he was wrong on that.
|
| But the guy takes his craft seriously; he demonstrably builds
| high-quality software that many, many people happily pay money
| for, and he generally knows his stuff.
|
| Even Rust gives up _some_ developer affordances for
| performance, and while it's quite fast used properly, there are
| still places where you want to go to a less-safe language
| because you're counting every clock. Rust strikes a good
| balance, and some of my favorite software is written in it, but
| C++ isn't obsolete.
|
| I think Jai is looking like kind of what golang is advertised
| as: a modern C benefitting from decades of both experience and
| a new hardware landscape. I have no idea if it's going to work
| out, but it bothers me when people dismiss ambitious projects
| from what sounds like a fairly uninformed perspective.
|
| HN can't make up its mind right now: is the hero the founder of
| the big YC-funded company that cuts some corners? Or is it the
| lone contrarian self-funding ambitious software long after he no
| longer needed to work?
| mrtranscendence wrote:
| > But the guy takes his craft seriously, he demonstrably
| builds high-quality software that many, many people happily
| pay money for, and generally knows his stuff.
|
| He has made a few good games, but how has he done anything
| that would paint him as a competent language designer?
| Frankly, Blow has done very little (up to and including being
| a non-asshole) that would make me terribly interested in what
| he's up to.
| benreesman wrote:
| Paul Graham is on record that the best languages are built
| by people who intend to use them, not for others to use.
| FWIW, I agree.
|
| The jury is out on Jai, but it's clearly not a toy. Jon
| emphasizes important stuff: build times, SOA/AOS as a
| first-class primitive, cache-friendliness in both the L1i
| _and_ L1d. And he makes pragmatic engineering trade-offs
| informed by modern hardware: you can afford to over-parse a
| bit if your grammar isn't insane, and this helps a lot in
| practice on multi-core. The list goes on.
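|
| For anyone unfamiliar with the jargon, a rough C sketch of the
| two layouts (the particle fields are just a made-up example):
|
|     /* Array-of-structs: each particle's fields sit together, so
|        a loop that only reads x drags y, z, mass through cache. */
|     struct particle { float x, y, z, mass; };
|     struct particle aos[1024];
|
|     /* Struct-of-arrays: each field is its own contiguous array,
|        so a loop over just x streams cleanly through cache. */
|     struct particles {
|         float x[1024], y[1024], z[1024], mass[1024];
|     };
|     struct particles soa;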
|
| And "he made a few good games" is really dismissive. He
| doesn't launch a new game every year, but his admittedly
| short list of projects is 100% wild commercially and
| critically successful. On budgets that a FAANG spends
| changing the color of some UI elements.
|
| And that's kind of the point right? Doing better work takes
| time, and there is in fact a lucrative market for high-
| quality stuff.
|
| As for him being an asshole? He's aspy and curt and
| convinced he's right, which is a bad look on the rare
| occasions when he's wrong.
|
| But Brian Armstrong is on the front page doubling down on
| such bad treatment of his employees and shareholders that
| they are in public, written revolt. This may have changed
| since I looked, but no one is calling him an asshole.
|
| A world in which a passionate craftsman who misses on diplomacy
| while discussing serious technical subject matter is an
| asshole, but a well-connected CEO revoking employment offers
| after people have already quit their old jobs is "making the
| hard calls", is basically the opposite of everything the word
| "hacker" stands for.
|
| Asshole.
| dmitriid wrote:
| > He has made a few good games, but how has he done
| anything that would paint him as a competent language
| designer?
|
| You can watch his Twitch streams and see what he does and
| how he uses the language.
|
| He's developing at least two games using it (the
| development of one of them he also shows on stream), and so
| far it has proven to be a very strong contender for a
| C-like language suitable for game development. Just the
| fact that his entire 3D game builds in a few seconds
| is definitely something to aspire to.
| tialaramex wrote:
| > [...] counting every clock. Rust strikes a good balance,
| and some of my favorite software is written in it, but C++
| isn't obsolete.
|
| This isn't a good argument for C++. If you can't get where
| you need to go in Rust because you are "counting every clock"
| you need to go down, which means writing assembler -- not
| sideways to C++. Once you're counting every clock, none of
| the high level languages can help you, because you're at the
| mercy of their compiler's internal representation of your
| program and their optimisers. If you care whether we use CPU
| instruction A or CPU instruction B, you need the assembler,
| which likewise cares, and not a high level language.
|
| Both C++ and Rust provide inline assembler if that's what you
| resort to.
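|
| A tiny example of what that looks like in C with GCC/Clang
| extensions (x86-64 assumed, the operands are illustrative):
|
|     /* Inline assembly: you name the exact instruction yourself
|        instead of trusting the optimiser to pick it. */
|     static inline long add_exact(long a, long b) {
|         __asm__("addq %1, %0" : "+r"(a) : "r"(b));
|         return a;
|     }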
|
| There are things to like about C++ but "counting every clock"
| isn't one of them.
| avgcorrection wrote:
| With regard to language design, Blow is a guy with a series
| of YouTube videos.
|
| The common thing in PL is to publish something written, or
| code. So don't be surprised when some people don't feel like
| they have the time to go through an unconventional format.
| mrtranscendence wrote:
| I don't think it's as much of a strawman as you're making it
| out to be. In his talk, Blow says that higher-level
| abstractions haven't made programmers more productive than they
| used to be, and appears to use this as an argument against
| abstraction. He doesn't say (as far as I recall) that we
| shouldn't use something like formal verification, but he does
| put the blame for bad software at the feet of abstraction
| rather than "unnecessary complexity". Or at least if that
| _were_ his point he wasn 't particularly clear about it.
| cyber_kinetist wrote:
| The article is all over the place; I don't really know what it's
| trying to say. It starts by criticizing Jon Blow's talk about the
| "collapse of civilization from nobody being able to do low-level
| programming anymore" as a tirade against abstractions, but then
| talks about uxn as a prime (flawed) example of this "effort to
| remove abstractions". I mean, uxn is clearly just a wrong
| abstraction for the hardware we have right now; it's actually the
| opposite of what most low-level programmers would do. So the uxn
| example is actually supporting what Jon was saying in the talk?
| It's just not a good example.
|
| Also, the example about dynamic dispatch isn't really as
| persuasive as the author might think. Even if everyone benefits
| from these abstractions, what's the point when that abstraction
| is fundamentally slow on the hardware in the first place, no
| matter how much optimization you do? I mean, the Apple engineers
| have done everything they can to optimize objc_msgSend() down
| to the assembly level, but you're still discouraged from using
| Obj-C virtual method calls in a tight loop because of performance
| problems. And we know in both principle and practice that
| languages which heavily rely on dynamic polymorphism (like
| Smalltalk) tend to perform much worse than languages like
| C/(non-OOP) C++/Rust which (usually) don't rely heavily on
| dynamic polymorphism. In those languages, when performance
| matters, devs often use the good ol' switch statement (with
| enums / tagged unions / ADTs) instead to specify different kinds
| of behavior, since the cases are easier for the compiler to
| inline and the hardware runs switch statements faster than
| virtual calls. (Or, to go even further, you can just put
| different types of objects in separate contiguous arrays, if you
| are frequently iterating over and querying these objects...) The
| problem, I think, for most programmers is that they don't know
| they can actually make these design choices in the first place,
| since they've learned "use virtual polymorphism to model
| objects" in OOP classes as a dogma that they must always adhere
| to, whereas a switch statement could have been better in most
| cases (both in terms of performance and code
| readability/maintainability). Virtual calls may be a good
| abstraction in some cases, but in most cases there are multiple
| abstractions competing with them that are more performant (and,
| arguably, can actually be simpler).
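|
| To make the alternative concrete, a small C sketch of the two
| styles (the shape types are hypothetical):
|
|     /* Dynamic dispatch: an indirect call per object, which the
|        compiler usually can't inline. */
|     struct shape {
|         double (*area)(const struct shape *self);
|     };
|
|     /* Tagged union + switch: the compiler sees every case, can
|        inline the bodies, and the branch is predictable. */
|     struct shape2 {
|         enum { CIRCLE, SQUARE } tag;
|         union { double radius; double side; } u;
|     };
|
|     double area2(const struct shape2 *s) {
|         switch (s->tag) {
|         case CIRCLE:
|             return 3.14159265358979 * s->u.radius * s->u.radius;
|         case SQUARE:
|             return s->u.side * s->u.side;
|         }
|         return 0.0;
|     }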
|
| The point Jon is trying to make (although maybe not clearly
| enough in the talk) is that we simply need better abstractions
| for the hardware that we have. And C/C++ doesn't really cut it
| for him, so that's why he's creating his own abstractions from
| scratch by writing a new language. He has often said that he
| dislikes "big ideas programming", the notion that if every
| programmer believes in a core "idea" of programming then
| everything will magically get better. He instead opts for a
| more pragmatic approach to writing software, which is writing for
| the hardware and the design constraints we have right now. He
| may seem a bit grumpy from the perspective of people outside
| of OS/compiler/game development (since he also lets out some
| personal developer grievances in the talk), but I think his
| sentiment makes sense in the big picture: we have continuously
| churned out heaps of abstractions that have drifted too far from
| the actual inner workings of the hardware, to the point that
| desktop software has generally become too slow for the
| features it provides to users (looking at you, Electron...).
| mwcampbell wrote:
| > we'd even say that the greatest programmers are so because of
| how they produce redundancy.
|
| Perhaps the greatest of all programmers produce redundancy while
| depending on very little of it in their own code. For example,
| Richard Hipp created SQLite, the ubiquitous and amazingly high-
| quality embedded database, in C. Thinking about that makes me
| feel like I ought to be using C in my own, much more modest
| attempt at a reusable infrastructure project [1]. Cue the cliches
| about younger generations (like mine) being soft compared to our
| predecessors.
|
| [1] https://github.com/AccessKit/accesskit (it's too late now,
| I'm committed to doing it in Rust)
| saagarjha wrote:
| It might be important to consider that making a high
| performance, reliable database is something that is far more
| dependent on the author than the language they choose to use,
| and that it may not be a good choice to cargo cult their
| language choices in your own project.
| RcouF1uZ4gsC wrote:
| > Thinking about that makes me feel like I ought to be using C
| in my own, much more modest attempt at a reusable
| infrastructure project [1].
|
| Before you do that, read https://www.sqlite.org/testing.html
|
| SQLite has a crazy amount of verification and testing. There
| is something like 640x as much test code as actual
| implementation code.
|
| If you are looking to SQLite as an inspiration to write in C,
| you should also consider the verification and testing that
| makes it work.
| avgcorrection wrote:
| > > Perhaps the greatest of all programmers produce
| redundancy while depending on very little of it in their own
| code.
|
| And that's a self-own.
| mwcampbell wrote:
| You're right, I forgot about the extraordinarily thorough
| test suite.
| avgcorrection wrote:
| My great-grandfather died of Polio in his twenties. Look at me,
| a thirty-something, with no Polio, not even any Covid symptoms!
| Curse my soft-handed generation!!
| benreesman wrote:
| Jon Blow live-codes compilers and 3D rendering engines and shit
| from scratch on YouTube or whatever. Starting your essay by
| dissing him is not a great intro.
| drakonka wrote:
| I don't think live-coding compilers puts anyone above
| criticism.
| tialaramex wrote:
| I'm not even sure what "live-coding compilers" would mean.
| What's "live" about it?
|
| I've watched people live coding audio software. That's a
| performance, like jamming or rap battling where you make
| music spontaneously for an audience. It's a distinct skill
| from being able to polish stuff in a studio, just like
| playing a guitar or singing live is a distinct skill, it's
| even distinct from working with a loop sampler (like Marc
| Rebillet) although it's often related.
|
| But for a compiler, what's "live"? Somebody writes some code
| and you... tokenize it in real time, transform it into some
| intermediate representation, optimise that and then spit out
| machine code? No? Then it's not "live coding", you're just
| talking about how he got paid to stream on Twitch or
| whatever. Loads of people do that. Ketroc streamed his last
| minute strategies for the recent SC2 AI tournament, he's not
| even a "professional" programmer, half his audience haven't
| seen Java before.
| benreesman wrote:
| I'm saying that Jon Blow works on fairly difficult software
| projects and shows the process, mistakes and all. That
| takes a lot of both confidence and humility. I tend to make
| more mistakes the first 2 or 3 times I pair program with
| someone; I can only imagine with 10k people watching my every
| mistake.
|
| Add to this that Jon Blow is an exemplar of the
| entrepreneurial spirit that is basically this site's
| foundational value.
|
| The guy worked in the rat race for a while and saved a modest
| amount to self-fund building Braid. Braid smashed every
| record, both commercially and critically, for what one guy in
| a garage could do in games.
|
| He took all that money, hired a few people carefully, and
| built The Witness, not on fucking Unity or something, but
| from the shaders up so that it would have a unique look and
| not be a clone of something else. The Witness was also a huge
| commercial and critical success.
|
| His most recent project is, uh, ambitious. I don't know if
| it's going to prove feasible. But I'm sure as hell rooting
| for success rather than failure.
|
| Now mostly this is directed at the author of the post, but
| you've kind of signed up for a little of this: what the fuck
| is your CV?
| drakonka wrote:
| Thank you for the rundown on Jonathan Blow. I've worked in
| game development for over a decade and am familiar with his
| games and accomplishments, but maybe it'll be useful to
| someone else. If you want to direct a question to the
| author of the post you might want to reach out to them
| directly as I'm not their proxy.
| benreesman wrote:
| Yeah I want to emphasize that my reply is kind of to all
| the peanut gallery stuff on this thread, not trying to
| single you out.
|
| If you work in games then you know that Jon Blow is by
| many measures the most demonstrably successful game
| developer without a big studio behind him; others might
| not.
|
| Talking about this stuff on the Internet is a sloppy,
| haphazard business: deep insight rarely fits in a tweet.
| I don't mind that the blog author is not only saying
| ridiculous things but naming-and-shaming earnest, serious
| pros into the bargain.
|
| I mind that so many people on _this_ site, which I do
| care about, are lining up behind that bullshit.
| kej wrote:
| I think that's kind of the point. You wouldn't want to drive
| your car across a bridge that was live-designed on YouTube. You
| want a bridge that was designed in a boring way with lots of
| redundant safety systems.
| benreesman wrote:
| I'd rather drive in a car where Jon Blow built the software
| than in a car running software by people who throw rocks at
| Jon Blow.
| Xeoncross wrote:
| There are best practices and optimized algorithms. Due to
| ignorance or preference these are sometimes ignored. This results
| in a lot of confusion, bloat, and failed attempts.
|
| We all want less sloppy code and less sloppy abstractions, but
| it's hard to do in the real world with tens of millions of
| developers placed under different constraints.
|
| "I was not given time to do this correctly, I'll just use a
| library that adds 200mb RES but basically works for now."
|
| The Internet Architecture Board, W3C working groups, OWASP,
| OpenTelemetry, and hundreds of other groups are working hard to
| standardize things so we don't have to repeat the same mistakes
| in a problem area. Heck, even community sites like LeetCode help
| raise awareness about sub-optimal solutions to problem spaces.
___________________________________________________________________
(page generated 2022-06-11 23:00 UTC)