[HN Gopher] John Carmack on inlined code (2014)
___________________________________________________________________
John Carmack on inlined code (2014)
Author : bpierre
Score : 463 points
Date : 2024-10-06 16:47 UTC (3 days ago)
(HTM) web link (number-none.com)
(TXT) w3m dump (number-none.com)
| mihaic wrote:
| When I first heard the maxim that an intelligent person should be
| able to hold two opposing thoughts at the same time, I was naive
| to think it meant weighing them for pros and cons. Over time I
| realized that it means balancing contradictory actions, and the
| main purpose of experience is knowing when to apply each.
|
| Concretely related to the topic, I've often found myself inlining
| short pieces of one-time code that made functions more explicit,
| while at other times I'll spend days just breaking up thousand
| line functions into simpler blocks just to be able to follow
| what's going on. In both cases I was creating inconsistencies
| that younger developers nitpick -- I know I did.
|
| My goal in most cases now is to optimize code for the limits of
| the human mind (my own in low-effort mode), and I like to be
| able to treat rules as guidelines. The trouble is: how do you
| scale this to millions of developers, and what are those limits
| of the human mind when more and more AI-generated code is used?
| codeflo wrote:
| There's also the effect that a certain code structure that's
| clearer for a senior dev might be less clear for a junior dev
| and vice versa.
| rob74 wrote:
| Or rather, senior devs have learned to care more for having
| clear code rather than (over-)applying principles like DRY,
| separation of concerns etc., while juniors haven't (yet)...
| stahorn wrote:
      | You usually learn this when you thought you had made
      | "smart" solutions and, many years later, you have to go
      | in and fix the bugs in them.
| newswasboring wrote:
| There is a human side to this which I am going through
        | right now. The first full framework I made is proving to
        | be developer unfriendly in the long run; I put more
        | emphasis on performance than readability (performance was
        | the KPI we were trying to improve at the time). Now I am
        | working with people who are new to the codebase, and I
        | observed they were hesitant to criticize it in front of
        | me. I had to actively start saying "let's remove
        | <framework name>, it's outdated and bad". Eventually I
        | found it liberating; it also helped me detach my self
        | worth from my work, something I struggle with day to day.
| orwin wrote:
      | My 'principle' for DRY is: twice is fine, thrice is worth
      | an abstraction (if you think it has a small to moderate
      | chance to happen again). I used to apply it no matter what,
      | so I guess it's progress...
| whstl wrote:
| I really dislike how this principle ends up being used in
| practice.
|
        | A good _abstraction_ that makes actual sense is perfectly
        | good even when it's used only once.
|
| On the other hand, the idea of deduplicating code by
| creating an _indirection_ is often not worth it for long-
| term maintenance, and is precisely the kind of thing that
| will cause maintenance headaches and anti-patterns.
|
| For example: don't mix file system or low-level database
| access with your business code, just create a proper
        | abstraction. But deduplicating very small fragments of
        | code at the same abstraction level can have detrimental
        | effects in the long run.
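        |
        | A minimal sketch of that first kind of abstraction (all
        | names here are made up for illustration):
        |
        |     #include <fstream>
        |     #include <string>
        |
        |     struct Order { int id; std::string body; };
        |
        |     // Business code depends on this small interface,
        |     // never on the file system directly.
        |     struct OrderStore {
        |       virtual void save(const Order& o) = 0;
        |       virtual ~OrderStore() = default;
        |     };
        |
        |     // One implementation hides the low-level file access.
        |     struct FileOrderStore : OrderStore {
        |       void save(const Order& o) override {
        |         std::ofstream f("order-" + std::to_string(o.id));
        |         f << o.body;
        |       }
        |     };
        |
        |     // Business code stays at one semantic level.
        |     void checkout(OrderStore& store, const Order& o) {
        |       store.save(o);
        |     }
        |
        | Deduplicating two near-identical five-line snippets behind
        | a shared helper, on the other hand, is the indirection case.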
| ahoka wrote:
          | I think the main problem with these abstractions is
          | that they are merely indirections in most cases,
          | limiting their usefulness to a few use cases (sometimes
          | to things that are never going to be needed).
|
          | To quote Dijkstra: "The purpose of abstraction is not to
| be vague, but to create a new semantic level in which one
| can be absolutely precise."
| n0w wrote:
| I can't remember where I picked it up from, but nowadays
| I try to be mindful of when things are "accidentally"
| repeated and when they are "necessarily" repeated.
| Abstractions that encapsulate the latter tend to be a
| good idea regardless of how many times you've repeated a
| piece of code in practice.
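          |
          | A tiny illustration of the difference (invented
          | example; the VAT rate is made up):
          |
          |     #include <string>
          |
          |     // Necessarily repeated: every place that shows a
          |     // price must apply the same business rule, so
          |     // extracting it is right no matter how many copies
          |     // exist today.
          |     double with_vat(double net) { return net * 1.19; }
          |
          |     // Accidentally repeated: these merely look alike
          |     // today; they answer different questions and will
          |     // drift apart, so deduplicating them just couples
          |     // unrelated things.
          |     bool valid_username(const std::string& s) {
          |       return s.size() >= 3;
          |     }
          |     bool valid_tagname(const std::string& s) {
          |       return s.size() >= 3;
          |     }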
| codeflo wrote:
            | Exactly, but distinguishing the two requires an
| excellent understanding of the problem space, and can't
| at all be figured out in the solution space (i.e., by
| only looking at the code). But less experienced people
| only look at the code. In theory, a thousand repetitions
| would be fine if each one encodes an independent bit of
| information in the problem space.
| loup-vaillant wrote:
| The overarching criterion really is how it affects
| locality of behaviour: repeating myself and adding an
| indirection are both bad, the trick is to pick the one
| that will affect locality of behaviour the least.
|
| https://loup-vaillant.fr/articles/source-of-
| readability#avoi...
| ikari_pl wrote:
| twice is fine... except some senior devs apply it to the
| entire file (today I found the second entire file/class
| copied and pasted over to another place... the newer copy
| is not used either)
| tomohawk wrote:
| "Better a little copying than a little dependency" - Russ
| Cox
| syntaxfree wrote:
| WET, write everything twice
| aitchnyu wrote:
        | Do you use a copy paste detector to find the third copy?
| JauntyHatAngle wrote:
| I know it's overused, but I do find myself saying YAGNI to
| my junior devs more and more often, as I find they go off
| on a quest for the perfect abstraction and spend days yak
| shaving as a result.
| silisili wrote:
| Yes! I work with many folks objectively way younger and
| smarter than me. The two bad habits I try to break them
| of are abstractions and what ifs.
|
| They spend so much time chasing perfection that it
| negatively affects their output. Multiple times a day I
| find myself saying 'is that a realistic problem for our
| use case?'
|
| I don't blame them, it's admirable. But I feel like we
| need to teach YAGNI. Anymore I feel like a saboteur,
| polluting our codebase with suboptimal solutions.
|
| It's weird because my own career was different. I was a
| code spammer who learned to wrangle it into something
| more thoughtful. But I'm dealing with overly thoughtful
| folks I'm trying to get to spam more code out, so to
| speak.
| reshlo wrote:
| I've had the opposite experience before. As a young
| developer, there were a number of times where I advocated
| for doing something "the right way" instead of "the good
| enough way", was overruled by seniors, and then later I
| had to fix a bug by doing it "the right way" like I'd
| wanted to in the first place.
|
| Doing it the right way from the start would have saved so
| much time.
| spinningslate wrote:
| This thread is a great illustration of the reality that
| there are no hard rules, judgement matters, and we don't
| always get things right.
|
| I'm pretty long-in-the-tooth and feel like I've gone
| through 3 stages in my career:
|
| 1. Junior dev where everything was new, and did "the
| simplest thing that could possibly work" because I wasn't
| capable of anything else (I was barely capable of the
| simple thing).
|
| 2. Mid-experience, where I'd learned the basics and
| thought I knew everything. This is probably where I wrote
| my worst code: over-abstracted, using every cool
| language/library feature I knew, justified on the basis
| of "yeah, but it's reusable and will solve lots of stuff
| in future even though I don't know what it is yet".
|
| 3. Older and hopefully a bit wiser. A visceral rejection
| of speculative reuse as a justification for solving
| anything beyond the current problem. Much more focus on
| really understanding the underlying problem that actually
| needs solved: less interest in the latest and greatest
| technology to do that with, and a much larger
| appreciation of "boring technology" (aka stuff that's
| proven and reliable).
|
| The focus on really understanding the problem tends to
| create more stable abstractions which do get reused. But
| that's emergent, not speculative ahead-of-time. There are
| judgements all the way through that: sometimes deciding
| to invest in more foundational code, but by default
| sticking to YAGNI. Most of all is seeing my value not as
            | wielding techno armageddon, but solving problems for
| users and customers.
|
| I still have a deep fascination with exploring and
| understanding new tech developments and techniques. I
| just have a much higher bar to adopting them for
| production use.
| ryandrake wrote:
| We all go through that cycle. I think the key is to get
| yourself through that "complex = good" phase as quickly
| as possible so you do the least damage and don't end up
| in charge of projects while you're in it. Get your
| "Second System" (as Brooks[1] put it) out of the way as
| quick as you can, and move on to the more focused, wise
| phase.
|
| Don't let yourself fester in phase 2 and become (as Joel
| put it) an Architecture Astronaut[2].
|
| 1: https://en.wikipedia.org/wiki/Second-system_effect
|
| 2: https://www.joelonsoftware.com/2001/04/21/dont-let-
| architect...
| lcnPylGDnU4H9OF wrote:
| Heh, I've read [2] before but another reading just now
| had this passage stand out:
|
| > Another common thing Architecture Astronauts like to do
| is invent some new architecture and claim it solves
| something. Java, XML, Soap, XmlRpc, Hailstorm, .NET,
| Jini, oh lord I can't keep up. And that's just in the
| last 12 months!
|
| > I'm not saying there's anything wrong with these
| architectures... by no means. They are quite good
| architectures. What bugs me is the stupendous amount of
| millennial hype that surrounds them. Remember the
| Microsoft Dot Net white paper?
|
| Nearly word-for-word the same thing could be said about
| JS frameworks less than 10 years ago.
| whstl wrote:
| Both React and Vue are older than 10 years old at this
| point. Both are older than jQuery was when they were
| released, and both have a better backward compatibility
            | story. The only two real competitors are not that far
            | behind. It's about time for this crappy frontend meme
            | to die.
|
| Even SOAP didn't really live that long before it started
| getting abandoned en masse for REST.
|
| As someone who was there in the "last 12 months" Joel
| mentions, what happened in enterprise is like a different
| planet altogether. Some of this technology had a
| completely different level of complexity that to this day
| I am not able to grasp, and the hype was totally
| unwarranted, unlike actual useful tech like React and Vue
| (or, out of that list, Java and .NET).
| jcgrillo wrote:
| > The focus on really understanding the problem tends to
| create more stable abstractions which do get reused. But
| that's emergent, not speculative ahead-of-time.
|
| I think this takes a kind of humility you can't teach. At
| least it did for me. To learn this lesson I had to
| experience in reality what it's actually like to work on
| software where I'd piled up a bunch of clever ideas and
| "general solutions". After doing this enough times I
| realized that there are very few general solutions to
| real problems, and likely I'm not smart enough to game
| them out ahead of time, so better to focus on things I
| can actually control.
| james_marks wrote:
| > Most of all is seeing my value not as wielding techno
| armageddon, but solving problems for users and customers
|
| Also later in my career, I now know: change begets
| change.
|
| That big piece of new code that "fixes everything" will
| have bugs that will only be discovered by users, and
| stability is achieved over time through small, targeted
| fixes.
| wild_egg wrote:
| The important bit is figuring out if those times where
| "the right way" would have helped outweigh the time saved
| by defaulting to "good enough".
|
| There are always exceptions, but there's typically order
| of magnitude differences between globally doing "the
| right thing" vs "good enough" and going back to fix the
| few cases where "good enough" wasn't actually good
| enough.
| bluGill wrote:
| Only long experience can help you figure this out. All
| projects should have at least 20% of the developers who
| have been there for more than 10 years so they have
| background context to figure out what you will really
            | need. You then need at least another 30% of your
            | developers who are intended to be long-term employees
            | but have less than 10 years. In turn that means never
            | more than 50% of
| your project should be short term contractors. Nothing
| wrong with short term contractors - they often can write
| code faster than the long term employees (who end up
| spending a lot more time in meetings) - but their lack of
| context means that they can't make those decisions
| correctly and so need to ask (in turn slowing down the
| long term employees even more)
|
            | If you are on a true green-field project - one your
            | organization has never done before - good luck. Do the
            | best you can, but beware that you will regret a lot. Even
| if you have those long term employees you will do things
| you regret - just not as much.
| sdeframond wrote:
| > then later I had to fix a bug
|
| How much later? Is it possible that by delivering sooner
| your team was able to gain insight and/or provide value
| sooner? That matters!
| pjmlp wrote:
            | Here is an unwanted senior tip: in many consulting
            | projects, without "the good enough way" first, there
            | isn't anything left for doing "the right way" later on.
| JasserInicide wrote:
| Everything in moderation, even moderation.
| pastaguy1 wrote:
| This isn't meant to be taken too literally or
| objectively, but I view YAGNI as almost a meta principle
| with respect to the other popular ones. It's like an
| admission that you won't always get them right, so in the
| words of Bukowski, "don't try".
| james_marks wrote:
| Agreed. I've been trying to dial in a rule of thumb:
|
| If you aren't using the abstraction on 3 cases when you
| build it, it's too early.
|
| Even two turns into a higher bar than I expected.
| recursive wrote:
| It's more case by case for me. A magic number should get
| a named constant on its first use. That's an abstraction.
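          |
          | For instance, a small sketch (the constant name is
          | just an example):
          |
          |     constexpr int kSecondsPerDay = 86400;
          |
          |     bool is_stale(int age_seconds) {
          |       // before: return age_seconds > 86400;
          |       return age_seconds > kSecondsPerDay;
          |     }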
| spc476 wrote:
| C++ programmers decided against NULL, and for well over a
| decade, recommended using a plain 0. It was only recently
| that they came up with a new name: nullptr. Sigh.
| int_19h wrote:
| That had to do with the way NULL was defined, and the
| implications of that. The implication carried over from C
| was that NULL would always be null _pointer_ as opposed
| to 0, but in practice the standard defined it simply as 0
            | - because C-style (void*)0 wasn't compatible with all
            | pointer types anymore - so stuff like:
            |
            |     void foo(void*);
            |     void foo(int);
            |
            |     foo(NULL);
            |
            | would resolve to foo(int), which is very much
            | contrary to expectations for a null _pointer_; and
            | worse yet, the
| wrong call happens silently. With foo(0) that behavior is
| clearer, so that was the justification to prefer it.
|
| On the other hand, if you accept the fact that NULL is
| really just an alias for 0 and not specifically a null
| pointer, then it has no semantic meaning as a named
| constant (you're literally just spelling the numeric
| value with words instead of digits!), and then it's about
| as useful as #define ONE 1
|
| And at the same time, that was the only definition of
| NULL that was backwards compatible with C, so they
| couldn't just redefine it. It had to be a new thing like
| nullptr.
|
| It is very unfortunate that nullptr didn't ship in C++98,
| but then again that was hardly the biggest wart in the
| language at the time...
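            |
            | A self-contained sketch of that resolution, and of
            | why nullptr behaves the way a reader expects (how
            | NULL itself expands is implementation-defined, so
            | plain 0 is used here):
            |
            |     #include <cstdio>
            |
            |     void foo(void*) { std::puts("foo(void*)"); }
            |     void foo(int)   { std::puts("foo(int)"); }
            |
            |     int main() {
            |       foo(0);        // picks foo(int), silently
            |       foo(nullptr);  // picks foo(void*), as intended
            |     }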
| randomdata wrote:
            | Your documentation will tell you when you need an
            | abstraction. Where there is something relevant to
            | document, there is a relevant abstraction. If it's not
            | worth documenting, it is not worth abstracting. Of
| course, the hard part is determining what is actually
| relevant to document.
|
| The good news is that programmers generally hate writing
| documentation and will avoid it to the greatest extent
| possible, so if one is able to overcome that friction to
| start writing documentation, it is probably worthwhile.
|
| Thus we can sum the rule of thumb up to: If you have
| already started writing documentation for something, you
| are ready for an abstraction in your code.
| sgu999 wrote:
| good devs*, not all senior devs have learned that, sadly.
| As a junior dev I've worked under the rule of senior devs
| who were over-applying arbitrary principles, and that
| wasn't fun. Some absolute nerds have a hard time
| understanding where their narrow expertise is meant to fit,
| and they usually don't get better with age.
| zeroq wrote:
| As someone who recently had to go over a large chunk of
| code written by myself some 10-15 years ago I strongly
| agree with this sentiment. Despite being a mature
| programmer already at that time, I found a lot of magic and
| gotchas that were supposed to be, and felt at the time,
| super clever, but now, without a context, or prior version
| to compare, they are simply overcomplicated.
| devjab wrote:
| I find that it's typically the other way around as things
| like DRY, SOLID and most things "clean code" are hopeless
| anti-patterns peddled by people like Uncle Bob who haven't
| actually worked in software development since Fortran was
| the most popular language. Not that a lot of these things
| are bad as a principle. They come with a lot of "okish"
| ideas, but if you follow them religiously you're going to
| write really bad code.
|
      | I think the only principle in programming that can be
      | followed at all times is YAGNI (you aren't going to need
      | it). I think every programming course, book, whatever
      | should start by telling you to never, ever abstract things
      | until you absolutely can't avoid it. This includes DRY.
| It's a billion times better to have similar code in
| multiple locations that are isolated in their purpose, so
| that down the line, two-hundred developers later you're not
| sitting with code where you'll need to "go to definition"
| fifteen times before you get to the code you actually need
| to find.
|
| Of course the flip-side is that, sometimes, it's ok to
| abstract or reuse code. But if you don't have to, you
| should never ever do either. Which is exactly the opposite
| of what junior developers do, because juniors are taught
| all these "hopeless" OOP practices and they are taught to
| mindlessly follow them by the book. Then 10 years later (or
| like 50 years in the case of Uncle Bob) they realise that
| functional programming is just easier to maintain and more
| fun to work with because everything you need to know is
| happening right next to each other and not in some obscure
| service class deep in some ridiculous inheritance tree.
| int_19h wrote:
| The problem with repeating code in multiple places is
| that when you find a bug in said code, it won't actually
| be fixed in all the places where it needs to be fixed.
| For larger projects especially, it is usually a
| worthwhile tradeoff versus having to peel off some extra
| abstraction layers when reading the code.
|
| The problems usually start when people take this as an
| opportunity to go nuts on generalizing the abstraction
| right away - that is, instead of refactoring the common
| piece of code into a simple function, it becomes a
| generic class hierarchy to cover all conceivable future
| cases (but, somehow, rarely the actual future use case,
| should one arise in practice).
|
| Most of this is just cargo cult thinking. OOP is a valid
| tool on the belt, and it is genuinely good at modelling
| certain things - but one needs to understand _why_ it is
| useful there to know when to reach for it and when to
| leave it alone. That is rarely taught well (if at all),
| though, and even if it is, it can be hard to grok without
| hands-on experience.
| kolinko wrote:
| I bumped into that issue, and it caused a lot of friction
| between me and 3 young developers I had to manage.
|
| Ideas on how to overcome that?
| whstl wrote:
| Teaching.
|
| I had this problem with an overzealous junior developer and
| the solution was showing some different perspectives. For
| example John Ousterhout's A Philosophy of Software Design.
| thelastparadise wrote:
| I tried this but they just come back with retorts like
| "OK boomer" which tends to make the situation even worse.
|
| How do you respond to that?
| cbrozefsky wrote:
| Fire them.
| garblegarble wrote:
| The sibling comment says "fire them". That sounds glib,
| but it's the correct solution here.
|
| From what you've described, you have a coworker who is
| not open to learning and considering alternative
| solutions. They are not able to defend their approach,
| and are instead dismissive (and using an ageist joke to
| do it). This is toxic to a collaborative work
| environment.
|
| I give some leeway to assholes who can justify their
| reasoning. Assholes who just want their way because it's
| their way aren't worth it and won't make your product
| better.
| yodsanklai wrote:
| This is a discriminatory statement and it should be taken
| seriously.
| jerf wrote:
| To be honest, at the point where they are being insulting
| I also agree firing them is a very viable alternative.
|
| However, to answer the question more generally, I've had
| some success first acknowledging that I agree the
| situation is suboptimal, and giving some of the reasons.
| These reasons vary; we were strapped for time, we simply
| didn't know better yet, we had this and that specific
| problem to deal with, sometimes it's just straight up
| "yeah I inherited that code and would never have done
| that", honestly.
|
| I then indicate my willingness to spend some time fixing
| the issues, but make it clear that there isn't going to
| be a Big Bang rewriting session, but that we're going to
| do it incrementally, with the system working the whole
| time, and they need to conceive of it that way. (Unless
            | it is the rare situation where a rewrite is needed.)
            | This tends to limit the blast radius of any
| specific suggestion.
|
| Also, as a senior engineer, I do not 100% prioritize
| "fixing every single problem in exactly the way I'd do
| it". I will selectively let certain types of bad code
| through so that the engineer can have experience of it. I
| may not let true architecture astronautics through, but
| as long as it is not entirely unreasonable I will let a
| bit more architecture than perhaps I would have used
| through. I think it's a common fallacy of code review to
| think that the purpose of code review is to get the code
| to be _exactly_ as "I" would have written it, but that's
| not really it.
|
| Many people, when they see this degree of flexibility,
| and that you are not riding to the defense of every
| coding decision made in the past, and are willing to take
| reasonable risks to upgrade things, will calm down and
| start working with you. (This is also one of the subtle
            | reasons automated tests are _super super important_; it
| is far better for them to start their refactoring and
| have the automated tests explain the difficulties of the
| local landscape to them than a developer just
| blathering.)
|
| There will be a set that do not. Ultimately, that's a
| time to admit the hire was a mistake and rectify it
| appropriately. I don't believe in the 10x developer, but
| not for the usual egalitarian reasons... for me the
| problem is I firmly, _firmly_ believe in the existence of
| the net-negative developer, and when you have those the
| entire 10x question disappears. Net negative is not a
| permanent stamp, the developer has the opportunity to
| work their way out of it, and arguably, we _all_ start
| there both as a new developer and whenever we start a new
            | job/position, so let me soothe the egalitarian impulse by
| saying this is a description of someone at a point in
| time, not a permanent label to be applied to anyone.
| Nevertheless, someone who _insists_ on massive changes,
| who deploys morale-sapping insults to get their way,
| whose ego is tied up in some specific stack that you 're
| not using and basically insists either that we drop
| everything and rewrite now "or else", who one way or
| another refuses to leave "net negative" status... well,
| it's time to take them up on the "or else". I've
| exaggerated here to paint the picture clearly in prose,
| but, then again, of the hundreds of developers I've
| interacted with to some degree at some point, there's a
| couple that match every phrase I gave, so it's not like
| they don't exist at all either.
| AnimalMuppet wrote:
| Or, perhaps better, just let that hang for a moment -
| long enough to become uncomfortable - and then say "Try
| again."
|
| As others have said, if they can't or won't get that
| that's unacceptable behavior, fire them. (jerf is more
| patient than I am...)
| ziml77 wrote:
| You mean they literally say "ok boomer"? If so they are
| not mature enough for the job. That phrase is equivalent
| to "fuck off" with some ageism slapped on top and is
| totally unacceptable for a workplace.
| mjburgess wrote:
| To a certain sort of person, conversation is a game of arriving
  | at these antithesis statements:
  |
  |   * Inlining code is the best form of breaking up code.
  |   * Love is evil.
  |   * Rightwing populism is a return to leftwing politics.
  |   * etc.
|
| The purpose is to induce aporia (puzzlement), and hence make it
| possible to evaluate apparent contradictions. However, a lot of
| people resent feeling uncertain, and so, people who speak this
| way are often disliked.
| skummetmaelk wrote:
| That doesn't seem like holding two opposing thoughts. Why is
| balancing contradictory actions to optimize an outcome
| different to weighing pros and cons?
| mihaic wrote:
| What I meant to say was that when people encounter
| contradictory statements like "always inline one-time
| functions" and "breakdown functions into easy to understand
| blocks", they try to only pick one single rule, even if they
| consider the pros and cons of each rule.
|
| After a while they consider both rules as useful, and will
    | move to a more granular case-by-case analysis. Some people
| get stuck at rule-based thinking though, and they'll even
| accuse you of being inconsistent if you try to do case-by-
| case analysis.
| leoh wrote:
| You are probably reaching for Hegel's concept of dialectical
| reconciliation
| mihaic wrote:
| Not sure, didn't Hegel say that there should be a synthesis
| step at some point? My view is that there should never be a
| synthesis when using these principles as tools, as both
| conflicting principles need to always maintain opposites.
|
| So, more like Heraclitus's union of opposites maybe if you
| really want to label it?
| greenie_beans wrote:
| the synthesis would be the outcome maybe? writing code that
| doesn't follow either rule strictly:
|
| > Concretely related to the topic, I've often found myself
| inlining short pieces of one-time code that made functions
| more explicit, while at other times I'll spend days just
| breaking up thousand line functions into simpler blocks
| just to be able to follow what's going on. In both cases I
| was creating inconsistencies that younger developers
| nitpick -- I know I did.
| peepee1982 wrote:
| That's exactly what I try to do. I think it's an unpopular
| opinion though, because there are no strict rules that can be
| applied, unlike with pure ideologies. You have to go by feel
| and make continuous adjustments, and there's no way to know if
| you did the right thing or not, because not only do different
| human minds have different limits, but different challenges
| don't tax every human mind to the same proportional extent.
|
  | I get the impression that programmers don't like ambiguity in
| general, let alone in things they have to confront in real
| life.
| mr_toad wrote:
| > there are no strict rules that can be applied
|
| The rules are there for a reason. The tricky part is making
| sure you're applying them for that reason.
| peepee1982 wrote:
| I don't know what your comment has to do with my comment.
| tomohawk wrote:
| What makes an apprentice successful is learning the rules of
| thumb and following them.
|
| What makes a journeyman successful is sticking to the rules of
| thumb, unless directed by a master.
|
| What makes a master successful is knowing why the rules of
| thumb exist, what their limits are, when to not follow them,
| and being able to make up new rules.
| moss2 wrote:
| That maxim ("an intelligent person should be able to hold two
| opposing thoughts at the same time") is also used by religious
| groups to indoctrinate people. Be careful!
| navane wrote:
| So this maxim can both be used for good and for bad. Extra
| points for this maxim.
| wrasee wrote:
| A metamaxim?
| echelon wrote:
| As a tool, it's a wedge to break indoctrination and overcome
| bias. It leads to more pragmatic and less ideological
| thinking. The subject is compelled to contrast opposing views
| and consider the merits of each.
|
| Any use by ideological groups twists the purpose of the
| phrase on its head. The quote encourages thinking and
| consideration. You'd have to turn off your brain for this to
| have the opposite effect.
| darkwater wrote:
| > Any use by ideological groups twists the purpose of the
| phrase on its head. The quote encourages thinking and
| consideration. You'd have to turn off your brain for this
| to have the opposite effect.
|
| Well, it would not be too surprising that it can be used
| to, for example, make people think that they can trust
| science and also believe in some almighty, unexplainable by
| science divine entity.
| desdenova wrote:
| You can trust science, but science doesn't cover all of
| reality.
|
| My imaginary friend does, buy my magic book.
| asenchi wrote:
| Thoughts like this miss the purpose and significance of
| the maxim being discussed. Science doesn't disprove an
| "almighty, unexplainable divine entity" any more than an
| "almighty, unexplainable divine entity" could also
| provide science as a means to understand the nature of
| things.
|
| Careful you don't fall into the trap of indoctrination.
| :)
| kevin_thibedeau wrote:
| The US has a statutory rapist and someone who believes in
| active weather manipulation seated in Congress. It's easy
| to get the masses to turn off their brains.
| andrewmcwatters wrote:
| Indoctrination is the exact opposite.
| ykonstant wrote:
| Sure, but the maxim can be used to inject this 'exact
| opposite', in perfect accordance with the maxim!
| crabbone wrote:
| Maybe "indoctrination" was a poor choice of word here. The
| problem with this maxim is that it welcomes moral
| relativism.
|
| This can be bad on the assumption that whoever is exposed
| to the maxim is not a proponent of "virtue ethics" (I use
| this as a catch-all term for various religious ethics
| doctrines, the underlying idea is that moral truths are
| given to people by a divine authority rather than
| discovered by studying human behavior, needs and
| happiness). In this situation, the maxim is an invitation
| to embrace ideas that aren't contradictory to one's own,
| but that live "outside the system", to put them on equal
| footing.
|
      | To make this more concrete, let's consider the subject of
| child brides. Some religions have no problem with marrying
| girls of any age to men of any age. Now, the maxim suggests
| that no matter what your moral framework looks like, you
| should accept that under some circumstances it's OK to have
| child marriages. But, this isn't a contradiction. There's
| no ethical theory that's not based on divine revelation
| that would accept such a thing. And that's why, by and
| large, the Western society came to treat child marriages as
| a crime.
|
| Contradictions are only possible when two parties agree on
| the premises that led to contradicting conclusion, and, in
| principle, should be possible to be resolved by figuring
| out which party had a faulty process that derived a
| contradicting opinion. Resolving such contradictions is a
| productive way forward. But, the kind of "disagreement"
| between religious ethics and "derived" ethics is where the
| premises are different. So, there can be no way forward in
| an argument between the two, because the only way the two
| can agree is if one completely abandons their premises.
|
| Essentially, you can think about it as if two teams wanted
| to compete in some sport. If both are playing soccer, then
| there's a meaning to winning / losing, keeping the score,
| being good or bad at the game. But, if one team plays
| soccer while another team is playing chess... it just
| doesn't make sense to pit them against each other.
| wavemode wrote:
| > maxim suggests that no matter what your moral framework
| looks like, you should accept that under some
| circumstances it's OK to have child marriages
|
| You seem to have either misread the maxim, or
| misunderstood it.
|
| The maxim is not that an intelligent person -must- hold
| two contradictory thoughts in their head at once -
| rather, that they should be able to. Being "able to" do
| something, does not mean one does it in all cases.
|
| To say that the maxim suggests that someone "should"
| accept that something that is bad, is sometimes good, is
| a plain misreading of the text. All it's saying is that
| people -can- do this, if they so choose.
| crabbone wrote:
| In this context, it doesn't matter if they "must" or
| "should be able to". No, I didn't misunderstand the
| maxim. No, I didn't mean that it has to happen in all
| cases. You are reading something into what I wrote that I
| didn't.
|
| The maxim is not used by religious people to its intended
| effect. Please read again, if you didn't see it the first
| time. The maxim is used as a challenge that can be
| rephrased as: "if you are as intelligent as you claim,
| then you should be able to accept both what you believe
| to be true and whatever nonsense I want you to believe to
| be true."
| astrolx wrote:
| Doublethink!
| inopinatus wrote:
| Stretch goal: hold three
| j_bum wrote:
| A safe work contribution plan for the year: Hold 1+
| (stretch 3) opposing thoughts at a time.
| inopinatus wrote:
| "hold one opposing thought" could be a zen koan
| oofoe wrote:
| Nah. That's what the Monk is for.
| divs1210 wrote:
| A person of culture, I see.
|
| Electric Monks were made for a reason.
|
| Surprisingly pertinent to the current discussion.
| inopinatus wrote:
| apposite to the opposite
| hibernator149 wrote:
| Wait, isn't that just Doublethink from 1984? Holding two
| opposing thoughts is a sign that your mental model of the world
| is wrong and that it needs to be fixed. Where have you heard
| that maxim?
| HKH2 wrote:
| It's not referring to cognitive dissonance.
| perrygeo wrote:
| No you've got it completely backwards. Reality has multiple
| facets (different statements, all of which can be true) and a
| mental model that insists on a singular judgement is
| reductionist, missing the forest for the trees. Light is a
| wave and a particle. People are capable of good and bad. The
| modern world is both amazing and unsustainable. etc.
|
| Holding multiple truths is a sign that you understand the
| problem. Insisting on a singular judgement is a sign that
| you're just parroting catchy phrases as a short cut to
    | thinking; the real world is rarely so cut and dried.
| ragnese wrote:
| > My goal in most cases now is to optimize code for the limits
| of the human mind (my own in low-effort mode) and like to be
| able to treat rules as guidelines. The trouble is how can you
| scale this to millions of developers, and what are those limits
| of the human mind when more and more AI-generated code will be
| used?
|
| I think the truth is that we just CAN'T scale that way with the
| current programming languages/models/paradigms. I can't PROVE
| that hypothesis, but it's not hard to find examples of big
| software projects with lots of protocols, conventions,
| failsafes, QA teams, etc, etc that are either still hugely
| difficult to contribute to (Linux kernel, web browsers, etc) or
| still have plenty of bugs (macOS is produced by the richest
| company on Earth and a few years ago the CALCULATOR app had a
| bug that made it give the wrong answers...).
|
| I feel like our programming tools are pretty good for
| programming in the small, but I suspect we're still waiting for
| a breakthrough for being able to actually make complex software
| reliably. (And, no, I don't just mean yet another "framework"
| or another language that's just C with a fancier type system or
| novel memory management)
|
| Just my navel gazing for the morning.
| austin-cheney wrote:
| I suppose that depends on the language and the elegance of
| your programming paradigm. This is where primitive simplicity
| becomes important, because when your foundation is composed
| of very few things that are not dependent upon each other you
| can scale almost indefinitely in every direction.
|
| Imagine you are limited to only a few ingredients in
| programming: statements, expressions, functions, objects,
| arrays, and operators that are not overloaded. That list does
| not contain classes, inheritance, declarative helpers, or a
| bunch of other things. With a list of ingredients so small no
| internal structure or paradigm is imposed on you, so you are
    | free to make any design decisions that you want. Those
    | creative decisions about the organization of things are how
    | you dictate the scale of it all.
|
| Most people, though, cannot operate like that. They claim to
| want the freedom of infinite scale, but they just need a
    | little help. The more help is supplied by the language,
    | framework, or whatever, the less freedom you have to make your
| own decisions. Eventually there is so much help that all you
| do as a programmer is contend with that helpful goodness
| without any chance to scale things in any direction.
| twh270 wrote:
| I think the only way this gets better is with software
| development tools that make it impossible to create invalid
| states.
|
| In the physical world, when we build something complex like a
| car engine, a microprocessor, or bookcase, the laws of
| physics guide us and help prevent invalid states. Not all of
| them -- an upside down bookcase still works -- but a lot of
| them.
|
| Of course, part of the problem is that when we build the
| software equivalent of an upside down bookcase, we 'patch' it
| by creating trim and shims to make it look better and more
| structurally sound instead of tossing it and making another
| one the right way.
|
| But mostly, we write software in a way that allows for a ton
| of incorrect states. As a trivial example, expressing a
| person's age as an 'int', allowing for negative numbers. As a
| more complicated example, allowing for setting a coupon's
| redemption date when it has not yet been clipped.
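    |
    | A small sketch of what that can look like, loosely based on
    | the two examples above (types and rules are hypothetical):
    |
    |     #include <chrono>
    |     #include <optional>
    |     #include <stdexcept>
    |
    |     // An age can't go negative if the type refuses to
    |     // hold a negative value.
    |     class Age {
    |     public:
    |       explicit Age(int years) : years_(years) {
    |         if (years < 0) throw std::invalid_argument("age");
    |       }
    |       int years() const { return years_; }
    |     private:
    |       int years_;
    |     };
    |
    |     using Clock = std::chrono::system_clock;
    |
    |     // A coupon only gains a redemption time after it has
    |     // been clipped, because that's the only path the type
    |     // exposes.
    |     class Coupon {
    |     public:
    |       void clip() { clipped_ = true; }
    |       bool redeem() {
    |         if (!clipped_) return false;  // state ruled out
    |         redeemed_at_ = Clock::now();
    |         return true;
    |       }
    |     private:
    |       bool clipped_ = false;
    |       std::optional<Clock::time_point> redeemed_at_;
    |     };
    |
    | The laws-of-physics analogy only goes so far, but the type
    | system can at least brick up some of the invalid states.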
| james_marks wrote:
| To determine what states should be possible is the act of
| writing software.
| bunderbunder wrote:
| John Backus's Turing Award lecture meditated on this idea,
| and concluded that the best way to do this at scale is to
| simply minimize the creation of states in the first place,
| and be careful and thoughtful about _where_ and _how_ we
      | create the states that can't be avoided.
|
| I would argue that that's actually a better guide to how we
| manage complexity in the physical world. Mechanical
| engineers generally like to minimize the number of moving
| parts in a system. When they can't avoid moving parts, they
| tend to fixate on them, and put a lot of effort into
| creating linkages and failsafes to try to prevent them from
| interacting in catastrophic ways.
|
| The software engineering way would be to create extra
| moving parts just because complicated things make us feel
| smart, and deal with potential adverse interactions among
| them by posting signs that say "Careful, now!" without
| clearly explaining what the reader is supposed to be
| careful of. 50 years later, people who try to stick to the
| (very sound!) principles that Backus proposed are still
| regularly dismissed as being hipsters and pedants.
| int_19h wrote:
| I'd say that the extra moving parts are there in most
| cases not because someone wanted to "feel smart" (not
| that it doesn't happen), but to make the pre-existing
| moving parts do something that they weren't originally
| supposed to do, because nobody understands how those pre-
| existing parts work well enough to re-engineer them
| properly on the schedule that they are given. We are an
| industry that builds bridges out of matchsticks, duck
| tape, and glue, and many of our processes are basically
| about how to make the result of that "good enough".
| bluGill wrote:
| I don't think we will ever get the breakthrough you are
| looking for. Things like design patterns and abstractions are
| our attempt at this. Eventually you need to trust that
    | whoever wrote the other code you have to deal with is sane.
    | This assumption is false (and it might be you who is insane,
    | thinking they could/would make it work the way you think it
| does).
|
| We will never get rid of the need for QA. Automated tests are
| great, I believe in them (Note that I didn't say unit tests
| or integration tests). Formal proofs appear great (I have
| never figured out how to prove my code), but as Knuth said
| "Beware of bugs in the above code; I have only proved it
    | correct, not tried it". There are many ways code can meet
    | the spec and yet be wrong, because in the real world you rarely
| understand the problem well enough to write a correct spec in
| the first place. QA should understand the problem well enough
| to say "this isn't what I expected to happen."
| DSMan195276 wrote:
| > protocols, conventions, failsafes, QA teams, etc, etc that
| are either still hugely difficult to contribute to (Linux
| kernel, web browsers, etc)
|
| To be fair here, I don't think it's reasonable to expect that
| once you have "software development skills" it automatically
| gives you the ability to fix any code out there. The Linux
| Kernel and web browsers are not hard to contribute to because
| of conventions, they're hard because most of that code
| requires a lot of outside knowledge of things like hardware
| or HTML spec, etc.
|
| The actual submitting part isn't the easiest, but it's well
| documented if you go looking, I'm pretty sure most people
| could handle it if they really had a fix they wanted to
| submit.
| ragnese wrote:
| There are multiple reasons that contributing to various
| projects may be difficult. But, I was replying to a
| specific comment about writing code in a way that is easy
| to understand, and the comment author's acknowledgement
| that this idea/practice is hard to scale to a large number
| of developers (presumably because everyone's skills are
| different and because we each have different ideas about
| what is "clear", etc).
|
| So, my comment was specifically about code. Yes, developing
| a kernel driver requires knowledge of the hardware and its
| quirks. But, if we're just talking about the code, why
| shouldn't a competent C developer be able to read the code
| for an existing hardware driver and come away understanding
| the hardware?
|
| And what about the parts that are NOT related to fiddly
| hardware? For example, look at all of the recent drama with
| the Linux filesystem maintainer(s) and interfacing with
| Rust code. Forget the actual human drama aspect, but just
| think about the technical code aspect: The Rust devs can't
| even figure out what the C code's semantics are, and the
| lead filesystem guy made some embarrassing outbursts saying
| that he wasn't going to help them by explaining what the
| actual interface contracts are. It's probably because he
| doesn't even know what his own section of the kernel does
| in the kind of detail that they're asking for... That last
| part is my own speculation, but these Rust guys are _also_
      | competent at working with C code and they can't figure out
| what assumptions are baked into the C APIs.
|
| Web browser code has less to do with nitty gritty hardware.
| Yet, even a very competent C++ dev is going to have a ton
| of trouble figuring out the Chromium code base. It's just
| too hard to keep trying to use our current tools for these
| giant, complex, software projects. No amount of convention
| or linting or writing your classes and functions to be
| "easy to understand" is going to _really_ matter in the big
| picture. Naming variables is hard and important to do well,
| but at the scale of these projects, individual variable
      | names simply don't matter. It's hard to even figure out
| what code is being executed in a given context/operation.
| tomjakubowski wrote:
| > Yet, even a very competent C++ dev is going to have a
| ton of trouble figuring out the Chromium code base.
|
| I don't think this is true, or at least it wasn't circa
| 2018 when I was writing C++ professionally and semi-
| competently. I sometimes had to read, understand and
| change parts of the Chromium code base since I was
| working on a component which integrated CEF. Over time I
| began to think of Chromium as a good reference for how to
| maintain a well-organized C++ code base. It's remarkably
| plain and understandable, greppable even. Eventually I
| was able to contribute a patch or two back to CEF.
|
| The hardest thing by far with respect to making those
| contributions wasn't understanding the C++, it was
| understanding how to work the build system for
| development tasks.
| TheHegemon wrote:
| Also agree that the example code base is not the best
| example to use.
|
| The Chromium code base is a joy to read and I would
| routinely spend hours just reading it to understand
| deeper topics relating to the JS runtime.
|
          | Compare that to my company's much smaller code base,
          | where it would take hours just to understand the
          | simplest things because it was written so terribly.
| knodi wrote:
| > I feel like our programming tools are pretty good for
| programming in the small, but I suspect we're still waiting
| for a breakthrough for being able to actually make complex
| software reliably. (And, no, I don't just mean yet another
| "framework" or another language that's just C with a fancier
| type system or novel memory management)
|
    | Readability is a human optimization: for yourself, for other
    | people's posterity, and for code comprehension in the
    | reader's mind. We need a new way to visualize and comprehend
    | code that doesn't involve heavy reading and the reader's
    | personal capabilities of syntax parsing/comprehension.
|
| This is something we will likely never be able to get right
| with our current man machine interfaces; keyboard,
| mouse/touch, video and audio.
|
| Just a thought. As always I reserve the right to be wrong.
| skydhash wrote:
      | Reading is more than enough. What's often lacking is the
      | why. I can understand the code and what it's doing, but I
      | may not understand the problem (and sub-problems) it's
      | solving. When you can find explanations for that (links to
      | PR discussions, archives of mail threads, and forum
      | posts), it's great. But some don't bother, or it's
      | somewhere in chat logs.
| madisp wrote:
| calculator app on latest macos (sequoia) has a bug today - if
| you write FF_16 AND FF_16 in the programmer mode and press =,
| it'll display the correct result - FF_16, but the history
| view displays 0_16 AND FF_16 for some reason.
| mgsouth wrote:
| We've been there, done that. CRUD apps on mainframes and
| minis had incredibly powerful and productive languages and
| frameworks (Quick, Quiz, QTP: you're remembered and missed.)
| Problem is, they were TUI (terminal UI), isolated, and
    | extremely focused; i.e. limited. They _functioned_, but
| would be like straight-jackets to modern users.
|
| (Speaking of... has anyone done a 80x24 TUI client for HN?
| That would be interesting to play with.)
| JadeNB wrote:
| > macOS is produced by the richest company on Earth and a few
| years ago the CALCULATOR app had a bug that made it give the
| wrong answers...
|
| This is stated as if surprising, presumably because we think
| of a calculator app as a simple thing, but it probably
| shouldn't be that surprising--surely the calculator app isn't
| used that often, and so doesn't get much in-the-field
| testing. Maybe you've occasionally used the calculator in
| Spotlight, but have you ever opened the app? I don't think I
| have in 20 years.
| grbsh wrote:
| > limits of the human mind when more and more AI-generated code
| will be used
|
| We already have a technology which scales infinitely with the
| human mind: abstraction and composition of those abstractions
| into other abstractions.
|
| Until now, we've focused on getting AI to produce correct code.
| Now that this is beginning to be successful, I think a
| necessary next step for it to be useful is to ensure it
| produces well-abstracted and clean code (such that it scales
| infinitely)
| defaultcompany wrote:
| > My goal in most cases now is to optimize code for the limits
| of the human mind (my own in low-effort mode)
|
| I think you would appreciate the philosophy of the Grug Brained
| Developer: https://grugbrain.dev
| gspencley wrote:
| My intro to programming was that I wanted to be a game
| developer in the 90s. Carmack and the others at Id were my
| literal heroes.
|
  | Back then, a lot of code optimization was magic to me. I still
| just barely understand the famous inverse square root
| optimization in the Quake III Arena source code. But I wanted
| to be able to do what those guys were doing. I wanted to learn
| assembly and to be able to drop down to assembly and to know
| where and when that would help and why.
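  |
  | (For anyone who hasn't seen it, this is roughly the routine
  | from the released Quake III source, minus its saltier
  | comments; it estimates 1/sqrt(x) with a bit-level trick plus
  | one Newton-Raphson step:)
  |
  |     float Q_rsqrt(float number) {
  |       long i;
  |       float x2, y;
  |       const float threehalfs = 1.5F;
  |
  |       x2 = number * 0.5F;
  |       y  = number;
  |       i  = *(long *) &y;           // bit-level reinterpret
  |       i  = 0x5f3759df - (i >> 1);  // the magic constant
  |       y  = *(float *) &i;
  |       y  = y * (threehalfs - (x2 * y * y));  // Newton step
  |       return y;
  |     }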
|
| And I wasn't alone. This is because these optimizations are not
| obvious. There is a "mystique" to them. Which makes it cool. So
| virtually ALL young, aspiring game programmers wanted to learn
| how to do this crazy stuff.
|
| What did the old timers tell us?
|
| Stop. Don't. Learn how to write clean, readable, maintainable
| code FIRST and then learn how to profile your application in
| order to discover the major bottlenecks and then you can
| optimize appropriately in order of greatest impact descending.
|
| If writing the easiest code to maintain and understand also
| meant writing the most performant code, then the concept of
| code optimization wouldn't even exist. The two are mutually
  | exclusive, except in specific cases where they're not, and then
| it's not even worth discussing because there is no conflict.
|
| Carmack seems to acknowledge this in his email. He realizes
| that inlining functions needs to be done with careful judgment,
| and the rationale is both performance and bug mitigation. But
| that if inlining were adopted as a matter of course, a policy
| of "always inline first", the results would quickly be an
| unmaintainable, impossible to comprehend mess that would swing
| so far in the other direction that bugs become more prominent
| because you can't touch anything in isolation.
|
| And that's the bane of software development: touch one thing
| and end up breaking a dozen other things that you didn't even
| think about because of interdependence.
|
| So we've come up with design patterns and "best practices" that
| allow us to isolate our moving parts, but that has its own set
| of trade-offs which is what Carmack is discussing.
|
| Being a 26 year veteran in the industry now (not making games
| btw), I think this is the type of topic that you need to be
| very experienced to be able to appreciate, let alone to be able
| to make the judgment calls to know when inlining is the better
| option and why.
| tetha wrote:
  | I had exactly this discussion today, in an architectural
  | conversation about an infrastructure extension. As our
| newest team member noted, we planned to follow the reference
| architecture of a system in some places, and chose not to
| follow the reference architecture in other places.
|
| And this led to a really good discussion pulling the reference
| architecture of this system apart and understanding what it
| optimizes for (resilience and fault tolerance), what it
| sacrifices (cost, number of systems to maintain) and what we
| need. And yes, following the reference architecture in one
| place and breaking it in another place makes sense.
|
| And I think that understanding the different options, as well
| as the optimization goals setting them apart, allows you to
| make a more informed decision and allows you to make a stronger
| argument why this is a good decision. In fact, understanding
| the optimization criteria someone cares about allows you to
| avoid losing them in topics they neither understand nor care
| about.
|
| For example, our CEO will not understand the technical details
| why the reference architecture is resilient, or why other
| choices are less resilient. And he would be annoyed about his
| time being wasted if you tried. But he is currently very aware
| of customer impacts due to outages. And like this, we can offer
| a very good argument to invest money in one place for
| resilience, and why we can save money in other places without
| risking a customer impact.
|
| We sometimes follow rules, and in other situations, we might
| not.
| mandevil wrote:
| Yes, and it is the engineering experience/skill to know when
| to follow the "rules" of the reference architecture, and when
| you're better off breaking them, that's the entire thing that
| makes someone a senior engineer/manager/architect whatever
| your company calls it.
| jschrf wrote:
| Your newest team member sounds like someone worth holding
| onto.
| lifeisstillgood wrote:
  | I often bang on about "software is a new form of literacy".
  | And this I feel is a classic example - software is a form of
  | literacy that not only can be executed by a CPU but also, at
  | the same time, is a way to transmit concepts from one human's
  | head to another (just like writing).
|
| And so asking "will AI generated code help" is like asking
| "will AI generated blog spam help"?
|
| No - companies with GitHub copilot are basically asking how do
| I self-spam my codebase
|
| It's great to get from zero to something in some new JS
| framework but for your core competancy - it's like outsourcing
| your thinking - always comes a cropper
|
| (Book still being written)
| debit-freak wrote:
| I think a lot of the traditional teachings of "rhetoric" can
| apply to coding very naturally--there's often practically
| unlimited ways to communicate the same semantics precisely,
| but how you lay the code out and frame it can make the human
| struggle to read it straightforward to overcome (or near-
| impossible, if you look at obfuscation).
| davidw wrote:
| > is a way to transmit concepts from one humans head to
| another (just like writing)
|
| That's almost its primary purpose in my opinion... the CPU
| does not care about Ruby vs Python vs Rust, it's just
| executing some binary code instructions. The code is so that
| other people can change and extend what the system is doing
| over time and share that with others.
| rileymat2 wrote:
| I get your point, but often the binary code instructions
| between those is vastly different.
| j7ake wrote:
| Computational thinking is more important than software per
| se.
|
      | Computational thinking is mathematical thinking.
| j7ake wrote:
| To make an advance in a field, you must simultaneously believe
| in what's currently known as well as distrust that the paradigm
| is all true.
|
| This gives you the right mindset to focus on advancing the
| field in a significant way.
|
| Believing in the paradigm too much will lead to only
| incremental results, and not believing enough will not provide
| enough footholds for you to work on a problem productively.
| hnuser123456 wrote:
| On a positive note, most AI-gen code will follow a style that
| is very "average" of everything it's seen. It will have its own
| preferred way of laying out the code that happens to look like
| how most people using that language (and sharing the code
| online publicly), use it.
| SoftTalker wrote:
| > other times I'll spend days just breaking up thousand line
| functions into simpler blocks just to be able to follow what's
| going on
|
| Absolutely, I'll break up a long block of code into several
| functions, even if there is nowhere else they will be called,
| just to make things easier to understand (and potentially
| easier to test). If a function or procedure does not fit on one
| screen, I will almost always break it up.
|
| Obviously "one screen" is an approximation, not all
| screens/windows are the same size, but in practice for me this
| is about 20-30 lines.
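|
| For illustration, a rough Python sketch of that kind of split
| (all names invented):
|
|     # Each helper is called exactly once, but every piece fits
|     # on a screen and can be exercised on its own.
|     def handle_order(order):
|         validate(order)
|         total = price(order)
|         record(order, total)
|
|     def validate(order):
|         if not order["items"]:
|             raise ValueError("empty order")
|
|     def price(order):
|         return sum(i["qty"] * i["unit_price"] for i in order["items"])
|
|     def record(order, total):
|         print(order["id"], total)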
| hinkley wrote:
| That's undoubtedly a Zelda Fitzgerald quote (her husband
| plagiarized her shamelessly).
|
| As a consequence of the Rule of Three, you are allowed to have
| rules that have one exception without having to rethink the
| law. All X are Y except for Z.
|
| I sometimes call this the Rule of Two. Because it deserves more
| eyeballs than just being a subtext of another rule.
| xnx wrote:
| > I was creating inconsistencies that younger developers
| nitpick
|
| Obligatory: "A foolish consistency is the hobgoblin of little
| minds"
|
| Continued because I'd never read the full passage: "... adored
| by little statesmen and philosophers and divines. With
| consistency a great soul has simply nothing to do. He may as
| well concern himself with his shadow on the wall. Speak what
| you think now in hard words, and to-morrow speak what to-morrow
| thinks in hard words again, though it contradict every thing
| you said to-day. -- 'Ah, so you shall be sure to be
| misunderstood.' -- Is it so bad, then, to be misunderstood?
| Pythagoras was misunderstood, and Socrates, and Jesus, and
| Luther, and Copernicus, and Galileo, and Newton, and every pure
| and wise spirit that ever took flesh. To be great is to be
| misunderstood." -- Ralph Waldo Emerson, Self-Reliance: An
| Excerpt from Collected Essays, First Series
| gorgoiler wrote:
| > Inlining functions also has the benefit of not making it
| possible to call the function from other places.
|
| I've really gone to town with this in Python.
|
|     def parse_news_email(...):
|         def parse_link(...): ...
|         def parse_subjet(...): ...
|         ...
|
| If you are careful, you can rely on the outer function's
| variables being available inside the inner functions as well.
| Something like a logger or a db connection can be passed in once
| and then used without having to pass it as an argument all the
| time:
|
|     # sad
|     def f1(x, db, logger): ...
|     def f2(x, db, logger): ...
|     def f3(x, db, logger): ...
|
|     def g(xs, db, logger):
|         for x0 in xs:
|             x1 = f1(x0, db, logger)
|             x2 = f2(x1, db, logger)
|             x3 = f3(x2, db, logger)
|             yikes x3
|
|     # happy
|     def g(xs, db, logger):
|         def f1(x): ...
|         def f2(x): ...
|         def f3(x): ...
|         for x in xs:
|             yield f3(f2(f1(x)))
|
| Carmack commented his inline functions as if they were actual
| functions. Making actual functions enforces this :)
|
| Classes and "constants" can also quite happily live inside a
| function but those are a bit more jarring to see, and classes
| usually need to be visible so they can be referred to by the type
| annotations.
| eru wrote:
| Funny enough, the equivalent of your Python example is how
| Haskell 'fakes' all functions with more than one argument (at
| least by default).
|
| Imperative blocks of code in Haskell (do-notation) also work
| like this.
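|
| Roughly what that looks like if you fake it by hand in Python (a
| sketch, not how Haskell actually implements it):
|
|     # A "two-argument" function as nested one-argument functions,
|     # in the spirit of Haskell's `add x y = x + y`.
|     def add(x):
|         def inner(y):
|             return x + y   # `x` is captured from the outer scope
|         return inner
|
|     assert add(2)(3) == 5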
| InDubioProRubio wrote:
| So inlining is the "private" of functions without an object. Pop
| it all to stack, add arguments, set function pointer to
| instruction start of inline code, challenge accepted, let's go
| to..
| bitwize wrote:
| It's possible to nest subprograms within subprograms in Ada. I
| take advantage of this ability to break a large operation into
| one or more smaller simpler "core" operations, and then in the
| main body of the procedure write some setup code followed by
| calls to the core operation(s).
| raverbashing wrote:
| It might be a benefit in some cases, but I do feel that
| f1/f2/f3 are the prime candidates for actual unit testing
| toenail wrote:
| > Inlining functions also has the benefit of not making it
| possible to call the function from other places.
|
| Congrats, you've got an untestable unit.
| xboxnolifes wrote:
| The unit here is the email, not the email's link or subjects.
| Those are implementation details.
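|
| In test terms (a hypothetical sketch, names made up), only the
| outer function ever appears in the test:
|
|     def parse_news_email(raw):
|         # parse_link / parse_subject stay nested and private
|         def parse_link(text):
|             return text.split("Link: ")[1].strip()
|         def parse_subject(text):
|             return text.split("Subject: ")[1].splitlines()[0]
|         return {"subject": parse_subject(raw),
|                 "link": parse_link(raw)}
|
|     def test_parse_news_email():
|         raw = "Subject: hello\nLink: https://example.com\n"
|         assert parse_news_email(raw) == {
|             "subject": "hello",
|             "link": "https://example.com",
|         }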
| mgsouth wrote:
| What do you use unit tests for, other than verifying
| implementation details?
|
| Perhaps we have a difference in definition. To me, a unit
| test for a function such as "parse_news_email" would
| explore variations in parameters and states. Because of
| combinatorial explosion, that often means at least some
| white-box testing. I'm not going to generate random
| subjects and senders, and received-froms, I'm going to
| target based on internal details. Are we doing smart things
| with the message ID hostname? Then what happens if two
| messages come in with the same message ID but from
| different relays? The objective is that the unit test
| wrings out the implementation details, and the _caller 's_
| unit test doesn't need to exhaustively test them.
|
| This white-box testing may require directly poking at or
| mocking internal functions or at least abusing how they're
| called. For example, parsing the news item might entail
| pulling up _and modifying_ conversation thread cache
| entries or state. For some of the tests you may need hand-
| crafted cache state, it 's not feasible to create unique
| states for each parameter combination you're testing, and
| testing a combination will pollute the state for the
| following combinations. Or maybe the function depends upon
| an external resource you can't beat to death with a million
| identical requests. So the least-bad, simplest solution may
| be to freeze or back out part of the normal state update in
| the unit test. Which would usually involve directly
| invoking the internal routines.
|
| Can this lead to fragile, false-positive to the point of
| useless tests? You betcha. That's where entertaining two
| contrary viewpoints is needed :) Use experience and good
| judgement about pros and cons in the particular situation.
| randomdata wrote:
| Unit tests are for documenting the API contract for the
| user. You are going to target based on what you are
| willing to forevermore commit to for those who will use
| what you have created. Indeed, what happens when two
| messages come in with the same message ID is something
| the user needs to be aware of and how it functions needs
| to remain stable no matter what you do behind the scenes
| in the future, so you would absolutely want to document
| that behaviour. _How_ it is implemented is irrelevant,
| though. The only thing that matters is that, from the
| public API perspective, it is handled appropriately.
|
| There is a time and place for other types of tests, of
| course. You are right that unit tests are not the be all
| and end all. A good testing framework will allow you to
| mark for future developers which tests are "set in stone"
| and which are deemed throwaway.
| mgsouth wrote:
| We're in general agreement about the purpose of unit
| tests. I disagree on a couple of points.
|
| Tests do _not_ document the API. No test is complete, and
| for that reason alone can 't completely document
| anything. For example, a good API might specify that "the
| _sender_ must be non-null, and must be valid per RFC
| blah. " There's no way to test that inclusively, to check
| all possible inputs. You can't use the test cases to
| deduce "we must meet RFC blah." You might _suspect_ it,
| but you 'd be risking undefined behavior it you stray
| from input that doesn't exactly match the test cases. And
| before anyone objects "the API docs can be incomplete
| too," well, that's true. But the point is that a written
| API has vastly more descriptive power than a set of test
| cases. (The same applies to "self-documenting code". Bah
| humbug.) There's also the objection "but you can't
| guarantee cases you don't test!" Also true. That's
| reality. _You can never test all your intended behavior._
| You pick your test cases to do the best you can, and
| change your cases as problems pop up.
|
| The other thing I would shy away from is including
| throwaway tests in the framework. Throwaways are a thing,
| developers use them all the time, but don't make them
| unwanted stepchildren--poorly (incompletely?) designed,
| slapped together, limited scope, confusing for another
| developer (including time-traveling self) to wade through
| and decide whether this is a real failure or just bogus
| test. They're tech debt. _Less frequently used_ tests are
| another matter. For example, release-engineering tests
| that only get run against release candidates. But these
| should be just as much real, set in stone, as any other
| deliverable.
|
| Which I guess is a third viewpoint nuance difference. I
| treat tests as being part of the package just as much as
| any other deliverable. They morph and shift as APIs
| change, or dependencies mutate, or bugs are found. They
| aren't something that can be put to the side and left to
| vegetate.
| randomdata wrote:
| _> There 's no way to test that inclusively, to check all
| possible inputs._
|
| Which means the RFC claim is false and should not be
| asserted in the first place. The API may incidentally
| accept valid RFC input, but there is no way to know that
| it does for sure for all inputs. You might _suspect_ it
| conforms to the RFC, but to claim that it does with
| certainty is incorrect. Only what is documented in the
| tests is known to be true.
|
| Everything else is undefined behaviour. Even if you do
| happen to conform to an RFC in one version, without
| testing to verify that continues to hold true, it
| probably won't.
|
| This is exactly why unit tests are the expected
| documentation by users. It prevents you, the author, from
| making spurious claims. If you try, the computer will catch
| you in your lies.
|
| _> The other thing I would shy away from is including
| throwaway tests in the framework._
|
| What does that mean? I suspect you are thinking of
| something completely different as this doesn't quite make
| sense with respect to what I said. It probably makes
| sense in another context, and if I have inferred that
| context correctly, I'd agree... But, again, unrelated to
| our discussion.
| mgsouth wrote:
| OK, one more round. An API spec is a _contract_ , not a
| _guarantee of correctness._ You, as the client, are free
| to pass me any data that fits the spec. If my parsing
| library does the wrong thing, then I 've got a bug and
| need to fix it. My tests are also defective and need to
| be adjusted.
|
| If you passed 3.974737373 to cos(x), and got back 200.0,
| would you be mollified if the developers told you "that
| value clearly isn't in the unit test cases, so you're in
| undefined behavior"? Of course not. The _spec_ might be
| "x is a single-float by value, 0.0 <= x < 2.0 * PI,
| result is the cosine of X as a single-float." That's a
| contract, an intent--an API.
|
| The same for a mail parser. If my library croaks with a
| valid (per RFC) address then I've got a problem. If I try
| to provide some long, custom, set of cases I will or
| won't support, then my customer developers are going to
| be rightfully annoyed. What are _they_ supposed to do
| when they get a valid but unsupported address? Note we
| 're not talking about carving out broad exceptions
| reasonable in context ("RFC 5322 except we don't support
| raw IP addresses foo@[1.2.3.4]", "we treat all usernames
| as case-insensitive"). And we're not talking about "Our
| spec (intent) is foo, but we've only tested blah blah
| blah."
|
| Early in my career I would get pretty frustrated by users
| who were not concerned with arranging their data and
| procedures the right way, clueless about what they
| _really_ were doing. OK, so I still get frustrated by
| stupid :) But it 's gradually seeped into my head that
| what matters is the user's intentions. Specs are an
| imperfect simplification of those very complex things,
| APIs are imperfect simplifications of the specs, and our
| beautiful code and distributed clusters and redundant
| networks are extremely limited and imperfect
| implementations of the APIs. Some especially harmful
| potential flaws get extra attention during arch,
| implementation, and testing. When things get too far out
| we fix them.
| Spivak wrote:
| > What do you use unit tests for, other than verifying
| implementation details?
|
| 1. Determining when the observable behavior of the
| program changes.
|
| 2. Codifying _only_ the specific behaviors that are known
| to be relied on by callers.
|
| 3. Preventing regressions after bugs are fixed.
|
| Failing tests are alarm bells, when do you want them to
| grab your attention?
| mgsouth wrote:
| Excellent points, violently agree, my question was poorly
| worded. The _purpose_ of units tests is to verify the
| contracted API is actually being provided _by the
| implementation details_. A clearer question might have
| been "what are unit tests for if not to exercise the
| implementation details, verifying they adhere to the
| API?" Unit tests validate implementation details,
| integration tests validate APIs.
|
| To me, a good unit test beats the stuffing out of the
| unit. It's as much a part of the unit as the public
| functions, so should take full advantage of internal
| details (keeping test fragility in mind); of course that
| implies the unit test needs ongoing maintenance just as
| much as the public functions. If you're passing a small
| set of inputs and checking the outputs, well that's a
| smoke test, not a unit test.
|
| To answer your last question, I want the alarm bells to
| ring whenever the implementation details don't hold up.
| That's whether the function code changed, a code or state
| dependency changed, or the testing process itself
| changed. If at all feasible all the unit tests run every
| time the the complete suite is run, in full meat-grinder
| mode. "Complete suite" is hand-wavy; e.g. it might be the
| suite for a major library, but not the end-to-end
| application.
| jeltz wrote:
| Which is usually a positive. Testing tiny subunits usually
| just makes refactoring and adding new features hard while not
| improving test quality.
| ninetyninenine wrote:
| Not according to John Carmack. He stated in the intro that he
| switched to pure functional programming, which basically means
| all his logic is in the form of unit-testable pure functions.
| foldr wrote:
| Nothing about pure functional programming requires unit
| testing all of your functions. You can decide to unit
| test larger or smaller units of code, just as you can in
| any other paradigm.
| ninetyninenine wrote:
| In pure functional programming a pure function is unit
| testable by definition of what a pure function is. I
| never said it requires functions to be tested. Just that
| it requires functions to be testable.
|
| In other paradigms do not do this. As soon as a module
| touches IO or state it becomes entangled with that and
| NOT unit testable.
|
| Is it still testable? Possibly. But not as a unit.
| nuancebydefault wrote:
| I found this [] article by Carmack. While reading, I
| understood there is a large set of gray shades to the
| pureness of "pure functional" code. He calls being
| functional a useful abstraction; a function() is never
| purely functional.
|
| [] https://web.archive.org/web/20120501221535/http://gama
| sutra....
| ninetyninenine wrote:
| When people say pure functional programming they never
| mean the entire program is like this.
|
| Because if it were your program would have no changing
| state and no output.
|
| What they mean is that your code is purely functional as
| much as possible. And there is high segregation between
| functional code and non functional code in the sense that
| state and IO is segregated as much as possible away into
| very small very general functionality.
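|
| A tiny sketch of that segregation in Python (invented example):
| the pure core is a plain function of its inputs, and the I/O
| lives in a thin shell around it.
|
|     # Pure core: no I/O, no hidden state; output depends only
|     # on the input.
|     def apply_discount(prices, percent):
|         return [round(p * (1 - percent / 100), 2) for p in prices]
|
|     # Impure shell: reading and printing are pushed to the edges.
|     def main():
|         prices = [float(line) for line in open("prices.txt")]
|         for p in apply_discount(prices, 10):
|             print(p)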
| tightbookkeeper wrote:
| > pure fp
|
| No he didn't.
| sunshowers wrote:
| Like most things being talked about here, so much depends
| on the specifics.
|
| I think developers should generally try and aim for, at
| every scale, the outputs of a system to be pure functions
| of the inputs (whether by reducing the scope of the system
| or expanding the set of things considered inputs). Beyond
| that there are so many decisions at the margin that are
| going to be based on personal inclination.
| hansvm wrote:
| Testing is a tool that sometimes makes your life easier.
| IME, many (not all) tiny subunits do actually have better
| tests when examined at that level. You just want to avoid
| tests which will need to be updated for unrelated changes,
| and try to avoid writing code which propagates that sort of
| minutia throughout the codebase:
|
| > while not improving test quality
|
| The big wins from fine-grained testing are
|
| 1. Knowing _where_ your program is broken
|
| 2. Testing "rare" edge cases
|
| Elaborating on (2), your code probably works well enough on
| some sort of input or you wouldn't ship it. Tests allow you
| to cheaply test all four Turkish "i"s and some unicode
| combining marks, test empty inputs, test what happens when
| a clock runs backward ever or forward too slowly/quickly,
| .... You'll hit some of those cases eventually in prod,
| where pressures are high and debugging/triaging is slow,
| and integration tests won't usually save you. I'm also a
| huge fan of testing timing-based logic with pure functions
| operating on the state being passed in (so it's tested,
| better than an integration test would accomplish, and you
| never have to wait for anything godawful like an actual
| futex or sleep or whatever).
|
| > makes refactoring and adding new features hard
|
| What you're describing is a world where accomplishing a
| single task (refactoring, adding a new feature) has ripple
| effects through the rest of the system, or else the tests
| are examining proxy metrics rather than invariants the tiny
| subunits should actually adhere to. Testing being hard is a
| symptom of that design, and squashing the symptom (avoiding
| tests on tiny subunits) won't fix any of the other problems
| it causes.
|
| If you're stuck in some codebase with that property and
| without the ability to change it, by all means, don't test
| every little
| setup_redis_for_db_payment_handling_special_case_hulu
| method. Do, however, test things with sensible, time-
| invariant names -- data structures, algorithms, anything
| that if you squint a bit looks kind of like parsing or
| serialization, .... If you have a finicky loop with a bunch
| of backoff-related state, pull the backoff into its own
| code unit and test how it behaves with clocks that run
| backward or other edge cases. The loop itself (or any other
| confluence of many disparate coding concepts) probably
| doesn't need to be unit tested for the reasons you mention,
| but you usually can and should pull out some of the
| components into testable units.
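|
| For instance (a sketch of the idea, not any particular library),
| the backoff decision can be a pure function of the state you hand
| it, so a clock that runs backward is just another input:
|
|     def next_delay(last_delay, last_attempt_at, now,
|                    base=1.0, cap=60.0):
|         if now < last_attempt_at:      # clock ran backward
|             return base                # fall back to the minimum
|         return min(cap, max(base, last_delay * 2))
|
|     # Edge cases are cheap to test because no real time passes.
|     assert next_delay(4.0, last_attempt_at=100.0, now=50.0) == 1.0
|     assert next_delay(4.0, last_attempt_at=100.0, now=200.0) == 8.0
|     assert next_delay(40.0, last_attempt_at=0.0, now=1.0) == 60.0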
| bluGill wrote:
| The problem is there is rarely a clear interface for your
| subunit. As such you will want to refactor that interface
| in ways that break tests in the future. If you are
| writing another string you can probably come up with a
| good interface and then write good tests that won't make
| refactoring hard - but string should be a solved problem
| for most of us (unless you are writing a new programming
| language) and instead we are working on problems that are
| not as clear and only our competitors work on so we can't
| even learn from others.
| ninetyninenine wrote:
| This is a major insight. Defining a local function isn't a
| big deal you can always just copy and pasta it out to global
| scope.
|
| Any time you merge state with a function you can no longer move
| the function. This is the same problem as OOP. Closures can't
| be modular, in the same way that methods in objects can't be
| modular.
|
| The smallest unit of testable module is the combinator. John
| Carmack literally mentioned he does pure functional
| programming now which basically everyone in this entire
| thread is completely ignoring.
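|
| A small sketch of the difference (invented example): the closure
| is welded to the variable it captures, while the plain function
| takes its state as an argument and can be lifted out anywhere.
|
|     def make_counter():
|         count = 0
|         def bump():         # welded to `count` in the enclosing scope
|             nonlocal count
|             count += 1
|             return count
|         return bump
|
|     def bump_pure(count):   # state passed in; movable and testable
|         return count + 1
|
|     bump = make_counter()
|     assert bump() == 1 and bump() == 2
|     assert bump_pure(0) == 1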
| ahoka wrote:
| Congratulations, you are writing test for things that would
| not need test if weren't put behind a under-defined
| interface. Meanwhile sprint goals are not met and overall
| product quality is embarrassing, but you have 100% MC/DC
| coverage of your addNumbersOrThrowIfAbove(a, b, c).
| gorgoiler wrote:
| Yup, and I should have called this out as a downside. Thank
| you for raising it.
|
| On visibility, one of the patterns I've always liked in Java
| is using package level visibility to limit functions to that
| code's package _and_ that package's tests, where they are in
| the same package (but possibly defined elsewhere.)
|
| (This doesn't help though with the reduction in argument
| verbosity, of course.)
| jayd16 wrote:
| Ideally, you've just moved the unit boundary to where it
| logically should be instead of many small implementation
| details that should not be exposed.
| scbrg wrote:
| Not sure I believe the benefit of this approach outweighs the
| added difficulty wrt testing, but I certainly agree that Python
| needs a _yikes_ keyword :-)
| nuancebydefault wrote:
| What is the benefit of such a yikes? Or do you consider it a
| yikes language as a whole?
|
| Personally I like that functions can be inside functions, as
| a trade-off between inlining and functional separation in
| C++.
|
| The scope reduction makes it easier to track bugs while it
| has the benefits of separation of concern.
| scbrg wrote:
| > What is the benefit of such a yikes? Or do you consider
| it a yikes language as a whole?
|
| None, it was just a simple joke based on the typo in the
| post I replied to. I like Python, and have in fact been
| happily using it as my main language for over 20 years.
| gorgoiler wrote:
| Ahhh, now I (top level author) get it :)
| wraptile wrote:
| The latter pattern is very popular in Python web scraping and
| data parsing niches as the code is quite verbose and specific
| and I'm very happy with this approach. Easy to read and debug
| and the maintenance is naturally organized.
| grumbel wrote:
| That's not an improvement, as it screws up the code flow. The
| point of inline blocks is that you can read the code the same
| way as it is executed. No surprises that code might be called
| twice or that a function call could be missed or reordered.
| Adding real functions causes exactly the indirection that one
| wanted to avoid in the first place. If the block has no name
| you know that it will only be executed right where it is
| written.
| gorgoiler wrote:
| Yeah that's a valid point. I tend to have in mind that as
| soon as I pull any of the inner functions out to the publicly
| visible module level I can say goodbye to ever trying to stop
| people reusing the code when I don't really want them to.
|
| For example, if your function has an implicit, undocumented
| contract such as assuming the DB is only a few milliseconds
| away, but they then reuse the code for logging to DBs over
| the internet, then they find it's slow and speed it up with
| caching. Now your DB writing code has to suffer their cache
| logic bugs when it didn't have to.
| a_t48 wrote:
| You can do this in C++, too, but the syntax is a little uglier.
| kllrnohj wrote:
| Not that bad?
|
|     int main() {
|         int a = -1;
|         [&] {
|             a = 42;
|             printf("I'm an uncallable inline block");
|         }();
|         printf(" ");
|         [&] {
|             printf("of code\n");
|         }();
|         [&] {
|             printf("Passing state: %d\n", a);
|         }();
|         return 0;
|     }
| a_t48 wrote:
| It's not horrible, a little bit verbose though.
| LoganDark wrote:
| Remember to `nonlocal xs, db, logger` inside those inner
| functions. I'm not sure if this is needed for variables that
| are only read, but I wouldn't ever leave it out.
| pansa2 wrote:
| > _I 'm not sure if this is needed for variables that are
| only read_
|
| It's not needed. In fact, you _should_ leave it out for read-
| only variables. That's standard practice - if you use
| `nonlocal` people reading the code will expect to see writes
| to the variables.
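|
| A quick sketch of the distinction:
|
|     def outer():
|         db = "connection"
|         count = 0
|
|         def read_only():
|             return db.upper()   # reading an enclosing variable: fine as-is
|
|         def writer():
|             nonlocal count      # rebinding one: `nonlocal` is required
|             count += 1
|
|         writer()
|         return read_only(), count   # ("CONNECTION", 1)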
| orf wrote:
| That's gonna be quite expensive, don't do this in hot loops.
| You're re-defining and re-creating the function object each
| time the outer function is called.
| gorgoiler wrote:
| Good point. I measured it for 10^6 loops:
|
| (1) 40ms for inline code;
|
| (2) 150ms for an inner function with one expression;
|
| (3) 200ms for a slightly more complex inner function; and
|
| (4) 4000ms+ for an inner function and an inner class.
|
|     def f1(n: int) -> int:
|         return n * 2
|
|     def f2(n: int) -> int:
|         def g():
|             return n * 2
|         return g()
|
|     def f3(n: int) -> int:
|         def g():
|             for _ in range(0):
|                 try:
|                     pass
|                 except Exception as exc:
|                     if isinstance(exc, 1):
|                         pass
|                     else:
|                         while True:
|                             pass
|                     raise Exception()
|             return n * 2
|         return g()
|
|     def f4(n: int) -> int:
|         class X:
|             def __init__(self, a, b, c):
|                 pass
|             def _(self) -> float:
|                 return 1.23
|
|         def g():
|             for _ in range(0):
|                 try:
|                     pass
|                 except Exception as exc:
|                     if isinstance(exc, 1):
|                         pass
|                     else:
|                         while True:
|                             pass
|                     raise Exception()
|             return n * 2
|         return g()
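|
| Roughly how such numbers can be reproduced (a sketch with timeit,
| using the functions above; absolute figures will vary by machine):
|
|     import timeit
|
|     for f in (f1, f2, f3, f4):
|         ms = timeit.timeit(lambda: f(21), number=10**6) * 1000
|         print(f"{f.__name__}: {ms:.0f}ms")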
| zelphirkalt wrote:
| Where is the part, where this is "careful"? This is just how
| scopes work. I don't see what is special about the inner
| functions using things in the scope of the outer functions.
| rossant wrote:
| (2014)
| mindsuck wrote:
| First discussed here back then:
| https://news.ycombinator.com/item?id=8374345
| dang wrote:
| Thanks! Macroexpanded:
|
| _John Carmack on Inlined Code_ -
| https://news.ycombinator.com/item?id=39008678 - Jan 2024 (2
| comments)
|
| _John Carmack on Inlined Code (2014)_ -
| https://news.ycombinator.com/item?id=33679163 - Nov 2022 (1
| comment)
|
| _John Carmack on Inlined Code (2014)_ -
| https://news.ycombinator.com/item?id=25263488 - Dec 2020 (169
| comments)
|
| _John Carmack on Inlined Code (2014)_ -
| https://news.ycombinator.com/item?id=18959636 - Jan 2019 (105
| comments)
|
| _John Carmack on Inlined Code (2014)_ -
| https://news.ycombinator.com/item?id=14333115 - May 2017 (2
| comments)
|
| _John Carmack on Inlined Code (2014)_ -
| https://news.ycombinator.com/item?id=12120752 - July 2016
| (199 comments)
|
| _John Carmack on Inlined Code_ -
| https://news.ycombinator.com/item?id=8374345 - Sept 2014 (260
| comments)
| mcosta wrote:
| 2007
| gdiamos wrote:
| How much of this is specific to control loops that execute at
| 60hz?
| lloeki wrote:
| None.
|
| > The real enemy addressed by inlining is unexpected dependency
| and mutation of state, which functional programming solves more
| directly and completely. However, if you are going to make a
| lot of state changes, having them all happen inline does have
| advantages; you should be made constantly aware of the full
| horror of what you are doing. When it gets to be too much to
| take, figure out how to factor blocks out into pure functions
| (and don't let them slide back into impurity!).
|
| Some years ago at job foo I wrote a Ruby library that was doing
| some stuff. Time was of the essence, I was a one-man team, and
| the trickiness of it required a clear understanding of the
| details, so I wrote a single ~1000 LOC file comprising the
| entirety of the namespace module of that library, with but a
| couple or three functions.
|
| Then a new hire joined my one-man team. I said: apologies for
| this unholy mess, it's overdue for a refactoring, with a bunch
| of proper classes with small methods and split in a few files
| accordingly. They said: not at all, the code was exceptionally
| clear; I could sit and understand every bit of it down to the
| grittier critical details in under an hour, and having seen it
| written this way it is obvious to me that these details of
| interactions would not have been abstracted away, but obscured
| away.
| yoz wrote:
| Your final ten words of the comment are a perfectly concise
| explanation of the problem; thank you! And it drives home
| something I often forget about _why_ code units should do
| Only One Thing.
| Cthulhu_ wrote:
| Thing is, a lot of developers see long code and think "this
| is a Bad Thing" because of dogma, but in practice, a lot of
| developers never actually wrote anything nontrivial like
| that.
| otikik wrote:
| I have worked with many developers and I have seen them
| follow two distinct paths when encountering complex code.
|
| There's one camp that wants to use abstractions and names,
| and there's another (in my experience, smaller) camp which
| prefers to have as few abstractions as possible, and "every
| gritty detail visible".
|
| I think both strategies have advantages and disadvantages.
| The group that likes abstractions can "ignore parts of the
| code" quickly, which potentially makes them "search" faster.
| If there's a bug that needs fixing, or a new feature that
| needs to be added, they will reach the part of the code that
| will need modifications faster.
|
| The detail-oriented people can take a bit longer to identify
| the code that needs modification, but they also tend to be
| able to make those modifications faster. They also tend to be
| great "spelunkers". They seem to have a "bigger cache", so
| to speak. But it is not infinite. They will eventually not be
| able to hold all the complexity in their heads, just like the
| first group. It will just take a bit longer.
|
| I am firmly on the first group and that is how I write my
| code. I have been fortunate enough to encounter enough people
| from the other group to know not to diss their code
| immediately, and to appreciate it for its merits. When
| working in a team with both kinds of personalities one has to
| make compromises ("please remove all of these 1-line
| functions, Jonathan will hate them", and "could you split
| this 3k lines function into 2 or 3 smaller ones, for easier
| review?").
| whstl wrote:
| Some might consider me part of the "second group", but I'm
| perfectly fine with abstractions and I create them all the
| time.
|
| I do however have a problem with indirections that don't
| really abstract anything and only exist for aesthetical
| reasons.
|
| Not every function/method is an "abstraction". Having too
| many one-line methods is as bad as pretending that
| functions with 2k/3k lines are appropriate in all cases.
| cryptonym wrote:
| > Minimize control flow complexity and "area under ifs",
| favoring consistent execution paths and times over "optimally"
| avoiding unnecessary work.
|
| If your control loop must always run under 16ms, you better
| make sure the worst case is 16ms rather than trying to optimise
| best or mid case. Avoid ifs that skips processing, that's good
| for demo but doesn't help reaching prod quality goals.
| Sometimes it doesn't bring the benefits you think, sometimes it
| hides poorly optimised paths, sometimes it creates subtle bugs.
| Of course always use your own discernment...
|
| That would be very different in a typical cloud app where the
| goal is to keep CPU, memory and network usage as low as
| possible, not much caring about having a constant response time
| on each REST endpoint.
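|
| A toy sketch of that trade-off (invented example), keeping the
| per-tick cost flat instead of data-dependent:
|
|     def tick(cells):
|         # Process every cell each tick, even ones that "don't need
|         # it", so the demo-case and worst-case costs are the same.
|         for cell in cells:
|             cell["value"] = cell["value"] * 0.9 + cell["target"] * 0.1
|
|     # The tempting alternative - `if cell["dirty"]: ...` - makes the
|     # average frame cheaper but hides how expensive the worst one is.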
| Netch wrote:
| All the code that is not on the hot path may conform to any rules,
| and typically is designed according to something like SOLID, to
| make understanding and maintenance as simple as possible (and
| suitable for any average coder).
|
| All the code whose performance, memory cost, etc. is critical
| should be adjusted to fit into the required confines even if that
| violates all other tenets. This often results in a combination of
| opposite approaches - anything that works well.
|
| Finally, one just profiles the code and fixes the most expensive
| paths. This is what any average programmer can do now. What they
| can't do - and what Carmack has been doing for decades - is
| predict such places and fix them proactively at the architectural
| level, and find tricky solutions that the average joe-the-
| programmer has never heard of.
| cjfd wrote:
| Mostly, all of it. People who are not writing that kind of loop
| probably should not do any of this. Optimize for code clarity,
| which may involve either inlining or extracting depending on
| the situation.
| dehrmann wrote:
| Always read older stuff from Carmack remembering the context. He
| made a name for himself getting 3D games to run on slow hardware.
| The standard advice of write for clarity first, make sure
| algorithms have reasonable runtimes, and look at profiler data if
| it's slow is all you need 99% of the time.
| physicles wrote:
| I agree with this in general, but his essay on functional
| programming in C++ (linked at the top of the page) is
| phenomenal and is fantastic general advice when working in any
| non-functional language.
| nicolaslegland wrote:
| _Link at the top of the page_ broken, found an archived
| version at https://web.archive.org/web/20120501221535/http://
| gamasutra....
| Cthulhu_ wrote:
| And before that, 2D games (side-scrolling platformers were not
| a thing on PC hardware until Carmack did it, iirc). I think his
| main thing is balancing clarity - what happens when and in what
| order - with maintainability.
|
| Compare this with enterprise software, which is orders of
| magnitude more complex than video games in terms of business
| logic (the complexity in video games is in performance
| optimization), but whose developers tend to add many layers of
| abstraction and indirection, so the core business process is
| obfuscated, or there's a billion non-functional side activities
| also being applied (logging, analytics, etc), again obfuscating
| the core functionality.
|
| It's fun to go back to more elementary programming things, in
| e.g. Advent of Code challenges or indeed, game development.
| nadam wrote:
| "compare this with enterprise software, which is orders of
| magnitude more complex than video games in terms of business
| logic" Maybe this was true 20 years ago, but I do not think
| this is true today. Game code of some games is almost as
| complex as enterprise software or even more complex in some
| cases (think of grand strategy games like Civilization or
| Paradox games). The difference is that it still needs to be
| performant, so the evolutionary force just kills programmers
| and companies creating unperformant abstractions. In my
| opinion game programming is just harder than enterprise
| programming if we speak about complex games. (I have done
| both). The only thing which is easier in game programming is
| that it is a bit easier to see clearly in terms of 'business
| requirements', and also it is more meritocratic (you can
| start a game company anywhere on the globe, no need to be at
| business centers.) And of course game programming is more
| fun, so programmers do the harder job even for less money.
|
| For people who think game programming is less complex than
| enterprise software, I suggest the CharacterMovementComponent
| class in Unreal Engine, which is the logic of movement of
| characters (people) in a networked game environment...
| Multiple thousands of lines of code in just the header is not
| uncommon in Unreal. And this is not complex because of
| optimization mostly. This is very complex and messy logic. Of
| course we can argue that networking and physics could be done
| in a simple naive way, which would be unacceptable in terms
| of latency and throughput, so all in all complexity is
| because of optimization after all. But it is not the 'fun'
| elegant kind of optimization, it is close to messy enterprise
| software in some sense in my opinion.
| ykonstant wrote:
| I have heard modern game development compared to OS
| development in terms of complexity and I think that
| comparison is quite apt; especially when the game involves
| intricate graphics and complicated networking involving
| multiple time scales as you say.
| high_na_euv wrote:
| >Compare this with enterprise software, which is orders of
| magnitude more complex than video games in terms of business
| logic
|
| I dont buy it in games like gta, cyberpunk or witcher 3
| a_t48 wrote:
| In both design space and programming complexity, you're
| right.
| aidenn0 wrote:
| > And before that, 2D games (side-scrolling platformers were
| not a thing on PC hardware until Carmack did it, iirc). I
| think his main thing is balancing clarity - what happens when
| and in what order - with maintainability.
|
| Smooth side-scrollers did exist on the PC before Keen (An
| early one would be the PC port of Defender). _Moon Patrol_
| even had jumping in the early '80s.
|
| Furthermore other contemporaries of Carmack were making full-
| fledged side-scrolling platformers in ways different from how
| Keen did it (there were many platformers released in 1990).
| They all involved various limitations on level design (as did
| what Keen used), but I don't believe any of them allowed both
| X and Y scrolling like the Keen games did.
| marginalia_nu wrote:
| I find the inlined style can actually improve clarity.
|
| A lot of code written toward the "uncle bob" style where you
| maximize the number of functions has fantastic local clarity,
| you can see exactly what the code you are looking at is doing;
| but atrocious global clarity, where it's nearly impossible to
| figure out what the system does on a larger scale.
|
| Inlining can help with that, local clarity deteriorates a bit,
| but global clarity typically improves by reducing the number of
| indirections. The code does indeed also tend to get faster, as
| it's much easier to identify and remove redundant code when you
| have it all in front of you. ... but this also improves the
| clarity of the code!
|
| You can of course go too far, in either direction, but my sense
| is that we're often leaning much too far toward short isolated
| functions now than is optimal.
| silvestrov wrote:
| > atrocious global clarity
|
| much like microservices.
| Cthulhu_ wrote:
| I feel like this style is also encouraged in Go and / or the
| clean/onion architecture / DDD, to a point, where the core
| business logic can and should be a string of "do this, then do
| that, then do that" code. In my own experience I've only had a
| few opportunities to do so (most of my work is front-end which is
| a different thing entirely), the one was application
| initialisation (Create the logger, then connect to the database,
| then if needed initialize / migrate it, then if needed load test
| data. Then create the core domain services that uses the database
| connection. Then create the HTTP handlers that interface with the
| domain services. Then start the HTTP server. Then listen for an
| end process command and shut down gracefully), the other was pure
| business logic (read the database, transform, write to file, but
| "database" and "file" were abstract concepts that could be
| swapped out easily). You don't really get that in front-end
| programming though, it's all event driven etc.
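|
| As a rough Python sketch of that "string of steps" shape (all
| names are stand-ins):
|
|     def make_logger(): return print
|     def connect_database(log): log("connect db"); return {"conn": True}
|     def migrate_if_needed(db): ...
|     def build_domain_services(db): return {"db": db}
|     def build_http_handlers(services): return {"services": services}
|     def start_http_server(handlers): return handlers
|
|     def main():
|         logger = make_logger()
|         db = connect_database(logger)
|         migrate_if_needed(db)
|         services = build_domain_services(db)
|         handlers = build_http_handlers(services)
|         start_http_server(handlers)
|         # ...then wait for a shutdown signal and stop gracefully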
| xxs wrote:
| > the one was application initialisation
|
| ...and then you want to parallelize as much as possible to
| allow for fast boot times which helps the development process
| immensely.
|
| One of the things I've learned is that optimizing for developer
| quality of life is one of the best approaches when it comes to
| correctness and performance. Then, the developers would be able
| to run multiple iterations of the real thing.
| ninetyninenine wrote:
| His overall solution highlighted in the intro is that he's moved
| on from inlining and now does pure functional programming.
| Inlining is only relevant for him during IO or state changes
| which he does as minimally as possible and segregates this from
| his core logic.
|
| Pure functional programming is the bigger insight here that most
| programmers will just never understand why there's a benefit
| there. In fact most programmers don't even completely understand
| what FP is. To most people FP is just a bunch of functional
| patterns like map, reduce, filter, etc. They never grasp the true
| nature of "purity" in functional programming.
|
| You see this lack of insight in this thread. Most responders
| literally ignore the fact that Carmack called his email
| completely outdated and that he mostly does pure FP now.
| wruza wrote:
| Some grasp it but see its trade-off contract, which is
| demanding.
| ninetyninenine wrote:
| With practice it just becomes another paradigm of
| programming. The trade off is really a skill issue from this
| perspective.
|
| The larger issue is performance which is a legitimate reason
| for not using fp in many cases. But additionally in many
| cases there is no performance trade off.
| nuancebydefault wrote:
| I've never seen pure FP...
| nuancebydefault wrote:
| I might add Carmack's take on functional programming https://
| web.archive.org/web/20120501221535/http://gamasutra....
| sdeframond wrote:
| I would recommend you take a look at Haskell or Elm.
|
| John Karmack did and talked about it
| https://m.youtube.com/watch?v=1PhArSujR_A
| eyelidlessness wrote:
| Surely if you've seen any non-trivial amount of code, you
| have seen pure FP applied piecemeal even if not broadly. A
| single referentially transparent function _is pure FP_ , even
| if it's ultimately called by the most grotesque stateful
| madness.
| wmanley wrote:
| Here's the link where he discusses functional programming
| style:
|
| https://web.archive.org/web/20170116040923/http://gamasutra....
|
| He does _not_ say that his email is completely outdated -
| he just says that calling pure functions is exempt from the
| inlining rule.
|
| He's not off writing pure FP now. His approach is still deeply
| pragmatic. In the link above he discusses degrees of function
| purity. "Pure FP" has a whole different connotation - where
| whole programs are written in that constrained style.
| ninetyninenine wrote:
| He literally said he's bullish on pure fp. Which means he is
| off writing pure fp. His own article about it never
| explicitly or implicitly implies a "pragmatic approach".
|
| I never said he said his email was completely outdated. He
| for sure implies it's outdated and updates us on his views of
| inlining which I also mentioned.
| coob wrote:
| > I never said he said his email was completely outdated.
|
| From your prior message:
|
| > Carmack called his email completely outdated
| ninetyninenine wrote:
| Ok my bad but I did mention what he's doing with
| inlining. So I contradicted myself in the original
| message which you didn't identify.
|
| He still does inlining.
| solomonb wrote:
| The original article literally starts with this:
|
| > In the years since I wrote this, I have gotten much more
| bullish about pure functional programming, even in C/C++
| where reasonable: (link)
|
| > The real enemy addressed by
| inlining is unexpected dependency and mutation of state,
| which functional programming solves more directly and
| completely. However, if you are going to make a lot of state
| changes, having them all happen inline does have advantages;
| you should be made constantly aware of the full horror of
| what you are doing.
|
| He explicitly says that functional programming solves the
| same issue as inlining but more directly and completely.
| pragma_x wrote:
| Thank you for this. I appreciate that this (classic) article
| lays bare the essence of FP without the usual pomp and "use
| Lisp/Scheme/Haskell already" rhetoric. My takeaway is that FP
| is mostly about using functions w/o side effects (pure),
| which can be achieved in any programming language provided
| you're diligent about it.
| zelphirkalt wrote:
| This is a bit naive though. It depends on what you want to
| do and whether the language you are using offers the
| required primitives and other things like persistent
| functional data structures. Without those, you will find
| yourself hard-pressed to make FP happen. It is of course
| usually possible with most languages (except those where
| primitives are already mutating and therefore infectiously
| prevent you from writing pure functions), but it might not
| be idiomatic at all, or might not be feasible to roll all
| things your own, to replace any mutating basics. For
| example imagine having to copy a data structure all over
| the place again and again, because its methods are mutating
| its internal state. That would be inefficient, much more
| inefficient than a well written corresponding functional
| data structure, and it would be ugly code.
|
| Are you going to write that extra data structure, when your
| task is actually something else? Some management breathing
| down your neck, asking when something will be done? Or not
| so well versed coworkers complaining about you adding a lot
| of code that might need to be maintained by them, while
| they don't understand FP? Do you even have the knowledge to
| implement that data structure in the first place, or will
| you need to study a couple of papers and carefully
| translate their code, if any, into your language and then
| verify expected performance in meaningful benchmarks?
|
| Lots of problems can arise there in practice.
| ninetyninenine wrote:
| No functional programming is about programming as if your
| code is a math equation.
|
| In math people never use procedures. They write definitions
| in math in terms of formulas and expressions.
|
| If you can get everything to fit on one line in your
| programming. Then you are doing functional programming.
|
| The lack of side effects, lack of mutation and high
| modularity are the beneficial outcome of fp, it is not the
| core of what you're doing. The core of what you're doing is
| that you're defining your program as a formula/equation/expression
| rather than a list of procedures or steps. Of course, why
| you would write your program this way is because of the
| beneficial outcomes.
|
| By coincidence if you write your code in a way where you
| just account for the side effects like deliberately
| avoiding mutation, IO and side effects... then your program
| will become isomorphic to a mathematical function. So it
| goes both ways.
|
| Another thing you will note and most people don't get this
| is that for loops don't exist in FP. The fundamental unit
| of "looping" in fp is always done with recursion, just like
| how they would do it in mathematical expressions.
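|
| For example (a minimal sketch):
|
|     # Imperative loop...
|     def total(xs):
|         acc = 0
|         for x in xs:
|             acc += x
|         return acc
|
|     # ...and the recursive equivalent, closer to how it reads in FP.
|     def total_rec(xs):
|         if not xs:
|             return 0
|         return xs[0] + total_rec(xs[1:])
|
|     assert total([1, 2, 3]) == total_rec([1, 2, 3]) == 6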
| Hendrikto wrote:
| > he's moved on from inlining and now does pure functional
| programming
|
| Neither of those are true. He does more FP "where reasonable",
| and that decreases the need for inlining. He does not do pure
| FP, and he still inlines.
| ninetyninenine wrote:
| He literally says he's more bullish on pure fp. Read it. And
| I also wrote about where he still inlines.
| solomonb wrote:
| "pure FP" does not mean only writing in a functional style.
| Purity refers to referential transparency, ie., functions do
| not depend on or modify some global state.
| zelphirkalt wrote:
| Actually even further: They also don't modify/mutate any
| arguments. If they did, then that could raise problems with
| concurrency.
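|
| e.g. (tiny sketch):
|
|     def add_item_impure(cart, item):
|         cart.append(item)        # mutates the caller's list
|         return cart
|
|     def add_item_pure(cart, item):
|         return cart + [item]     # leaves the argument untouched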
| eyelidlessness wrote:
| I think more people grasp functional programming all the time,
| or at least the most salient detail: referential transparency.
| It's easy to show the benefits in the small, without getting
| heavy on academics: pure functions are easier to test,
| understand, and change with confidence. All three of these
| reinforce each other, but they're each independently beneficial
| as well.
|
| There are tons of benefits to get from learning this lesson in
| a more intentional way--I know that I changed my entire outlook
| on programming after some time working in Clojure!--but I've
| seen other devs take the same lessons in multi-paradigm
| contexts as well.
| ninetyninenine wrote:
| Not just this. Modularity is the main insight as well. The
| reason why oop doesn't work is because methods can't be
| broken down. Your atom is oop is literally a collection of
| methods tied to mutating state. You cannot break down that
| collection further.
|
| In pure fp. You can break your function down into the
| smallest computational unit possible. This is what prevents
| technical debt of the architectural nature as you can rewrite
| your code as simply recomposing your modular logic.
| zelphirkalt wrote:
| That most won't get it is due to the fact that most are kind of
| "industrial programmers", who only learn and use mainstream OOP
| languages and as such never actually use a mainly FP language a
| lot. Maybe on HN the ratio is better than on the whole market
| though.
| randomtoast wrote:
| My browser says "The connection to number-none.com is not
| secure". Guess it is only a matter of time until HTTPS becomes
| mandatory.
| otikik wrote:
| > I have gotten much more bullish about pure functional
| programming, even in C/C++ where reasonable: (link)
|
| The link is no longer valid, I believe this is the article in
| question:
|
| https://www.gamedeveloper.com/programming/in-depth-functiona...
| ninetyninenine wrote:
| Probably the more important link. He's basically saying his old
| email is outdated and he does pure FP now.
| meheleventyone wrote:
| This is over a decade old at this stage, it would be
| interesting to know how his thoughts have evolved since.
| wruza wrote:
| I wish languages had the following:
|
|     let x = block {
|         ...
|         return 5
|     }
|     // x == 5
|
| And the way to mark copypaste, e.g.
|
|     common foo {
|         asdf(qwerty(i+j));
|         printf("%p", write));
|         bar();
|     }
|     ...(repeats verbatim 20 times)...
|     ...
|     common foo {
|         asdf(qwerty(i+k));
|         printf("%d", (int)write);  // cast to int
|         bar();
|     }
|     ...
|
| And then you could `mycc diff-common foo` and see:
|
|     <file>:<line>: common
|     <file>:<line>: common
|     ...
|     <file>:<line>:
|     @@...@@
|     -asdf(qwerty(i+j));
|     +asdf(qwerty(i+k));
|     @@...@@
|     -printf("%p", write));
|     +printf("%d", (int)write); // cast to int
|
| With this you can track named common blocks (allows using
| surrounding context like i,j,k). Without them being functions and
| subject for functional entanglement $subj discusses. Most common
| code gets found out and divergences get bold. IDE support for
| immediate highlighting, snippeting and auto-common-ing similar
| code would be very nice.
|
| Multi-patching common parts with easily reviewing the results
| would also be great. Because the bugs from calling a common
| function arise from the fact that you modify it and it suddenly
| works differently for some context. Well, you can comment a
| common block as fragile and then ignore it while patching:
|
|     common foo { // @const: modified and fragile!
|         ...
|     }
|
| You still see differences but it doesn't add in a multi-patch
| dialog.
|
| Not expecting it to appear anywhere though, features like that
| are never considered. Maybe someone interested can feature it in
| circles? (without my name associated)
| badmintonbaseba wrote:
| In C++ it's an idiom to use immediately invoked lambdas:
| auto x = []{ /*...*/ return 5; }();
|
| There is/was an attempt to introduce a more of a first-class
| language construct for such immediate "block expressions":
|
| https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2023/p28...
|
| I'm not convinced that automatic checking of copy-paste errors
| of such blocks make much sense though. At least I think the
| false positive rate would be way too high.
| wruza wrote:
| IIFE exist, but are cumbersome to type/read in most
| languages. C++ is probably the winner by syntax and semantics
| here.
|
| _false positive rate would be way too high_
|
| The key idea is not to have identical blocks, but to have a
| way to overview changes in similar code, similar by origin
| and design. It's a snippet diff tool, not a typo
| autocorrector. There's no false positives cause if "common
| foo" has zero diff in all cases, it probably _should_ be
| "foo(...)".
| sltkr wrote:
| GCC has supported statement expressions for ages:
| https://gcc.gnu.org/onlinedocs/gcc/Statement-Exprs.html
|
| They're also used extensively in the Linux kernel, mostly to
| implement macros: https://github.com/search?q=repo%3Atorvalds
| %2Flinux%20%22(%7...
| mkoubaa wrote:
| Now if only c++ could guarantee copy elision from lambda
| returns...
| badmintonbaseba wrote:
| There is a lot of guaranteed copy elision since C++17, what
| exactly do you mean?
| pjmlp wrote:
| It depends if using C++17 and later versions.
| lexicality wrote:
| Can you help me understand why this would be beneficial, other
| than avoiding using the word "function"?
| wruza wrote:
| I guess you're asking about the block part -- it's a minor
| syntactic convenience and not the main point of the comment.
| It avoids the word function/lambda or def-block and related
| syntactic inconvenience like parentheses around and at the
| end and interference with ASI (when applicable).
| kibwen wrote:
| You're looking for blocks-as-expressions, e.g. the
| following is valid Rust:
|
|     let x = {
|         whatever;
|         5
|     }; // assigns 5 to x
| atq2119 wrote:
| Regarding compound statements returning values: There are a
| number of languages which have that, including Rust.
| Ironically, it made me wish for a reversed form of the
| construct, i.e. something like
|
|     { ...; expr } --> x; // x is a new variable initialized to expr
|
| I feel like this would help readability when the compound
| statement is very large.
| Ono-Sendai wrote:
| There is actually a major problem with long functions - they take
| a long time to compile, due to superlinear complexity in
| computation time as a function of function length. In other words
| breaking up a large function into smaller function can greatly
| reduce compile times.
| badmintonbaseba wrote:
| That honestly feels like a minor problem, and not something to
| optimize for. Also an aggressively inlining compiler will
| experience exactly the same problem. AFAIK at least clang
| always inlines a static (as in internal linkage) function if
| it's used only once in the translation unit, no matter how
| large it is.
| Ono-Sendai wrote:
| Visual studio doesn't do that inlining. And it is a
| significant problem, I have had to refactor my code into
| multiple functions because of it.
| badmintonbaseba wrote:
| It might be a significant problem, but not in the code, but
| the compiler. Fair enough, you are working around a
| compiler issue.
| Ono-Sendai wrote:
| If you consider any superlinear complexity a 'compiler
| issue' I guess.
| badmintonbaseba wrote:
| It absolutely is, if it makes compile times unreasonable
| for reasonable code. Compilers have to make trade-offs
| like this all the time, they can't use overly excessive
| optimizations.
| Ono-Sendai wrote:
| I dunno. O(n^2) is for sure a bug. But O(nlogn) I think
| is reasonable.
| badmintonbaseba wrote:
| O(nlogn) is probably reasonable. Why break up a long
| function then if you are experiencing O(nlogn) scaling of
| compile time on function size?
| Ono-Sendai wrote:
| Because it can still result in compile times I find
| excessive. For example breaking up a function that takes
| 5 seconds to compile into a bunch of functions that take
| 1 to 2 seconds in total.
| tightbookkeeper wrote:
| If you are willing to make code worse to micro optimize compile
| times (not even sure this is true) then you should not use any
| modern language with complex type checking (rust, swift, C#,
| etc).
| Ono-Sendai wrote:
| Writing a medium to large program in C++, you really need to
| fight long compile times or they can get out of hand. That
| affects the way you write code quite a lot, or it should at
| least. I've heard Rust and Swift also suffer from long
| compile times.
| tightbookkeeper wrote:
| Agreed.
|
| But For C++ template combinatorics are going to dominate
| any slow down due to function length.
| nuancebydefault wrote:
| > The function that is least likely to cause a problem is one
| that doesn't exist, which is the benefit of inlining it.
|
| I think that summarizes the case pro inlining.
| physicsguy wrote:
| I think when developing something from scratch, it's actually not
| a terrible strategy to do this and pick out boundaries when they
| become clearer. Creating interfaces that make sense is an art,
| not a science.
| IshKebab wrote:
| I think the major problem with this is scope. Now a variable
| declared at the top of your function is in scope for the entire
| function.
|
| Limiting scope is one of the best tools we have to prevent bugs.
| It's one reason why we don't just use globals for everything.
| wheybags wrote:
| You can artificially create scope. I often write code like:
|
|     Foo f = null;
|     {
|         ... stuff with variables
|         f = barbaz;
|     }
| SJC_Hacker wrote:
| Is the point that any var declared in between the braces
| automatically goes out of scope, to minimize potential
| duplication of var names and unintended behavior ?
|
| The worst I've seen is old school C programmers who insisted
| on reusing loop variables in other loops. Even worse, those
| loop variables were declared _inside the loop declaration_ ,
| which old C standards allowed to be visible outside of it.
|
| So they would have stuff like this:
|
|     for(int i=0; i<10; i++) { ... }
|     for (;i<20;i++) { ... }
|
| Later versions of C++ disallowed this, which led to some
| interesting compile failures, which in turn led the old,
| stubborn programmers to insist that new compilers simply _not
| be used_.
| IshKebab wrote:
| Now you have to make `f` nullable _and_ you run the risk of
| not initialising it and getting a null pointer.
|
| You can't do it in C, but in functional style languages you
| can do this:
|     let f = {
|         let bar = ...;
|         let baz = ...;
|         let barbaz = ...;
|         barbaz
|     };
|
| Which is a lot nicer. But if you ask me it's just a function
| by another name except it still doesn't limit scope quite as
| precisely as a function.
| nikki93 wrote:
| GCC and clang (and maybe others) have 'statement
| expressions': https://godbolt.org/z/sqYnbh4Ej
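| A minimal sketch of what that looks like (GCC/Clang extension;
| values made up for illustration):
|     #include <cstdio>
|
|     int main() {
|         // GNU statement expression: the braced block is an
|         // expression whose value is the last expression inside,
|         // and bar/baz stay scoped to the block.
|         int f = ({
|             int bar = 6;
|             int baz = 7;
|             bar * baz;
|         });
|         std::printf("%d\n", f);  // 42
|         return 0;
|     }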
| jjallen wrote:
| One benefit that I can think of for inlined code is the ability
| to "step" through each time step/tick/whatever and debug the
| state at each step of the way.
|
| And one drawback I can think of is that when there are more than
| something like ten variables finding a particular variable's
| value in an IDE debugger gets pretty difficult. It would be at
| this point that I would use "watches", at least in the case of
| Jetbrains's IDEs.
|
| But then yeah you can also just log each step in a custom way
| verifying the key values are correct which is what I am doing as
| we speak.
| mellosouls wrote:
| (2014)
|
| Ten years ago - a long time in coding.
| rm445 wrote:
| It's at least twenty new web frameworks, but maybe not so long
| in low-level stuff. You can probably rely on C99 being
| available now more than you could in 2014.
| sylware wrote:
| I have been super picky about what JC says since he moved the ID
| engine from plain and simple C99 to c++.
| rickreynoldssf wrote:
| The clean code people are losing their collective minds reading
| that. lol
| randomdata wrote:
| Why's that? Uncle Bob seems pretty clear that most of your code
| should be free of side effects, and that necessary state
| mutation should be isolated to one place. Carmack is saying the
| same thing.
| shortrounddev2 wrote:
| Can someone explain what inlined means here? It was my assumption
| that the compiler will automatically inline functions and you
| don't need to do it explicitly. Unless it means something else in
| this context
| AlotOfReading wrote:
| It means not using explicit functions, just writing the code
| as little inline blocks inside the main function, because that
| keeps everything immediately visible instead of hidden behind
| function calls.
|
| To the other point though, the quality of compiler inlining
| heuristics is a bit of a white lie. The compiler _doesn 't_
| make optimal choices, but very few people care enough to notice
| the difference. V8 used a strategy of considering the entire
| source code function length (including comments) in inlining
| decisions for many years, despite the obvious drawbacks.
| shortrounddev2 wrote:
| Well there's also compiler directives other than `inline`,
| like msvc's `__inline` and `__forceinline` (which probably
| also have an equivalent in gcc or clang), so personally I
| don't think you need to make the tradeoff between readability
| and reusability while avoiding function calls. Not to mention
| C++ constevals and C-style macros, though consteval didn't
| exist in 2007
| AlotOfReading wrote:
| __forceinline is purely a suggestion to the compiler, not a
| requirement. Carmack's point isn't about optimizing the
| costs of function calls though. It's about the benefits to
| code quality by having everything locally visible to the
| developer.
| shortrounddev2 wrote:
| It's an interesting view because I find neatly
| compartmentalized functions easier to read and less error
| prone, though he does point out that copying chunks of
| code such as vector operations can lead to bugs when you
| forget to change some variable. I guess it depends on the
| function. Something like:
|     Vector c = dotProduct(a, b);
| is readable enough and doesn't warrant inlining, I think.
| There's nothing about `dotProduct` that I would expect to
| have any side effects, especially if its prototype looks
| like:
|     Vector dotProduct(Vector const& a, Vector const& b);
| AlotOfReading wrote:
| That's a pure function, which he says should be the goal.
| It's impure functions that he's talking about.
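| A minimal sketch of the distinction, with made-up types and
| state (not code from the post):
|     #include <cstddef>
|
|     // Pure: the result depends only on the arguments and
|     // nothing outside the function is touched.
|     double dot(const double* a, const double* b, std::size_t n) {
|         double sum = 0.0;
|         for (std::size_t i = 0; i < n; ++i) sum += a[i] * b[i];
|         return sum;
|     }
|
|     // Impure: reads and writes state that lives elsewhere, so
|     // the call site alone no longer tells the whole story.
|     struct Camera { double x = 0.0; };
|     Camera g_camera;                // hypothetical global state
|     void nudgeCamera(double dx) { g_camera.x += dx; }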
| exodust wrote:
| For some reason this quote by Carmack stands out for me:
|
| > _" it is often better to go ahead and do an operation, then
| choose to inhibit or ignore some or all of the results, than try
| to conditionally perform the operation."_
|
| I'm not the audience for this topic, I do javascript from a
| designer-dev perspective. But I get in the weeds sometimes,
| maxing out my abilities and bogged down by conditional logic. I
| like his quote; it feels liberating... "just send it all for
| processing and cherry-pick the results". Lightbulb moment.
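| Roughly, the idea in code (a minimal sketch with made-up names,
| not Carmack's code):
|     // Conditional version: the work only happens sometimes.
|     double stepBranchy(double value, double delta, bool enabled) {
|         if (enabled) value += delta;
|         return value;
|     }
|
|     // "Do the operation, then inhibit the result": the same
|     // work happens on every call, and the condition only
|     // decides whether to keep it. Every call exercises the
|     // same code path, which is easier to reason about and test.
|     double stepInhibit(double value, double delta, bool enabled) {
|         double candidate = value + delta;
|         return enabled ? candidate : value;
|     }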
| wiz21c wrote:
| my CPU does that all of the time, it is speculative execution
| :-)
| VyseofArcadia wrote:
| > That was a cold-sweat moment for me: after all of my harping
| about latency and responsiveness, I almost shipped a title with a
| completely unnecessary frame of latency.
|
| In this era of 3-5 frame latency being the norm (at least on e.g.
| the Nintendo Switch), I really appreciate a game developer having
| anxiety over a single frame.
| pragma_x wrote:
| To be fair, back in 2014 that was one frame at 60Hz or slower
| for some titles. At 80-120Hz, 3-5 frames is comparatively
| similar time.
| 01HNNWZ0MV43FF wrote:
| I don't think high frame rates are common outside of PC
| gaming yet.
|
| Wikipedia indicates the Switch maxes out at 1080p60, and the
| newest Zelda only at 900p30 even when docked
|
| https://en.m.wikipedia.org/wiki/Nintendo_Switch
| VyseofArcadia wrote:
| I believe both the PS5 and whatever nonsense string of Xs,
| numbers, and descriptors MS named this gen's console can do
| 144Hz output. I don't know how many games take advantage of
| that or whether that refresh rate is common on TVs.
| philistine wrote:
| 60 FPS isn't even promised on PS5 Pro. Most graphically
| demanding titles still aim for 30 FPS on consoles, and any
| game able to support 60 FPS consistently is worth noting.
| Narishma wrote:
| What they said is true. There are some games with 120 FPS
| modes on PS5 and Series X, maybe even series S. That
| doesn't mean every game (or even most) are like that,
| just that the hardware supports it. At the end of the day
| you can't stop developers targeting whatever framerate
| they want.
| marxisttemp wrote:
| I play Fortnite and Call of Duty at 120Hz VRR on Xbox
| Series X.
| astrobe_ wrote:
| I've heard that a good reaction time is around 200 ms, some
| experiments seem to confirm this figure [1]. At 60Hz, a frame
| is displayed every 17 ms.
|
| So it would take a 12-frame animation and a trained gamer
| for a couple of frames to make a difference (e.g. push the
| right button before the animation ends and the opponent's
| action takes effect).
|
| [1] https://humanbenchmark.com/tests/reactiontime/statistics
| underdeserver wrote:
| I'm not sure this is the right way to look at it. I can't
| find stats right now, but I recall reading top players
| making frame-perfect moves in games like Smash Bros. Melee
| and Rocket League.
| philistine wrote:
| Frame perfect moves are exceedingly common in most top
| fields. Just watch any video about the latest speedruns.
|
| The thing with latency is it needs to be consistent. If
| your latency is between 3 to 5 frames you blew it because
| you can't guarantee the same experience on every button
| press. If you always have 3 frames of latency, with
| modern screens, analog controls, and game design aware of
| those limitations, that's much better. Look at modern
| games like _Celeste_ , which introduced _Coyote Time_
| to account for all the latency of our modern hardware.
| mywittyname wrote:
| The mistake with focusing on reaction time is that humans
| can anticipate actions and can perform complex sequences
| of actions pretty quickly (we have two hands and 10
| fingers). So someone playing one of those "test your
| reaction time" games might only score like 30ms. But
| someone playing a musical instrument can still play a
| 64th note at 120BPM.
|
| Imagine playing a drum that took between 0 and 5 extra
| frames at 60FPS between striking the head and it
| producing a sound. Most people would notice that kind of
| delay, even if they can't "react" that quickly.
|
| In games, frame delay translates to having to hold down a
| key (or wait before pressing the next one) for longer
| than is strictly necessary in order to produce an effect.
| Since fighting games are all about key sequences, the
| difference between needing to hold a key for 0 frames and 5
| frames is massive when you consider key combinations
| might be sequences of up to 5 key presses. 5 frames of
| delay x five sequential key presses x ~17ms a frame is
| roughly 420ms, vs 1 frame x 5 seq. key presses x ~17ms,
| which is roughly 85ms.
|
| There's a massive difference between taking ~0.4s to
| execute a complex move and less than 0.1s.
| leni536 wrote:
| Another example is music (and relatedly, rhythm games).
| With memorized music you have maximal anticipation of
| actions. The regular rhythm only amplifies that
| anticipation. Musicians can be very consistent at timing
| (especially the rhythm section), and very little latency or
| jitter can throw that off.
| sjm wrote:
| Reaction time is completely different to the input latency
| Carmack is worrying about in his scenario. Imagine if you
| thought I'm going to move my arm, and 200ms later your arm
| actually moved. Apply the same to a first-person shooter
| --- imagine you nudge your mouse slightly, and 200ms later
| you get some movement on screen. That is ___hugely___
| noticeable.
| harrison_clarke wrote:
| https://www.researchgate.net/publication/266655520_In_the_b
| l...
|
| this is for a stylus, but people can detect input latency
| as low as 1ms (possibly lower)
|
| with VR, they use the term "motion to photon latency", and
| if it's over ~20ms, people start getting dizzy. at 200ms,
| nobody is going to be keeping their lunch down
|
| google noticed people making fewer searches if they delayed
| the result by 100ms
|
| edit: if you want an easy demo, open up vim/nano over ssh,
| and type something. then try it locally
| munificent wrote:
| Why would you even bother running at a game at 120Hz if the
| user's response to what's being drawn is effectively 24-30
| FPS?
| kragen wrote:
| You've seen games running at 120Hz and at 60Hz. The
| difference is obvious, isn't it? The difference between
| 24Hz and 60Hz is certainly obvious: that's the visual
| difference between movies and TV sitcoms.
|
| I can type about 90 words per minute on QWERTY, which is
| about 8 keystrokes per second. That means that the
| _average_ interval between keystrokes is about 120
| milliseconds, already significantly less than my
| 200-millisecond reaction time, and many keystrokes are
| closer together than that--but I rarely make typographical
| errors. Fast typists can hit 150 words per minute.
| Performing musicians consistently nail note timing to
| within about 40 milliseconds. So it turns out that people
| do routinely time their physical movements a lot more
| precisely than their reaction time. Their _jitter_ is much
| lower than their _latency_ , a phenomenon you are surely
| familiar with in other contexts, such as netcode for games.
|
| If someone's latency is 200 milliseconds but its jitter
| (measured as standard deviation) is 10 milliseconds, then
| reducing the frame latency from a worst-case 16.7
| milliseconds (or 33.3 milliseconds in your 30Hz example) to
| a worst-case 8.3 milliseconds, and average-case 8.3
| milliseconds to average-case 4.2 milliseconds, you're
| knocking a whole 0.42 standard deviations off their
| latency. If they're playing against someone else with the
| same latency, that 0.42 _s_ advantage is very significant!
| I think they 'll win almost 61% of the time, but I'm not
| sure of my statistics+.
|
| See also https://danluu.com/input-lag/#appendix-why-
| measure-latency:
|
| > _Latency matters! For very simple tasks, people can
| perceive latencies down to 2 ms or less. Moreover,
| increasing latency is not only noticeable to users, it
| causes users to execute simple tasks less accurately. If
| you want a visual demonstration of what latency looks like
| and you don't have a super-fast old computer lying around,
| check out this MSR demo on touchscreen latency._
|
| > _The most commonly cited document on response time is the
| nielsen group[sic] article on response times, which claims
| that latncies[sic] below 100ms feel equivalent and
| perceived[sic] as instantaneous. One easy way to see that
| this is false is to go into your terminal and try_ sleep 0;
| echo "pong" _vs._ sleep 0.1; echo "test" _(or for that
| matter, try playing an old game that doesn 't have latency
| compensation, like quake 1, with 100 ms ping, or even 30 ms
| ping, or try typing in a terminal with 30 ms ping). For
| more info on this and other latency fallacies, see this
| document on common misconceptions about latency._
|
| (The original contains several links substantiating those
| claims.)
|
| https://danluu.com/keyboard-latency/#appendix-counter-
| argume... has a longer explanation.
|
| ______
|
| + First I tried sum(rnorm(100000) < rnorm(100000) +
| 0.42)/1000, which comes to about 61.7 (%). But it's not a
| consistent 0.42 _s_ of latency being added; it 's a random
| latency of up to 0.83 _s_ , so I tried sum(rnorm(100000) <
| rnorm(100000) + runif(100000, max=0.83))/1000, which gave
| the same result. But that's not taking into account that
| actually both players have latency, so if we model random
| latency of up to a frame for the 60Hz player with
| sum(rnorm(100000) + runif(100000, max=1.67) > rnorm(100000)
| + runif(100000, max=0.83))/1000, we get more like a 60.8%
| chance that the 120fps player will out-twitch them. I'm
| sure someone who actually knows statistics can tell me the
| correct way to model this to get the right answer in closed
| form, but I'm not sure I could tell the correct closed-form
| formula from an incorrect one, so I resorted to brute
| force.
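| The same brute-force check, sketched in C++ for anyone without
| R handy (same made-up parameters as above, nothing measured):
|     #include <cstdio>
|     #include <random>
|
|     int main() {
|         std::mt19937 rng(42);
|         // Both players: normally distributed response (sd = 1
|         // jitter unit) plus a uniform frame delay, 0..1.67
|         // units at 60Hz and 0..0.83 units at 120Hz.
|         std::normal_distribution<double> human(0.0, 1.0);
|         std::uniform_real_distribution<double> d60(0.0, 1.67);
|         std::uniform_real_distribution<double> d120(0.0, 0.83);
|
|         const int trials = 100000;
|         int wins = 0;
|         for (int i = 0; i < trials; ++i) {
|             double t60  = human(rng) + d60(rng);
|             double t120 = human(rng) + d120(rng);
|             if (t120 < t60) ++wins;  // 120Hz player was faster
|         }
|         std::printf("120Hz player wins %.1f%% of the time\n",
|                     100.0 * wins / trials);
|         return 0;
|     }
| It should land near the ~61% figure above.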
| munificent wrote:
| _> You 've seen games running at 120Hz and at 60Hz. The
| difference is obvious, isn't it?_
|
| Honestly, I have not. I'm not much of a gamer, even
| though I used to be a game developer.
|
| Certainly the difference between 30Hz and 60Hz is
| noticeable.
|
| Maybe this is just because I'm old school but if it were
| me, I would absolutely prioritize low latency over high
| frame rate. When you played an early console game, the
| controls felt like they were concretely wired to the
| character on screen in a way that most games I play today
| lack. There's a really annoying spongey-ness to how games
| feel that I attribute largely to latency.
|
| I don't really give a shit about fancy graphics and
| animation (I prefer 2D games). But I want the controls to
| feel solid and snappy.
|
| I also make electronic music and it's the same thing
| there. Making music on a computer is wonderful and
| powerful in many ways, but it doesn't have the same
| immediacy as pushing a button on a hardware synth (well,
| on most hardware synths).
| kragen wrote:
| Oh! I assumed that because you were a famous game
| developer you would hang out with gamers who would
| proudly show off their 120Hz monitor setups.
|
| I agree that low latency is more important than high
| frame rate, and I agree about the snappiness. But low
| _jitter_ is even more important for that than low
| latency, and a sufficiently low frame rate imposes a
| minimum of jitter.
|
| Music is even less tolerant of latency, and PCM measures
| its jitter tolerance in single-digit microseconds.
| marxisttemp wrote:
| You're still getting more information, which allows you to
| be more accurate with your inputs e.g. tracking a moving
| target.
| chandler5555 wrote:
| yeah but when people talk about input lag for consoles its
| generally still in the 60hz sense, rare for games to be 120hz
|
| smash brothers ultimate for example runs at 60fps and has 5-6
| frames of input lag
| doctorpangloss wrote:
| > In this era of 3-5 frame latency being the norm (at least on
| e.g. the Nintendo Switch)
|
| Which titles is this true for? Have you or anyone else
| measured?
| kllrnohj wrote:
| You're over-crediting Carmack and under-crediting current game
| devs. 3-5 frames might be current end-to-end latency, but
| that's not what Carmack is talking about. He's just talking
| about the game loop latency. Even at ~4 frames of end-to-end
| latency, he'd be talking about an easily avoided 20%
| regression. That's still huge.
| BenoitEssiambre wrote:
| Here are some information theoretic arguments why inlining code
| is often beneficial:
|
| https://benoitessiambre.com/entropy.html
|
| In short, it reduces scope of logic.
|
| The more logic you have broken out to wider scopes, the more
| things will try to reuse it before it is designed and hardened
| for broader use cases. When this logic later needs to be updated
| or refactored, more things will be tied to it and the effects
| will be more unpredictable and chaotic.
|
| Prematurely breaking out code is not unlike using a lot of global
| variables instead of variables with tighter scopes. It's more
| difficult to track the effects of change.
|
| There's more to it. Read the link above for the spicy details.
| norir wrote:
| This is why I think it's a mistake that many popular languages,
| including standard c/c++, do not support nested function
| definitions. This for me is the happy medium where code can be
| broken into clear chunks, but cannot be called outside of the
| intended scope. A good compiler can also detect if the nested
| function is only called once and inline it.
| badmintonbaseba wrote:
| C++ has lambdas and local classes. Local classes have some
| annoying arbitrary limitations, but they are otherwise
| useful.
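| A minimal sketch of the lambda version (made-up example, not
| from the thread):
|     #include <cstdio>
|     #include <vector>
|
|     int main() {
|         std::vector<int> samples{3, 1, 4, 1, 5};
|
|         // A lambda acts as a nested function: its name only
|         // exists in this scope, so nothing else can grow a
|         // dependency on it, and a single call site is easy
|         // for the compiler to inline.
|         auto average = [](const std::vector<int>& v) {
|             long sum = 0;
|             for (int x : v) sum += x;
|             return v.empty() ? 0.0 : double(sum) / v.size();
|         };
|
|         std::printf("avg = %.2f\n", average(samples));
|         return 0;
|     }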
| knodi123 wrote:
| After spending a lot of time writing idiomatic React
| components in es6, I've found my love of locally declared
| lambdas to really grow. If I give the lambdas really good
| names, I find that the main body of my component is very,
| very readable, even more so than if I'd used a more
| traditional style liberally sprinkled with comments.
| zelphirkalt wrote:
| Giving your lambdas names defeats part of their purpose
| though.
| marcosdumay wrote:
| > If I give the lambdas really good names
|
| That's a really funny way to say it.
| humanfromearth9 wrote:
| In Java, a local function reference (defined inside a method
| and never used outside of this method) is possible. Note
| that this function is not really tied to an object, which is
| why I don't call it a method, and I don't use the expression
| "method reference"; it is just tied to the function that
| contains it, which may be a method - or not.
| kccqzy wrote:
| Code can always be called outside of that scope just by
| returning function pointers or closures. The point is not to
| restrict calling that code, but to restrict the ability to
| refer to that piece of code by name.
|
| As mentioned by others, C++ has lambdas. Even if you don't
| use lambdas, people used to achieve the same effect by using
| plenty of private functions inside classes, even though the
| class might have zero variables and simply holds functions.
| In even older C code, people are used to making one separate
| .c file for each public function and then define plenty of
| static functions within each file.
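| A minimal sketch of that older C-style pattern, as a single
| translation unit with made-up names:
|     // process_order.cpp -- one public function; the helpers
|     // are static, so they are invisible to other source files.
|     #include <cstdio>
|
|     static void validate(int order_id) {
|         std::printf("validating %d\n", order_id);
|     }
|
|     static void charge(int order_id) {
|         std::printf("charging %d\n", order_id);
|     }
|
|     void process_order(int order_id) {  // the only external name
|         validate(order_id);
|         charge(order_id);
|     }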
| zelphirkalt wrote:
| Of course all this needs to be weighed against maintainability
| and readability of the code. If the code base is not mainly
| about something very performance critical where this kind of
| thing shows up as a bottleneck, then changing things away from
| more readable towards a performance-optimized implementation
| would require a very good justification. I doubt that this
| kind of optimization is justified in most cases. For that
| reason I find the wording "prematurely breaking out code" to be
| misleading. In most cases one should probably prioritize
| readability and maintainability and if breaking code out helps
| those, then it cannot be premature. It could only be premature
| from a performance limited perspective, which might have not
| much to do with the use case/purpose of the code.
|
| It is nice if a performance optimization manages to keep the
| same degree of readability and maintainability. With those
| concerns covered, sure, we should go ahead and make the
| performance optimization.
| BenoitEssiambre wrote:
| What I'm advocating here is only coincidentally a performance
| optimization. Readability and maintainability (and improved
| abstraction) are the primary concern and benefit of
| (sometimes) keeping things inline or more specifically of
| reducing entropy.
| donatj wrote:
| I have a coworker that LOVES to make these one or two line single
| use functions that absolutely drives me nuts.
|
| Just from a sheer readability perspective being able to read a
| routine from top to bottom and understand what everything is
| doing is invaluable.
|
| I have thought about it many times, I wish there was an IDE where
| you could expand function calls inline.
| wodenokoto wrote:
| It's called "self documenting code" and the way you self
| document code it is by taking all your comments and make them
| into functions, named after your would-be comment.
|
| I'm not a fan either.
| MetaWhirledPeas wrote:
| Everything must be done to taste. I think code can be made
| "self-documenting" without going overboard and doing silly
| things.
| Tyr42 wrote:
| Sometimes it's easier to define some vocabulary and then use
| it. Like defining push and pop on a stack vs stack[++ix] = blah
| and blah = stack[ix--].
|
| And it avoids needing to think about it being prefix or
| postfix after you've gotten it right that one time.
|
| But at other times it's insufferable, when the abstraction is
| leaky and unintuitive.
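| For the stack example, the "vocabulary" is only a couple of
| one-liners (a minimal sketch, not any particular codebase):
|     #include <array>
|     #include <cstdio>
|
|     struct Stack {
|         std::array<int, 64> data{};
|         int top = 0;  // number of elements currently stored
|         // The pre/post-increment question is answered exactly
|         // once, here, instead of at every use site.
|         void push(int v) { data[top++] = v; }
|         int  pop()       { return data[--top]; }
|     };
|
|     int main() {
|         Stack s;
|         s.push(1);
|         s.push(2);
|         std::printf("%d %d\n", s.pop(), s.pop());  // 2 1
|         return 0;
|     }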
| zelphirkalt wrote:
| This can be done in a good way and in bad ways. With most code
| you will be calling builtin procedures/functions. You also
| don't look under the hood for those usually. But for the code
| of your coworker it seems to irritate you. This could mean many
| things. Just to name a few: (1) The names are not giving a good
| idea what those functions do. (2) The level of abstraction is
| not the same inside the calling function, so that you feel the
| need to check the implementation detail of those small
| functions. (3) You don't trust the implementation of those
| smaller functions. (4) The separated out functions could be not
| worth separating out and being given names, because what the
| code in them does is clear enough without them being separated
| out. (n) or some other reason.
|
| The issue does not have to be that those things are split out
| into separate small functions. The issue might be something
| else.
| endlessmike89 wrote:
| Link to the Wayback Machine cache/mirror, in case you're also
| experiencing a "Bad Gateway/Connection refused" error
|
| https://web.archive.org/web/20241009062005/http://number-non...
| oglop wrote:
| Oh good, a FP post. I love watching people argue over nothing.
|
| Here's the actual rule, do what works and ships. Don't posture.
| Don't lament. Don't idealize. Just solve the fucking problem with
| the tool and method that fits and move on.
|
| And do not try to use this comment threat to understand FP. Too
| many cooks, and most of them are condescending douchebags. Go look
| at Wikipedia or talk with an AI about it. Don't ask this place,
| it's all just lectures and nitpicks.
| eapriv wrote:
| "Comment threat" is a nice one.
| adamrezich wrote:
| This isn't actually an FP post.
| easeout wrote:
| Come to think of it, execute-and-inhibit style as described here
| is exactly what's going on when in continuous deployment you run
| your same pipeline many times a day with small changes, and gate
| new development behind feature flags. We're familiar with the
| confidence derived from frequently repeating the whole job.
| adamrezich wrote:
| I find that when initially exploring a problem space, it's useful
| to consider functions as "verbs" that help me think through the
| solution to my problem--I've isolated some_operation() into its
| own function, and it's easy to see at a glance whether or not
| some_operation() does the specific thing its name claims to do
| (and if so, how well).
|
| But then after things have solidified somewhat, it's good
| practice to go back through your code and determine whether those
| "verbs" ended up being used more than once. Quite often,
| something that I thought would be repeated enough to justify
| being its own function, is actually only invoked in one specific
| place--so I go back and inline these functions as needed.
|
| The less my code looks like a byzantine tangle of function
| invocations, and the more my code reads like a straightforward
| list of statements to execute in order, the better it makes me
| feel, because I know that I'm not unnecessarily hiding
| complexity, and I can get a better, more concrete feel for what
| my program's execution looks like.
| dang wrote:
| Related:
|
| _John Carmack on Inlined Code_ -
| https://news.ycombinator.com/item?id=39008678 - Jan 2024 (2
| comments)
|
| _John Carmack on Inlined Code (2014)_ -
| https://news.ycombinator.com/item?id=33679163 - Nov 2022 (1
| comment)
|
| _John Carmack on Inlined Code (2014)_ -
| https://news.ycombinator.com/item?id=25263488 - Dec 2020 (169
| comments)
|
| _John Carmack on Inlined Code (2014)_ -
| https://news.ycombinator.com/item?id=18959636 - Jan 2019 (105
| comments)
|
| _John Carmack on Inlined Code (2014)_ -
| https://news.ycombinator.com/item?id=14333115 - May 2017 (2
| comments)
|
| _John Carmack on Inlined Code (2014)_ -
| https://news.ycombinator.com/item?id=12120752 - July 2016 (199
| comments)
|
| _John Carmack on Inlined Code_ -
| https://news.ycombinator.com/item?id=8374345 - Sept 2014 (260
| comments)
| kragen wrote:
| There is a longer version of this thought-provoking post, also
| including Carmack's thoughts in 02012, at
| https://cbarrete.com/carmack.html. But maybe that version has
| not also had threads about it.
| dang wrote:
| It doesn't seem to have, since
| https://news.ycombinator.com/from?site=cbarrete.com is empty.
|
| Should we change the top link to that URL?
| kragen wrote:
| I do think it's a better page, but I wouldn't change the
| link if I were in charge. On the other hand, I think
| everyone is grateful that you're in charge of HN and not
| me. _Especially_ not me. So I think you should use your
| judgment.
| fabiensanglard wrote:
| How does a program work when it disallows "backward branches"?
| Same thing with "subroutine calls": how do you structure a program
| without them?
| kragen wrote:
| Well, you have one backward branch at the end of the program,
| and you inline your subroutines. I'm pretty sure you've written
| shaders for ancient GPUs that had similar limitations? And
| anything you can do in hardware you can do without subroutine
| calls, and in hardware the loop starts again on every clock
| cycle.
| bdowling wrote:
| You can do a lot with a program that looks like:
|     while (1) {
|         if (condition1) ...
|         if (condition2) ...
|         // etc
|     }
|
| Subroutine calls can be eliminated by inlining everything,
| using macros to make the code more manageable. Loops can be
| simulated using macros that expand to multiple copies of the
| code, one for each step.
|
| One advantage is that the program will never get into an
| unbounded loop because the program counter will always advance
| towards the end of the main loop.
| Jtsummers wrote:
| It allows one backward branch. Think of it like hand-rolling
| your OS scheduler for processes/threads. You also have to track
| your "program counter" yourself. As a silly example:
|     typedef enum state {EVEN, ODD} state_t;
|     state_t task1 = EVEN;
|     state_t task2 = EVEN;
|     while (1) {
|         switch (task1) {
|             case EVEN: /* do even things */ task1 = ODD;  break;
|             case ODD:  /* do odd things */  task1 = EVEN; break;
|             default: fprintf(stderr, "WTF?\n"); exit(1);
|         }
|         switch (task2) {
|             case EVEN: /* do even things */ task2 = ODD;  break;
|             case ODD:  /* do odd things */  task2 = EVEN; break;
|             default: fprintf(stderr, "WTF?\n"); exit(1);
|         }
|     }
|
| For every "process" you've unrolled like this, you have to
| place it into its own switch/case or call out to a function
| which has similar logic (when subroutines aren't disallowed).
| If the process is short enough you let it execute all the way
| through, bigger processes would need to be broken apart like
| above to avoid consuming an entire cycle's time (especially
| important in real-time systems).
| low_tech_love wrote:
| Interesting: this is a 2014 post from Jonathan Blow reproducing a
| 2014 comment by John Carmack reproducing a 2007 e-mail by the
| same Carmack reproducing a 2006 conversation (I assume also via
| e-mail) he had with a Henry Spencer reproducing something else
| the same Spencer read a while ago and was trying to remember
| (possibly inaccurately?).
|
| I wonder what is the actual original source (from Saab, maybe?),
| and if this indeed holds true?
| EGreg wrote:
| Is this kind of like 300 was a movie about a Frank Miller novel
| about a Greek legend about the actual Battle of Thermopylae?
| lencastre wrote:
| I'm not even pretending I understood Carmack's email/mailing list
| post but if more intelligent/experienced programmers than me care
| to help me out, what exactly is meant by this he wrote in 2007:
|
| _If a function is called from multiple places, see if it is
| possible to arrange for the work to be done in a single place,
| perhaps with flags, and inline that._
|
| Thanks,
| tcoville wrote:
| This is a heavily simplified version of what I'm suspecting
| he's trying to portray. The key is that this wouldn't be useful
| for utility functions like string manipulation, but more for
| business logic being used across similar functions:
|     def processOrder():
|         # Some common processing logic
|         print("Processing the order...")
|
|     def placeOnlineOrder():
|         processOrder()
|         print("Sending confirmation email...")
|
|     def placeInStoreOrder():
|         processOrder()
|         print("Printing receipt...")
|
|     # Calls from different locations
|     placeOnlineOrder()
|     placeInStoreOrder()
|
| Could become:
|     def processOrder(order_type):
|         # Common processing logic
|         print("Processing the order...")
|         if order_type == "online":
|             print("Sending confirmation email...")
|         elif order_type == "in_store":
|             print("Printing receipt...")
|
|     # Unified calls with different flags
|     processOrder("online")
|     processOrder("in_store")
| fluoridation wrote:
| That... looks decidedly worse. Now you have fewer functions,
| but each has to be concerned with multiple unrelated things
| for no reason.
| kazinator wrote:
| In my opinion, there is value in functions that have only one
| caller: it's called functional decomposition. The right
| granularity of functional decomposition can make the logic easier
| to understand.
|
| To prevent unintended uses of a helper function in C, you can
| make it static. Then at least nothing from outside of that
| translation unit can call it.
| atulvi wrote:
| who read this in John Carmack's voice?
___________________________________________________________________
(page generated 2024-10-09 23:01 UTC)