[HN Gopher] Most UI applications are broken real-time applications
___________________________________________________________________
Most UI applications are broken real-time applications
Author : todsacerdoti
Score : 100 points
Date : 2023-09-21 17:34 UTC (5 hours ago)
(HTM) web link (thelig.ht)
(TXT) w3m dump (thelig.ht)
| m463 wrote:
| I believe I agree.
|
| The iPhone must have either a realtime UI or something pretty
| snappy: it responds to finger inputs in a bounded way.
| valbaca wrote:
| Dispatch queue. Android has the equivalent thing.
| [deleted]
| kridsdale3 wrote:
| More than that even. Since iOS 1, the majority of compute-
| intense work after a touch event has happened in an entirely
| different process, which has the highest OS-scheduler
| priority. This is what CoreAnimation is based on. Unlike a
| game engine or other "realtime" app, the OS itself is doing
| everything possible including preempting your main-thread to
| ensure that anything animating takes priority.
| deepsun wrote:
| I believe it's pretty easy to resolve -- declare one thread as
| "UI Thread", and forbid doing any IO on it.
|
| Android enforces that (and I believe iOS does too).
|
| It's up to the developer to show a meaningful message/animation
| if an IO operation takes noticeable time.
| kudokatz wrote:
| > Android (and I believe iOS does too) enforce that.
|
| This is absolutely not true. Android simply detects that there
| has been no progress on the UI thread "for a few seconds"
| before force-closing the app [1]. By this time, the interaction
| has been janky/frozen for WAY too long. If you have seen bad
| iOS scrolling and lock-ups, you know this as well.
|
| I have worked on mobile software for these apps that have
| billions of users. When I pointed out how much stuff ran on the
| UI thread, there was a collective "this is just the way it is"
| response and life went on.
|
| It's super-depressing.
|
| -----
|
| [1] "Performing long operations in the UI thread, such as
| network access or database queries, blocks the whole UI. When
| the thread is blocked, no events can be dispatched, including
| drawing events.
|
| From the user's perspective, the application appears to hang.
| Even worse, if the UI thread is blocked for more than a few
| seconds, the user is presented with the "application not
| responding" (ANR) dialog."
|
| https://developer.android.com/guide/components/processes-and...
| _greim_ wrote:
| Web browsers should add a DOM method called `addEventStream()` to
| supplement `addEventListener()`. It would still accept an event
| type--e.g. `button.addEventStream('click')`--but would not accept
| a handler function. It would just return an async iterator of
| event objects. Backpressure on the iterator would place the
| element into a `:waiting` state similar to `:disabled` in which
| the element would become non-interactive. All UI event handling
| becomes a data processing pipeline.
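|
| A minimal userland sketch of roughly that idea, using only
| existing DOM APIs (addEventListener plus an async generator);
| the proposed ':waiting' backpressure state has no standard
| equivalent, so the usage example below just toggles 'disabled'
| while the consumer is busy, and 'handleClick' is hypothetical:
|
|     // Wrap a DOM event type as an async iterator of events.
|     function eventStream(element, type) {
|       const queue = [];
|       let wake = null;
|       element.addEventListener(type, (event) => {
|         queue.push(event);
|         if (wake) { wake(); wake = null; }  // resume a waiting consumer
|       });
|       return (async function* () {
|         while (true) {
|           while (queue.length === 0) {
|             await new Promise((resolve) => { wake = resolve; });
|           }
|           yield queue.shift();
|         }
|       })();
|     }
|
|     // Usage: event handling becomes a data-processing loop.
|     // for await (const click of eventStream(button, 'click')) {
|     //   button.disabled = true;    // crude stand-in for ':waiting'
|     //   await handleClick(click);  // hypothetical async handler
|     //   button.disabled = false;
|     // }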
| nitwit005 wrote:
| > This is a transparent process that is not under control of the
| application. Thus, if any given memory access can block on IO
| from a disk drive, that means the system is fundamentally not
| real-time, therefore UI applications on such a system are
| fundamentally broken.
|
| Not really. It means they chose to degrade performance when too
| much memory is used, rather than crash.
|
| All options available when "out of memory" are bad. They thought
| that was the least bad option.
| tantalor wrote:
| > So correct UI applications cannot call any blocking function
| from their main threads.
|
| Author is redefining "correct" to mean "has the property I care
| about, which in this case is performance".
|
| There are _many_ desirable distinct properties of a computer
| system: correctness, performance, security, etc.
|
| "Correct" usually means something like "gives the right answer".
| It has _nothing_ to do with performance.
| anonymoushn wrote:
| For a game like Street Fighter, you can present to someone the
| time at which the first frame begins and all of the input
| events within the game and they can calculate the unique game
| outcome that results. It's a bit embarrassing that we cannot
| have the same property in applications that we interact with
| using our keyboards all day, and that we must constantly look at
| them to check whether they have actually registered the inputs
| we have sent.
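|
| A sketch of the property being described, assuming a pure,
| hypothetical update(state, input) function: the final state is
| fully determined by the initial state and the ordered input log,
| so anyone who has both can recompute it exactly.
|
|     // Deterministic replay: same inputs in, same outcome out.
|     function replay(initialState, inputLog, update) {
|       return inputLog.reduce((state, input) => update(state, input),
|                              initialState);
|     }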
| cobbal wrote:
| In my experience, "Correct" means "satisfies a specification".
| The specification is often about the final result, but it's not
| limited to it.
| cobbal wrote:
| As an example, I would call a function called "mergeSort"
| that takes O(n^2) time incorrect.
| tantalor wrote:
| You can have it fast, or correct, or both, or neither.
|
| Here's an extremely fast sorting algorithm that's
| incorrect:
|
|     function cobbalSort(arr) {
|       return arr;
|     }
| jacobsenscott wrote:
| It is well known that general purpose operating systems do not
| allow real time applications. The reason is people prefer an OS
| that can do a lot of things well enough, vs a system that can do
| one thing in real time. Real-time OSes exist, and you would never
| want to build a desktop environment on them.
|
| Also, the OS will never become the bottleneck for any general
| purpose software because you'll never have a team of programmers
| good enough to make it so. All the performance issues will be due
| to mistakes in the application itself.
| dzaima wrote:
| The reality is that optimizing for the common scenario just gives
| such significant benefits that the worst-case scenario becomes
| practically infeasible to even consider.
|
| And it's not just OSes that don't particularly care about real-
| time - modern processors don't either. e.g., in the common
| scenarios it's possible to load maybe eight 8-byte values per
| nanosecond (maybe eight 32-byte values if you're doing SIMD), but
| if they're out of cache, it could take hundreds of nanoseconds
| per byte. Many branches will be predicted and thus will not
| affect latency or throughput at all, but some will be
| mispredicted & delay execution by a dozen or so nanoseconds. On
| older processors, a float add could take orders of magnitude more
| time if it hit subnormals.
|
| If you managed to figure out the worst-case timings of everything
| on modern processors, you'd end up at pre-2000s speeds.
|
| And virtual memory isn't even the worst OS-side thing - disk
| speed is, like, somewhat bounded. But the number of processes
| running in parallel to your app is not, so if there are a
| thousand processes that want 100% of all CPU cores, your UI app
| will necessarily be able to utilize only 0.1% of the CPU. [edit
| note: the original article has been edited to have a "Real-time
| Scheduling" section, but didn't originally]
|
| So, to get real-time UIs you'd need to: 1. revert CPUs to
| pre-2000s speeds; 2. write the apps to target such; 3. disable
| the ability to run multiple applications at the same time. No
| one's gonna use that.
| anonymoushn wrote:
| I feel like this comment is from an alternate reality where the
| Windows lock screen doesn't just drop all your inputs for a
| random period of at least a few seconds when you try to enter
| your password.
| dzaima wrote:
| It is very much true that a large number of things could be
| improved over the status quo with rather minimal effort,
| resolving the vast majority of the brokenness of UI
| responsiveness. But decreasing
| dropped frames/inputs by 10x or 100x is still just simply not
| gonna make it "real-time".
|
| I remember the Windows lock screen being particularly
| annoying (an animation had to finish...?) but on Linux Mint
| I've always been able to start typing away immediately, so
| much so that I start blindly typing in the password before my
| monitor has finished turning on. Queueing events (keyboard
| ones at least) should be pretty simple to do properly, but of
| course many things still get it wrong.
| btilly wrote:
| The article fails to distinguish between hard real-time and soft
| real-time.
|
| If you're controlling a bandsaw, you've got a hard real-time
| application. You can't miss your window for the next instruction.
|
| Most user interfaces are soft real-time. Occasionally missing
| your window is fine. And so it is OK to do things like hash
| lookups whose average performance is O(1), and whose worst case
| performance is O(n). Ditto for dynamically resizing arrays. As
| long as you're back in time, most of the time, it can be OK.
|
| The problem isn't that we code UIs as soft real-time. It is that
| the soft bit gets squishier with each layer of abstraction. And
| there is a feedback loop where people get more and more used to
| slow applications, so nobody is concerned if their application is
| unnecessarily slow as well.
|
| Compounding this is the fact that we often measure performance in
| terms of throughput, instead of latency. Therefore, every single
| device and peripheral is willing to lose a bit of latency. It
| happens at all levels. The classic I like to quote is
| http://www.stuartcheshire.org/rants/latency.html.
|
| And the result is that modern applications on modern hardware are
| less responsive than older applications on older hardware. Fixing
| it is a question of fixing a lot of little problems. And, as we
| used to joke about Microsoft, what's the point of developing
| fault-tolerant software when we've already developed fault-
| tolerant users?
| Groxx wrote:
| Agreed here. While I wish more frameworks did what Android did
| and allowed you to literally block file and network access on the
| main thread (an opt-in strict mode that I wish were enabled by
| default)... worrying about paged memory in an active GUI is so,
| so much further down the hard-real-time branch of things that
| the effort/reward ratio is literally insane in almost all
| cases. It may be a fun challenge for some, but it'll absolutely
| never be a major use case, nor should it be.
| adroitboss wrote:
| So is the moral of the story to build an entire OS like
| Erlang's actor model? Doesn't that work by using lightweight
| threads and message passing between them? The supervisor gives
| each thread a certain amount of time to run and then puts the
| current thread on the back burner while the next thread runs for
| its allotted time. I remember hearing it in an Erlang talk by
| Joe Armstrong on YouTube. I can't remember which one though.
| amelius wrote:
| Sounds like you'd end up with a microservices architecture. Not
| a fan.
| moritzwarhier wrote:
| I felt like, by reading the headline alone, I knew exactly what
| this was talking about.
|
| Edit: I was completely wrong, but the article is very
| interesting.
| valbaca wrote:
| > when I realized that most mainstream desktop UI applications
| were fundamentally broken.
|
| Oh boy
|
| > File system IO functions belong to a class of functions called
| blocking functions.
|
| Hasn't non-blocking IO been a major feature for about a decade
| now??
| yetanotherloss wrote:
| More like three decades. But you'll need to send the memo to
| small time players like Microsoft, who continue doing all kinds
| of stupid blocking IO on every possible user interaction
| because who even fucking knows.
| anonymoushn wrote:
| > Hasn't non-blocking IO been a major feature for about a
| decade now??
|
| This doesn't make people use it, and even the ones that try to
| use it might erroneously expect that open(2) will return in
| less than a second.
| deepspace wrote:
| Someone has not told Microsoft.
|
| It is not as simple as that.
|
| On some Windows machines with network-mounted drives, the File-
| Print-to-pdf dialog takes *minutes* to become responsive, even
| when all currently open files are on a local drive.
|
| This is the kind of thing the author is talking about. The
| programmers of that dialog box probably just called a generic
| "open file dialog" library function, without researching its
| worst-case performance.
|
| In turn the library writers probably blithely coded something
| like "check if all mounted drives are accessible", without
| stopping to consider the worst-case performance.
| pradn wrote:
| The file picker on Windows should be thought of as more like
| a process than a function call. It has to 1) read the
| metadata of the current directory's files 2) look up which
| icon to use for each 3) load the icon 4) call apps that may
| modify the file icon overlay (like "synced" or "not synced"
| cloud icons for cloud file systems like Google Drive) 5) read
| the registry for various settings 6) load which pinned files
| a user set up - etc. All this involves dozens of disk
| reads, calling into third party code, etc. A lot of this may
| be cached but who knows.
|
| The alternative to this is to roll a barebones file picker -
| there might even be one available in the Windows API.
| pdonis wrote:
| _> Someone has not told Microsoft_
|
| More like the part of Microsoft that implemented non-blocking
| I/O for Windows some time ago never bothered to tell the part
| of Microsoft that writes the generic Windows UI code for
| things like the open file dialog. Or for Microsoft Office
| applications, for that matter; I still see Word and Excel
| block the UI thread when opening a file from a network drive,
| even though Windows has perfectly good asynchronous file I/O
| API calls.
| kudokatz wrote:
| > never bothered to tell the part of Microsoft that writes
| the generic Windows UI code
|
| no, they just don't care.
| seanalltogether wrote:
| Even if you rebuilt the entire OS and libraries on a
| separation of sync vs async code, you still come to the
| inevitable problem of propagating delays to the user in a
| predictable manner.
|
| So I've pressed a button to load a file that's usually really
| fast and this time nothing happens because the async call is
| taking its time. Is that being represented to the user in some
| meaningful way? Is the button disabled until the operation
| completes? Is it stuck in the down position to show I can't press
| it again, or does it spring back up and show a loading dialog? Is
| that any clearer in a real-time system than it would be in a
| blocking situation?
|
| The problem is not blocking vs realtime, it's about programmers
| not understanding all the state their code can run in, and it's
| not clear that a realtime system would save the user when they
| fall into that unknown state.
| loopz wrote:
| UI was pretty much solved in the 90's including the problem of
| instant feedback. Then abandoned for shiny stuff.
|
| Personally I set up i3 to open most windows asynchronously, so
| my flow isn't interrupted. It's great, but it takes a bit of
| getting used to windows not randomly stealing focus. It's not for
| everyone though.
| bhdlr wrote:
| > solved in the 90s
|
| Ya, you know, unless you didn't speak English, needed
| accessibility, had a nonstandard screen size, had a touch
| screen, etc
| dgan wrote:
| You are replying out of context: if your file is on a network
| drive, it doesn't matter whether you have a shiny UI or a
| text-based terminal, you're gonna wait for the round trip with
| your UI being unresponsive.
| pests wrote:
| That doesn't follow. Your UI can be responsive while the
| round trip happens. The application won't literally stop
| working 100%.
| theamk wrote:
| It depends on the application. Take a single-window editor
| (think Notepad) for example. If the "load" command is
| blocking, what can you do?
|
| You cannot allow editing - the existing buffer will be
| replaced once the load completes. You can show the menu, but
| most options should be disabled. And it will be pretty
| confusing for the user to see the existing file remain in
| read-only mode after the "open" command.
|
| The most common solution, if you expect loads to be slow,
| is a modal status box which blocks the entire UI but maybe
| shows progress + a cancel button. This definitely helps,
| but it's also a lot of extra code, which may not be warranted
| if usual loads are very fast.
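|
| A minimal browser-flavored sketch of that modal-progress-plus-
| cancel pattern, assuming hypothetical showModal/hideModal and
| replaceBuffer helpers, and using an AbortController so the slow
| load can actually be cancelled:
|
|     async function openFile(url) {
|       const controller = new AbortController();
|       // Block the UI behind a modal, but give the user a way out.
|       showModal('Loading...', () => controller.abort());
|       try {
|         const response = await fetch(url, { signal: controller.signal });
|         replaceBuffer(await response.text());
|       } catch (err) {
|         if (err.name !== 'AbortError') throw err;  // cancel is not an error
|       } finally {
|         hideModal();
|       }
|     }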
| pests wrote:
| Well, notepad is a bad example because it has tabs now.
| If the load command is blocking, show an indicator of
| that inside the tab. The user can switch to the other tab
| and have a full editing experience - this buffer
| obviously will not be replaced.
|
| If you pick another single-window app, my response is:
| that is a decision they chose to make. They can also
| choose to go multi-window or tabbed just like notepad
| did.
| amelius wrote:
| Only if the programmer wasn't on fast local storage when
| they tested their code.
| loopz wrote:
| Yep. Non-blocking threads were a thing in the '90s also.
| BeOS was probably the poster child of that.
| dagenleg wrote:
| Sounds great, how did you do that? What's the config option?
| loopz wrote:
| I believe it may depend on i3 version, but you can make it
| work with the no_focus command, and e.g. any window title,
| all, etc.: https://askubuntu.com/questions/1379653/is-it-
| possible-to-st...
| OliverGilan wrote:
| Can you be more specific about what was "solved" in the 90s?
| The platforms upon which apps are run now and the
| technologies as well as the expected capabilities of those
| apps have drastically changed. Not all UI development has
| changed just for being shiny.
|
| I see little reason why building a UI today cannot be a
| superset of what was solved in the 90s so I'm curious to know
| what that solved subset looks like to you
| loopz wrote:
| There were all kinds of standards and guidelines that made
| people recognize what a program was doing and how to
| operate it. Nowadays, UIs are mostly defective, trying their
| best to hinder effective usage and hide/strip
| functionality. There is of course progress too, but
| something was lost that makes applications today much less
| intuitive and harder to use.
| redserk wrote:
| I don't understand what specifically is of concern here.
| Do you have exact examples?
|
| Some platforms publish a list of recommended guidelines
| which are effectively a standard. For example here's one
| from Apple about when and how to use charts in an
| application: https://developer.apple.com/design/human-
| interface-guideline...
| loopz wrote:
| Some of it may be found in older HIG sources, sure:
| https://en.m.wikipedia.org/wiki/Human_interface_guidelines
|
| Also they were called GUI standards or UI at the time.
|
| The modern equivalent, called UX, doesn't reflect the
| same conglomeration of standards and conventions though.
| So I'm not talking about the newer stuff.
|
| I'm no expert on it, and it required specialized
| expertise. It's been abandoned for mobile interfaces and
| the modern UX stuff, which often optimizes for design
| over functionality.
|
| If you've never used old software, it's hard to explain.
| Old Apple or Microsoft GUI standards would cover the
| basics, but you'd also need to study the applications and
| how they presented their GUI.
| Groxx wrote:
| While I broadly agree with the UX/HIG/design guideline
| issues that are common in modern software... literally
| none of them have anything to do with the technicals of
| how quickly the UI reacts to actions and renders the next
| state. You can have responsive UI in all of them. And all
| the modern ones also say "jank is bad, don't do that".
| johannes1234321 wrote:
| Back in the day(tm) there were widget libraries as part
| of the OS, which followed the guidelines set by the OS
| vendor. This gave a foundation for somewhat similar
| behavior and standardisation of behavior, which gives
| usability.
|
| Nowadays many applications are web apps, built without
| such frameworks, with less UI research, and even where
| frameworks are used they are often built with a somewhat
| mobile-first approach.
| eitland wrote:
| Just start with the fact that all desktop programs had a
| menu bar where you could find every feature - and at least
| on Windows - also a shortcut for that feature.
|
| This was broken with Ribbon and the hamburger menus that
| every application seems to have switched to for no other
| reason it seems than to copy Chrome.
|
| To be fair Ribbon is somewhat usable again, but in the
| first version I have no idea how people were supposed to
| find the open and save functions :-)
|
| Other problems:
|
| Tooltips are gone. Yes, I can see they are hard to get
| right on mobile, but why remove them on desktop at the
| same time as the help files and the menus were removed?
|
| The result is even power users like me have to hunt through
| internet forums to figure out how to use simple features.
|
| Back in the nineties I could also insert a hyphen-if-needed
| (I have no idea what it is called, but the idea is that in
| languages like Norwegian and German, where we create new
| words by smashing other words together, it makes sense to
| put in invisible hyphens that activate whenever the word
| processor needs to break the word and disappear when the
| word is at the start or in the middle of a line and doesn't
| have to be split).
|
| Back in the nineties I was a kid on a farm. Today I am a
| 40+ year-old consultant who knows all these things used to
| be possible, but the old shortcuts are gone and I cannot
| even figure out if these features exist anymore, as the
| documentation is gone, tooltips are gone, and what
| documentation exists is autotranslated into something so
| ridiculously bad that I can hardly believe it. (In one
| recent example I found Microsoft had consistently
| translated the word for "sharing" (sharing a link) with the
| word for "stock" (the ones you trade).)
| [deleted]
| 4rt wrote:
| Exactly this, doing it right is possible but vastly more
| complicated and requires tons more UI work.
|
| It's not impossible - tape-drive jukeboxes with 5-minute
| delays aren't a new invention.
| theragra wrote:
| I'm happy even with JetBrains products. They display a "working"
| popup on blocking operations, and do a lot of stuff in the
| background, which doesn't block my work.
| senkora wrote:
| As a human, I would like to be able to treat user interfaces as
| predictable, physical objects.
|
| Unfortunately programs are built out of math and so I can't
| usually do this.
|
| Apple iOS actually does do a pretty good job of it, and that is a
| huge differentiator that I think is a large part of why iPhones
| feel great to use.
| valbaca wrote:
| Thank the dispatch queue; Xcode will yell at you if you do
| anything other than UI work and very trivial computation on the
| main UI thread.
| favorited wrote:
| Some of Apple's biggest perf wins in the last several years
| involved _removing_ dispatch asynchrony from code paths that
| really don't need to be asynchronous.
|
| Obviously if you're writing an event-loop web server, then
| your file IO needs to be non-blocking or you're hosed. On the
| other hand, if you're reading a small configuration file from
| disk after your app launches, the most responsive option may
| well be to just read the bytes on the current thread and
| continue.
|
| https://web.archive.org/web/20190606075031/https://twitter.c.
| ..
|
| https://web.archive.org/web/20211005132519/https://twitter.c.
| ..
| ComputerGuru wrote:
| I agree with the main gist of the article and a lot of the
| points, but disagree on a few specifics.
|
| 1) mlock isn't meant to be called on all your memory, just the
| parts that need it to operate correctly. The situation the
| author described where the system comes to a halt as memory
| contents are paged in and out of memory/disk is (subjectively)
| worse when the only option is for the OOM killer to begin reaping
| processes everywhere (which would happen if all apps took this
| advice).
|
| 2) the following excerpt is not how any sane modern OS scheduler
| works:
|
| > Imagine you have multiple background process running at 100%
| CPU, then a UI event comes in to the active UI application. The
| operating system may block for 100ms * N before allowing the UI
| application to process the event, where N is the number of
| competing background processes, potentially causing a delayed
| response to the user that violates the real-time constraint
|
| Modern schedulers calculate priority based on whether the
| process/thread yielded its remaining execution time in the
| previous round or was forcibly evicted. Background processes are
| additionally run at a penalty. The foreground window (on OSes
| with internal knowledge of such a thing) or terminal owner in the
| current login session group gets a priority boost. Threads
| blocked waiting for input events get a massive priority boost.
|
| (But the point stands and it might be a whole lot longer than n *
| 100ms if drivers or kernel modules are doing stuff.)
| frostiness wrote:
| > The situation the author described where the system comes to
| a halt as memory contents are paged in and out of memory/disk
| is (subjectively) worse when the only option is for the OOM
| killer to begin reaping processes everywhere.
|
| This one's interesting since your outcome often depends on what
| hardware you have. On systems with slow IO, i.e. a slow HDD,
| it's possible for swapping to make a system entirely unusable
| for minutes, whereas if swap is disabled the OOM killer is able
| to kick in and solve the issue in less than a minute. That's
| the difference between being able to keep most of your work
| open and none of your work open (because the alternative is
| being forced to reboot).
| ComputerGuru wrote:
| But in the era when spinning rust startup disks were in use
| everywhere, no app would autosave. I can't imagine the
| carnage if MS Word or Excel were just violently killed at the
| first sign of memory pressure back in the day.
| RecycledEle wrote:
| Yes.
|
| UI should be run in a separate processor in real time.
|
| I am tired of clicking, having the screen change right after I
| click, and having the click register on the new screen, not on
| what I clicked on.
| smokel wrote:
| I like this perspective.
|
| In the demoscene days, we used to obsess over 60 Hz guaranteed
| framerates. Code was constantly being profiled using raster bars.
| Our effects might have been crappy, but all was buttery smooth.
|
| Then someone decided it would be a good idea to create a common
| user interface that would work on many different framerates and
| resolutions, and all was lost. Or people would try for complex 3D
| graphics and sacrifice smoothness.
|
| Some people tried to convince us that humans have a 150ms built-
| in visual processing delay, and all would be fine. This is not
| the case, and we are still stuck with mediocre animations. The
| content has improved a lot though :)
| TacticalCoder wrote:
| > Code was constantly being profiled using raster bars.
|
| For those wondering: we'd change the background color from,
| say, black to blue for the "physics" part, to green for the
| "drawing" part, to purple for the "audio rendering" part... All
| in the same frame, while the frame was being drawn.
|
| So you'd see on the border of your screen (usually outside of
| the drawing area you'd have access to) approximately which
| percentage of each frame was eaten by "physics", graphics,
| audio etc. and how close you were to missing a frame.
|
| And it was quite "hectic": typically these color "raster" bars
| would jump around quite a bit from one frame to another.
|
| But yeah there was indeed something very special about having a
| pixel-perfect scrolling for a 2D game running precisely at the
| refresh rate (50 Hz or 60 Hz back then). And it wasn't just
| scrolling: games running on fixed hardware often had characters
| movement set specifically in "pixels per frame". It felt so
| smooth.
|
| Something was definitely lost when the shift to 3D games
| happened: even playing, say, Counter-Strike at 99 fps (was it
| even doing that? I don't remember) wasn't the same.
|
| People don't understand that their 144 Hz monitor running a 3D
| game still doesn't convey that smoothness that my vintage
| arcade cab does when running an old 2D game with pixel-perfect
| scrolling while never skipping a frame.
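|
| A rough modern analogue of those raster bars, sketched in
| browser terms: time each per-frame phase with performance.now()
| and compare against the 60 Hz frame budget. The simulate, render,
| and mixAudio phase functions are hypothetical stand-ins.
|
|     const budgetMs = 1000 / 60;
|     function frame() {
|       const t0 = performance.now();
|       simulate();                  // "physics" part
|       const t1 = performance.now();
|       render();                    // "drawing" part
|       const t2 = performance.now();
|       mixAudio();                  // "audio rendering" part
|       const t3 = performance.now();
|       if (t3 - t0 > budgetMs) {
|         console.warn('missed frame:',
|           'physics', (t1 - t0).toFixed(1) + 'ms,',
|           'draw', (t2 - t1).toFixed(1) + 'ms,',
|           'audio', (t3 - t2).toFixed(1) + 'ms');
|       }
|       requestAnimationFrame(frame);
|     }
|     requestAnimationFrame(frame);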
| orbital-decay wrote:
| _> People don 't understand that their 144 Hz monitor running
| a 3D game still doesn't convey that smoothness that my
| vintage arcade cab does when running an old 2D game with
| pixel-perfect scrolling while never skipping a frame._
|
| The actual motion-to-photon delay (measurable with a high-
| speed camera) in a well-optimized competitive FPS is
| somewhere around 20ish ms or less these days, so it's on par
| with the arcade cab in regard to input lag, and overall it's far
| smoother because of the higher frame rate, better displays,
| and way more thought put into tiny nuances competitive
| players complain about. A proper setup feels _extremely_
| smooth, responsive, and immediate.
| proc0 wrote:
| I think the difference is that 3D is inherently continuous
| (because it is simulating a 3d space) and requires another
| layer of abstraction on top of the controllers to work
| intuitively. 2D, on the other hand, is pixel based, which is
| usually discrete (the world units are uniform in pixels), and
| character movement is a 1:1 mapping of the controller input.
| These two factors make 2D feel more responsive, and I'm
| guessing that once VR gets better it will close the
| abstraction gap and make the controller mapping (body/hand
| movement) 1:1.
| mrob wrote:
| A lot of that smoothness was the lack of sample-and-hold blur
| on CRTs. Gaming LCDs now have modes that strobe the backlight
| to simulate this effect. Additionally, the importance of
| sample-and-hold blur declines as the frame rate increases. A
| 360Hz LCD displaying 360fps motion can look reasonably sharp
| even without strobing.
|
| Blur Busters has detailed explanations, e.g.:
|
| https://blurbusters.com/blur-busters-law-amazing-journey-
| to-...
| masfuerte wrote:
| Not just that. You hit a key and the game could react in
| literally the next frame. There was an immediacy you don't
| get now.
| PaulKeeble wrote:
| Ever since the Xbox 360 era of games, the game world has been
| updating in parallel with the rendering of the prior frame,
| at the very least; that pipeline is quite commonly 3 frames
| long now. That is a lot of latency to get some macro
| concurrency.
| dllthomas wrote:
| Metal Slug on the PS4 was jarringly unresponsive compared
| to my expectations, though I'm not sure I've played it on
| an arcade machine in 20 years so maybe it's my memory or
| physiology at least partly to blame.
| josephg wrote:
| Strangely enough, I had the same experience with diablo 4
| until I turned down some of the graphics settings. I'm
| not sure if it was some DLSS / frame generation thing or
| what, but the game felt sloppy to me. Especially in
| comparison to something like diablo 2 which feels razor
| sharp.
|
| I wish modern game developers & graphics pipelines would
| spend more effort optimizing input latency. I'll stop
| noticing the graphics 2 minutes into playing the game.
| But if the input is laggy, I'll feel that the entire time
| I'm playing. And the feeling of everything being a bit
| laggy will linger long after I stop playing.
| alex7734 wrote:
| > But if the input is laggy, I'll feel that the entire
| time I'm playing.
|
| You've already bought the game at this point so most devs
| don't care.
| josephg wrote:
| They care a lot if they get bad reviews online. That
| translates directly into sales.
|
| An insane amount of money and effort goes into most
| AAA games. It seems like a stretch to accuse the industry
| of not caring about the quality of the games they make.
| pradn wrote:
| I'm totally with you. It has a generic art style, and
| with optimized settings, looks like goop. There's never
| going to be a game that feels the way teleporting through
| a whole act in D2 feels. Everything loads
| instantly. And it keeps doing that even at high cast
| speeds.
| scott_w wrote:
| If I recall correctly, the PS4 controller has a
| disgusting amount of input delay. I noticed this most
| strongly on the FFX Remaster where you need to hit timer
| events before the arrow enters the shaded box.
|
| It doubly screws you in the chocobo racing section
| because the controls assume no delay, meaning you can't
| respond to hazards in time and constantly overcorrect
| movement.
| masfuerte wrote:
| Some multiplayer network games memoise the game state.
| When an input event arrives from a remote player it is
| applied to the game state that existed at the time of the
| event. Then the state is fast-forwarded to the present
| time and the game continues. Ideally you shouldn't notice
| this happening.
|
| The obvious fix for laggy input is to apply the same
| processing to local input, but I've not heard of anyone
| doing this.
|
| (I think this technique is known as netcode.)
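|
| A tiny sketch of that fast-forward idea, assuming a pure,
| hypothetical step(state, inputs) function and a per-tick
| history: apply the late input at its original tick, then
| re-simulate forward to the present.
|
|     function applyLateInput(history, input, step) {
|       // history: array of { tick, state, inputs }, ordered by tick
|       const i = history.findIndex((entry) => entry.tick === input.tick);
|       if (i === -1) return history[history.length - 1].state;
|       history[i].inputs.push(input);
|       // Recompute every later state from the corrected one.
|       for (let j = i + 1; j < history.length; j++) {
|         history[j].state = step(history[j - 1].state,
|                                 history[j - 1].inputs);
|       }
|       return history[history.length - 1].state;
|     }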
| mrob wrote:
| The Metal Slug games are already at a disadvantage
| because they run at 30fps, unusually low for arcade
| games. All else being equal, lower frame rate gives
| higher input latency.
| devwastaken wrote:
| At 14 I discovered that tiny response times are essential by
| observing terminal echoes from a TCP client and server
| connection. I could clearly see a 50ms difference, down to
| about 15ms if I recall correctly. It wasn't until years later
| that I discovered high-response, high-refresh monitors, and from
| then on anything under 120 FPS is just laggy if it's a game.
| PaulKeeble wrote:
| You do not want real-time systems design for a GUI; it's a lot
| more work and it wastes the vast majority of processing power to
| guarantee performance. You just need to be careful about the
| amount of work you do in event handlers, split off heavy or IO
| work, and split big updates. The problem is that all concurrency
| is a lot harder than serial code, and so it tends to be something
| that is delayed until it's really necessary, so a lot of "not
| really good enough" handlers end up causing minor annoying delays.
| gpm wrote:
| You don't want to do the design work, but you do want the
| results.
|
| Maybe some sort of magical framework will figure out how to get
| the latter without the work in the future.
| brundolf wrote:
| > One of the fundamental problems is that many UI applications on
| Windows, Linux, and macOS call functions that are not specified
| to run in a bounded amount of time. Here's a basic example: many
| applications don't think twice about doing file IO in a UI event
| handler. That results in a tolerable amount of latency most of
| the time on standard disk drives but what if the file is stored
| on a network drive? It could take much longer than a second to
| service the file request. This will result in a temporarily hung
| application with the user not knowing what is happening. The
| network drive is operating correctly, the UI application isn't.
|
| Say what you will about JavaScript, but a great thing it's done
| for modern software is making async programming default and
| ergonomic. With most APIs, you couldn't block on IO if you
| _tried_. Which means web UIs never do.
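|
| A minimal sketch of what "async by default" buys you in the
| browser (the button and output elements and the '/report.csv'
| URL are hypothetical): the handler awaits a slow network
| request, yet the event loop keeps painting and dispatching
| other events the whole time.
|
|     button.addEventListener('click', async () => {
|       button.disabled = true;
|       try {
|         const response = await fetch('/report.csv');  // never blocks the UI thread
|         output.textContent = (await response.text()).slice(0, 200);
|       } finally {
|         button.disabled = false;
|       }
|     });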
| taeric wrote:
| Huh? It used to be very common to find that a web page was
| stuck processing some script. Still isn't that uncommon, all
| told.
|
| You are mostly right that you couldn't block on IO,
| necessarily, but you could still hose up the event thread quite
| heavily.
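|
| For instance (a deliberately bad, hypothetical handler): the IO
| here is non-blocking, but the synchronous loop after it still
| freezes rendering and input until it finishes.
|
|     button.addEventListener('click', async () => {
|       const data = await (await fetch('/data.json')).json();  // fine: async
|       let checksum = 0;
|       for (let i = 0; i < 2e9; i++) {    // seconds of jank: hogs the
|         checksum = (checksum + i) | 0;   // event loop despite async IO
|       }
|       console.log(checksum, data);
|     });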
| clumsysmurf wrote:
| If this article is true, I wonder if newer frameworks like
| Android's Compose are fundamentally flawed, at least when used in
| languages with garbage collection.
|
| Essentially, a composable can recompose on every frame, like for
| an animation. But, in certain circumstances, this will cause
| allocations on every frame.
|
| For example, a modifier that is scoped to something like
| BoxScope. You can't hoist it out of the composable, it has to be
| in BoxScope. If that scope is animated on every frame, that
| modifier gets re-allocated every frame. That could be a lot of
| pressure on the GC.
|
| Edit: Then again, it's hard doing anything realtime in GC
| languages like Java / Kotlin; maybe it's possible if doing 0
| allocations per event.
| rcme wrote:
| Yes, these frameworks are fundamentally broken. Even a
| framework with extensive usage like React doesn't work for
| real-time applications. Or at least your only option is to
| manipulate the DOM directly for the performant real time parts.
___________________________________________________________________
(page generated 2023-09-21 23:02 UTC)