_______ __ _______
| | |.---.-..----.| |--..-----..----. | | |.-----..--.--.--..-----.
| || _ || __|| < | -__|| _| | || -__|| | | ||__ --|
|___|___||___._||____||__|__||_____||__| |__|____||_____||________||_____|
on Gopher (unofficial)
(HTM) Visit Hacker News on the Web
COMMENT PAGE FOR:
(HTM) Are we stuck with the same Desktop UX forever? [video]
porise wrote 13 min ago:
I didn't watch the 45 min video but does he mention tiling
environments? They have solved every complaint I had before.
I can immediately swap to the exact windows I want without tabbing, I
can rebind everything to pull up whatever application I want, and I can
even switch a window to floating.
chrsw wrote 17 min ago:
Some of my favorite desktop environments:
NeXTSTEP
BeOS/Haiku
ChromeOS
Raspberry Pi
macOS
CDE
I like uniformity, simplicity and consistency, stability, few
surprises, little guessing. I want to use the computer. I don't need to
become an expert in computer interfaces. Like cars. I just want to
drive the car. I don't want to have to build or customize my own
automobile ergonomics. Much of my time is spent on the command line
anyway, but when I have to use the GUI, please make it very simple.
throwaboat wrote 7 hours 11 min ago:
I'd like to point to tabs in browsers as a problem area. Ctrl + T is a
problem.
I'll set the scene that I think most of us have experienced: you're
working on a project. You start down the rabbit hole of research to
find a solution to something. Maybe you find it quickly somehow. But in
this case, you don't. The problem is too big for an easy answer and
instead requires synthesis and reflection.
Eventually, after opening 50 tabs and only closing the immediately
useless stuff, you find that you need to circle back up the problem
solving chain. The problem is that you have 45 tabs open and no clearly
visible method to the madness.
This further compounds if you're trying to solve a new problem with an
existing set of tabs that haven't been cleaned out from the last
problem.
Nowhere in this process is the UX leading you to solving a problem.
My half-baked solution is to allow for the user to enter "research
mode". When a new tab is opened, the browser halts the user and prompts
for what they found on the last tab that led them to opening this new
tab. When the user leaves research mode, any leaf tabs left should also
prompt for a summary or be omitted as irrelevant. Then, once all the tabs
have been accounted for, a report can be generated which shows all the
URLs and the user's notes. Bonus points if it allows generation of MLA /
APA citations automagically. Further bonus points if I can highlight
sections of text / images while in research mode to fill my new tab
questionnaire as I go.
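A minimal sketch of how that "research mode" tab tree and report could be modeled, in TypeScript; all the names here (ResearchNode, ResearchSession, openChild, report) are hypothetical and do not correspond to any real browser extension API:

  // Hypothetical data model: each new tab records why it was opened, and
  // leaving research mode emits a report of URLs plus the user's notes.
  interface ResearchNode {
    url: string;
    note: string;               // what was found, or why this tab was opened
    children: ResearchNode[];
  }

  class ResearchSession {
    root: ResearchNode;

    constructor(startUrl: string) {
      this.root = { url: startUrl, note: "starting point", children: [] };
    }

    // Called when a new tab is opened from `parent`; in a real extension the
    // `note` would come from the prompt the browser shows at that moment.
    openChild(parent: ResearchNode, url: string, note: string): ResearchNode {
      const child: ResearchNode = { url, note, children: [] };
      parent.children.push(child);
      return child;
    }

    // Walk the tree and produce the plain-text report of URLs plus notes.
    report(node: ResearchNode = this.root, depth = 0): string {
      const line = `${"  ".repeat(depth)}- ${node.url}: ${node.note}`;
      return [line, ...node.children.map(c => this.report(c, depth + 1))].join("\n");
    }
  }

  // Usage: record a short research trail, then print the summary report.
  const session = new ResearchSession("https://example.com/problem-statement");
  const docs = session.openChild(session.root, "https://example.com/docs", "official docs, too general");
  session.openChild(docs, "https://example.com/docs/edge-cases", "covers the failing case");
  console.log(session.report());

A real extension would hook tab-creation events to call something like openChild and show the note prompt; the tree plus the report generator is essentially the whole idea.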
gausswho wrote 1 hour 46 min ago:
You may like Zen browser.
It's best just to ignore tabs anyway. After reopening a window
they'll be inactive until you open them.
Findecanor wrote 15 hours 23 min ago:
I would say that it is the term "UX" that is the confusing part of
"UX/UI".
By Don Norman's original definition [0], it is not merely another term
for "UI"; it refers specifically to working with a wider scope, not only
with a user interface.
So, the term "UX/UI" would refer to being able to both work with the
wider scope, and to go deeper to work with user interface design.
0:
(HTM) [1]: https://www.youtube.com/watch?v=9BdtGjoIN4E&t=10s
bgbntty2 wrote 15 hours 55 min ago:
This is a (very) rambling comment since I added things to it as I
watched the video.
I think the state of the current Desktop UX is great. Maybe it's a
local maximum we've reached, but I love it. I mostly use XFCE and there
are just a few small things I'd like changed or fixed. Nothing that I
even notice frequently.
I've used tiling window managers before and they were fine, but it was
a bit of a hassle to get used to them. And I didn't feel they gave me
something I couldn't do with a stacking window manager. I can arrange
windows to the sides or corners of the monitor easily with the mouse or
the keyboard. On XFCE holding down alt before moving a window lets me
select any part of the window, not just the title bar, so it's just
"hold down ALT, point somewhere inside the window and flick the window
into a corner or a side with the mouse". If I really needed to view 10
windows at the same time, I'd consider a tiling window manager, but
virtual desktops on XFCE are enough for me. I have a desktop for my
mails, shopping, several for various browsers, several for work, for
media, and so on. And I instantly go to the ones I want either with
Meta+ (for example, Meta+3 for emails), or by scrolling with my middle
mouse on the far right on my taskbar where I see a visual
representation of my virtual desktops - just white outlines of the
windows relative to the monitors.
Another thing I've noticed about desktop UX is that application UX
seems to follow the trends of website UX where the UX is so dumbed down,
even a drunken caveman who's never seen a computer can use it. Tools
and options are hidden behind menus. Even the menus are hidden behind a
hamburger icon. There's a lot of unnecessary white space everywhere.
Sometimes there's even a linear progression through a set of steps, one
step at a time, instead of having everything in view all the time -
similar to how some registration forms work where you first enter your
e-mail, then you click next to enter a password, then click next again,
and so on. I always use "compact view" or "details view" where it's
possible and hide thumbnails unless I need them. I wish more sites and
apps were more like HN in design. If you're looking to convert (into
money or into long-term users) as many people as possible, then it
might make sense to target the technological toddlers, but then you
might lose, or at least annoy, your power users.
At the beginning of the video I thought we'd likely only see
foundational changes when we stop interacting with the computer mainly
via monitors, keyboards and mice. Maybe when we start plugging USB
ports into our heads directly, or something like that. Just like I
don't expect any foundational changes or improvements on static books
like paper or PDF. Sure, interactive tutorials are fundamentally
different in UX, but they're also a fundamentally different medium. But
at 28:00, his example of a combination of window manager + file manager
+ clipboard made me rethink my position. I have used clipboard
visualizers long ago, but the integration between apps and being able
to drag and otherwise interact with it would be really interesting.
Some more thoughts I jotted down while watching the video:
~~~~ 01:33 This UX of dragging files between windows is new to me. I
just grab a file and ALT+TAB to wherever I want to drop it if I can't
see it. I think this behavior, to raise windows only on mouse up, will
annoy me. What if I have a split view of my file manager in one window,
and another window above it? I want to drag a file from the left side of
the split-view window to the right one, but the mouse-down won't be
enough to show me the right side if the window that was above it covers
it. Or if, in the lower window, I want to drag the file into a folder
that's also in the lower window, but obscured by the upper window? It
may be a specific scenario, but
~~~~ 05:15 I'd forgotten the "What's a computer?" ad. It really grinds
my gears when people don't understand that mobile "devices" are
computers. I've had non-techies look surprised when I mention it,
usually in a sentence like "Well, smartphones are really just
computers, so, of course, it should be possible to do X with them.".
It's such a basic category.
Similarly, I remember Apple not using the word "tablet" to describe
their iPad years ago. Not sure if that has changed. Even many
third-party online stores had a separate section for the iPad.
I guess it's good marketing to make people think your product is so
unique and different from others. That's why many people refer to
their iPhone as "my iPhone" instead of "my phone" or "my smartphone".
People usually don't say "my Samsung" or "my $brand" for other brands,
unless they want to specify it for clarity. Great marketing to make
people do this.
~~~~ 24:50 I'm a bit surprised that someone acknowledges that the UX
for typing and editing on mobile is awful. But I think that no matter
how many improvements happen, using a keyboard will always be much,
much faster and pleasant. It's interesting to me that even programmers
or other people who've used desktop professionally for years don't know
basic things like SHIFT+left_arrow or SHIFT+right_arrow to select, or
CTRL+left_arrow or CTRL+right_arrow to move between words, or combining
them to select words - CTRL+SHIFT+left_arrow or CTRL+SHIFT+right_arrow.
Or that they can hold their mouse button after double clicking on a
word and move it around to select several words. Watching them try to
select some text in a normal app (such as HN's comment field or a
standard notepad app) using only arrow keys without modifiers or
tapping the backspace 30 times (not even holding it down) or trying to
precisely select the word boundary with a mouse... it's like watching
someone right-click and then select "Paste" instead of CTRL+V. I guess
some users just don't learn. Maybe they don't care or are preoccupied
with more important things, but it's weird to me. But, on the other
hand, I never learned vi/vim or Emacs to the point where it would make
me X times more productive. So maybe how those users above look to me
is how I look to someone well-versed in either of those tools.
~~~~ Forgot the timestamp, it was near the end, but the projects Ink &
Switch make seem interesting. Looking at their site now.
SoftTalker wrote 16 hours 0 min ago:
The keyboard and screen UX was established in the 1970s. I've been
using a keyboard and screen to work with computers since the 1980s. I
am quite sure I'll be using a keyboard and screen until I retire. And
probably 50 years from now, we'll still be using keyboards and screens.
Some things just work.
Touch screens, voice commands, and other specialized interfaces have
and will continue to make sense for some use cases. But for sitting
down and working, same as it ever was.
gherkinnn wrote 16 hours 13 min ago:
What an excellent talk, thank you. Most refreshing of all, it is about
UX where the X stands for eXperience, rather than eXploitation.
johnea wrote 16 hours 32 min ago:
A) I'm not going to watch the video because it's hosted by goggle, and
I'm not interested in being goggled.
B) However, even without watching the video, it must be describing
corporate product UI, because in the free software world, there is a
huge variety of selections for desktop (and phone) UI choices.
C) The big question I continue to come back to in HN comments: why does
any technically astute person continue to run these monopolistic, and
therefore beige, boring, bland, corporate UIs?
You can have free software with free choice, or you can have whatever
goggle tells you...
virtualbluesky wrote 14 hours 28 min ago:
Do you have suggestions for those less informed about projects that
are pushing the envelope on desktop UX?
christophilus wrote 16 hours 55 min ago:
No, we're not. Niri + Dank Material Shell is a different and mostly
excellent approach.
DonHopkins wrote 17 hours 2 min ago:
Golan Levin quotes Joy Mountford in his "TED Talk, 2009: Art that looks
back at you":
>A lot of my work is about trying to get away from this. This is a
photograph of the desktop of a student of mine. And when I say desktop,
I don't just mean the actual desk where his mouse has worn away the
surface of the desk. If you look carefully, you can even see a hint of
the Apple menu, up here in the upper left, where the virtual world has
literally punched through to the physical. So this is, as Joy Mountford
once said, "The mouse is probably the narrowest straw you could try to
suck all of human expression through." (Laughter) [1] [2] [3] [4]
(HTM) [1]: https://flong.com/archive/texts/lectures/lecture_ted_09/index....
(HTM) [2]: https://en.wikipedia.org/wiki/Golan_Levin
(HTM) [3]: https://www.flong.com/
(HTM) [4]: https://en.wikipedia.org/wiki/Joy_Mountford
(HTM) [5]: https://www.joymountford.com/
jhhh wrote 17 hours 18 min ago:
I understand the desire to fix user pain points. There are
plenty to choose from. I think the problem is that most of the UI
changes don't seem to fix any particular issue I have. They are just
different, and when some changes do create even more problems there's
never any configuration to disable them. You're trying to create a
perfect, coherent system for everyone absent the ability to configure
it to our liking. He even mentioned how unpopular making things
configurable is in the UI community.
A perfect pain point example was mentioned in the video: Text selection
on mobile is trash. But each app seems to have different solutions,
even from the same developer. Google Messages doesn't allow any text
selection at a granularity below an entire message. Some other apps have
opted in to a 'smart' text select which when you select text will guess
and randomly group select adjacent words. And lastly, some apps will
only ever select a single word when you double tap which seemed to be
the standard on mobile for a long time. All of this is inconsistent and
often I'll want to do something like look up a word and realize oh I
can't select the word at all (G message), or the system 'smartly'
selected 4 words instead, or that it did what I want and actually just
picked one word. Each application designer decided they wanted to make
their own change and made the whole system fragmented and worse
overall.
ahartmetz wrote 5 hours 52 min ago:
>Text selection on mobile is trash
Doesn't have to be - Blackberry BB10 had damn near solved it. I think
they had some patents on it, but these should have expired, and I
noticed some corresponding changes in Android. But it's still far
from being as good as BB10. What BB10 had was a kind of combined
cursor and magnifying glass that controlled really well, plus the
ability to tap the thing left or right to move one letter at a time.
jauntywundrkind wrote 9 hours 21 min ago:
> that it did what I want and actually just picked one word. Each
application designer decided they wanted to make their own change and
made the whole system fragmented and worse overall.
This is the trouble. It's been decades of the OS becoming less and
less relevant. Apps have more power, more will to build their own
thing.
And there's less and less personal computing left. There's the design
challenges, the UX being totally different. But the OS used to be a
common substrate that the user could use to do things. And the OS has
just vanished vanished vanished, receded into the sea, leaving these
apps to totally dominate the experience, apps that are so often
little more than thin clients to some far-off cloud system, to
basically some corporation's mainframe.
The OS's relevance keeps shrinking, and it's awful for users. Why
bother making new UX for the desktop, if the capabilities budget is
still entirely on the side of the app? What actually needs to change
isn't the UX of the desktop or other OS paradigm (mobile); it's a
fundamental shift in taking power out of the mainframe and having a
personal computer that's worth a damn, one that again has more than a
quantum of capability imbued in it that it can deliver to the user.
(My actual hope is that someday the web can do some of this, because
apps have nearly always been a horrible thing for users, giving them
no agency, no control, pre-baked to be only what is delivered
to the user.)
PunchyHamster wrote 14 hours 8 min ago:
> He even mentioned how unpopular making things configurable is in
the UI community.
The inability to imagine that someone might have a different idea about
what's useful is a general plague of the UI/UX industry. And there seems
to be zero care given to usage by users who have to use the app for
longer than 30 seconds a day. The productivity vs. learning-time curve is
basically flat, and low, the exception being pretty much "the tools made
by X for X", like programming IDEs.
stephenlf wrote 11 hours 49 min ago:
Convention over configuration is a powerful idea. Most people
don't want to twiddle with configs. The power user approach is
the way to go.
hulitu wrote 24 min ago:
> Most people don't want to twiddle with configs.
Most people also don't care about the mothers of programmers.
Until, you know, they have to send an SMS using exactly one
(a particular one) of the 2 SIMs present in the phone and the
20-year-old app will not let them.
eviks wrote 7 hours 41 min ago:
Of course they don't, but since there is no magical way to make
incompatible desires/workflow compatible, configuration is the
only way out
seba_dos1 wrote 6 hours 43 min ago:
I rarely need to configure something on my PCs, but rarely is
not never, and when I do really need an option, it better be
there. There's a gradient between unmaintainable
multidimensional matrices of options and "one size ought to fit
everyone" and both ends of it make the user miserable.
ryandrake wrote 12 hours 43 min ago:
Back in the 90s, you had a setting for everything! It was glorious.
This trend of deliberately not making things configurable is the
worst, and we can't seem to escape it because artists are in
charge of the UI rather than human interaction professionals.
App designers need to understand that their opinions on how the app
should look and work are just that: opinions. Opinions they should
keep to themselves.
3v1n0 wrote 11 hours 9 min ago:
Try to maintain the whole matrix of possibilities, then you tell
me...
rcxdude wrote 2 hours 36 min ago:
Which is why you should think about how these options interact
and compose at the start, as opposed to only adding options in
an ad-hoc manner (whether you do it willy-nilly or only when
your arm is really twisted)
hulitu wrote 27 min ago:
"You mean we shouldn't use 10 layers of abstraction and 274
libraries to achieve our goal? I mean, we use a lot of
resources, but look how polished the UI is: everything is
flat. "
Thank god the RAM prices have risen. Maybe some people will
start to program with their heads instead of their (AI) IDE.
eviks wrote 7 hours 43 min ago:
Indeed, people aren't paid to do the good things, only the easy
ones
diziet_sma wrote 16 hours 1 min ago:
Universal search on Google Pixels has solved a lot of the text
selection problems on Android for me, with the exception being
selecting text which requires scrolling.
porkbrain wrote 16 hours 21 min ago:
Text selection used to be frustrating on mobile for me too until
Google fixed it with OCR. I get to just hold a button briefly and
then can immediately select an area of the screen to scan text from,
with a consistent UX. Like a screenshot but for text.
bathtub365 wrote 9 hours 53 min ago:
Does it automatically scroll down while selecting if the text is
larger than the screen?
porkbrain wrote 6 hours 13 min ago:
Fair point, it does not on my device
clearleaf wrote 12 hours 48 min ago:
This is such an indictment of modern technology. No offense is
meant to you for doing what works for you, but it is buck wild that
this is the "fix" they've come up with.
As somebody learning about this for the first time it sounds
equivalent to a world where screenshotting became really hard so
people started taking photos of their screen so they could
screenshot the photo.
How could such a fundamental aspect of using a computer become so
ridiculous? It's like satire.
porkbrain wrote 6 hours 3 min ago:
Unfortunately, some apps don't support text selection and on some
websites the text selection is unpredictable.
I'd actually compare screen OCR to screenshots. Instead of every
app and every website implementing their own screenshot
functionality, the system provides one for you.
Same goes for text selection. Instead of every context having to
agree on tagging the text and directions, your phone has a quick
way of letting you scan the screen for text.
To be fair, I still use the "hold the text to select it" approach
when I want to continue with the "select all" action and have
some confidence that is going to do what I want.
zbentley wrote 29 min ago:
> some apps don't support text selection and on some websites
the text selection is unpredictable.
That correctly identifies the problem. Now why is that, and how
can we fix it?
It seems fixable; native GUI apps have COM bindings that can
fairly reliably produce the text present in certain controls in
the vast majority of cases. Web apps (and "desktop" apps that
are actually web apps) have accessibility attributes and at
least nominally the notion of separating document data from
presentation. Now why do so few applications support text
extraction via those channels? If the answer is "it's
hard/easier not to", how can we make the right way easier than
the wrong way?
supportengineer wrote 15 hours 46 min ago:
That's how I do it on the iPhone as well. I take a screenshot
first.
You can count on it, it is reliable, it always works.
throwaway894345 wrote 9 hours 6 min ago:
Unless you need to select more text than fits on the screen
taskforcegemini wrote 15 hours 58 min ago:
They are using OCR for selecting plain text?
aoeusnth1 wrote 10 hours 25 min ago:
It's possible to use the Gemini "ask me about this screen" to OCR
the selected area of the screenshot. I guess that might be more
efficient in some contexts then trying to use the native text
select.
eastbound wrote 14 hours 2 min ago:
On iPhone too, taking a screenshot is the single reliable way to
select text.
throwaway894345 wrote 9 hours 7 min ago:
It becomes possible. Getting the handles to move correctly is
still often a frustrating experience.
AlienRobot wrote 15 hours 8 min ago:
At least it's not AI... yet.
hulitu wrote 11 min ago:
It is a poor person, sitting in a 3rd world country,
transcribing the text in your clipboard. See Alexa for details.
/s
I'm only half joking.
xnx wrote 14 hours 55 min ago:
Multi-modal LLMs like Gemini are better than traditional OCR in
most ways.
xnx wrote 17 hours 29 min ago:
I felt rage baited when he crossed out Jakob Nielsen and promoted Ed
Zitron ( [1] ). Bad AI is not good UI, but objecting based on AI being
"not ethically trained" and "burning the planet" aren't great reasons.
(HTM) [1]: https://youtu.be/1fZTOjd_bOQt=1852
GaryBluto wrote 17 hours 21 min ago:
[1] You're missing the ampersand.
It's really strange how he spins off on this mini-rant about AI
ethics towards the end. I clicked on a video about UI design.
(HTM) [1]: https://www.youtube.com/watch?v=1fZTOjd_bOQ&t=1852s
xnx wrote 17 hours 12 min ago:
Same. AI is absolutely the future of human computer interaction
(exactly the article from Jakob Nielsen that he crossed out). Even
the father of WIMP, Douglas Engelbart, thought it was flawed:
""Here's the language they're proposing: You point to something and
grunt". AI finally gives us the chance to instruct computers as
humans.
immibis wrote 17 hours 40 min ago:
I don't want to see what any of today's companies would come up with to
replace the desktop. Microsoft has tried a few times and they all
sucked.
migueldeicaza wrote 17 hours 54 min ago:
Scrubbed the talk, saw "M$" in a slide, flipped the bozo bit
whatever1 wrote 18 hours 2 min ago:
Desktop is dead. Gamers will move to consoles and Valve-like platforms.
Rest of productivity is done on a single window browser anyway. LLMs
will accelerate this.
Coders are the only ones who still should be interested in desktop UX,
but even in that segment many just need a terminal window.
shmerl wrote 14 hours 20 min ago:
No, thanks. I'm a gamer but I don't need a console-like UX as the
only option.
sprash wrote 17 hours 42 min ago:
It's not dead. It's being murdered. Microsoft, Apple, Gnome and KDE
are making the experience worse with each update. Productive work
becomes a chore. And the last thing we need is more experiments. We
need more performance, responsiveness, consistency and less latency.
Everything got worse on all 4 points for every desktop environment
despite hardware getting faster by several orders of magnitude.
This also means that I heavily disagree with one of the points of the
presenter. We should not use the next gen hardware to develop for the
future Desktop. This is the most nonsensical thing I heard all day.
We need to focus on the basics.
sho_hn wrote 15 hours 7 min ago:
FWIW, this just isn't true for KDE. We hit a rough patch with the
KDE 4.x series - 17 years ago - that has been difficult to live
down, but have done much in the way of making amends since,
including learning from and avoiding the mistakes we made back
then.
For example, we intentionally optimized Plasma 5 for low-powered
devices (we used to have stacks of the Pinebook at dev sprints,
essentially a RaspPi-class board in a laptop shell), shedding more
than half the memory and compute requirements in just that
generational advance.
We also have a good half-decade of QA focus behind us, including
community-elected goals like a consistency campaign, much like what
you asked for.
I'm confident Plasma 5 and 6 have iteratively gotten better on all
four points.
It's certainly not perfect yet, and we have many areas to still
improve about the product, some of them greatly. But we're
certainly not enshittifying, and the momentum remains very high.
Nearly all modern, popular new distros default to KDE (e.g.
Bazzite, CachyOS, Asahi, Valve SteamOS) and our donation totals
from low-paying individual donors - a decent proxy for user
satisfaction - have multiplied. I've been around the community for
about 20 to 25 years and it's never been a more vibrant project
than today.
Re the fantastic talk, thanks for the little KDE shout-out in the
first two minutes!
vortext wrote 16 hours 14 min ago:
KDE? It has great performance, it's highly configurable, and it's
been improving.
Many people don't seem to like GNOME 3, but it has also been
getting better, in my view.
I agree Windows and macOS have been getting worse.
kvemkon wrote 16 hours 41 min ago:
> Gnome
I can't imagine what I'd be doing without MATE (GNOME 2 fork ported
to GTK+ 3).
Recently I've stumbled upon:
> I suspect that distro maintainers may feel we've lost too many
team members so are going with an older known quantity. [1] This
sounds disturbing.
(HTM) [1]: https://github.com/mate-desktop/caja/issues/1863#issuecomm...
silisili wrote 17 hours 20 min ago:
I agree with this. I remember when Gnome 3 came out, there were a
lot of legitimate complaints that were handwaved away by the
developers as "doesn't work well on a mobile interface", despite
Gnome having approximately zero installs on anything mobile.
AFAICT that probably hasn't changed, all these years later.
WD-42 wrote 15 hours 39 min ago:
I don't know. I just started distributing a gtk app and I've
already gotten two issue reports from people using it on mobile
experiencing usability problems. Not something I thought I'd
have to worry about when I started but I guess they are out
there.
hollerith wrote 17 hours 56 min ago:
>productivity is done on a single window browser anyway
When I need to get productive, sometimes I disable the browser to
stop myself from wasting time on the web.
whatever1 wrote 17 hours 47 min ago:
And you likely open the browser that happens to be called VS Code,
Figma, etc.
hollerith wrote 17 hours 29 min ago:
The point though is that my vscode window does not have an
address bar I can use to visit Youtube or Pornhub at any time.
I guess the larger point is that you need a desktop to run vscode
or Figma, so the desktop is not dead.
linguae wrote 17 hours 57 min ago:
Is it dead because people don't want the desktop, or is it dead
because Big Tech won't invest in the desktop beyond what's
necessary for their business?
Whether intentional or not, it seems like the trend is increasingly
locked-down devices running locked-down software, and I'm also
disturbed by the prospect of Big Tech gobbling up hardware (see the
RAM shortage, for example), making it unaffordable for regular
people, and then renting this hardware back to us in the form of
cloud services.
It's disturbing and I wish we could stop this.
PunchyHamster wrote 13 hours 53 min ago:
MS invests in actively making the desktop experience worse.
But outside of that, I doubt many users who actually do
stuff (as opposed to just ingesting content) will abandon the
desktop, and other desktops like the Mac UI aren't getting worse.
xnx wrote 17 hours 7 min ago:
Desktop is all about collaboration and interaction with other apps.
The ideal of every contemporary SaaS is that you can never download
your "files" so you stay locked in.
vjvjvjvjghv wrote 16 hours 23 min ago:
Exactly. Interoperability is not cool anymore. You need to lock
users in
snovv_crash wrote 17 hours 59 min ago:
For content consumption sure.
For content creation though, desktop still rules.
immibis wrote 17 hours 41 min ago:
Sounds like a dead market. Nobody needs to create content any more
now that we have AI.
eek2121 wrote 18 hours 14 min ago:
For the same reason we don't reinvent the wheel. Or perhaps, the same
reason we don't constantly change things like a vehicle. It works well,
and introducing something new means a learning curve that 99% of folks
won't want to deal with, so at that point, you are designing something
new for the other 1% of folks willing to tackle it. Unless it's an
amazing concept, it won't take off.
ErroneousBosh wrote 17 hours 26 min ago:
> Or perhaps, the same reason we don't constantly change things like
a vehicle.
Are we stuck with the same brake pedal UX forever?
linguae wrote 18 hours 48 min ago:
I enjoyed this talk, and I want to learn more about the concept of
"learning loops" for interface design.
Personally, I wish there were a champion of desktop usability like how
Apple was in the 1980s and 1990s. I feel that Microsoft, Apple, and
Google lost the plot in the 2010s due to two factors: (1) the rise of
mobile and Web computing, and (2) the realization that software
platforms are excellent platforms for milking users for cash via
pushing ads and services upon a captive audience. To elaborate on the
first point, UI elements from mobile and Web computing have been
applied to desktops even when they are not effective, probably to save
development costs, and probably since mobile and Web UI elements are
seen as "modern" compared to an "old-fashioned" desktop. The
result is a degraded desktop experience in 2025 compared to 2009 when
Windows 7 and Snow Leopard were released. It's hamburger menus,
title bars becoming toolbars (making it harder to identify areas to
drag windows), hidden scroll bars, and memory-hungry Electron apps
galore, plus pushy notifications, nag screens, and ads for services.
I don't foresee any innovation from Microsoft, Apple, or Google in
desktop computing that doesn't have strings attached for monetization
purposes.
The open-source world is better positioned to make productive desktops,
but without coordinated efforts, it seems like herding cats, and it
seems that one must cobble together a system instead of having a system
that works as coherently as the Mac or Windows.
With that said, I won't be too negative. KDE and GNOME are
consistent when sticking to Qt/GTK applications, respectively, and
there are good desktop Linux distributions out there.
XorNot wrote 14 hours 12 min ago:
GTK's dedication to killing the standard top bar menu layout is
intensely irritating.
We now have giant title bars to accommodate the hamburger menu
button, which opens a list of...standard menu bar sub menu options.
You could fit all the same information into the same real estate
space, using the original and tested paradigm.
gtowey wrote 18 hours 37 min ago:
It's because companies are no longer run by engineers. The MBAs and
accountants are in charge and they couldn't care less about making good
products.
At Microsoft, Satya Nadella has an engineering background, but it
seems like he didn't spend much time as an engineer before getting an
MBA and playing the management advancement game.
Our industry isn't what it used to be and I'm not sure it ever could be.
vjvjvjvjghv wrote 16 hours 29 min ago:
I have heard a big factor is that a lot of the newer devs don't
really use desktop OS outside of work. So for them developing a
desktop OS is more of an abstract project like for me developing
software for medical devices which I never use myself.
ryandrake wrote 12 hours 36 min ago:
I've never understood the idea of a software developer who
doesn't use a computer outside of development. Who are these
people?
Telaneo wrote 11 hours 57 min ago:
People who got into software development not because they enjoy
working with computers, but rather because it pays well.
Outside of work, they're the same as any other casual who's got
a phone as their primary computing device.
hulitu wrote 3 min ago:
> any other casual who's got a phone as their primary
computing device.
I tried to use my phone as a "computing device", but I mostly
can use it as a toy. Working with text and files on a phone
is... how to say it nicely... interesting.
Anonyneko wrote 7 hours 30 min ago:
Also people who now have other commitments, such as family,
or became tired of computers over their career and don't want
to fiddle with them outside of work anymore. I feel like an
outlier in my office, even the nerdiest of my developer
colleagues sold his PC in favor of Steam Deck and phones.
linguae wrote 17 hours 34 min ago:
I feel a major shift happened in the 2010s. The tech industry
became less about making the world a better place through
technology, and more about how to best leverage power to make as
much money as possible, making the world a better place be damned.
This also came at a time when tech went from being considered a
nerdy obsession to tech being a prestigious career choice much like
how law and medicine are viewed.
Tech went from being a sideshow to the main show. The problem is
once tech became the main show, this attracts the money- and
career-driven rather than the ones passionate about technology.
It's bad enough working with mercenary coworkers, but when
mercenaries become managers and executives, they are now the boss,
and if the passionate don't meet their bosses' expectations,
they are fired.
I left the industry and I am now a tenure-track community college
professor, though I do research during my winter and summer breaks.
I think there are still niches where a deep love for computing
without being overly concerned about "stock line go up" metrics
can still lead to good products and sustainable, if small,
businesses.
jack_tripper wrote 17 hours 13 min ago:
>The tech industry became less about making the world a better
place through technology
When the hell was even that?
corysama wrote 14 hours 3 min ago:
A trope in the first season of HBO's Silicon Valley is
literally every company other than the main characters'
professing their mission statement to be "Making the world a
better place through (technobabble)".
The subtle running joke was that while the main characters'
technobabble was fake, every other background SV startup was
"Making the world a better place through Paxos-based
distributed consensus" and other real-world serious tech.
mc32 wrote 16 hours 18 min ago:
Things like hypertext, search, email and early social networks
(chat networks connecting disparate people) and also the
paperless office (finally). Images and video corrupted
everything as they now became that which addicted eyeballs.
lo_zamoyski wrote 15 hours 37 min ago:
> chat networks
I think you may be looking at history through rose-tinted
glasses. Sure, social media today is not the same, so the
comparison isn't quite sensible, but IRC was an unpleasant
place full of petty egos and nasty people.
hulitu wrote 5 min ago:
> but IRC was an unpleasant place full of petty egos and
nasty people.
One should take a look at HN. /s
I find the discussions on the early Internet (until around
2010) more civilised than today.
Today, the internet is fully weaponized by and for big
companies and 3 letter agencies.
vjvjvjvjghv wrote 16 hours 25 min ago:
In the 80s and 90s there was much more idealism than now. There
were also more low hanging fruit to develop software that makes
people's lives better. There was also less investor money
floating around so it was more important to appeal to end
users. To me it seems tech has devolved into a big money
making scheme with only the minimum necessary actual technology
and innovation.
andrekandre wrote 13 hours 25 min ago:
> In the 80s and 90s there was much more idealism than now.
that idealism was already fading by then, which had started
much earlier in the preceding decades (see, memex/hypertext
etc)
> tech has devolved into a big money making scheme with
only the minimum necessary actual technology and innovation
in the end, they are businesses, so it could be assumed that
such an orientation would take over eventually, no?
it's the system of incentives we all live under (make more
money or die)
ryandrake wrote 12 hours 38 min ago:
> make more money or die
This is not true for the vast majority of people making
these things. At some point, most businesses go from
"make money or die" to financial security: "make line
go up forever for no reason".
lotsofpulp wrote 1 hour 57 min ago:
I bet the vast majority of people making things also want
cutting edge healthcare for themselves and loved ones,
for their whole life, which is equivalent to make money
or die.
throwaway894345 wrote 9 hours 4 min ago:
i discovered the meaning of life and its name is
"increasing shareholder value"
lo_zamoyski wrote 15 hours 9 min ago:
I would agree that it was different, but I also think this
may be history viewed through rose-tinted glasses somewhat.
> There were also more low hanging fruit to develop software
that makes people's lives better.
In principle, maybe. In practice, you had to pay for
everything. Open source or free software was not widely
available. So, the profit motive was there. The conditions
didn't exist yet for the profit model we have today to
really take off, or for the appreciation of it to exist.
Still, if there's a lot of low-hanging fruit, that means
the maturity of software was generally lower, so it's a bit
like pining for the days when people lived on the farm.
> There was also less investor money floating around so it
was more important to appeal to end users.
I'm not so sure this appeal was so important (and investors
do care about appeal!). If you had market dominance like
Microsoft did, you could rest on your laurels quite a bit
(and that they did). The software ecosystem you needed to use
also determined your choices for you.
> To me it seems tech has devolved into a big money making
scheme with only the minimum necessary actual technology and
innovation.
As I said earlier, the profit motive was always there. It was
just expressed differently. But I will grant you that the
image is different. In a way, the mask has been dropped. When
facebook was new, no one thought of it as a vulgar engine for
monetizing people either (I even recall offending a Facebook
employee years ago when I mentioned this, what should frankly
have been obvious), but it was just that. It was all just
that, because the basic blueprint of the revenue model was
there from day one.
gldrk wrote 11 hours 20 min ago:
>In practice, you had to pay for everything.
As a private individual, you didn't actually have to pay
for anything once you got an Internet connection. Most
countries never even tried enforcing copyright laws against
small fish. DRM was barely a thing and was easily broken
within days by l33t teenagers.
Normal_gaussian wrote 18 hours 24 min ago:
It's great to hear from someone who thinks these people still care!
It has rarely been my experience, but I haven't been everywhere
yet.
mattkevan wrote 18 hours 51 min ago:
Really interesting. Going to have to watch in detail.
I'm in the process of designing an OS interface that tries to move
beyond the current desktop metaphor or the mobile grid of apps.
Instead it's going to use "frames" of content that are acted on
by capabilities that provide functionality. Very much inspired by
Newton OS, HyperCard and the early, pre-Web thinking around hypermedia.
A newton-like content soup combined with a persistent LLM intelligence
layer, RAG and knowledge graphs could provide a powerful way to create,
connect and manage content that breaks out of the standard document
model.
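A minimal sketch of how the frames/capabilities idea above might be modeled, in TypeScript; every name here (Frame, Capability, capabilitiesFor) is hypothetical and not taken from any existing project:

  // Hypothetical model: a Frame is a typed piece of content in the "soup",
  // and a Capability is a unit of functionality that declares which content
  // it can act on. The shell matches capabilities to frames, not apps to files.
  interface Frame {
    id: string;
    contentType: string;          // e.g. "text/markdown", "image/png"
    body: unknown;
    links: string[];              // ids of related frames (the knowledge-graph part)
  }

  interface Capability {
    name: string;
    accepts: (f: Frame) => boolean;
    apply: (f: Frame) => Frame;   // returns a new or transformed frame
  }

  // Offer, per frame, every capability that can act on it, instead of
  // routing the content to a single monolithic app.
  function capabilitiesFor(frame: Frame, all: Capability[]): Capability[] {
    return all.filter(c => c.accepts(frame));
  }

  // Usage: a "summarize" capability that only accepts text frames.
  const summarize: Capability = {
    name: "summarize",
    accepts: f => f.contentType.startsWith("text/"),
    apply: f => ({ ...f, id: f.id + ":summary", body: String(f.body).slice(0, 80) }),
  };

  const note: Frame = { id: "n1", contentType: "text/markdown", body: "# Ideas ...", links: [] };
  console.log(capabilitiesFor(note, [summarize]).map(c => c.name));  // [ 'summarize' ]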
compressedgas wrote 10 hours 52 min ago:
Reminds me of Yuzuru Tanaka's Meme Media and Meme Market
Architectures (2003)
(HTM) [1]: https://onlinelibrary.wiley.com/doi/book/10.1002/047172307X
__d wrote 13 hours 52 min ago:
Is there anything you can share yet? It sounds interesting
sprash wrote 18 hours 53 min ago:
Unpopular take: Windows 95 was the peak of Desktop UX.
GUI elements were easily distinguishable from content and there was
100% consistency down to the last little detail (e.g. right click
always gave you a meaningful context menu). The innovations after that
are tiny in comparison and more opinionated (things like macOS making
the taskbar obsolete with the introduction of Exposé).
SoftTalker wrote 16 hours 2 min ago:
I would say Windows 2000 Pro, but that really wasn't too different
from Windows 95. The OS was much better though, being based on NT.
throaway45425 wrote 5 hours 5 min ago:
KDE Plasma is better than all of these right now.
Cinnamon also.
Telaneo wrote 11 hours 36 min ago:
I don't think it's a stretch to call it the UI language of 95,
while 2000 just adds more functionality within the bounds of that
framework. Add in the Win7 search bar in the start menu and the OS
not crashing, and you still haven't really done anything of note with
the UI beyond staying within its framework. It'll still be a Win95 UI.
Meanwhile, WinXP started to fiddle with the foundation of that
framework, sometimes maybe for the better, sometimes maybe for the
worse. Vista did the same. 7 mostly didn't and instead mostly fixed
what Vista broke, while 8 tried to throw the whole thing out.
fragmede wrote 17 hours 39 min ago:
Heh, given the number of points you've probably gotten for that comment, I
don't think it's that unpopular. Win 98 was my jam but it looks
hella dated today. As you said, buttons were clearly marked, menus
were navigable via keyboard, there was some support for themes and
custom coloring, and UIs were designable via a GUI builder in VB or
Visual Studio using MFC, which was very resource friendly compared to
using Electron today. Smartphones and tablets, and even the wide
variety of screen sizes, didn't exist yet, so it was a simpler time.
I can't believe how much of a step back Electron is for UI
creation compared to MFC, but that wasn't cross-platform, and usually
elements were absolutely positioned instead of the relative, resizable
layout that's required today.
kvemkon wrote 16 hours 26 min ago:
> buttons were clearly marked
Recently some UI ignored my action of clicking an entry in a list
from a drop-down button. It turned out this drop-down button was
additionally a normal button if you pressed it in the center. Awful.
> UI creation compared to MFC
Here I'd prefer Borland with (Pascal) Delphi / C++ Builder.
> relative resizable layout that's required today.
While it should be beneficial, the reality is awful. E.g. why is
the URL input field on [1] so narrow? But if you shrink the
browser window width, the text field becomes wide eventually! That's
completely against expectations.
(HTM) [1]: https://web.archive.org/save
calmbonsai wrote 18 hours 56 min ago:
For desktops, basically, yes. And that's OK.
Take any other praxis that's reached the 'appliance' stage that you use
in your daily life from washing machines, ovens, coffee makers, cars,
smartphones, flip-phones, televisions, toilets, vacuums, microwaves,
refrigerators, ranges, etc.
It takes ~30 years to optimize the UX to make it "appliance-worthy" and
then everything afterwards consists of edge-case features,
personalization, or regulatory compliance.
Desktop Computers are no exception.
danans wrote 16 hours 46 min ago:
> Take any other praxis that's reached the 'appliance' stage that you
use in your daily life from washing machines, ovens, coffee makers,
cars ...
I wish the same could be said of car UX these days but clearly that
has regressed away from optimal.
mrob wrote 17 hours 41 min ago:
I can think of two big improvements to desktop GUIs:
1. Incremental narrowing for all selection tasks like the Helm [0]
extension for Emacs.
Whenever there is a list of choices, all choices should be displayed,
and this list should be filterable in real time by typing. This
should go further than what Helm provides, e.g. you should be able to
filter a partially filtered list in a different way. No matter how
complex your filtering, all results should appear within 10 ms or so.
This should include things like full text search of all local
documents on the machine. This will probably require extensive
indexing, so it needs to be tightly integrated with all software so
the indexes stay in sync with the data. (A rough sketch of the
chained-filter idea follows after this list.)
2. Pervasive support for mouse gestures.
This effectively increases the number of mouse buttons. Some tasks
are fastest with keyboard, and some are fastest with mouse, but
switching between the two costs time. Increasing the effective number
of buttons increases the number of tasks that are fastest with mouse
and reduces need for switching.
[0]
(HTM) [1]: https://emacs-helm.github.io/helm/
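A rough sketch of the incremental narrowing from point 1, in TypeScript; the names (Narrower, push, pop) are made up, and this ignores indexing and the 10 ms latency budget entirely:

  // Keep the full candidate list and re-apply a stack of filters on every
  // keystroke; different filter kinds (substring, regex, fuzzy) can be
  // pushed onto the same stack, which is the "filter a filtered list" part.
  type Filter = (candidate: string) => boolean;

  class Narrower {
    private filters: Filter[] = [];

    constructor(private candidates: string[]) {}

    // Each keystroke or command pushes another filter and narrows the result.
    push(filter: Filter): string[] {
      this.filters.push(filter);
      return this.results();
    }

    // E.g. when the user deletes a query character or drops the last filter.
    pop(): string[] {
      this.filters.pop();
      return this.results();
    }

    results(): string[] {
      return this.candidates.filter(c => this.filters.every(f => f(c)));
    }
  }

  // Usage: narrow a file list by substring, then narrow that result by extension.
  const n = new Narrower(["notes.md", "budget.ods", "notes-2024.md", "photo.png"]);
  console.log(n.push(c => c.includes("notes")));  // [ 'notes.md', 'notes-2024.md' ]
  console.log(n.push(c => c.endsWith(".md")));    // [ 'notes.md', 'notes-2024.md' ]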
calmbonsai wrote 10 hours 27 min ago:
I use Emacs as my daily-driver so point well taken wrt incremental
drill-down though I'd argue that's not just a "desktop thing". You
see that in the Contacts manager of every smartphone.
I see "mouse gestures" as merely an incremental evolution for
desktops.
Low latency capacitive touch-screens with gesture controls were,
however, revolutionary for mobile devices and dashboards in
vehicles.
Hammershaft wrote 18 hours 46 min ago:
All of the other examples you gave are products constrained by
physical reality with a small set of countable use-cases. I don't
think computer operating systems are simply mature appliance-like
products that have been optimized down their current design. I think
there is a lot of potential that hasn't been realized because the
very few players in the operating system space have been
hill-climbing towards a local maximum set by path dependence 40 years
ago.
calmbonsai wrote 18 hours 25 min ago:
To be precise, we're talking about "Desktop Computers" and not the
more generic "information appliances".
For example, we're not remotely close to having a standardized
"watch form-factor" appliance interface.
Physical reality is always a constraint. In this case,
keyboard+display+speaker+mouse+arms-length-proximity+stationary.
If you add/remove/alter _any_ of those 6 constraints, then there's
plenty of room for innovation, but those constraints _define_ a
desktop computer.
pegasus wrote 17 hours 17 min ago:
That's just the thing, desktops computers have always been in an
important way the antithesis of a specialized appliance, a
materialization of Turing's dream of the Universal Machine. It's
only in recent years that this universality has come under
threat, in the name of safety.
calmbonsai wrote 10 hours 35 min ago:
I wouldn't say the driver is "safety". It's happened that a
few highly-specialized symbolic manipulation tasks now have
enough market value such that they can demand highly
specialized UX to optimize task performance.
One classic example is the "Bloomberg Box": [1] which has been
around since the late '80s.
You can also see this from the reverse (analog -> digital) in
the evolution of hospital patient life-sign monitors and the
classic "6 pack" of gauges used in both aviation and
automobiles.
(HTM) [1]: https://en.wikipedia.org/wiki/Bloomberg_Terminal
pegasus wrote 5 hours 16 min ago:
I meant the universality (openness) of desktop computers
comes under threat, as the "walled garden" model seeks to
make the jump from mobile to desktop.
AndrewKemendo wrote 19 hours 13 min ago:
The computer form factor hasn't changed since the mainframe: look
into a screen for where to give input, select visual icons via a
pointer, type text via keyboard into a text entry box, hit an action
button, receive result, repeat.
It's just all gotten miniaturized.
Humans have outright rejected all other possible computer form factors
presented to them to date including:
Purely NLP with no screen
head worn augmented reality
contact lenses,
head worn virtual reality
implanted touch sensors
etc…
Every other possible form factor gets shit on, on this website and in
every other technology newspaper.
This is despite almost a century of attempts at doing all those and
making zero progress in sustained consumer penetration.
Had people liked those form factors they would've been invested in
them early on, such that they would develop the same way the laptops
and iPads and iPhones and desktops have evolved.
However nobody's even interested at any type of scale in the early
days of AR for example.
I have a litany of augmented and virtual reality devices scattered
around my home and work that are incredibly compelling technology - but
are totally seen as straight up dogshit from the consumer perspective.
Like everything, it's not a machine problem, it's a people-in-society
problem.
immibis wrote 17 hours 41 min ago:
Phone UIs are still screen UIs, but they are not desktop UIs, and
that's not because of the shape of the device.
AndrewKemendo wrote 17 hours 31 min ago:
Tell me how that's not a phone and a desktop:
(HTM) [1]: https://www.instagram.com/reel/DPtvpkSExfA/
immibis wrote 3 hours 30 min ago:
This is a laptop with a cut down the middle.
albumen wrote 16 hours 24 min ago:
That's not a phone and a desktop. I feel like I'm stating the
obvious here; it's too big to be a phone, for any reasonable
definition of 'phone'.
mcswell wrote 18 hours 29 min ago:
Since mainframes, you say. Well, sonny, when I first learned
programming on a mainframe, we had punch cards and fan-fold
printouts. Nothing beats that, eh?
AnimalMuppet wrote 18 hours 31 min ago:
I do not see laptop computers as the same form factor as mainframes.
At. All.
Even more, I don't see phones as the same form factor as mainframes.
nkrisc wrote 18 hours 52 min ago:
> Purely NLP with no screen
Cumbersome and slow with horrible failure recovery. Great if it
works, huge pain in the ass if it doesn't. Useless for any visual
task.
> head worn augmented reality
Completely useless if what you're doing doesn't involve "augmenting
reality" (editing a text document), which probably describes most
tasks that the average person is using a computer for.
> contact lenses
Effectively impossible to use for some portion of the population.
> head worn virtual reality
Completely isolates you from your surroundings (most people don't
like that) and difficult to use for people who wear glasses.
Nevermind that currently they're heavy, expensive, and not
particularly portable.
> implanted sensors
That's going to be a very hard sell for the vast majority of people.
Also pretty useless for what most people want to do with computers.
The reason these different form factors haven't caught on is because
they're pretty shit right now and not even useful to most people.
The standard desktop environment isn't perfect, but it's good and
versatile enough for what most people need to do with a computer.
AndrewKemendo wrote 18 hours 45 min ago:
And most computers were entirely shit in the 1950s
yet here we are today
You must've missed the point: people invested in desktop
computers when they were shitty vacuum tubes that blew up.
That still hasn't happened for any other user experience or
interface.
> it's good and versatile enough for what most people need to do
with a computer
Exactly correct! Like I said, it's a limitation of the human
society, the capabilities and expectations of regular people are so
low and diffuse that there is not enough collective intelligence to
manage a complex interface that would measurably improve your
abilities.
Said another way, it's the same as if a baby could never
"graduate" from Duplo blocks to Lego because Lego blocks are
too complicated.
ares623 wrote 19 hours 25 min ago:
Are we stuck with the same toothbrush UX forever?
esafak wrote 17 hours 10 min ago:
There are electric-, ultrasonic-, mouthpiece-, and irrigating
toothbrushes...
Maybe the experience has not changed for the average person, but
alternatives are out there.
ErroneousBosh wrote 17 hours 27 min ago:
I was going to say "are we stuck with the same bicycle UX forever".
Because we've been stuck with the same bicycle UX for like 150 years
now.
Sometimes shit just works right, just about straight out of the gate.
DangitBobby wrote 14 hours 35 min ago:
There have been absolute fucking gobs of UX changes to bikes in
just the last 5 years. They just usually end up on mid range or
higher end bikes. Obviously they don't fundamentally change the way
a bike works, otherwise it wouldn't be a bike anymore.
ErroneousBosh wrote 5 hours 18 min ago:
Like what? What would you call a "UX change"?
esafak wrote 17 hours 4 min ago:
This is what bicycles originally looked like:
(HTM) [1]: https://en.wikipedia.org/wiki/Velocipede#/media/File:Veloc...
ErroneousBosh wrote 16 hours 55 min ago:
Yes, something like 200 years ago.
By the 1870s we'd pretty much standardised on the "Safety
Bicycle", which had a couple of smallish wheels about two and a
half feet in olden days measurements in diameter, with a chain
drive from a set of pedals mounted low in the frame to the rear
wheel.
By the end of the 1880s, you had companies mass-producing bikes
that wouldn't look unreasonable today. All we've done since is
make them out of lighter metal, improve the brakes from pull rods
to cables to hydraulic discs brakes, and give them more gears (it
wouldn't be until the early 1900s that the first hub gears became
available, with - perhaps surprisingly - derailleurs only coming
along 100 years ago).
(HTM) [1]: https://en.wikipedia.org/wiki/Safety_bicycle
calmbonsai wrote 18 hours 21 min ago:
I can imagine some sort of car-wash-like partial mouth insertion
interface (think "smart cleaner/retainer"), but it would be
cost-prohibitive and, likely, not offer any appreciable cleaning
benefits.
LeFantome wrote 19 hours 17 min ago:
I feel like toothbrush UX has improved quite a bit.
yearolinuxdsktp wrote 18 hours 48 min ago:
It's changed, but it's a wash:
On the positive side, my electronic toothbrush allows me to avoid
excessive pressure via real-time green/red light.
On the negative side, it guilt trips me with a sad face emoji any
time my brushing time is under 2 minutes.
AndrewKemendo wrote 19 hours 5 min ago:
Toothbrush UX is the same today as it was when we were hunter
gatherers: use an abrasive tool to ablate plaque from the teeth and
gums without removing enamel. [1] The variety of form factors
offered is the only difference.
(HTM) [1]: https://www.youtube.com/watch?v=zMuTG6fOMCg
mrob wrote 17 hours 31 min ago:
As somebody who's tried using a miswak [0] teeth-cleaning twig
out of curiosity, I can say with confidence it's not the same
experience as using a modern toothbrush. It's capable of cleaning
your teeth effectively, but it's slower and more difficult than a
modern toothbrush. The angle of the bristles makes a huge
difference. When the bristles face forward like with a
teeth-cleaning twig, your lips get in the way a lot more.
Sideways bristles are easier to use.
[0]
(HTM) [1]: https://en.wikipedia.org/wiki/Miswak
jrowen wrote 18 hours 55 min ago:
Yes, whittling down a stick is pretty much the same experience as
using an electric toothbrush. Or those weird mouthguard things
they have now.
I don't think most people would find this degree of reduction
helpful.
AndrewKemendo wrote 18 hours 48 min ago:
> Yes, whittling down a stick is pretty much the same
experience as using an electric toothbrush
Correct? I agree with this precisely but assume you're
writing it sarcastically.
From the point of view of the starting state of the mouth to
the end state of the mouth the USER EXPERIENCE is the same:
clean teeth
The FORM FACTOR is different: the electric version means ONLY that
I don't move my arm.
"Most people" can't do multiplication in their head so
I'm not looking to them to understand.
echoangle wrote 18 hours 16 min ago:
That's just not what user experience means, two products
having the same start and end state doesn't mean the user
experience is the same. Imagine two tools, one a CLI and one
a GUI, which both let you do the same thing. Would you say
that they by definition have the same user experience?
AndrewKemendo wrote 18 hours 4 min ago:
If you drew both brushing processes as a UML diagram the
variance would be trivial
Now compare that variance to the variance options given
with machine and computing UX options
you'll see clearly that one (toothbrushing) is less than
one stdev different in steps and components for the median
use case and one (computing) is nearly infinite variance
(no stable stdev) between median use case steps and
components.
The fact that the latter state space manifold is available
but the action space is constrained inside a local minima
is an indictment on the capacity for action space traversal
by humans.
This is reflected again with what is a point action space
(physically ablate plaque with abrasive) in the possible
state space of teeth cleaning for example: chemical
only/non ablative, replace teeth entirely every month,
remove teeth and eat paste, etc…
So yes, I collapsed that complexity into calling it "UX",
which classically can be described via UML.
jrowen wrote 15 hours 38 min ago:
I would almost define "experience" as that which can't be
described by UML.
Ask any person to go and find a stick and use it to brush
their teeth, and then ask if that "experience" was the
same as using their toothbrush. Invoking UML is absurd.
AndrewKemendo wrote 12 hours 5 min ago:
You know some of us old timers still remember a time
before people just totally abandoned the concept of
having functional definitions and iso standards and
things like that.
Funny how we haven't done anything on the scale of
Hoover Dam, Three Gorges, ISS etc… since those got
thrown away
User Experience also means something specific in
information theory, and UML is designed to model
that explicitly: [1] Good luck vibe architecting.
(HTM) [1]: https://www.pst.ifi.lmu.de/~kochn/pUML2001-Hen...
analogpixel wrote 19 hours 30 min ago:
Why didn't Star Trek ever tackle the big issues, like them constantly
updating the LCARS interface every few episodes to make it better, or
having Geordi La Forge re-writing the warp core controllers in Rust?
rzerowan wrote 17 hours 9 min ago:
Mostly I believe it's that the writers envisioned and worldbuilt in
such a way that the tech was not a subject but rather part of the
scenery/background, with the main focus being the people and their
relationships.
Additionally, in some cases where alien tech was interfaced with the
characters in the story, some UI/code rewrites were written in; for
example in DS9, where the Cardassian interfaces/AI are frustrating to
Chief O'Brien and his efforts to remedy/upgrade them get a recurring
role in the story.
Conversely, recent versions have taken the view of foregrounding tech
aided with flashy CGI to handwave through a lot, basically using it
as a plot device when the writing is weak.
RedNifre wrote 17 hours 16 min ago:
Because the LCARS GUI is only for simple recurring tasks, so it's
easy to find an optimal interface.
Complex tasks are done vibe coding style, like La Forge vibe video
editing a recording to find an alien: [1] I do wonder if
conversational interfaces will put an end to our GUI churn
eventually...
(HTM) [1]: https://www.youtube.com/watch?v=4Faiu360W7Q
PunchyHamster wrote 13 hours 56 min ago:
Conversational interfaces are slow and will still be slow even if
AI latency drops to zero.
They might be a nice way for personnel unfamiliar with all the
features of the system to perform complex, one-off tasks, but for
fast day-to-day stuff, a button per function will always be king.
lo_zamoyski wrote 13 hours 25 min ago:
Keyboard button even. TUIs are faster for many cases.
TheOtherHobbes wrote 3 hours 41 min ago:
The obvious answer is direct brain interfacing. Poking at stuff
with your fat fingers is barely faster than speech.
The less obvious answer is how to make it work. That is a hard
problem.
And the challenge is how to make it work ethically, especially
given where Late Capitalism has ended up.
Otherwise we won't turn into Star Fleet, we'll turn into the
Borg.
JuniperMesos wrote 18 hours 5 min ago:
Man, I should hope that the warp core controllers on the USS
Enterprise were not written in C.
On the other hand, if the writers of Star Trek The Next Generation
were writing the show now, rather than 35-40 years ago - and
therefore had a more expansive understanding of computer technology
and were writing for an audience that could be relied upon to
understand computers better than was actually the case - maybe there
would've been more episodes involving dealing with the details of
Future Sci-Fi Computer Systems in ways a programmer today might find
recognizable.
Heck, maybe this is in fact the case for the recently-written
episodes of Star Trek coming out in the past few years (that seem to
be much less popular than TNG, probably because the entire media
environment around broadcast television has changed drastically since
TNG was made). Someone who writes for television today is more likely
to have had the experience of taking a Python class in middle school
than anyone writing for television decades ago (before Python
existed), and maybe something of that experience might make it into
an episode of television sci-fi.
As an additional point, my recollection is that the LCARS interface
did in fact look slightly different over time - in early TNG seasons
it was more orange-y, and in later seasons/Voyager/the TNG movies it
generally had more of a purple tinge. Maybe we can attribute this
in-universe to a Federation-wide UX redesign (imagine throwing in a
scene where Barclay and La Forge are walking down a corridor having a
friendly argument about whether the new redesign is better or worse
immediately before a Red Alert that starts the main plot of the
episode!). From a television production standpoint, we can attribute
this to things like "the set designers were actually trying to
suggest the passage of time and technology changing in the context of
the show", or "the set designers wanted to have fun making a new
thing" or "over the period of time that the 80s/90s incarnations of
Star Trek were being made, television VFX technology itself was
advancing rapidly and people wanted to try out new things that were
not previously possible" - all of which have implications for
real-world technology as well as fake television sci-fi technology.
bigstrat2003 wrote 14 hours 31 min ago:
> recently-written episodes of Star Trek coming out in the past few
years (that seem to be much less popular than TNG, probably because
the entire media environment around broadcast television has
changed drastically since TNG was made)
That's probably part of it. But the larger part is that new Star
Trek is very poorly written, so why is anyone going to bother
watching it?
Findecanor wrote 18 hours 27 min ago:
I have often thought that Star Trek is supposed to show a future in
which computer technology and user interfaces have evolved to a
steady state that doesn't need to change much, and which is
superior to our own in ways that we don't yet understand. And because
it hasn't been invented yet, the show does not invent it either.
It is for the audience to imagine that those printed transparencies
back-lit with light bulbs behind coloured gel are the most intuitive,
easy-to-use, precise user interfaces that the actors pretend they
are.
calmbonsai wrote 18 hours 31 min ago:
Trek needs to visibly "sci-fi-up" extant tech in order to have the
poetic narrative license to tell its present-day parables.
Things just need to "look futuristic". The don't actually need to
have practical function outside whatever narrative constraints are
imposed in order to provide pace and tension to the story.
I forget who said it first, but "Warp is really the speed of plot".
PunchyHamster wrote 13 hours 54 min ago:
Case in point - nobody sensible would put realtime ship controls on
a touchscreen if it was designed for use in combat or complex
human-driven manoeuvres.
calmbonsai wrote 10 hours 17 min ago:
I always laughed at the "fixed-position" rotatable laptop
equivalents [1] that were on people's desks as if somehow a
single physical desktop location would work for everyone--let
alone aliens.
In truth, that was due to having a fixed sight-line and focal
distance to the camera so any post-production LCARS effects could
be match-moved to the action and any possible alternative
lighting conditions. Offhand, I can't think of any explicit
digital match-moving shots, but I'm certain that's the reason.
As pointed out in that infamous Red Letter Media video, all the
screens on the bridge ended up casting too much glare so they
very blatantly used gaffer tape on them [2]. :)
(HTM) [1]: https://legendary-digital-network-assets.s3.amazonaws.co...
(HTM) [2]: https://www.youtube.com/watch?v=yzJqarYU5Io
thaumaturgy wrote 19 hours 3 min ago:
Because, as a lot of tech-obsessed Trek fans never seem to really
come to terms with, Trek didn't fetishize technology.
In the Trek universe, LCARS wasn't getting continuous UI updates
because they would have advanced, culturally, to a point where they
recognized that continuous UI updates are frustrating for users. They
would have invested the time and research effort required to better
understand the right kind of interface for the given devices, and
then... just built that. And, sure, it probably would get updates
from time to time, but nothing like the way we do things now.
Because the way we do things now is immature. It's driven often by
individual developers' needs to leave their fingerprints on
something, to be able to say, "this project is now MY project", to be
able to use it as a portfolio item that helps them get a bigger
paycheck in the future.
Likewise, Geordi was regularly shown to be making constant
improvements to the ship's systems. If I remember right, some of his
designs were picked up by Starfleet and integrated into other ships.
He took risks, too, like experimental propulsion upgrades. But, each
time, it was an upgrade in service of better meeting some present or
future mission objective. Geordi might have rewritten some software
modules in whatever counted as a "language" in that universe at some
point, but if he had done so, he would have done extensive testing
and tried very hard to do it in a way that wouldn't've disrupted ship
operations, and he would only do so if it gained some kind of
improvement that directly impacted the success or safety of the whole
ship.
Really cool technology is a key component of the Trek universe, but
Trek isn't about technology. It's about people. Technology is just a
thing that's in the background, and, sometimes, becomes a part of the
story -- when it impacts some people in the story.
lo_zamoyski wrote 13 hours 26 min ago:
> continuous UI updates are frustrating for users […] It's driven
often by individual developers' needs to leave their fingerprints
on something, to be able to say, "this project is now MY project",
to be able to use it as a portfolio item that helps them get a
bigger paycheck in the future.
Yes, although users also judge updates by what is apparent. Imagine
if OS UIs didn't change and you had to pay for new versions. So
I'm sure UI updates are also partly motivated by a desire to
signal improvements.
PunchyHamster wrote 14 hours 5 min ago:
That's fetishizing Star Trek a bit - they had a touch interface for
controlling the ship in the middle of combat, with explosions and
everything shaking around, which is hardly optimal both in and out of
combat (imagine hovering a hand over a touch panel for hours on end).
cons0le wrote 16 hours 57 min ago:
>Because the way we do things now is immature. It's driven often by
individual developers' needs to leave their fingerprints on
something, to be able to say, "this project is now MY project", to
be able to use it as a portfolio item that helps them get a bigger
paycheck in the future.
AKA resume-driven development. I personally know several people
working on LLM products who in private admit they think
LLMs are scams.
bena wrote 17 hours 37 min ago:
LCARS was technically a self-adapting system that was personalized
to a degree per user. So it was continuously updating itself. But
in a way to reduce user frustration.
Now, this is really because LCARS is "Stage Direction: Riker hits
some buttons and stuff happens".
dragonwriter wrote 18 hours 25 min ago:
> In the Trek universe, LCARS wasn't getting continuous UI updates
In the Trek universe, LCARS was continuously generating UI updates
for each user, because AI coding has reached the point where it no
longer needs specific direction and responds autonomously to
needs the system itself identifies.
krapp wrote 18 hours 47 min ago:
>In the Trek universe, LCARS wasn't getting continuous UI updates
because they would have advanced, culturally, to a point where they
recognized that continuous UI updates are frustrating for users.
Not to be "that guy" but LCARS wasn't getting continuous UI updates
because that would have cost the production team money and for TNG
at least would have often required rebuilding physical sets. It
does get updated between series, as part of setting the
design language for each series.
And Geordi was shown constantly making improvements to the ship's
systems because he had to be shown "doing engineer stuff."
jfengel wrote 18 hours 56 min ago:
Most of Trek's tech is just a way to move the story along.
Transporters were introduced to avoid having to land a shuttle.
Warp drive is just a way to get to the next story. Communicators
relay plot points.
Stories which focus on them as technology are nearly always boring.
"Oh no the transporter broke... Yay we fixed it".
Mistletoe wrote 18 hours 56 min ago:
Isn't it probably just that they don't really have money in Star
Trek so there is no contract promising amazing advances in the
LCARS if we just pay this person or company to revamp it? If
someone stands to make money from something, they will always want
to convince you the new thing is what you need.
krapp wrote 18 hours 46 min ago:
Remember that in Star Trek humans have evolved beyond the desire
to work for money or personal gain, so everyone just volunteers
their time, and somehow this just always works.
amelius wrote 18 hours 57 min ago:
I still wonder why everybody wasn't lingering in the holodeck all
the time.
(equivalent of people being glued to their smartphones today)
(Related) This is one explanation for the Fermi paradox: Alien
species may isolate themselves in virtual worlds
(HTM) [1]: https://en.wikipedia.org/wiki/Fermi_paradox
d3Xt3r wrote 17 hours 24 min ago:
Most likely because this was a star ship (or space station) with
a limited number of personnel, all of whom have fixed duties that
need to be done. You simply can't afford to waste your time away
in holodecks.
The people we saw on screen most of the time also held important
positions on the ship (especially the bridge, or engineering) and
you can't expect them to just waste significant chunks of time.
Also, don't forget that these people actually like their jobs.
They got there because they sincerely wanted to, out of personal
interest and drive, and not because of societal pressures like in
our present world. They already figured out universal basic
income and are living in an advanced self-sufficient society, so
they don't even need a job to earn money or live a decent life -
these people are doing their jobs because of their pure, raw
passion for that field.
Telaneo wrote 11 hours 51 min ago:
Also, holodecks are limited in number. Voyager had two, and
during one episode where the plot point was that they were in
an area of space with literally nothing, the holodecks were in
such high demand they had to schedule time there so everybody
got a bit each. With Voyager having 150~ people onboard, I can
easily imagine that sucking. The Enterprise had more holodecks
(4-6~?), but with around 1000 people onboard, if they were in
the same situation of there being nothing to do, the Holodecks
would probably have been equally crowded.
RedNifre wrote 17 hours 25 min ago:
The lack of capitalism meant that the holodeck program authors
had no need to optimize their programs for user retention to show
them more ads. So much fewer people suffer from holodeck
addiction in Star Trek than are glued to their screens in our
world.
XorNot wrote 14 hours 7 min ago:
Although the funniest thing about the holodeck these days is
LLMs have answered a question: can you have realistic
non-sentient avatars? Evidently yes, and holodeck authorship is
likely a bunch of prompt engineering, with really advanced
stuff happening when someone trains a new model or something.
Similarly in Star Wars with droids: Obi-Wan is right, droids
can't think and deserve no real moral consideration because
they're just advanced language models in bodies (C3PO insisting
on proper protocol because he's a protocol droid is the
engineering attempt to keep the LLM on track).
AndrewKemendo wrote 19 hours 9 min ago:
Because it's a fantasy space opera show that has nothing to do with
reality
fortyseven wrote 20 hours 24 min ago:
You know, sometimes things just work. They get whittled away at until
we end up with a very refined endpoint. Just look at cell phones:
black rectangles as far as the eye can see, for good reason. I'm not
saying don't explore new avenues (foldables, etc.), but it's perfectly
fine to settle into a metaphor that just works.
7thaccount wrote 19 hours 31 min ago:
The Windows 95-XP taskbar is good. Everything else has been downhill.
pdonis wrote 19 hours 13 min ago:
I use Trinity Desktop on Linux because it's basically the same as
the Windows 95-XP taskbar interface, and has no plans to change.
rolph wrote 20 hours 37 min ago:
The problem is pushing a UX at users and enforcing that model even
when the user has changed it to something more comfortable, when you
should be looking at what the users are throwing away and what they
are replacing it with.
MS is a prime example; don't do what MS has been doing. Remember whose
hardware it actually is, and remain aware that what a developer and a
board room understand as improvement is not experienced the same way
by average retail consumers.
joelkesler wrote 21 hours 20 min ago:
Great talk about the future of desktop user-interfaces.
"…Scott Jenson gives examples of how focusing on UX -- instead of
UI -- frees us to think bigger. This is especially true for the
desktop, where the user experience has so much potential to grow well
beyond its current interaction models. The desktop UX is certainly not
dead, and this talk suggests some future directions we could take."
"Scott Jenson has been a leader in UX design and strategic planning
for over 35 years. He was the first member of Apple's Human Interface
group in the late '80s, and has since held key roles at several major
tech companies. He served as Director of Product Design for Symbian in
London, managed Mobile UX design at Google, and was Creative Director
at frog design in San Francisco. He returned to Google to do UX
research for Android and is now a UX strategist in the open-source
community for Mastodon and Home Assistant."
scottjenson wrote 1 day ago:
I've given dozens of talks, but this one seems to have struck a chord,
as it's my most popular video in quite a while. It's got over 14k views
in less than a day.
I'm excited so many people are interested in desktop UX!
averynicepen wrote 15 hours 39 min ago:
This was a really fantastic talk and kept me riveted for 40 minutes.
Where can I find more?
agumonkey wrote 15 hours 43 min ago:
where can we find advanced UX labs? I'm tired of the Figma trend
ChuckMcM wrote 16 hours 11 min ago:
I think you did a great job of bringing fairly nuanced problems into
perspective for a lot of people who take their interactions with
their phone/computer/tablet for granted. That is a great skill!
I think a fertile area for investigation would also be 'task
specific' interactions. In XDE[1], the thing that got Steve Jobs all
excited, the interaction models are different if you're writing code,
debugging code, or running an application. There are key things that
always work the same way (cut/paste for example) but other things
that change based on context.
And echoing some of the sentiment I've read here as well, consistency
is a bigger win for the end user than form. By that I mean even a
crappy UX is okay if it is consistent in how it's crappy. I heard a
great talk about Nintendo's design of the 'Mario world' games and how
the secret sauce was that Mario physics are consistent, so as a game
player, if you know how to use the game mechanics to do one thing, you
can guess how to use them to do another thing you've not yet done.
Similarly with UX, if the mechanics are consistent then they give you
a stepping off point for doing a new thing you haven't done but using
mechanics you are already familiar with.
[1] Xerox Development Environment -- This was the environment
everyone at Xerox Business Systems used when working on the Xerox
Star desktop publishing workstation.
NetOpWibby wrote 17 hours 7 min ago:
Fantastic talk, I found myself nodding in agreement a lot. In my
research on next-generation desktop interfaces, I was referred to Ink
& Switch as well and man, I sure wish they were hiring. I missed out
on the Xerox and Bell Labs eras. I'm also reading this book,
"Inventing the Future" by John Buck that details early Apple (there's
no reason the Jonathan Computer wouldn't sell like hotcakes today,
IMHO).
In my downtime I'm working on my future computing concept[1]. The
direction I'm going for the UI is context awareness and the desktop
being more of an endless canvas. I need to flesh out my ideas into
code one of these days.
P.S. Just learned we're on the same Mastodon server, that's dope.
---
[1]
(HTM) [1]: https://systemsoft.works
pjmlp wrote 17 hours 9 min ago:
It was quite interesting.
calmbonsai wrote 18 hours 39 min ago:
I concur, though per my earlier post I do feel "desktop stagnation" is
inevitable and we're already there. You were channeling Don Norman
[1] in the best of ways.
(HTM) [1]: https://jnd.org/
az09mugen wrote 1 day ago:
Thanks for that nice talk, it felt like a breath of fresh air with
basic & simple yet powerful but alas "forgotten" concepts of UX.
Will look into your other talks.
(DIR) <- back to front page