Key Takeaways
And the taskbar is also not optimal. Having text next to the icons is great, but it means you can only really have four or five applications open and still see all their titles. Which is why modern Windows switched to icons only - which is much worse, because now you can't tell which app window is which!
The optimal taskbar, imo, is a vertical one. I basically take the KDE panel and just make it vertical. I can easily have 20+ apps open and read all their titles. Also, I generally think vertical space is more valuable for applications, and you get more of it this way.
It also allows me to ungroup apps, so that each window is its own entry in the taskbar - one less click. And it works because I can read the window title.
More or less, yes; Trinity Desktop is basically KDE 3. But KDE has added on a lot of other cruft since then that has no value to me.
> Having text next to the icons is great, but it means you can only really have, like, 4 or 5 applications open and see all their titles and stuff.
That's what multiple virtual desktops are for. My usual desktop configuration has 8. Each one has only a few apps open in it.
> The optimal taskbar, imo, is a vertical one.
I do this for toolbars in applications like LibreOffice; on an HD aspect ratio screen it makes a lot more sense to have all that stuff off to the side, where there's more than enough screen real estate anyway, than taking up precious vertical space at the top.
But for my overall desktop taskbar, I've tried vertical and it doesn't work well for me--because to show titles it would have to be way too wide for me. It does take up some vertical space at the bottom of the screen, but I can make that pretty small by downsizing it to either "Small" or "Tiny".
In the Trek universe, LCARS wasn't getting continuous UI updates because they would have advanced, culturally, to a point where they recognized that continuous UI updates are frustrating for users. They would have invested the time and research effort required to better understand the right kind of interface for the given devices, and then... just built that. And, sure, it probably would get updates from time to time, but nothing like the way we do things now.
Because the way we do things now is immature. It's driven often by individual developers' needs to leave their fingerprints on something, to be able to say, "this project is now MY project", to be able to use it as a portfolio item that helps them get a bigger paycheck in the future.
Likewise, Geordi was regularly shown to be making constant improvements to the ship's systems. If I remember right, some of his designs were picked up by Starfleet and integrated into other ships. He took risks, too, like experimental propulsion upgrades. But, each time, it was an upgrade in service of better meeting some present or future mission objective. Geordi might have rewritten some software modules in whatever counted as a "language" in that universe at some point, but if he had done so, he would have done extensive testing and tried very hard to do it in a way that wouldn't've disrupted ship operations, and he would only do so if it gained some kind of improvement that directly impacted the success or safety of the whole ship.
Really cool technology is a key component of the Trek universe, but Trek isn't about technology. It's about people. Technology is just a thing that's in the background, and, sometimes, becomes a part of the story -- when it impacts some people in the story.
(equivalent of people being glued to their smartphones today)
Similarly in Star Wars with droids: Obi-Wan is right, droids can't think and deserve no real moral consideration because they're just advanced language models in bodies (C-3PO insisting on proper protocol because he's a protocol droid is the engineering attempt to keep the LLM on track).
The people we saw on screen most of the time also held important positions on the ship (especially the bridge, or engineering) and you can't expect them to just waste significant chunks of time.
Also, don't forget that these people actually like their jobs. They got there because they sincerely wanted to, out of personal interest and drive, and not because of societal pressures like in our present world. They already figured out universal basic income and are living in an advanced self-sufficient society, so they don't even need a job to earn money or live a decent life - these people are doing their jobs because of their pure, raw passion for that field.
In the Trek universe, LCARS was continuously generating UI updates for each user.
Stories which focus on them as technology are nearly always boring. "Oh no the transporter broke... Yay we fixed it".
Now, this is really because LCARS is "Stage Direction: Riker hits some buttons and stuff happens".
AKA resume-driven development. I personally know several people working on LLM products who admit in private that they think LLMs are a scam.
Not to be "that guy", but LCARS wasn't getting continuous UI updates because that would have cost the production team money, and for TNG at least it would often have required rebuilding physical sets. It does get updated between series, as part of setting the design language for each series.
And Geordi was shown constantly making improvements to the ship's systems because he had to be shown "doing engineer stuff."
Yes, although users also judge updates by what is apparent. Imagine if OS UIs didn’t change and you had to pay for new versions. So I’m sure UI updates are also partly motivated by a desire to signal improvements.
Things just need to "look futuristic". They don't actually need to have practical function outside whatever narrative constraints are imposed in order to provide pace and tension to the story.
I forget who said it first, but "Warp is really the speed of plot".
In truth, that was due to having a fixed sight-line and focal distance to the camera, so any post-production LCARS effects could be match-moved to the action and any possible alternative lighting conditions. Offhand, I can't think of any explicit digital match-moving shots, but I'm certain that's the reason.
As pointed out in that infamous Red Letter Media video, all the screens on the bridge ended up casting too much glare so they very blatantly used gaffer tape on them https://www.youtube.com/watch?v=yzJqarYU5Io . :)
On the other hand, if the writers of Star Trek The Next Generation were writing the show now, rather than 35-40 years ago - and therefore had a more expansive understanding of computer technology and were writing for an audience that could be relied upon to understand computers better than was actually the case - maybe there would've been more episodes involving dealing with the details of Future Sci-Fi Computer Systems in ways a programmer today might find recognizable.
Heck, maybe this is in fact the case for the recently-written episodes of Star Trek coming out in the past few years (that seem to be much less popular than TNG, probably because the entire media environment around broadcast television has changed drastically since TNG was made). Someone who writes for television today is more likely to have had the experience of taking a Python class in middle school than anyone writing for television decades ago (before Python existed), and maybe something of that experience might make it into an episode of television sci-fi.
As an additional point, my recollection is that the LCARS interface did in fact look slightly different over time - in early TNG seasons it was more orange-y, and in later seasons/Voyager/the TNG movies it generally had more of a purple tinge. Maybe we can attribute this in-universe to a Federation-wide UX redesign (imagine throwing in a scene where Barclay and La Forge are walking down a corridor having a friendly argument about whether the new redesign is better or worse immediately before a Red Alert that starts the main plot of the episode!). From a television production standpoint, we can attribute this to things like "the set designers were actually trying to suggest the passage of time and technology changing in the context of the show", or "the set designers wanted to have fun making a new thing" or "over the period of time that the 80s/90s incarnations of Star Trek were being made, television VFX technology itself was advancing rapidly and people wanted to try out new things that were not previously possible" - all of which have implications for real-world technology as well as fake television sci-fi technology.
That's probably part of it. But the larger part is that new Star Trek is very poorly written, so why is anyone going to bother watching it?
Complex tasks are done vibe coding style, like La Forge vibe video editing a recording to find an alien: https://www.youtube.com/watch?v=4Faiu360W7Q
I do wonder if conversational interfaces will put an end to our GUI churn eventually...
It might be a nice way to handle complex, one-off tasks for personnel unfamiliar with all the features of the system, but for fast day-to-day stuff, button-per-function will always be king.
The less obvious answer is how to make it work. That is a hard problem.
And the challenge is how to make it work ethically, especially given where Late Capitalism has ended up.
Otherwise we won't turn into Starfleet, we'll turn into the Borg.
Conversely, recent versions have taken the view of foregrounding tech, aided with flashy CGI, to handwave through a lot. Basically using it as a plot device when the writing is weak.
It's up to the audience to imagine that those printed transparencies, backlit with light bulbs behind coloured gel, are the most intuitive, easy-to-use, precise user interfaces that the actors pretend they are.
https://www.youtube.com/watch?v=zMuTG6fOMCg
The variety of form factors offered is the only difference.
Correct? I agree with this precisely but assume you’re writing it sarcastically
From the point of view of the starting state of the mouth to the end state of the mouth the USER EXPERIENCE is the same: clean teeth
The FORM FACTOR is different: Electric version means ONLY that I don’t move my arm
“Most people” can’t do multiplication in their head so I’m not looking to them to understand
Now compare that variance to the variance options given with machine and computing UX options, and you’ll see clearly that one (toothbrushing) is less than one stdev different in steps and components for the median use case and one (computing) is nearly infinite variance (no stable stdev) between median use case steps and components
So yes I collapsed that complexity into calling it “UX” which classically can be described via UML
Ask any person to go and find a stick and use it to brush their teeth, and then ask if that "experience" was the same as using their toothbrush. Invoking UML is absurd.
Funny how we haven't done anything on the scale of Hoover Dam, Three Gorges, the ISS, etc., since those got thrown away
User Experience also means something specific in information theory and UX, and UML is designed to model that explicitly:
https://www.pst.ifi.lmu.de/~kochn/pUML2001-Hen-Koch.pdf
Good luck vibe architecting
UML, functional definitions, and ISO standards are still important; they're just not UX.
Good luck never observing users using your product. Not everything is a space shuttle.
On the positive side, my electronic toothbrush allows me to avoid excessive pressure via real-time green/red light.
On the negative side, it guilt trips me with a sad face emoji any time my brushing time is under 2 minutes.
Because we've been stuck with the same bicycle UX for like 150 years now.
Sometimes shit just works right, just about straight out of the gate.
By the mid-1880s we'd pretty much standardised on the "Safety Bicycle", which had a couple of smallish wheels about two and a half feet in olden days measurements in diameter, with a chain drive from a set of pedals mounted low in the frame to the rear wheel.
By the end of the 1880s, you had companies mass-producing bikes that wouldn't look unreasonable today. All we've done since is make them out of lighter metal, improve the brakes from pull rods to cables to hydraulic discs brakes, and give them more gears (it wouldn't be until the early 1900s that the first hub gears became available, with - perhaps surprisingly - derailleurs only coming along 100 years ago).
In the last 20 years alone we've seen introduced or popularized:
Carbon frames, carbon wheels, disk/hydraulic brakes, expanded cassettes (2x11, 2x12), electric shifters, aerodynamic wheels, string spokes, and a boat load of different tires (for different levels of comfort, speed, durability, grip) for whatever you are doing. That's road bikes.
For mountain bikes you have all of that (minus aerodynamic stuff), dropper posts, 1x drive trains (1 chainring, 12 gears on the cassette, these are so good people want them on road/gravel bikes too) plus a slow evolution of geometry that completely changes how the bike feels on different terrain (also making them safer on dangerous terrain), plus a slow march from incredibly heavy builds to today's lightweight builds which are still more than capable of handling downhill. And in that same time-frame you'll see them going from 26" to 29" wheels, which results in a massive difference in the way the bike rides, and also the bike's ability to go over obstacles. And tubeless tires are now popularized on MTB, which means you can run lower pressures for better comfort and traction, and you also spend a lot less time futzing with holes in tubes.
Not to mention... E-bikes. There's just been so much going on.
A UX revolution in teeth-cleaning technology would probably look like some kind of bio-organism or colony that eats plaque and kills plaque-producing bacteria. In an ideal world you wouldn't have to brush your teeth at all, aside from the occasional floss or scrub.
I can have a personal dentist brush my teeth while I lie down.
There's a point where UX-that-works-at-acceptable-cost is good enough.
Maybe the experience has not changed for the average person, but alternatives are out there.
Humans have outright rejected all other possible computer form factors presented to them to date including:
Purely NLP with no screen, head worn augmented reality, contact lenses, head worn virtual reality, implanted sensors etc…
Every other possible form factor gets shit on on this website and in every other technology newspaper.
This is despite almost a century of attempts at doing all of those, with zero progress in sustained consumer penetration.
Had people liked those form factors, they would have invested in them early on, and those form factors would have evolved the same way laptops, iPads, iPhones, and desktops have. But nobody was interested at any kind of scale in the early days of AR, for example.
I have a litany of augmented and virtual reality devices scattered around my home and work that are incredibly compelling technology - but are totally seen as straight up dogshit from the consumer perspective.
Like everything, it's not a machine problem, it's a people-and-society problem.
> Purely NLP with no screen

Cumbersome and slow, with horrible failure recovery. Great if it works, a huge pain in the ass if it doesn't. Useless for any visual task.
> head worn augmented reality
Completely useless if what you're doing doesn't involve "augmenting reality" (editing a text document), which probably describes most tasks that the average person is using a computer for.
> contact lenses
Effectively impossible to use for some portion of the population.
> head worn virtual reality
Completely isolates you from your surroundings (most people don't like that) and difficult to use for people who wear glasses. Nevermind that currently they're heavy, expensive, and not particularly portable.
> implanted sensors
That's going to be a very hard sell for the vast majority of people. Also pretty useless for what most people want to do with computers.
The reason these different form factors haven't caught on is because they're pretty shit right now and not even useful to most people.
The standard desktop environment isn't perfect, but it's good and versatile enough for what most people need to do with a computer.
yet here we are today
You must've missed the point: people invested in desktop computers when they were shitty vacuum tubes that blew up.
That still hasn’t happened for any other user experience or interface.
> it's good and versatile enough for what most people need to do with a computer
Exactly correct! Like I said, it's a limitation of human society: the capabilities and expectations of regular people are so low and diffuse that there isn't enough collective intelligence to manage a complex interface that would measurably improve your abilities.
Said another way, it's the same as if a baby could never "graduate" from Duplo blocks to Lego because Lego blocks are too complicated.
Most of society doesn’t need more complex interfaces. Much of society doesn’t actually even need computer interfaces all that much, they’ve only been foisted on them.
Even more, I don't see phones as the same form factor as mainframes.
Take any other praxis that's reached the "appliance" stage that you use in your daily life: washing machines, ovens, coffee makers, cars, smartphones, flip-phones, televisions, toilets, vacuums, microwaves, refrigerators, ranges, etc.
It takes ~30 years to optimize the UX to make it "appliance-worthy", and then everything afterwards consists of edge-case features, personalization, or regulatory compliance.
Desktop Computers are no exception.
For example, we're not remotely close to having a standardized "watch form-factor" appliance interface.
Physical reality is always a constraint. In this case, keyboard+display+speaker+mouse+arms-length-proximity+stationary. If you add/remove/alter _any_ of those 6 constraints, then there's plenty of room for innovation, but those constraints _define_ a desktop computer.
One classic example is the "Bloomberg Box": https://en.wikipedia.org/wiki/Bloomberg_Terminal which has been around since the late '80s.
You can also see this from the reverse direction (analog -> digital) in the evolution of hospital patient vital-sign monitors and the classic "six pack" of gauges used in both aviation and automobiles.
Now with performant hypervisors, I just run a bunch of Linux VMs locally to minimize splash-zone and do cloud for performance computing.
I'll likely migrate fully to a Framework laptop next year, but I don't have time (atm) to do it. Ah, the good 'ole glory days of native Linux on Thinkpads.
1. Incremental narrowing for all selection tasks like the Helm [0] extension for Emacs.
Whenever there is a list of choices, all choices should be displayed, and this list should be filterable in real time by typing. This should go further than what Helm provides, e.g. you should be able to filter a partially filtered list in a different way. No matter how complex your filtering, all results should appear within 10 ms or so. This should include things like full text search of all local documents on the machine. This will probably require extensive indexing, so it needs to be tightly integrated with all software so the indexes stay in sync with the data. (See the sketch after this list.)
2. Pervasive support for mouse gestures.
This effectively increases the number of mouse buttons. Some tasks are fastest with keyboard, and some are fastest with mouse, but switching between the two costs time. Increasing the effective number of buttons increases the number of tasks that are fastest with mouse and reduces need for switching.
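To make point 1 concrete, here's a minimal sketch of the narrowing behaviour described above (the file list and the space-separated-terms matching rule are just assumptions for illustration; a real implementation would sit on top of a continuously synced full-text index):

    # Each keystroke re-filters the current candidate list, and an already
    # narrowed list can be narrowed again with a different pattern.
    def narrow(candidates, pattern):
        """Keep candidates that contain every space-separated term."""
        terms = pattern.lower().split()
        return [c for c in candidates if all(t in c.lower() for t in terms)]

    files = [
        "notes/desktop-ux-talk.md",
        "notes/lcars-redesign.md",
        "src/taskbar/vertical_panel.py",
        "src/launcher/narrowing.py",
    ]

    step1 = narrow(files, "notes")   # first pass, as you type "notes"
    step2 = narrow(step1, "lcars")   # second pass over the already-filtered list
    print(step2)                     # ['notes/lcars-redesign.md']

The filter itself is trivial; the hard part the comment points at is keeping the indexes (full-text, metadata, application state) in sync so that every keystroke still answers within ~10 ms.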
I see "mouse gestures" as merely an incremental evolution for desktops.
Low latency capacitive touch-screens with gesture controls were, however, revolutionary for mobile devices and dashboards in vehicles.
I wish the same could be said of car UX these days but clearly that has regressed away from optimal.
That's an economic win, sure, but it's tragic if we fail to unlock more of their flexibility for "end users"! IMHO that's the biggest unsolved problem of computer science: that it takes so much professional learning to unlock the real potential (and even that fails us programmers much of the time! How frequently do you say "I'll solve it myself" and find the time invested actually worth it? In the words of Steve Krouse, we've not quite solved "End-programmer Programming" yet.)
I do want FOSS to follow through and offer live inspectors for everything, but no, I no longer believe users should "learn to code" as the salvation. We're nowhere near it being worth their time, we actually went downhill on that :-( Conversational AI and "vibe using" will play a role, but that's more like "adversarial interoperability", doesn't cover what I mean either.
I want cross-app interoperability to be designed in, as something everyone'd understand users want. I want agency over app state — snapshot, fork, rewind, diff etc. I want files back (https://jenson.org/files/), and more¹. I want things like versioning and collaboration to work cross-app. I want ideas and metaphors we haven't found yet — I mean it when I call it unsolved problem of CS! — that would unlock more flexible workflows for users, with less learning curve. The URL was one such idea - unlocking so much coordination by being able to share place[+state] in any medium.
I want software to be malleable (https://malleable.systems/) in ways meaningful to users. I want all apps to expose their command set for OS-level configurability of input devices & keyboard shortcuts (Steam Input on steroids). I want breakthroughs on separating "business logic" from all the shit we piled, so users can "view source" and intervene on important stuff, like they can in spreadsheets. (I want orders of magnitude smaller/simpler shit too.) I want the equivalent of unix pipes' combinatorial freedom² in GUI apps. I want universal metaphors for automation, a future not unlike Yahoo Pipes promised us (https://retool.com/pipes) though I don't know in what shape. I want previewable "vector actions", less like record-macro-and-pray-the-loop-works, more like multiple cursor editing. I want more apps to expose UX like Photoshop layers, where users are more productive manipulating a recipe than they'd be directly manipulating the final result. (https://graphite.art/ looks promising but that's again for visual stuff; we need more universal metaphors for "reactive" editing... I want a spreadsheet-like interface to any directory+Makefile. I want the ability to include "formulas" everywhere³.) I want various ideas from Subtext (https://www.subtext-lang.org/retrospective.html).
I want user access to the fruits of 100% reproducibility, including full control of software versions (which are presently reserved to VM/Docker/Nix masters). I want universal visibility into app behavior — what it accessed, what it changed on your computer/the world, everything it logged. Ad blockers actually achieve 2 ways to inspect/intervene: starting from the UI (I don't want to see this), and starting from network behavior (I don't want it to contact this), and both give users meaningful agency!
¹ I highly recommend following https://alexanderobenauer.com/ for his "Itemized OS" research ideas. His shared work with Ink & Switch https://www.inkandswitch.com/embark/ is a tantalizing demo too.
² http://conal.net/blog/posts/tangible-functional-programming-... was intriGUIng but still too nerdy for fitting naturally into end-user workflows. (But if you like such nerdy, https://www.tandisgame.com/ is somewhat related & fun)
³ https://calca.io/, https://worrydream.com/ExplorableExplanations/
GUI elements were easily distinguishable from content and there was 100% consistency down to the last little detail (e.g. right click always gave you a meaningful context menu). The innovations after that are tiny in comparison and more opinionated (things like macOS making the taskbar obsolete with the introduction of Exposé).
Recently, some UI ignored my click on an entry in a list opened from a drop-down button. It turned out this drop-down button was additionally a normal button if you press it in the center. Awful.
> UI creation compared to MFC
Here I'd prefer Borland with (Pascal) Delphi / C++ Builder.
> relative resizable layout that's required today.
While it should be beneficial, the reality is awful. E.g. why is the URL input field on [1] so narrow? But if you shrink the browser window width, the text field eventually becomes wide! That's completely against expectations.
Meanwhile, WinXP started to fiddle with the foundation of that framework, sometimes maybe for the better, sometimes maybe for the worse. Vista did the same. 7 mostly didn't and instead mostly fixed what Vista broke, while 8 tried to throw the whole thing out.
I'm in the process of designing an OS interface that tries to move beyond the current desktop metaphor or the mobile grid of apps.
Instead it’s going to use ‘frames’ of content that are acted on by capabilities that provide functionality. Very much inspired by Newton OS, HyperCard and the early, pre-Web thinking around hypermedia.
A Newton-like content soup combined with a persistent LLM intelligence layer, RAG, and knowledge graphs could provide a powerful way to create, connect, and manage content that breaks out of the standard document model.
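To illustrate what such a frame/capability split might look like, here's a toy sketch (all class and field names are my own invention, not taken from Newton OS or the design being described):

    from dataclasses import dataclass, field

    @dataclass
    class Frame:
        """A typed content item living in the 'soup'."""
        kind: str                                   # e.g. "note", "contact", "image"
        content: dict = field(default_factory=dict)
        links: list = field(default_factory=list)   # graph edges to other frames

    class Capability:
        """A behaviour that declares which frame kinds it can act on."""
        handles: set = set()
        def apply(self, frame: Frame):
            raise NotImplementedError

    class Summarize(Capability):
        handles = {"note"}
        def apply(self, frame: Frame):
            # A persistent LLM / RAG layer would plug in here; this just truncates.
            text = frame.content.get("text", "")
            return text[:80] + ("..." if len(text) > 80 else "")

    soup = [Frame("note", {"text": "Ideas for an OS interface beyond the desktop metaphor..."})]
    for frame in soup:
        for cap in (Summarize(),):
            if frame.kind in cap.handles:
                print(cap.apply(frame))

The interesting part would then be letting the knowledge-graph/RAG layer decide which capabilities to surface for a given frame, rather than binding documents to applications.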
Personally, I wish there were a champion of desktop usability like how Apple was in the 1980s and 1990s. I feel that Microsoft, Apple, and Google lost the plot in the 2010s due to two factors: (1) the rise of mobile and Web computing, and (2) the realization that software platforms are excellent platforms for milking users for cash via pushing ads and services upon a captive audience. To elaborate on the first point, UI elements from mobile and Web computing have been applied to desktops even when they are not effective, probably to save development costs, and probably since mobile and Web UI elements are seen as “modern” compared to an “old-fashioned” desktop. The result is a degraded desktop experience in 2025 compared to 2009 when Windows 7 and Snow Leopard were released. It’s hamburger windows, title bars becoming toolbars (making it harder to identify areas to drag windows), hidden scroll bars, and memory-hungry Electron apps galore, plus pushy notifications, nag screens, and ads for services.
I don’t foresee any innovation from Microsoft, Apple, or Google in desktop computing that doesn’t have strings attached for monetization purposes.
The open-source world is better positioned to make productive desktops, but without coordinated efforts, it seems like herding cats, and it seems that one must cobble together a system instead of having a system that works as coherently as the Mac or Windows.
With that said, I won’t be too negative. KDE and GNOME are consistent when sticking to Qt/GTK applications, respectively, and there are good desktop Linux distributions out there.
At Microsoft, Satya Nadella has an engineering background, but it seems like he didn't spend much time as an engineer before getting an MBA and playing the management advancement game.
Our industry isn't what it used to be and I'm not sure it ever could.
This also came at a time when tech went from being considered a nerdy obsession to being a prestigious career choice, much like how law and medicine are viewed.
Tech went from being a sideshow to the main show. The problem is once tech became the main show, this attracts the money- and career-driven rather than the ones passionate about technology. It’s bad enough working with mercenary coworkers, but when mercenaries become managers and executives, they are now the boss, and if the passionate don’t meet their bosses’ expectations, they are fired.
I left the industry and I am now a tenure-track community college professor, though I do research during my winter and summer breaks. I think there are still niches where a deep love for computing without being overly concerned about “stock line go up” metrics can still lead to good products and sustainable, if small, businesses.
When the hell was even that?
> There were also more low hanging fruit to develop software that makes people’s lives better.
In principle, maybe. In practice, you had to pay for everything. Open source or free software was not widely available. So, the profit motive was there. The conditions didn’t exist yet for the profit model we have today to really take off, or for the appreciation of it to exist. Still, if there’s a lot of low-hanging fruit, that means the maturity of software was generally lower, so it’s a bit like pining for the days when people lived on the farm.
> There was also less investor money floating around so it was more important to appeal to end users.
I’m not so sure this appeal was so important (and investors do care about appeal!). If you had market dominance like Microsoft did, you could rest on your laurels quite a bit (and that they did). The software ecosystem you needed to use also determined your choices for you.
> To me it seems tech has devolved into a big money making scheme with only the minimum necessary actual technology and innovation.
As I said earlier, the profit motive was always there. It was just expressed differently. But I will grant you that the image is different. In a way, the mask has been dropped. When Facebook was new, no one thought of it as a vulgar engine for monetizing people either (I even recall offending a Facebook employee years ago when I mentioned this, which frankly should have been obvious), but it was just that. It was all just that, because the basic blueprint of the revenue model was there from day one.
As a private individual, you didn't actually have to pay for anything once you got an Internet connection. Most countries never even tried enforcing copyright laws against small fish.
> In the 80s and 90s there was much more idealism than now.
That idealism was already fading by then; the fading had started decades earlier (see memex/hypertext, etc.).

> tech has devolved into a big money making scheme with only the minimum necessary actual technology and innovation

In the end, they are businesses, so it could be assumed that such an orientation would eventually take over, no? It's the system of incentives we all live under (make more money or die).
This is not true for the vast majority of people making these things. At some point, most businesses go from “make money or die” to financial security: “make line go up forever for no reason”.
I think you may be looking at history through rose-colored glasses. Sure, social media today is not the same, so the comparison isn’t quite sensible, but IRC was an unpleasant place full of petty egos and nasty people.
One should take a look at HN. /s
I find the discussions on the early Internet (until around 2010) more civilised than today.
Today, the internet is fully weaponized by and for big companies and 3 letter agencies.
The subtle running joke was that while the main characters' technobabble was fake, every other background SV startup was "Making the world a better place through Paxos-based distributed consensus" and other real-world serious tech.
I tried to use my phone as a "computing device", but I can mostly only use it as a toy. Working with text and files on a phone is... how to put it nicely... interesting.
We now have giant title bars to accommodate the hamburger menu button, which opens a list of...standard menu bar sub menu options.
You could fit all the same information into the same real estate space, using the original and tested paradigm.
On the other hand, Vivaldi, which I'm trying on my Android phone, has this stupid thick bar at the bottom, with essentially bookmarks, back, home, forward, and tabs buttons... taking up significantly more visual space...

I'm really not sure what's going on overall...
Are we stuck with the same brake pedal UX forever?
Coders are the only ones who should still be interested in desktop UX, but even in that segment many just need a terminal window.
For content creation though, desktop still rules.
Whether intentional or not, it seems like the trend is increasingly locked-down devices running locked-down software, and I’m also disturbed by the prospect of Big Tech gobbling up hardware (see the RAM shortage, for example), making it unaffordable for regular people, and then renting this hardware back to us in the form of cloud services.
It’s disturbing and I wish we could stop this.
But outside of that, I doubt many users who actually do stuff (as opposed to just ingesting content) will abandon the desktop, and others like the Mac UI aren't getting worse
... shitty.
When I need to get productive, often what I need is to disable the browser to stop myself from wasting time on the web.
I guess the larger point is that you need a desktop to run VS Code or Figma, so the desktop is not dead.
This also means that I heavily disagree with one of the presenter's points. We should not use next-gen hardware to develop for the future desktop. This is the most nonsensical thing I've heard all day. We need to focus on the basics.
They basically never remove features, and just add on more customization. You can get your desktop to behave exactly like Windows 95, if you want.
And the apps are some of the most productive around. Dolphin is the best file manager across every operating system, and it's not even close. Basic things like reading metadata are overlooked in all other file managers, but Dolphin gives you a panel just for that. And then tabs, splits, thumbnails, and graph views.
I use XFCE now.
I can't imagine what I'd be doing without MATE (GNOME 2 fork ported to GTK+ 3).
Recently I've stumbled upon:
> I suspect that distro maintainers may feel we've lost too many team members so are going with an older known quantity. [1]
This sounds disturbing.
[1] https://github.com/mate-desktop/caja/issues/1863#issuecommen...
We also have a good half-decade of QA focus behind us, including community-elected goals like a consistency campaign, much like what you asked for.
I'm confident Plasma 5 and 6 have iteratively gotten better on all four points.
It's certainly not perfect yet, and we have many areas to still improve, some of them greatly. But we're certainly not enshittifying, and the momentum is very high. Nearly all modern, popular new distros default to KDE (e.g. Bazzite, CachyOS, Asahi, etc.) and our donations from low-paying individual donors - a decent metric for user satisfaction - have multiplied.
It's really strange how he spins off on this mini-rant about AI ethics towards the end. I clicked on a video about UI design.
1. Burning the planet on your servers is expensive, offloading it to a client-side LLM is not.
2. Ethics means risk means you won't be SOC compliant, your legal department will be mad, your users will be mad, etc.
The current status-quo of a few giant LLMs on supercomputers operated by OpenAI and Google is basically destined to fail, in my eyes. At least from a business standpoint. Consumer stuff might be different.
A perfect pain-point example was mentioned in the video: text selection on mobile is trash. But each app seems to have a different solution, even from the same developer. Google Messages doesn't allow any text selection finer-grained than an entire message. Some other apps have opted in to a "smart" text select which, when you select text, will guess and randomly group-select adjacent words. And lastly, some apps will only ever select a single word when you double tap, which seemed to be the standard on mobile for a long time. All of this is inconsistent, and often I'll want to do something like look up a word and realize, oh, I can't select the word at all (Google Messages), or the system "smartly" selected 4 words instead, or that it did what I wanted and actually just picked one word. Each application designer decided they wanted to make their own change and made the whole system fragmented and worse overall.
I'm only half joking.
You can count on it, it is reliable, it always works.
That correctly identifies the problem. Now why is that, and how can we fix it?
It seems fixable; native GUI apps have COM bindings that can fairly reliably produce the text present in certain controls in the vast majority of cases. Web apps (and "desktop" apps that are actually web apps) have accessibility attributes and at least nominally the notion of separating document data from presentation. Now why do so few applications support text extraction via those channels? If the answer is "it's hard/easier not to", how can we make the right way easier than the wrong way?
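On the native side, the plumbing already exists; something like the sketch below (assuming Windows and pywinauto's UI Automation backend; the Notepad title pattern is just a placeholder) will dump whatever text a well-behaved app exposes through its accessibility tree:

    # Walk a window's UI Automation tree and print any text it exposes.
    from pywinauto import Desktop

    win = Desktop(backend="uia").window(title_re=".*Notepad.*")
    for ctrl in win.descendants():
        text = ctrl.window_text()
        if text.strip():
            print(ctrl.friendly_class_name(), "->", text)

The catch, as noted above, is how many apps (especially web-based ones) leave that tree empty or meaningless, which is exactly where "make the right way easier than the wrong way" has to happen.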
“…Scott Jenson gives examples of how focusing on UX -- instead of UI -- frees us to think bigger. This is especially true for the desktop, where the user experience has so much potential to grow well beyond its current interaction models. The desktop UX is certainly not dead, and this talk suggests some future directions we could take.”
“Scott Jenson has been a leader in UX design and strategic planning for over 35 years. He was the first member of Apple’s Human Interface group in the late '80s, and has since held key roles at several major tech companies. He served as Director of Product Design for Symbian in London, managed Mobile UX design at Google, and was Creative Director at frog design in San Francisco. He returned to Google to do UX research for Android and is now a UX strategist in the open-source community for Mastodon and Home Assistant.”