IDEs We Had 30 Years Ago and Lost (2023)
Posted 3 months ago · Active 3 months ago
Source: blogsystem5.substack.com · Tech story · High profile
Key topics
IDEs
TUIs
Retro Computing
The article discusses the IDEs of the past, specifically those from the 1990s, and how they have been lost in modern development, sparking a discussion on the merits of old and new IDEs.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 25m after posting
Peak period: 122 comments in 0-12h
Average per period: 26.7 comments
Comment distribution: 160 data points (based on 160 loaded comments)
Key moments
- Story posted: Oct 18, 2025 at 8:44 AM EDT (3 months ago)
- First comment: Oct 18, 2025 at 9:09 AM EDT (25m after posting)
- Peak activity: 122 comments in 0-12h (hottest window of the conversation)
- Latest activity: Oct 25, 2025 at 10:39 PM EDT (3 months ago)
I know there's Emacs and vim, but they're far too programmable and bloated compared to the elegance of TC++, which did one job, and one job only, very well. Also, despite being an Emacs power user at this point, it's never going to be as ergonomic or as well thought out: its chords are arcane, while TC++ conveniently shows all possible keybinds throughout its UI.
[0] https://github.com/magiblot/tvision [1] https://github.com/magiblot/turbo
It works fine with Yori too, not only CMD.
[0] https://i.imgur.com/Qvkt3W0.png
[1] https://www.gnu.org/software/texinfo/manual/texinfo/html_nod...
(That doesn't imply I went with VS or similar fat ide, just that I didn't end up using xwpe for real. I tried code::blocks for a while but mostly just use geany or a plain editor.)
https://github.com/gphalkes/tilde
Linux Terminal programs are running in an emulated terminal, and are bound by keyboard input restrictions that DOS programs did not have.
https://github.com/cosmos72/twin
https://github.com/Julien-cpsn/desktop-tui
It’s weird because memory use for the same sorts of programs is not much worse than other languages. In Rust memory use seems comparable to C++. In Go there’s a bit more overhead but it’s still smaller than the binary. So all this is not being loaded.
I get the sense devs just don’t put a lot of effort into stripping dead code and data since “storage is cheap” but it shows next to C or even C++ programs that are a fraction of the size.
I see nothing about Rust’s safety or type system that should result in chonky binaries. All that gets turned into LLVM IR just like C or C++.
Go ships a runtime so that explains some, but not all, of its bloat.
Well, you can non-portably skip kernel32, and use ntdll, but then your program won't work in the next Windows version (same as on any platform really - you can include the topmost API layers in your code, but they won't match the layers underneath of the next version).
But system DLLs are DLLs, so also don't cause your .exe to get bloated.
On some systems this is just not a supported configuration (like what you're talking about with Windows), and on some they go further and actually try to prevent you from doing so, even in assembly.
Linux software is binary portable between distros as long as the binary was compiled using a Glibc version that is either the same or older than the distros you are trying to target. The lack of "portability" is because of symbol versioning so that the library can expose different versions of the same symbol, exactly so that it can preserve backwards compatibility without breaking working programs.
And this is not unique to Glibc, other libraries do the same thing too.
The solution is to build your software against the minimum version of the libraries you intend to support. Nowadays with Docker you can set this up in a matter of minutes (and automate it with a Dockerfile) - e.g. you can use, say, Ubuntu 22 to build your program and it'll work on most modern Linux OSes (or at least glibc won't be the problem if it doesn't).
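To make the symbol-versioning mechanism concrete, here is a rough C sketch; it's only an illustration, not a recommendation. The .symver directive explicitly binds a call to an older versioned glibc symbol, which is the same mechanism that makes "build against the oldest glibc you support" work automatically. The GLIBC_2.2.5 string is the usual x86-64 baseline and is an assumption that depends on your toolchain.

    /* symver_demo.c - sketch of binding a call to an older glibc symbol version.
     * Build: gcc -fno-builtin-memcpy symver_demo.c
     * (GCC may otherwise inline small memcpy calls and emit no call at all.)
     * The version string GLIBC_2.2.5 is the common x86-64 baseline and is an
     * assumption here, not a universal constant. */
    #include <stdio.h>
    #include <string.h>

    /* Ask the linker to resolve memcpy against the old versioned symbol
     * instead of the newer default exported by the build machine's glibc. */
    __asm__(".symver memcpy, memcpy@GLIBC_2.2.5");

    int main(void) {
        char dst[16];
        memcpy(dst, "portable", 9);
        puts(dst);
        return 0;
    }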
Well, duh? "Property A is possible if we match all requirements of property A".
Yes, using an older distro is the de facto method of resolving this problem. Sometimes it's easy, sometimes it's hard, especially when we want to support older distros while using a new compiler version and fairly fresh large libraries (e.g. Qt). Compiling everything on an older distro is possible, but sometimes it's hell.
> And this is not unique to Glibc, other libraries do the same thing too.
This only means that it is a very good idea to drop dependency on glibc if it's feasible.
macOS has a "minimum macos required" option in the compiler. Windows controls this with manifests. It's easy on other systems.
What I describe is different from what you wrote, which is that Linux is not binary compatible between distros. This is wrong because Linux is binary compatible with other Linux distributions just fine. What is not compatible is using a binary compiled using a newer version of some shared libraries (glibc included but not the only one) on a system that has older versions - but it is fine to use a binary compiled with an older version on a system with newer versions, at least as long as the library developers have not broken their ABI (this is a different topic altogether).
The compatibility is not between different distros but between different versions of the same library and what is imposed by the system (assuming the developers keep their ABIs compatible) is that a binary can use shared libraries of the same or older version as the one it was linked at - or more precisely, it can use shared libraries that expose the same or older versions of the symbols that the binary uses.
Framing this as software not being binary portable between different distros is wildly mischaracterizing the situation. I have compiled a binary that links against X11 and OpenGL on a Slackware VM that works on both my openSUSE Tumbleweed and my friend's Debian system without issues - that is a binary that is portable across different distros just fine.
Also if you want to use a compiler more recent than the one available in the distro you'll need to install it yourself, just like under Windows - it is not like Windows comes with a compiler out of the box.
https://github.com/golang/go/issues/16570
Which is why they have already backpedalled on this decision on most platforms. Linux is pretty much the only OS where the syscall ABI can be considered stable.
I'm fine with using libc on other systems than Linux, because toolchains on other systems actually support backward compatibility. Not on Linux.
Another advantage is that at least for Rust you can do whole program optimization. The entire program tree is run through the optimizer resulting in all kinds of optimizations that are otherwise impossible.
The only other kinds of systems that can optimize this way are higher level JIT runtimes like the JVM and CLR. These can treat all code in the VM as a unit and optimize across everything.
I get why this might lead to big intermediate files, but why do the final binaries get so big?
The main issue is that Rust binaries typically only link to libc whereas C++ binaries link to everything under the sun, making the actual executable look tiny because that's not where most of the code lives.
When I tried to compare Rust programs to their C(++) equivalents by adding the sizes of linked libraries recursively (at least on Linux, that's impossible for Windows), I still found Rust programs to have a rather large footprint. Especially considering Rust still links to glibc which is a significant chunk of any other program as well.
I believe many of Rust's statically linked libraries do more than their equivalents in other languages, so I think some more optimisation in stripping unused code paths could significantly reduce the size of some Rust applications.
I first used emacs on terminals that were hooked to Sun workstations and you were either going to use a serial terminal which was very slow, or the terminal emulator on the Sun which was a GUI program that had to do a lot of work to draw the characters into the bitmap. So that’s your reason TUIs went away.
Most games at that time used mode 13h which was 320x200 with 8-bits per pixel which therefore indexed into a 256-colour palette (which could itself be redefined on the fly via reading from and writing to a couple of special registers - allowing for easy colour-cycling effects that were popular at that time). Here's a list of the modes: https://www.minuszerodegrees.net/video/bios_video_modes.htm
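To make that concrete, here is a rough sketch of the mode 13h setup and the palette-register writes described above, in old Turbo C / DOS style (int86, outportb, far pointers). It assumes a 16-bit real-mode DOS compiler and will not build on a modern hosted toolchain; ports 0x3C8/0x3C9 are the standard VGA DAC write-index/data registers.

    /* mode13h.c - sketch for a 16-bit DOS compiler such as Turbo C. */
    #include <dos.h>

    #define VGA_DAC_WRITE_INDEX 0x3C8
    #define VGA_DAC_DATA        0x3C9

    /* The mode 13h framebuffer lives at segment A000h. */
    unsigned char far *vga = (unsigned char far *)0xA0000000L;

    static void set_mode(unsigned char mode) {
        union REGS r;
        r.h.ah = 0x00;              /* BIOS int 10h: set video mode */
        r.h.al = mode;              /* 0x13 = 320x200, 256 colours */
        int86(0x10, &r, &r);
    }

    static void set_palette_entry(unsigned char index, unsigned char r6,
                                  unsigned char g6, unsigned char b6) {
        outportb(VGA_DAC_WRITE_INDEX, index);  /* pick the palette slot */
        outportb(VGA_DAC_DATA, r6);            /* components are 6-bit (0..63) */
        outportb(VGA_DAC_DATA, g6);
        outportb(VGA_DAC_DATA, b6);
    }

    int main(void) {
        int x;
        set_mode(0x13);
        set_palette_entry(1, 63, 0, 0);        /* entry 1 = bright red */
        for (x = 0; x < 320; x++)
            vga[100 * 320 + x] = 1;            /* horizontal line on row 100 */
        set_mode(0x03);                        /* back to 80x25 text mode */
        return 0;
    }

The colour-cycling effects mentioned above boil down to rewriting palette entries like this on the fly, typically once per vertical retrace.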
https://www.atarimagazines.com/compute///issue138/124_Laptop...
It ran the DOS text screen in VGA graphics mode, with soft-loaded fonts, but this also permitted all kinds of extra modes -- iff your DOS apps supported them. Some just read the screen dimensions and worked, some didn't.
If you made a custom font you could also have more diversity in the number of rows too but this was rarely done.
Eventually different text modes became available with higher resolution video cards and monitors. 132 columns of text were common but there were others.
Remember, the video hardware rendered text mode full-screen, and it had to be reconfigured to change to a different number of lines and columns. Only specific sizes were supported.
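As an illustration of that reconfiguration, under DOS you could ask the BIOS to reload the 8x8 ROM font on a VGA card and get 50 text rows instead of 25. A rough sketch, again assuming a 16-bit DOS compiler with dos.h (on EGA the same call gives 43 rows):

    /* textmode50.c - switch VGA text mode to 80x50 via BIOS int 10h. */
    #include <dos.h>

    int main(void) {
        union REGS r;

        r.x.ax = 0x0003;    /* set standard 80x25 colour text mode */
        int86(0x10, &r, &r);

        r.x.ax = 0x1112;    /* load the 8x8 ROM font -> 50 rows on VGA */
        r.h.bl = 0x00;      /* font block 0 */
        int86(0x10, &r, &r);

        return 0;
    }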
https://github.com/gphalkes/tilde
I used to have a copy of a Turbo Pascal graphics book with a blue-purple Porsche (not pg's hah) on the cover that included code for a raytracer. It would take about a minute to render one line at 320x200x256 colors, depending on the number of scene objects and light sources.
I used Turbo Pascal 2 as late as 1991, if not later, because that was the version we had. It was really fast on a 386 40 MHz or whatever exact type of PC we had then. A bit limiting perhaps that it only came with a library for CGA graphics, but on the other hand it made everything simpler and it was good for learning.
A few years ago I wanted to run my old Turbo Pascal games and decided to port to Free Pascal. Sadly Free Pascal turned out to only ship with the graphics library introduced in Turbo Pascal 4, but on the other hand I got a few hours of fun figuring out how to implement the Turbo Pascal 1-3 graphics API using inline assembler to draw CGA graphics, and then my games worked (not very fun games to be honest; more fun to implement that API).
Zed has remote editing support and is open source. Resource consumption is a bizarre proposition, considering what abstractions the terminal has to be forced into to behave something like a normal window.
Really, TUIs are not very good. I get it, I use the terminal all the time and I will edit files with vim in it, but it is a pointless exercise to try to turn the terminal into something it was never meant to be and have it emulate something which would be trivial in a normal OS window. To be honest it makes me cringe when people talk about how they perform tasks in the terminal that would be much easier done in a graphical environment with proper tools.
TUIs are bizarre legacy technology, full of dirty hacks to somewhat emulate features every other desktop has. Why would any developer use them, when superior alternatives, not based on this legacy technology, exist and are freely available?
The user experience is inconsistent, with features varying wildly between terminals, which is frustrating. It also makes customization difficult: e.g. in a TUI IDE you cannot have font settings. Shortcuts are also terminal-dependent; an IDE can only use those shortcuts the terminal isn't using itself.
Something as basic as color is extremely hard to do right on a terminal. Where in a normal GUI you can give any element a simple RGB color, you cannot replicate that across TUIs. The same goes for text styling: the terminal decides what italic font it wants to use, and the IDE cannot modify this.
They are also very limited in graphical ability. Many features users expect in a GUI cannot be replicated, or can only be replicated poorly. E.g. modern data science IDEs feature inline graphics, such as plots. This is (almost) impossible to replicate on a terminal. If you are using a profiler you might want to plot, preferably with live data. Why arbitrarily limit what an IDE can do to some character grid?
The terminal is just a very poor graphical abstraction. It arbitrarily limits what an IDE can do. Can you tell me why anybody would seriously try to use a terminal as an IDE? Terminal UIs are more complex, because they need to handle the bizarre underlying terminal, and they are often less responsive, since they rely on the terminal to be responsive. There might be some very marginal improvement in resource usage; do you think that is even relevant compared to the much better dev experience of a normal GUI?
There absolutely is no real advantage of TUIs. And generally I have found people obsessing over them to be mostly less tech literate and wanting to "show off" how cool their computer skills are. All serious developers I have ever known used graphical dev tools.
>> need an enormous array of hacks to emulate basic features
What are those hacks? As far as I can remember, TUIs ran faster on ancient hardware than anything else does on today's modern computers.
People know perfectly well that I am talking about the way in which a terminal emulator can be used to display 2D graphics: by utilizing specific escape sequences to draw arbitrary glyphs on the terminal grid.
>What are those hacks.
Everything is a hack. TUIs work by sending escape sequences, which the terminal emulator then interprets in some way, and if everything goes right you get 2D glyph-based graphics. Literally everything is a hack to turn something which functions like a character printer into arbitrary 2D glyphs. Actually look at how bad this whole thing is. Look at the ANSI escape sequences you need to make any of this work: does that look like a sane graphics API to you? Obviously not.
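For anyone who hasn't looked at it, here is a minimal C sketch of that escape-sequence "API": clearing the screen, positioning the cursor, and setting a 24-bit colour are all done by writing magic byte sequences and hoping the terminal understands them (the truecolor SGR sequence in particular is widely but not universally supported).

    /* ansi_demo.c - minimal sketch of driving a terminal with ANSI escapes. */
    #include <stdio.h>

    int main(void) {
        printf("\x1b[2J");                /* ED: clear the screen */
        printf("\x1b[5;10H");             /* CUP: move cursor to row 5, column 10 */
        printf("\x1b[38;2;255;128;0m");   /* SGR 38;2: truecolor foreground, if supported */
        printf("hello from a cell grid");
        printf("\x1b[0m\n");              /* SGR 0: reset attributes */
        return 0;
    }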
>As far as I can remember, TUIs ran faster on ancient hardware than anything else does on today's modern computers.
This is just delusional. Modern 2D graphics are extremely capable and deliver better performance in every metric.
>> This is just delusional.
That is a bit uncalled for.
We are not talking about DOS, we are talking about "modern" TUIs you would use on a modern Linux/Windows/MacOS system.
I even made that explicit in my first paragraph.
By the way one of the most frequent modern TUI apps that I use is Midnight Commander. It's a very nice app, which I use mostly when I SSH into a remote machine to manage it. Is there a 2D accelerated GUI that can help me do the same?
Of course it doesn't, because it isn't a graphics API. It's a styled text API.
> Modern 2D graphics are extremely capable and deliver better performance in every metric.
A big part of the complaint is https://danluu.com/keyboard-latency .
How is graphical vim even different from TUI vim? At least Emacs can render images.
We don't need to go back to the 66MHz era, but it's embarrassing that programs running on a dozen computer cores all executing at several gigahertz feel less responsive than software written half a century ago. Sure, compiling half a gigabyte of source code now finishes before the end of the year, but I rarely compile more than a hundred or so new lines at a time, and the process of kickstarting the compiler takes much longer than the actual compilation.
A terminal is no more than a rendering environment. With some workarounds (a custom renderer and input loop most likely), you can probably compile Zed to run in FreeDOS in the same environment you use to run Turbo Pascal. I doubt you'll get the same responsiveness, though.
Today, Python, Rlang, PHP, Java, and Lisp bring these features. But not C. Oh the irony.
At least that's the theory; in reality make has a lot of warts and implementing a good solid makefile is an art. Don't even get me started on the horrors of automake - perhaps I just need to use it in one of my own projects, but as someone who primarily ports others' code, I hate it with a passion. It is so much easier when a project just sticks with a hand-crafted makefile.
For completeness: The other half of make is to implement the rest of the build process.
Borland C++ had the compiler built into the IDE (there was also a separate command-line version, but a copy of the compiler was compiled into the IDE itself). This allowed the IDE to not spawn separate processes for each file nor even need to hit the disk - the compiler (which was already in RAM as part of the IDE's process) would read the source code from the editor's buffer (instead of a file, so again, no hitting the disk) and would also keep a bunch of other stuff in memory between builds instead of re-reading it.
This approach allows the compiler to reuse data not only between builds but also between files of the same build. Meanwhile make is just a program launcher; the program - the compiler - needs to run for each file and load and parse everything it needs to work for every single source file it compiles, thus rebuilding and destroying its entire universe for each file separately. There is no reuse here - even when you use precompiled headers to speed up some things (which is something Borland C++ also supported, and it did speed up things even more on an already fast system), the compiler still needs to build and destroy that universe.
It is not a coincidence that one of the ways nowadays to speed up compilation of large codebases is unity builds[0], which essentially combine multiple C/C++ files (the files need to be aware of it to avoid one file "polluting" the contents of another) to let multiple compilation units reuse/share the compilation state (such as common header files) with a single compiler instance - a minimal sketch follows the links below. E.g. it is a core feature of FASTbuild[1] which combines distributed builds, caching and unity builds.
Of course Borland C++'s approach wasn't perfect as it had to run with limited memory too (so it still had to hit the disk at some point - note though that the Pascal compilers could do everything in memory, including even the final linking, even the program could remain in memory). Also bugs in the compiler could linger, e.g. I remember having to restart Borland C++ Builder sometimes every few hours of using it because the compiler was confused about something and had cached it in memory between builds. Also Free Pascal's text mode IDE (shown in the article) has the Free Pascal compiler as part of the IDE itself, but in the last release (I think) there is a memory leak and the IDE's memory use keeps increasing little by little every time you build, which is something that wouldn't matter with a separate program (and most people use FPC as a separate program via Lazarus these days, which is most likely why nobody noticed the leak).
[0] https://en.wikipedia.org/wiki/Unity_build
[1] https://fastbuild.org/
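For illustration, a unity build in its simplest form is nothing more than one translation unit that textually includes the others, so shared headers are parsed once per batch instead of once per source file. The file names here are made up:

    /* unity.c - hypothetical unity ("jumbo") build file.
     * The build system compiles only this file instead of invoking the
     * compiler once per source file; editor.c, renderer.c, parser.c and
     * main.c are placeholder names for this sketch. */
    #include "editor.c"
    #include "renderer.c"
    #include "parser.c"
    #include "main.c"

The catch, as noted above, is that the included files must avoid clashing file-scope names and leaked macros, since they now all share one compilation state.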
And yes, efficient separate and incremental compilation is major advantage of C. I do not understand why people criticize this. It works beautifully. I also think it is good that the language and build system are separate.
Why? Yes, VSCode is slow. But Zed and many neovim GUIs are extremely responsive. Why would achieving that be impossible, or even that hard? You "just" need software which is fast enough to render the correct output the frame after the input. In an age where gaming is already extremely latency sensitive, why would having a text editor with similar latency performance be so hard?
Do you have any actual evidence that zed or neovide are suffering from latency problems? And why would putting a terminal in the middle help in any way in reducing that latency?
The problem is the entire software stack between the keyboard and the display. From USB polling to driver loops and GPU callbacks, the entire software stack has become incredibly asynchronous, making it trivial for computers to miss a frame boundary. Compared to DOS or similar environments, where applications basically took control over the entire CPU and whatever peripherals it knew to access, there are millions of small points where inefficiencies can creep in. Compare that to the hardware interrupts and basic processor I/O earlier generations of computers used, where entered keys were in a CPU buffer before the operating system even knew what was happening.
VSCode isn't even that slow, really. I don't find it to be any slower than Zed, for instance. Given the technology stack underneath VSCode, that's an impressive feat by the Microsoft programmers. But the kind of performance TUI programs of yore got for free just isn't available to user space applications anymore without digging into low-level input APIs and writing custom GPU shaders.
In small part, CRTs running at 70Hz or 85Hz back in the mid-80s, as well as the much smoother display output of CRTs versus even modern LCDs, made for a much better typing experience.
I think what TUIs get right is that they are optimized for use by the keyboard.
I don’t care if they are a pain for devs to write vs OS APIs, they have the best keyboard control so I use them. I despise the mouse due to RSI issues in the past.
>I think what TUIs get right is that they are optimized for use by the keyboard.
Neovim is just as much a GUI as a TUI. You can even use it as a backend for VSCode. Nothing about the keyboard controls have anything to do with this.
I use neovim like that, and the selling point for me is that it's one less program that I have to install and learn, with the added (crucial) benefit that it doesn't update on its own, changing UI and settings that I was used to.
This exact thing remains true though: you are using the exact same neovim, but instead of it being wrapped inside a totally bizarre piece of legacy software, it is rendered inside a modern graphical frontend. It looks mostly the same, except it handles fonts better, it is independent of weird terminal quirks and likely faster. There is no downside.
And again, your point about using TUI stuff because of the input method or whatever is just false. Neovide has the exact same input method, yet has a complete GUI. Using the terminal makes no sense at all; it is the worst neovim experience there is.
It ships with your OS?
Heck, on modern terminals there's even pretty great mouse integration if you want.
Unless your window full of text is GPU-accelerated, tear-free and composited, with raytraced syntax highlighting and AI-powered antialiasing, what is even the point?
TUIs are great if you structure them around keyboard input. There's more of a learning curve, but people develop a muscle memory for them that lets them fly through operations. I think the utility of this is sorely underestimated and it makes me think of my poor mom, whose career came to an end as she struggled with the new mouse-driven, web-enabled customer service software that replaced the old mainframe stuff.
The late 80s/early 90s trend of building GUI-like TUIs was really more to get users on board with the standard conventions of GUIs at a time when they weren't yet ubiquitous (among PC users). Unifying the UI paradigms across traditional DOS and Windows apps, with standard mouse interactions, standard pull-down menus, and standard keyboard shortcuts was a good thing at the time. Today it's less useful. Things like Free Pascal have UIs like this mainly for nostalgia and consistency with the thing they're substituting for (Turbo Pascal).
Neovim and its frontends prove that if you remove terminal emulators the applications become better. The terminal emulator is just in the way.
There is absolutely no reason to build that keyboard focused interface around the terminal. Just drop the terminal and keep the interface, just like neovim did.
What I said about the separation of user interaction from graphics is also not an opinion.
TUIs are a superb tool. They were when they were first standardised in late-era DOS apps in the late 1980s and early 1990s, and they still have a place today.
Here are some primary reasons you have not considered in your rant:
* UI standards and design
TUIs bring the sensible, designed-by-experts model of UI construction and human-computer interface from the world of GUIs into text-only environments such as the terminal, remote SSH connections, and so on.
For example, they let one set options using a form represented in a dialog box, by Tabbing back and forth and selecting with Space or entering values, without trying to compose vast cryptic command lines.
This is not just me; this is the stuff of jokes. This is objective and repeatable.
https://xkcd.com/1168/
https://xkcd.com/1597/
* Harmonious design
A well-done TUI lets users use the same familiar UI both in a GUI and at the console. This is the actively beneficial flipside of the trivial cosmetics you are advocating: you praise a text-mode app implemented in a GUI because it can do more. That is a poor deal; a ground-up native GUI app can do much more still.
But TUIs bring the advantages of familiarity with GUIs to situations where a GUI is unavailable.
* Common UI
The apps you cite as positive examples are markedly poor at following industry-standard UI conventions, which suggests to me that you are ignorant that there are industry standard UI conventions. Perhaps you are too young. That is no crime, but it does not mean I must forgive ignorance.
Nonetheless, they exist, and hundreds of millions of people use them.
https://en.wikipedia.org/wiki/IBM_Common_User_Access
TUIs allow familiar UIs to be used even when a GUI or graphics at all are unavailable.
TUIs are not just about menus; they also define a whole set of hotkeys and so on which allow skilled users to navigate without a pointing device.
* Disabilities and inaccessibility
Presumably you are young and able-bodied. Many are not.
GUIs with good keyboard controls are entirely navigable by blind or partially-sighted users who cannot use pointing devices. They are also useful for those with motor disabilities that preclude pointing and clicking.
Millions use these, not from choice, from need.
But because those tools are there, that means that they can also use TUI apps which share the UI.
And the fact that this common UI exists for keyboard warriors like myself, who actively prefer a keyboard-centric UI, means that the benefits of a11y carry across and remain benefits for people who do not need a11y assistance.
=====
That's 4 reasons, intertwined, that you showed no sign of having considered. IMHO any 1 of the 4 is compelling on its own but combined any 2 would be inescapable and all of them together, for me, completely rebut and refute your argument.
For me the best textual interface I've ever used remains Magit in Emacs: https://magit.vc/ I wish more of Emacs was like it.
I actually use emacs as my git clients even when I'm using a different IDE for whatever reason.
Some other packages also use it. Most notably for my personal usage is the gptel package.
The real neat thing about Emacs' text interface is that it is just text that you can consistently manipulate and interact with. It is precisely the fact that I can isearch, use Occur, write out a region to a file, diff two buffers, use find-file-at-point, etc. that makes it so interesting, to me at least.
A far more interesting example than Magit is the compile buffer (from M-x compile): This is just a regular text buffer with a specific major mode that highlights compiler errors so that you can follow them to the referenced files (thereby relegating line numbers to an implementation detail that you don't have to show the user at all times). But you can also save the buffer, with the output from whatever the command was, onto disk. If you then decide to re-open the buffer again at whatever point, it still all looks just as highlighted as before (where the point is not that it just uses color for its own sake, but to semantically highlight what different parts of the buffer signify) and you can even just press "g" -- the conventional "revert" key -- to run the compile job again, with the same command as you ran the last time. This works because all the state is syntactically present in the file (from the file-local variable that indicates the major mode to the error messages that Emacs can recognize), and doesn't have to be stored outside of the file in in-memory data structures that are lost when you close Emacs or reboot your system. The same applies to grepping btw, as M-x grep uses a major mode that inherits from the compile mode.
For people who can look at a list of key bindings once and have them memorized, maybe. Turns out most people are not like that, and appreciate an interface that accounts for that.
You also completely ignore that the menus are used to set arguments to be used by the command subsequently invoked, and that the enabled/disabled arguments and their values can be remembered for future invocations.
> The fact that Transient hooks into the MVC and breaks elementary navigation such as using isearch
Not true. (Try it.) This was true for very early versions; it hasn't been true for years.
> or switching around buffers
Since you earlier said that transient menus could be replaced with regular prefix keys, it seems appropriate to point out that transient menus share this "defect" with regular prefix keys, see https://github.com/magit/transient/issues/17#issuecomment-46.... (Except that in the case of transient you actually can enable such buffer switching, it's just strongly discouraged because you are going to shoot yourself in the foot if you do that, but if you really want to you can, see https://github.com/magit/transient/issues/114#issuecomment-8....
> has irritated me ever since Magit adopted the new interface.
I usually do not respond to posts like this (anymore), but sometimes the urge is just too strong.
I have grown increasingly irritated by your behavior over the last few weeks. Your suggestion to add my cond-let* to Emacs had a list of things "you are doing wrong" attached. You followed that up on Mastodon with (paraphrasing) "I'm gonna stop using Magit because it's got a sick new dependency". Not satisfied with throwing out my unconventional syntax suggestion, you are now actively working on making cond-let* as bad as possible. And now you are recycling some old misconceptions about Transient, which can at best be described as half-truths.
To clarify, the "custom buffer" can list the bindings. Think of Ediff and the control buffer at the bottom of the frame.
I am not saying that transient offers nothing over regular prefix keys; there is a common design pattern that has some definite and useful value. My objection is that the implementation is more complex than it should be, and this complexity causes UX issues.
> Not true. (Try it.) This was true for very early versions; it hasn't been true for years.
Then I was mistaken about the implementation, but on master C-s breaks transient buffers for me, and I cannot use C-h k as usual to find out what a key press executes. These are the annoyances I constantly run into that break what I tried to describe in my previous comment.
> Except that in the case of transient you actually can enable such buffer switching, it's just strongly discouraged because you are going to shoot yourself in the foot if you do that
I did not know about this, so thank you for the link. I will probably have to take a closer look, but from a quick glance over the issue, I believe that the problem that you are describing indicates that the fear I mentioned above w.r.t. the complexity of transient might be true.
> I usually do not respond to posts like this (anymore), but sometimes the urge is just too strong.
I understand your irritation and don't want to deny its validity. We do not have to discuss this publicly in a subthread about DOS IDEs, but I am ready to chat any time. I just want you to know that I am not saying anything to personally insult you. Comments I make on cond-let and Magit sound the way they do because I am also genuinely irritated and concerned about developments in the Emacs package space. To be honest, it often doesn't occur to me that you would read my remarks, and I say this without any malicious or ulterior motives: in my eyes you are still a much more influential big-shot in the Emacs space, while I see myself as just a junior janitor whose opinions nobody cares about. But these self-image and articulation problems are mine, as are their consequences, so I will do better to try to remember that the internet is a public space where anyone can see anything.
The `C-h` override is pretty cool there too, e.g. if from magit-status I do `C-h -D` (because I'm wondering what "-D Simplify by decoration" means), then it drops me straight into Man git-log with point at
(Ooh, I learnt a new trick from writing a comment; who says social media is a waste of time?)
- Search for something using C-s
- Exit isearch by moving the point (e.g. C-n)
- Is the transient buffer still usable for you? In my case it becomes just a text buffer and all the shortcuts just get mapped to self-insert-command.
(I'm the author of Magit and Transient. (Though not the original author of Magit.))
The transient menus certainly play an important role but I think other characteristics are equally important.
A few years ago I tried to provide an abstract overview of Magit's "interface concepts": https://emacsair.me/2017/09/01/the-magical-git-interface/. (If it sounds a bit like a sales pitch, that's because it is; I wrote it for the Kickstarter campaign.)
...and everyone else, including everyone who is also using a GUI on Linux - even if they use the GUI version of Emacs.
Any non-trivial use of emacs ends up involving a pile of customizations.
Also, another user said it has a tutorial when opened which should teach the basics in “10 to 15 min” but I have a feeling I would need 0 minutes to learn the basics of turbo c++.
I get that there are diehard Emacs and vim fans and honestly I'm happy for them. But at the end of the day, scientifically speaking, ease of use is not JUST down to familiarity alone. You can objectively measure this stuff and some things are just harder to use than others even with preloaded info.
Well, Turbo C++ (at least the one in the article) does use common conventions but those were conventions of 1992 :-P. So Copy is Ctrl+Ins, Paste is Shift+Ins, save is F2, open is F3, etc. Some stuff are similar to modern editing like Shift+motion to select, F1 for help, F10 to activate the menu bar, etc. And all shortcut keys are displayed on the menu bar commands so it is easy to learn them (some of the more intricate editor shortcut keys are not displayed in the menus, but are mentioned in the help you get if you press F1 with an editor window active).
That and lack of a decent visual debugger situation.
So I have this weird thing where I use emacs for interactive git rebasing, writing commit messages, editing text files and munging text... and then RustRover for everything else.
It's sorta like the saying, "I wish I was the person my dogs think I am"... "I wish emacs was actually the thing that I think it is" ?
Since it has no dependencies, I wouldn't be surprised if it gets merged into Emacs core at some point.
How did the magit guy or people even come up with the data model? Always had the feeling that it went beyond the git data model. And git porcelain is just a pile of shards.
I moved to NeoVim many years ago and have been using NeoGit (a supposed Magit clone) the entire time. It's good but I'm missing the "mind blowing" part. I'd love to learn more though! What features are you using that you consider amazing?
If you want to do some really advanced stuff, sure it's a little arcane, but the vast majority of stuff that people use in git is easy enough. Branching and committing and merging never seemed that hard to me.
The big thing I am missing from it is a branch history: a record, for every commit, of which branch it once belonged to. No improved interface can fix that; that would have to be added to the core of git.
I'm as hardcore a CLI user as it gets; I've only lived in the CLI since the mid 80s and I'm still firmly there.
git is the absolute worst CLI ever in the history of humanity.
It's not all that different from a typical TUI interface.
Magit isn't great because of the interface. It's great because the alternative (plain git) has such a crappy interface. Contrast principle and all.
And that's different from many TUIs how?
Like in the GUI analogy, you can then choose to remember and use the displayed keyboard shortcuts for frequently used operations, but you don’t have to.
You can even see the menu atop the screen shot in the article, with the familiar names etc.
So the only thing you need to know are those commands. And that's the main appeal of Emacs: to have commands that augment text editing. One of the most powerful examples is org mode, which is just another markup language, but there are a lot of commands that make it an organizer, a time tracker, an authoring platform, a code notebook.
Each mode is a layer of productivity you put on the bare editing experience.
It has always sounded like emacs is extraordinarily powerful and configurable, and that must be great for people who want to do extraordinary things with their text editor. There was a time when I enjoyed tinkering with my environment more, but these days I prefer simple, ordinary tools I can easily understand. I don't really want to think about the tools at all, but focus on the task I'm doing with them. I'm content to let emacs be something other people appreciate.
One of my major motivations for putting in the time is that Emacs is very stable. You can coast for decades on a configuration. I don't mind learning new stuff, but it's grating for it to be taken away, especially if there's no recourse (proprietary software).
I've always supposed that emacs was for people with inscrutably complex text-editing needs, far beyond the bounds of my "nano is plenty" imagination, but if my cozy little coding environment is the kind of thing people are doing with emacs, I can understand why they would like that.
Really, compared to what I see here, the chief difficulty with emacs is the sheer volume of possible commands, and the heterogeneity of their names and patterns, which I believe is all a result of its development history. But the basics are just as you describe.
Emacs has Elisp commands first, then keyboard shortcuts for them, then maybe (not as a rule) menu items, and rarely dialog boxes. The Turbo Vision approach, from its design philosophy, has menus and dialogs first, then keyboard shortcuts for them.
One approach isn’t strictly better than the other, nor are they mutually exclusive. Ideally you’d always have both. My disagreement is with the “I think Emacs still does all of this” above. Emacs is substantially different in its emphasis, presentation, and its use of dialogs.
Of course, I must say there is a trade off here: you can design for novices or for advanced users, but very often not both.
Also OP apparently has no knowledge of the far better IDEs we had 30-40 years ago including but not limited to:
- Apple MPW, 1986. GUI editor where every window is (potentially) a Unix-like shell, running commands if you hit Enter (or cmd-Return) instead of Return. Also the shell scripting has commands for manipulating windows, running editing actions inside them etc. Kind of like elisp but with shell syntax. There's an integrated source code management system called Projector. If you type a command name, with or without arguments and switches, and then hit option-Return then it pops up a "Commando" window with a GUI with checkboxes and menus etc for all options for that command, with anything you'd already typed already filled out. It was easy to set up Commando for your own programs too.
- Apple Dylan, 1992-1995. Incredible Lisp/Smalltalk-like IDE for Apple's Dylan language
- THINK Pascal and C, 1986. The Pascal version was originally an interpreter, I think written for Apple, but then became a lightning-fast compiler, similar to Borland on CP/M and MS-DOS but better (and GUI). The C IDE later became a Symantec product.
- Metrowerks Codewarrior, 1993. Ex THINK/Symantec people starting a Mac IDE from scratch, incorporating first Metrowerks' M68000 compilers for the Amiga, then a new PowerPC back end. Great IDE, great compilers -- the first anywhere to compile Stepanov's STL with zero overhead -- and with a groundbreaking application framework called PowerPlant that heavily leaned on new C++ features. It was THE PowerPC development environment, especially after Symantec's buggy PoS version 6.
- Macintosh Allegro Common Lisp (later dropped the "Allegro"), 1987. A great Mac IDE. A great Lisp compiler and environment. Combined in one place. It was expensive but allowed amazing productivity in custom native Mac application development, far ahead of the Pascal / C / C++ environments. Absolutely perfect for consultants.
Really, it is absolutely incredible how slick and sophisticated a lot of these were, developed on 8 MHz to 33 or 40 MHz M68000s with from 2-4 MB RAM up to maybe 16-32 MB. (A lot of the Mac II line (and SE/30) theoretically supported 128 MB RAM, but no one could afford that much even once big enough SIMMs were available.)
After IDEs finally started being a common thing in UNIX systems, I left Emacs behind back to IDEs.
Still, I have almost a decade where Emacs variants and vi were the only option, ignoring even more limited stuff like joe, nano, and ed.
I never really liked any of the typical late-MS-DOS era TUI applications and have no nostalgia for those. I think a small TUI like an OS installer is fine, but I realised it is the command line I like. Launching into a TUI is not much different from opening a GUI, and both break out of the flow of just typing commands at a prompt. I use DOSBox and FreeDOS all the time, but I almost never spend time in any of the TUI applications.
Despite that, I am currently working on a DOS application running in 40x25 CGA text mode. I guess technically it is a TUI, but at least it does not look much like a typical TUI.
I think that after 25+ years of usage, I'm "used to it" by now.
They finally got good enough in the late 90s. I think it helped that computers finally had enough memory to run both the editor and the program itself.
432 more comments available on Hacker News