Terminal UI
github.com
ssh admin.hotaisle.app
Can you tell me more about what you mean by "Neocloud", and where exactly you host the servers? (Do you colocate, resell dedicated servers, or use the major cloud providers?)
This is my first time hearing the term neocloud. It seems focused on AI, and I'll be honest, that's a con in my book and not a pro (I like Hetzner and compute-oriented cloud providers).
Please share more about neoclouds, and whether the concept could be expanded beyond the AI use case, which is all I'm seeing when I search the term.
We buy, deploy and manage our own hardware. On top of that, we've built our own automation for provisioning. For example, K8S assumes that an OS is installed; we're operating at a layer below that, which enables the machine to boot and be configured on demand. This also includes DCIM and networking automation.
We colocate in a datacenter (Switch).
Ironic is an open source project in this space if people are curious what this looks like.
While it does involve coordinating a lot of moving parts, I'm not sure I agree about the complexity...
https://docs.openstack.org/ironic/latest/_images/graphviz-21...
A service you have no use for or interest in is “a con in your book”, what?
What _would_ you trust as a source of truth for source code if not a public commit log? I agree that a squash commit’s timestamp in particular ought not be taken as authoritative for all of the changes in the commit, but commit history in general feels like the highest quality data most projects will ever have.
It is indeed not open source: the repo only has a README and a download script. The "open source" they're referring to is, I think, just the similar README convention.
Which makes this comment they made on Reddit especially odd: https://www.reddit.com/r/aws/comments/1q3ik9z/comment/nxpq7t...
> And the folder structure is almost an exact mirror of mine
Even though Rust has conventions for how to organize source code, a near-identical folder structure is unlikely, particularly since the original code is not public, so it would have to be one hell of a coincidence. (The funniest potential explanation would be that both people used the same LLM to code the TUI app.)
What you're learning here is that there's not really a viable market for simple, easily replicable tools. People simply won't pay for them when they can spin up a Claude session, build one in a few hours (often unattended!), and post it to GitHub.
Real profit lies in real value. In tooling, value lies in time or money saved, plus some sort of moat that others cannot easily cross.
https://docs.bazzite.gg/Installing_and_Managing_Software/
Linux is just a kernel, not everyone agrees on what is “better” and “cleaner” to use with it!
> It's better to simply point at the binaries directly.
Binaries aren't signed at all, and they can be malicious and do dangerous things.
Especially if it's using curl | bash to install binaries.
But on average, brew is much safer than downloading a binary from the ether where we don't know what it does.
I see more tools using the curl | bash install pattern as well, which is completely insecure and leaves machines vulnerable.
Looks like the best way to install these tools is to build them yourself, i.e. make install, etc.
And you're fully auditing the source code before you run make, right? I don't know anyone who does. You're handing over just as much control as with curl|bash from the developer's site, or brew install; you're just adding more steps...
I mean you can?
But that is the whole point when the source is available, it is easier to audit, rather than binaries.
Even with brew, the brew maintainers have already audited the code, and the source used for installation (even with --HEAD) is hosted on brew's CDN.
Realistically, how much are they auditing? I absolutely agree with your sentiment that it's better than a binary, but I think the whole security model we have is far too trusting because of the historically overwhelming number of good-faith actors in our area both in industry and hobbyists
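There is a middle ground between blind curl | bash and a full source audit: verifying the downloaded artifact against a digest published out of band. A minimal sketch of that check (the helper name and workflow here are illustrative, not any particular project's API):

```python
import hashlib
import hmac

def artifact_matches(data: bytes, expected_sha256: str) -> bool:
    """Check a downloaded artifact against a published SHA-256 digest.

    The expected digest should arrive over a separate channel (e.g. the
    project's release page) from the artifact itself; otherwise whoever
    controls the download can forge both.
    """
    actual = hashlib.sha256(data).hexdigest()
    # Constant-time comparison; overkill for a public digest, but a good habit.
    return hmac.compare_digest(actual, expected_sha256.lower())
```

This is roughly what brew gives you for free: each formula pins a SHA-256, so a tampered tarball fails the install rather than running.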
> as long as I have a basic Linux environment, Homebrew, and Steam
https://xeiaso.net/blog/2025/yotld/ (A year of the Linux Desktop)
I guess some post-macOS users might bring it with them when moving. If it works :shrug:
For example I use it for package management for KASM workspaces:
https://gist.github.com/jgbrwn/28645fcf4ac5a4176f715a6f9b170...
Please people, inspect the source to your tools, or don't use them on production accounts.
Is it the best out there? No. But it does work, and it provides me with updates for my tools.
Random curl scripts don't auto-update.
Me downloading executables and dropping them in /bin, /sbin, /usr/bin or wherever I'm supposed to drop them [0] also isn't secure.
[0] https://news.ycombinator.com/item?id=46487921
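For what it's worth, the auto-update a package manager performs isn't magic; at its core it just compares the installed version against the latest published tag. A toy sketch of that comparison, assuming plain numeric x.y.z tags (real-world version schemes are messier):

```python
def parse_version(tag: str) -> tuple:
    # "v1.10.2" -> (1, 10, 2); assumes plain dot-separated numeric tags.
    return tuple(int(part) for part in tag.lstrip("v").split("."))

def update_available(installed: str, latest: str) -> bool:
    # Tuple comparison gets numeric ordering right where a plain string
    # comparison would not ("1.10.0" sorts before "1.2.0" as strings).
    return parse_version(latest) > parse_version(installed)
```

A random curl script could of course ship the same check; it just usually doesn't.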
Also, I find it is usually better to follow up with something like:
'It's better to use Y instead of X because of reasons O, P, Q, R and S' rather than making a blanket statement like 'Don't use X, use this other insecure solution instead'. That way I get to learn something too.
So one doesn't really need Homebrew, which treats Linux as a third-class citizen (with the second class empty).
Use Macports, it's tidy, installs into /opt/macports, works with Apple's frameworks and language configuration (for python, java etc), builds from upstream sources + patches, has variants to add/remove features, supports "port select" to have multiple versions installed in parallel.
Just a better solution all around.
On my platform, Homebrew is a preferred method for installing CLI tools. I also personally happen to like it better on Linux than Mac (it seems faster/better).
Because I have eyes and can look at the code for 2 seconds. It's not very difficult to check for the hallmarks of careless slop code.
If you can't tell in a few seconds then you can continue testing it out just like any actual project.
Unfortunately, ratatui requires a lot of verbose code that may be indistinguishable from LLM generated code: https://ratatui.rs/examples/apps/demo/
ESPECIALLY when it's from a plan, with comments like '// STEP 2: ...'
Like here in this posts repo https://github.com/huseyinbabal/taws/blob/2ce4e24797f7f32a52...
This is a dead ringer for LLM slop that someone didn't even care enough to go through and clean up.
It's the equivalent of calling something an AI-generated image just because the fingers are weird, and it requires a judgment more concrete than "I have eyes."
But it's more than LLM enough for anyone who has experience with them to conclude the LLM drove the majority of the output. Hence, slop
If it can be quickly and easily identified as LLM code, then yes, it is intrinsically slop and of no value. The person who submitted it didn't even value it enough to look at it or clean it up. Why would anyone else care to look at it?
If it is LLM-generated but HAS BEEN cleaned up, then you can't immediately see the LLM and it passes the check anyway.
But read the same link from above: https://github.com/huseyinbabal/taws/blob/2ce4e24797f7f32a52.... LLMs leave temporal comments like "// Now do X", or "// Do X using the new Y", as responses to prompts like "Can you do X with Y instead?".
Or below: "// Auto-refresh every 5 seconds (only in Normal mode)". I would guess this comment came from a response to a prompt like: "can you only auto-refresh in Normal mode?"
I like the gem comment from: https://github.com/huseyinbabal/taws/blob/2ce4e24797f7f32a52...
```
// Get log file path
let log_path = get_log_path();
```
Sometimes there are tautological comments and sometimes not. A lack of consistency is another signal.
No, none of these are a smoking gun. Also none of this means it was completely vibe coded. To me personally, the worrying part is that these patterns signal that perhaps human eyes were never on that section of the code, or at least the code was not thought about deeply. For a toy app, who cares? For something that ingests your AWS creds, I'd pass.
It worked, no issue there, but the amount of commentary I included definitely surprised me.
I guess I really needed the support structure of comments to keep my logic on track back then, whereas now even convoluted map-reduce one liners are things I see as just obvious literate programming.
I did go a long while in my career still writing code that way when I had to share it with people. I don’t think I stopped until the only people reading my code were senior engineers with way more qualifications than I had.
So, I wouldn’t say just from this code that the creator is an LLM.
> ESPECIALLY when its from a plan and comments '// STEP 2: ...'
There are people who actually program that way. The most extreme I know was Bogdan Iancu from OpenSIPS who I've seen create functions, write step-by-step comments for what they will do, then fill out the implementation.
It's just a signal, not a certain thing.
The snark over reimplementing things, coming from the younger crowd that itself reimplemented databases (mongo is webscale!), operating systems (nice browser), and UI toolkits (make your CSS look like win32!), as if there were a "one true way" to capture the state of a machine syntactically, is sad.
Do you still use COBOL and Fortran and C, or reimplementations of old ideas in the form of Ruby and TypeScript?
Yes yes; we've seen others stand on a soapbox and broadcast how the syntax must not be shuffled around. Thanks for reminding us about the giant foot's wrath.
New generation of Eric S Raymonds. Don't go down the dark path!
Ratatui itself has a lot of much nicer AI generated code in it since then ;)
We've also done a bunch of things to help drive down some of the boilerplate (not all of it mind you - as it's a library, not a framework like other TUI libs)
However I wouldn't be excited to trust one with my AWS key and read/write access to my infra
Folks, have some manners. It's good for you.
Hardly the same.
I’ve been a long-term k9s user, and the motivation was simply: “I wish I had something like k9s, but for AWS.” That’s a common and reasonable source of inspiration.
A terminal UI for AWS is a broad, well-explored idea. Similar concepts don’t imply copied code. In this case, even the UIs are clearly different—the interaction model and layout are not the same.
The implementation, architecture, and UX decisions are my own, and the full commit history is public for anyone who wants to review how it evolved.
If there’s a specific piece of code you believe was copied, I’m happy to look at it. Otherwise, it’s worth checking what someone actually built before making accusations based on surface-level assumptions.
Creating a tool via a LLM based on a similar idea isn’t quite stealing.
You could probably get 90% of the way there with a prompt that literally just says:
> Create a TUI application for exploring deployed AWS resources. Write it in Rust using the most popular TUI library.
Sorry, but ideas (and nowadays implementations) are cheap. Let the best tool win (or, more practically, just use what suits you and don't worry about it if others prefer another tool over yours. Especially don't worry about it if someone uses an LLM to reproduce what you already did; that's just the rising tide of LLM capabilities.)
> $3.33/mo
> Per user, per machine.
Is that really per machine? That seems a bit steep? If I wanted to use it on a laptop and a desktop, I'd need two licenses?
This is such an obviously good open source idea as well. Just add enterprise features for orgs + collaboration.
When a person does it intentionally and spends a month or two on it, they're far more likely to support it, since they created the project with some intention in the first place.
With LLMs this is not the case.
How long are you entitled to such support?
What does “support” mean to you, exactly?
If the tool works for you already, why do you need support for it?
Fixed positions, shortcuts, tab-indexed fields, and the order is usually smartly laid out. Zero latency. Very possible to learn how forms are organized and enter data with muscle memory. No stealing focus when you don't expect it.
Optimized for power users, which is something of a lost art nowadays. GUIs were good for discoverability for a while, but increasingly I think they are great neither for power users nor for novices, just annoying and janky.
With a 3270 if the mainframe takes a second to give you the next form, that's not a UX problem at all. If your character terminal takes a second per keypress, that's very painful and l a g g y.
But character terminals were much cheaper, worse is better, so it won out.
Unfortunately, I was unable to test in my light-background terminal, since the application crashes on startup.
More broadly, I have concerns about introducing a middleware layer over AWS infrastructure. A misinterpreted command or bug could lead to serious consequences. The risk feels different from something like k9s, since AWS resources frequently include stateful databases, production workloads, and infrastructure that's far more difficult to restore.
I appreciate the effort that went into this project and can see the appeal of a better CLI experience. But personally, I'd be hesitant to use this even for read-only operations. The direct AWS cli/console at least eliminates a potential failure point.
Curious if others have thoughts on the risk/benefit tradeoff here.
It's also deprecated by Hashicorp now.
CDK on AWS itself uses CFN, which is a dog's breakfast and offers no visibility into what's happening under the covers.
Just write HCL (or JSON, JSONNET etc) in the first place.
The “middleware layer” concern doesn’t hold up. This is just a better interface for exploring AWS resources, same as k9s is for Kubernetes. If you trust k9s (which clearly works, given how widely it’s used), the same logic applies here.
If you’re enforcing infrastructure changes through IaC, having a visual way to explore your AWS resources makes sense. The AWS console is clunky for this.
The tool misrepresents what is in AWS, and you make a decision based on the bad info.
FWIW I agree with you it doesn’t seem that bad, but this is what came to mind when I read GPs comment
- AWS APIs will often require workflows of API calls rather than simple CRUD ops. If you just want CRUD ops, you can use Cloud Control and build a UI on that.
- AWS APIs are often n+1, so you need to enrich the list APIs or they're not super useful.
- I didn't see any depagination logic. You often have to balance search/filtering against the depagination time of AWS APIs, and then for a proper UX you need to enrich the list items with Describe calls (see n+1 above). When you implement the depagination logic you can reference the botocore implementation, which is what the AWS CLI uses; there are some quirks in naming and behavior that you can have fun looking at /s. (Both are open source, so ChatGPT and Claude should know about them.)
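To make the depagination point concrete, here's the shape of the loop with the AWS call mocked out (names are illustrative; real SDK paginators in botocore also paper over the NextToken/Marker naming quirks):

```python
def depaginate(list_page):
    """Collect every item from a paginated List-style API.

    `list_page` stands in for an AWS List* call: it takes an optional
    continuation token and returns (items, next_token_or_None).
    This is a sketch, not the real SDK surface.
    """
    items, token = [], None
    while True:
        page, token = list_page(token)
        items.extend(page)
        if token is None:
            return items

def enrich(items, describe):
    # The n+1 step: one Describe*-style call per listed item for full detail.
    return [describe(item) for item in items]
```

The enrich step is where the n+1 cost bites: 200 listed resources means 200 Describe calls unless you batch or cache.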
At the implementation level I really think you should just pull in the credential provider from the rust sdk so you can get AWS SSO support.
Otherwise nice weekend project.