Key Takeaways
Quite the adage, but I've come to realise that I only ever learned to work, not to make money. I make a good living from consulting, but selling your time only gets you so far.
So I'll probably hire. And probably find out all my previous bosses weren't so wrong with their complaints after all.
Here's a good video that makes a case for this. Even if you don't agree, you might find some of the points he makes interesting. But tl;dr, he argues that index funds basically always outperform other methods, so one should primarily invest in things like that.
So who's on the ends of the bell curve?
You're assuming that you need privileged information that is not available, but what I'm trading is basically the big player moves - indicated by movement volume in a stock.
You don't need all the info; there are patterns that emerge over time for stocks that make big moves.
The most profitable (and most effort-efficient) way is to routinely invest in a broad basket of stocks over a long period, i.e., VOO and hold.
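For a rough sense of why "routinely invest over a long period" compounds so well, here's a minimal sketch. The $500/month contribution and 7% nominal annual return are illustrative assumptions on my part, not figures from the thread:

```python
def dca_value(monthly_contribution, annual_return, years):
    """Future value of a fixed monthly contribution, compounded monthly."""
    r = annual_return / 12      # approximate monthly return
    value = 0.0
    for _ in range(years * 12):
        value = (value + monthly_contribution) * (1 + r)
    return value

invested = 500 * 12 * 30                 # $180,000 paid in over 30 years
final = dca_value(500, 0.07, 30)         # roughly $610k at 7%/yr
print(f"paid in ${invested:,}, ended with ${final:,.0f}")
```

Nothing sophisticated, just the boring arithmetic behind "VOO and hold".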
Personally I like to do primarily tech stocks and mix it up doing swing trading (holding multiple days) with a bit of scalping as well (buy / sells over minutes).
At first I lost a lot of money scalping but now I seem to have a much higher success rate - you start to notice certain patterns in the way stocks move if you watch the charts long enough, and I've been learning to have more conviction in my positions.
Does it follow then that no-one is beating the index consistently?
Of course there are people who are not a statistic. Maybe not everyone is made for it, but that doesn't mean no one is out there beating the market.
Maybe its hard if you're a hedge fund, but I'm talking about individuals with relatively small accounts.
Moreover, if you can get an edge of even 2-3% over a coin flip, all you need is risk management to make money.
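A toy Monte Carlo of that point: a coin-flip-style bet with a 53% win rate, risking a small fixed fraction of equity per trade, drifts upward over enough trades. Everything here (the win rate, the 1% risk per trade, the 1:1 payoff) is an illustrative assumption, not a claim about any real strategy:

```python
import random

def simulate(win_prob=0.53, risk_frac=0.01, trades=1000,
             start=10_000.0, seed=1):
    """Fixed-fractional betting: each trade risks risk_frac of current
    equity; a win pays 1:1, a loss forfeits the stake."""
    rng = random.Random(seed)
    equity = start
    for _ in range(trades):
        stake = equity * risk_frac
        equity += stake if rng.random() < win_prob else -stake
    return equity

# With win_prob=0.50 the same sizing just churns; the small edge is
# what produces the expected upward drift.
print(round(simulate(), 2))
```

The risk-management half of the claim is the `risk_frac`: risk too much per trade and the same edge can still blow up the account through variance.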
Not understanding this is how you go broke. I traded for a number of years and did well. It was not hard to regularly beat the market, especially in futures and options.
* Learn complex analysis!
* Get a better workflow for writing my notes to myself (e.g., Obsidian) and for publishing my blog/website (have a marginally-functional Hugo instance right now). Small thing, but the kind of important-but-not-urgent thing that it's easy to put off!
Other than that near-universal constant, I want to try being a bit of a jack of many trades this year: learn full-stack, practice vibe coding, basics of graphics programming (update to the latest ways)
I understand that means master of none, but this is a play-around year for me. In theory AI should make it easier to try new things; we shall see how it works in practice.
This. My control of my focus has been reduced to the point of disability at times (seriously worrying, when in middle age)
> Other than that near-universal constant, I want to try being a bit of a jack of many trades this year
But this, honestly, is at odds with it. It will be difficult to do these two things at once (source: trust me bro, but no really do trust me).
Rather I would suggest a strategy, if you want to learn lots of things: ask yourself, what small set of goals are all those things in service of? What could you gain if they all pointed mostly in one direction, and how will you keep a slow, low-level, long term focus on that direction?
https://blog.rahix.de/design-for-3d-printing/
I really just ADHD'd the hell out of it, I'm afraid, and absorbed everything I read. I was in financial difficulty and things were expensive, so it took me a couple of years to get from "I'd like a 3D printer" to "this 3D printer is affordable yet viable, and even if I never learn design there are plenty of tools I can make with it that will save me money".
In that time I read everything I could about what I'd need to learn, convinced myself that I was not so clumsy and inept I couldn't maintain a printer. These days printers don't need so much mechanical knowledge to get started.
On the CAD side of things, I learned a bit of OpenSCAD and found it helpful for making one simple thing, but also frustrating and disappointing. I joined really useful non-public Facebook groups where people were working on similar things, decided to get properly into FreeCAD, and dug in with the Mango Jelly Solutions videos on YouTube (which are now organised into a course structure, but weren't really back then).
The thing that motivated me mostly was having simple real things I wanted to make for a project I was working on (though my brain being what it is, I still haven't got round to that exact project...)
If you have a need for a thing you would like, and you're able to break it down into simpler projects, particularly if they are things you might find useful along the way, it's not very difficult to find the motivation to learn these two things.
Basically because the positive feedback loop is so strong and 3D printing is such a concrete way to learn design because you get to hold your design so quickly: I designed this thing in CAD, I printed this thing, wow it works but I could improve this, I need to learn this new thing in CAD, I printed it, it works but… etc.
Pretty soon you find yourself staring at some real world object on your desk or whatever, realising that its shape is the way it is because of how it could be manufactured, and modelling it in CAD for fun.
I'm still reading the rest of this and your other comment, thanks so much. Inspirational.
The problem is exactly that, yes. If you want a simple shape and maybe to stick a thread on it (one of the first things I printed) then OpenSCAD has the basics and there are really interesting libraries.
But if you get into something complex, you end up building your own scheme and then constantly gardening it. The complexity never gets truly abstracted away because you can never truly work in a higher order way.
FreeCAD is a long way from perfect, but what it is, that you need, is a space where you can reason about geometry in a way that lets you learn. And if you want code-CAD, you can do it with python macros, or limited bits of OpenSCAD in that workbench, or you can use CadQuery/Build123D and generate STEP files for some of it, and then build on those.
I would still say I don't know CAD anywhere near as well as I'd like to. But I know where to start, I've learned the terminology, and I am able to think in CAD in a way I never expected to.
But yeah, thinking in CAD is probably the major step here.
FreeCAD is obviously not a commercial grade CAD package, but it’s not because it is weak conceptually: it’s not dissimilar to Solidworks, Onshape or Fusion. It’s weak in terms of UI flow and its CAD kernel is flawed in some ways (as you probably already know: fillets, chamfers, drafts, thicknesses/shells).
I don’t believe there is so much to learn to get from FreeCAD to one of those packages, at least where core concepts are concerned, so I carry on with what I am doing.
But on the other hand I think one learns a concept best from multiple perspectives, and all of them, essentially, have a free, student or cheap (e.g. Solidworks For Makers) tier, so probably the answer is to do some learning in one or two of those alongside.
There is a good video on YouTube by Deltahedra where he does a Solidworks certification exam using FreeCAD, incidentally.
> Other than that near-universal constant, I want to try being a bit of a jack of many trades this year: learn full-stack, practice vibe coding, basics of graphics programming (update to the latest ways)
Therein lies the problem.
To want to "focus on the task at hand" and then express the desire to "try being a bit of a jack of many trades" is a mutually exclusive goal set.
If you want to improve focusing skills, then it is best to pick one thing from the "many trades" and master only it before beginning another. If the "ability to focus on the task at hand" is not really all that important in the grander scheme of things and topically bouncing around is where you find happiness, then I humbly suggest to not beat yourself up about focusing on "the task at hand."
Either is an equally valid choice which none need judge, since it is your own after all.
This month my focus is on full-stack, and I don't move on to the next project until I get the basics right on that.
> it is best to pick one thing from the "many trades" and master only it before beginning another
Exactly this, thanks, but my aim is only "get comfortable", not "master".
I don't do anything anymore these days to advance my career in SWE. Maybe it's because I am quite jaded: the job market sucks, the job itself sucks (making the rich richer), and any extra time I put in to advance my career just goes to the leetcode monkey grind.
I want to change that this year. I do CRUD apps, and I am very boxed in in my brain, thinking that CRUD apps are the only programming there is. I often marvel at people who create databases, compilers, emulators, 3D engines, version control systems, text editors, etc. Those people are like wizards to me.
I wonder how I can be creative like that? Like, how can you just wake up one day and decide to create magic?
I want to learn how to do those. Any advice is appreciated.
Also I want to do it in Zig because I've never worked with manual memory management language before, and I figured might as well.
I started learning infra via AWS CDK (TypeScript). And by osmosis learned a lot about cloud native application architecture. Which changed my way of creating web apps entirely and rejuvenated my love for software. Still going strong 5 years later. Now with much stronger focus on platform engineering and not working on features much.
Anyway, I got the "Writing An Interpreter In Go" book and I think I'll give it a shot either way. I think I'm at a point in my career where I'm finally tired of building the same things again and again. I've also been thinking about doing a design course in Interaction Design Foundation just to get my mind interested in things again and detach myself from the work I've been doing.
Writing my own compiler would be compelling, but I somehow have a problem doing things only for the sake of learning. Would love to have the knowledge tho. Anyway, happy new year!
Pick a language you love, and put together a text editor, or even just a quick utility to search through all your files for a keyword and show the results in a window. Write your own Clock app for Android, just to fix that little niggling detail that no other app quite gets right.
I think you'll be surprised how easy it is to put things together, once you start.
The point isn't to build something anyone else would care about - don't worry about the polish, you don't need to publish it, you don't even need to use it yourself. The point is just to make something. Although, personally, I now have a collection of random utilities that all make my life a little bit better, and it's nice knowing that any time a simple app like "Clock" or "Calculator" bugs me, I COULD fix it.
I live in a city with well-connected public transport (Singapore), so I don't feel the need to learn. However, this year I travelled to some rural areas in Japan and started to feel the pain of relying solely on public transport, which is either extremely sparse or sometimes non-existent, and which limits the places I want to visit. That's why I felt that if I obtain this skill, I can explore more places in my travels.
Two low-risk and cheap ways to develop relevant driving skills are bumper cars[0] and go-karts[1]. This may appear silly at first, but both involve the same hand-eye coordination and decision skills as vehicular driving (though the latter is nowhere near as fun as the others).
One thing bumper cars uniquely simulate is collision avoidance and real-time steering/acceleration/braking skills. The value of this is relative and depends upon a person using their time in a bumper car with the intent to hone driving skills.
On a bike, this mostly reduces pedaling; in a car, it can reduce unnecessary braking and encourage safer following distances, which makes you a more predictable driver.
I believe 100% that nobody should be allowed behind the wheel of a motor vehicle before obtaining cycling proficiency.
In the park, I made it a hard point not to ride the bumper cars because I thought it would mess with my muscle memory as the designated driver. If not for that, I really love bumper cars. However, I've found that the responsiveness of bumper cars varies a lot from park to park; it depends on either the maintenance or the maker of the rides. And IME, none of them are really comparable to even the shittiest cars I've driven (e.g., the ones from the driving school, the assigned car for my license test).
But my bigger concern that day was the fact that the bumper car mindset is not the roadcar driver mindset. For learners, the free-for-all chaotic nature of the track is not even a good simulation! Not even if you're driving somewhere like India or China.
Speaking of simulation, I really want an affordable but legit way to practice dealing with outlier driving scenarios. Like, what if my brakes fail on the highway, or what if I get a flat while doing 100 km/h? Stuff even the safest, most defensive drivers can't entirely rule out. Anyone know of games that might fit the bill?
Although I have "known" how to drive for a long time, I didn't get my formal license until much later in life than most people, for similar reasons to yours.
Now that I have it, I kick myself for not doing this earlier, but as they say: the best time was ten years ago, the second-best time is now.
Owing to the city life I often go up to six months without driving anywhere, but when I finally get out on the road again it feels great. Country driving is amazing, in any country where people drive safely. It's even pretty nice where they don't. City driving still stresses me out, but I'm determined to get better at it.
Good luck! If you find yourself having trouble getting the license in Singapore, there are other countries where you could get a license more easily, and with that license you could drive in third countries.
Create a blog and post at least 8 times to it over the next 12 months, which would be improving my skills with writing and illustration.
Design at least two boards and get them through the prototype stage into bringup and running.
Become conversational in Ukrainian.
> Become conversational in Ukrainian.
This one caught my eye! What is your motivation?

Aside from that, I'd like to shore up the cracks or gaps in my mathematical foundations, and learn more advanced mathematics.
I'm still really confused about thermodynamics, so that's another topic I would like to revisit. I've never been able to convince myself that our current understanding is correct.
Honestly, I want to read and study more college level textbooks about every single subject.
What's the plan to make sure that progress in AI leads to predominantly positive outcomes for people? All the people I've asked who work at the major AI companies haven't given an answer, except to say that they don't study safety or societal impacts, but know others who do.
If you don't have an answer, can I humbly suggest that you add finding one to your list?
I have bought the Nancy Faber adult piano adventures book 1 too.
Any tips are welcome.
- There are 12 keys on the piano, just repeated
- A scale can start on any of those 12 keys
- The "home" key of the scale gets labelled with a roman numeral one, I
- The rest of the keys in the scale get roman numerals ii, iii, IV, V, vi, vii
- The I, IV, V are all upper case to represent major chords; the lower case ones are minor chords
- Most pop songs use the I, IV, V from a scale. In the C-major scale: the C, F, G major chords.
- You can start on any key on the piano, and if you play the same sequence of I, IV, V, you'll get the same song, just transposed into a different key. (The scales are slightly different due to equal temperament, for advanced ears.)
So, learn songs by the chord structure first. It is easier to remember and you'll start to recognize patterns in other songs and unlock them faster.
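To make the transposition point concrete, here's a tiny sketch (the note spellings and helper name are my own, for illustration): the I, IV, and V roots are just fixed semitone offsets from whatever key you start on.

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_SCALE = [0, 2, 4, 5, 7, 9, 11]   # semitone offsets of degrees I..vii

def one_four_five(root):
    """Return the I, IV, V chord roots of the major scale starting on root."""
    start = NOTES.index(root)
    return [NOTES[(start + MAJOR_SCALE[d]) % 12] for d in (0, 3, 4)]

print(one_four_five("C"))   # ['C', 'F', 'G']
print(one_four_five("G"))   # ['G', 'C', 'D']
```

Same offsets, any starting key: that's the whole transposition trick.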
This is the blues. Just learn how to play a blues progression on the piano and you'll learn what this poster is trying to teach.
I practiced enough to learn to play Satie - Gnossienne No. 1 in the right hand and then sold the piano.
My fav music is Chopin, Satie, Ravel, Debussy, Rachmaninoff, Ligeti on piano.
The distance from starting at zero on the guitar to anything ever composed for the guitar is 100X less than from starting at zero on the piano to Ligeti.
Learning to solo on electric guitar, play the flute, play the alto sax to me makes so much more sense than trying to learn to play piano or classical guitar as an adult. Classical guitar is hard enough. Piano just a whole other level to that.
A monophonic instrument is just going to be so much better bang for the buck in terms of time woodshedding.
Hope that helps a little bit. It gets better sometimes!
> I've written about
If you are willing, can you share a link to any public writing? I'm surprised that we don't see more blog posts shared on HN about people's struggles with mental health. You definitely see HN posts about it, but I don't see so much blog post sharing.

https://ludic.mataroa.blog/blog/on-burnout-mental-health-and...
The bigger hurdle is the intimidation of the gym. It doesn't help that the gym will be packed on January 1st. I deload every January to avoid the gym as much as possible that month. It will be back to normal by mid-February. A new lifter would be so much better off waiting until March 1st to join.
Walking 30 minutes a day to get your cardio up will almost certainly help your brain chemicals improve after a few weeks and no new skills needed.
> Thinking about getting a personal trainer, because I try to stay active, but have no idea how to actually work out.
This is a great idea if you have the money for it. Don't feel guilty about doing just a few sessions to build up a set of exercises that works for you. Then you can circle back 2-4 times per year and do a few more sessions to up your game. For me, exercise was a fuckin' game changer for my mental health. Even when I struggle to get out of bed in the morning, missing a workout makes me feel muuuuuuch worse (mentally and physically).

Audio programming with C++. I was a professional film/game composer for the first 10+ years of my career, but when I started programming I was mostly interested in solving problems that required web and infrastructure skills. Also, I always looked at C++ as something to tackle once I was a better programmer -- I now think I'm a pretty okay programmer and am ready to take it on. I'd like to eventually do a deep dive into Rust as well, but I'm focusing on C++ first, as the vast majority of audio programming is still done in C++ and likely will be for the foreseeable future, and I think learning Rust will be more valuable once I've run into many of the pain points that it addresses.
Non-technical:
Improve my archery. I started this year and love it.
I think of this in completely the opposite way. C++ audio is the incumbent relic that will progress one funeral at a time. Librosa and Pyo are just incredible in Python, albeit for offline processing.
Rust and audio would be really cool in terms of wasm. Anything VST is IMO a complete waste of time. That was saturated 15 years ago.
So, 2026 is going to be the year I'm going to run this experiment on myself and see what I can accomplish with this way of working.
I have a similar number of years of experience and regularly try out AI for development, but I always find it's slower for the things I want to build and/or that it produces less than satisfactory results.
Not sure if it's how I use the models (I've experimented with all the frontier ones), or the types of things I'm building, or the languages I'm using, or if I'm not spending enough, or if my standards are just too high for the code that is produced, but I almost always end up going back to doing things by hand.
I try to keep the AI focused on small, well-defined tasks, use AGENT.MD and skills, build out a plan first, follow with tests for spec-based development, keep context windows and chats a reasonable length, etc. But if I add up all that time, I could have done it myself and had a better grasp of the program and the domain in the process.
I keep reading how AI is a force multiplier but I’m yet to see it play out for myself.
I see lots of posts talking about how much more productive AI has made people, but very few with actual specifics on setup, models, costs, workflows etc.
I’m not an AI doomer and would love to realize the benefits people are claiming they get.... but how to get there is the question
Initially I was astounded by the results.
Then I wrote a large feature (ad pacing) on a site using LLMs. I learned the LLMs did not really understand what they were doing. The algorithm (a PID controller) itself was properly implemented (as there is plenty of data to train on), but it was trying to optimize the wrong thing. There were other similar findings where the LLM was making very stupid mistakes. So I went through a disillusionment stage and kind of gave up for a while.
Since then, I have learned how to use Claude Code effectively. I have used it mostly on existing Django code bases. I think everybody has a slightly different take on how it works well. Probably the most reasonable advice is to just keep going and try different kinds of things. Existing code bases seem easier, as does working from a spec beforehand, requiring tests, etc.: basic SWE principles.
This is step 3 of “draw the rest of the owl” :-)
> the most reasonable advice is to just keep going and try different kind of things.
This is where I’ve been at for a while now. Every couple of months I try again with latest models and latest techniques I hear people talking about but there’s very little concrete info there that works for me.
Then I wonder if it's just my spend? I don't mind spending $30/month to experiment, but I'm not going to drop $300/month unless I can see evidence that it'll be worth it, which I haven't really seen. But maybe there's a dependency and you don't get the results without increased spend?
Some posts I’ve seen claim spending of $1,500/month, which would be worth it if it could increase productivity enough, but there’s very few specifics on workflows and results.
I use Claude every day for everything, it's amazing value for money.
Give it a specific task with the context it needs, that's what I find works well, then iterate from there. I just copy paste, nothing fancy.
I'm sure you can derive some benefit without doing that, but you're not going to see much of a speedup if you're still copy/pasting and manually prompting after each change. If anybody is copy/pasting and saying "I don't get it", yeah you don't.
Fair enough :-)
This reminds me of the pigeon research by Skinner. Skinner placed hungry pigeons in a "Skinner box" where a mechanism delivered food pellets at fixed, non-contingent time intervals, regardless of the bird's behavior. The pigeons, seeking a pattern or control over the food delivery, began to associate whatever random action they were performing at the moment the food appeared with the reward.
I think we humans have similar psychology, i.e. we tend to form superstitions about patterns in what we're doing when we get rewards, if they happen at random intervals.
To me it seems we are at a phase where what works with LLMs (the reward) is still quite random, but it is psychologically difficult for us to admit it. Therefore we try to invent various kinds of theories for why something appears to work, which are closer to superstitions than real repeatable processes.
It seems difficult to really generalize repeatable processes of what really works, because it depends on too many things. This may be the reason why you are unsuccessful when using these descriptions.
But while it seems not so useful to derive descriptions, in my personal experience -- although I had a skeptical attitude -- it is possible to make it work, but it really depends on the context. It seems you just need to keep trying various things, and eventually you may find out what works for you. There is no shortcut where you just read a blog post and then you can do it.
Things I have tried successfully:

- Modifying existing large-ish Django projects, adding new apps to them. It can sometimes use HTMX/AlpineJS, but sometimes starts doing JavaScript. One app uses tenants, and the LLM appears to constantly struggle with this.
- Creating new Django projects. This was less successful than modifying existing projects, because the LLM could not copy practices.
- Apple Swift mobile and watch applications. This was surprisingly successful, but these were not huge apps.
- A Python GUI app, which was more or less successful.
- GitHub Pages static web sites based on certain content.
I have not copied any CLAUDE.md or other files. Every time Claude Code does something I don't appreciate, I add a new line. Currently it is at 26 lines.
I have made a few skills. They are mostly there so that it can work independently in a loop, for example, testing something that does not work.
I started with the basic plan (I guess it is that $30/month). I only upgraded to $100 Max and later to $180 2xMax because I was hitting limits.
But the reason I was hitting limits was that I was working on multiple projects in multiple environments at the same time. The only difference I have seen is that I have hit the limits. I have not seen any difference in quality.
Swift and iOS was something that didn’t work so well for me. I wanted to play around with face capture and spent a day with Claude putting together a small app that showed realtime video of a face and put dots on/around various facial features and printed log messages if the person changed the direction they were looking (up down left right) and played a sound when they opened their mouth.
I’ve done app development before, but it’s been a few years so was a little bit rusty and it felt like Claude was really helping me out.
Then I got to a point I was happy with, and I thought I'd go deeper into the code to understand what it was doing and how it was working (not a delegation issue as per another comment; this was a play/learning exercise for me, so I wanted to understand how it all worked). And right there in the Apple developer documentation was a sample that did basically the same thing as my app, only the code was far simpler, and after reading through the accompanying docs I realized the Claude version had a threading issue waiting to happen that was explicitly warned against in the docs of the API calls it was using.
If I’d gone to the developer docs in the beginning I would have had a better app, and better understanding in maybe a quarter of the time.
Appreciate the info on spend. The above session was on the $30/month version of Claude.
I guess I need to just keep flapping my wings until I can draw the owl.
If you tried it roughly prior to https://developer.apple.com/documentation/xcode-release-note... give it another shot. If you tried it after and found it lacking, then this doesn't apply.
I am reading between the lines here, trying genuinely to be helpful, so forgive me if I am not on the right track.
But based on what you write, it seems to me you might not have really gone through the disillusionment phase yet. You seem to be assuming the models "understand" more than they really are capable of understanding, which creates expectations and then disappointment. It seems to me you are expecting CC to work at the level of a senior professional in various roles, instead of a junior professional.
I would have probably approached that iOS app by first investigating various options for how the app could be implemented (especially as I don't have a deep understanding of the tech), and then exploring each option to understand for myself which is the best one.
Then I would have asked Claude to create a plan to implement the best one.
During either the approach selection or planning, the threading issue would either come up or not. It might come up explicitly, in which case I could learn it from the plans. It might be implicit, just included in the generated code. Or it might not be included in the plans or in the code, even if it is explicitly stated in the documentation. If the suggested plan would be based on that documentation, then I would probably read it myself too, and might have seen the suggestion.
When reviewing the plan, I can use my prior knowledge to ask whether that issue has been taken into account. If not, then Claude would modify the plan. Of course, if I did not know about the threading issue beforehand, did not have the general experience with the tech to suspect such an issue, and had not read the documentation and seen the recommendation, I could not find the issue myself either.
In this case, the issue would arise at a later stage, hopefully while testing the application. I have not written complex iOS apps myself, so I would not have caught it either. I would ask it to plan again how to comprehensively test such an app.
What I meant by standard SWE practices is that there are various stages where the solution is reviewed from multiple angles, so it becomes likely that these kinds of issues are found.
In my experience, CC cannot be expected to independently work as a senior professional (architect, programmer, test manager, tester) on any role. But it can act as a junior professional on any of these roles, so it can help somebody with senior guidance to get the 10x productivity boost on any of these areas.
I mean I'm definitely still in the stage of disillusionment, but I'm not treating LLMs as senior or expecting much from them.
The example I gave played out much as you described above.
I used an iterative process, with multiple self-contained smaller steps, each with a planning and discussion stage where I got the AI to identify ways to achieve what I was looking to do and weigh up tradeoffs that I then decided on, followed by a design clarification and finalisation stage, before finally getting it to write code (very hard sometimes to get the AI not to write code until the design has been finalised), followed by adjustments to that code as necessary.
The steps involved were something like:
- build the app skeleton
- open a camera feed
- display the feed full screen
- flip the feed so it responded as a mirror would if you were looking at it
- use the ios apis to get facial landmarks
- display the landmarks as dots
- detect looking in different directions and print a log message.
- play a sound when the user opens their mouth
- etc
Each step was relatively small and self-contained, with a planning stage first and me asking the AI probing/clarifying questions.
The threading issue didn't come up at all in any of this.
Once it came up, the AI tied itself in knots trying to sort it out, coming up with very complex dispatching logic that still got things incorrect.
It was a fun little project, but if I compare the output, it just wasn't equivalent to what I could get if I'd just started with the Apple documentation (though maybe it's different now, as per another commenter's reply).
It's also easily completable in a day if you want to give it a try :-) Apple Developer reference implementation [here](https://developer.apple.com/documentation/Vision/tracking-th...).
> By project manager role, I mean that I am explicitly taking the CC through the various SWE stages and making sure they have been done properly, and also that I iterate on the solution. On each one of the stages, I take the role of the respective senior professional. If I cannot do it yet, I try to learn how to do it. At the same time, I work as a product manager/owner as well, to make decisions about the product, based on my personal "taste" and requirements.
Right, this is what I do. I guess my point is that the amount of effort involved to use English to direct and correct the AI often outweighs the time involved to just do it myself.
The gap is shrinking (I get much better results now than I did a year ago) but it's still there.
> The threading issue didn't come up at all in any of this.
>
> Once it came, the AI tied itself in knots trying to sort it out, coming up with very complex dispatching logic that still got things incorrect.
Yes. These kinds of loops have happened to me as well. It sometimes requires clearing the context plus some inventive step to help the LLM out of the loop. For example, my ad pacing feature required that I recognized it was trying to optimize the wrong variable. I consider this to be partly what I mean by "LLM is junior" and that "I act as the project manager".
> I guess my point is that the amount of effort involved to use English to direct and correct the AI often outweighs the effort involved to just do it myself.
Really, you think you could have done a complex mobile app alone in one day without knowing the stack well beforehand? I believe this kind of thing used to take months from a competent team not long ago.
> I consider this to be partly what I mean by "LLM is a junior" and that "I act as the project manager".
And this is partly what I mean when I say the time I spend instructing the "junior" LLM could be just as well spent implementing the code myself - because the "project manager" side of me can work with the "senior dev" side of me at the speed of thought and often in parallel, and solving the challenges and the design of something is often where most of the time is spent anyway.
Skills are changing this equation somewhat due to the way they can encode repeatable knowledge, but not so much for me yet especially if I'm trying things out in radically different areas (I'm still in my experimental stage with them).
> Could you really have done a complex mobile app alone in one day without knowing the stack well beforehand?
No, but that's not what happened here.
The mobile app wasn't complex (literally only does the things outlined above) and I've done enough mobile development and graphics/computer vision development before that the stack and concepts involved weren't completely unknown, just the specifics of the various iOS APIs and how to string them together - hence why I initially thought it would be a good use case for AI.
It was also an incredible coincidence that the toy app I wanted to build had an apple developer tutorial that did almost the same thing as what I was looking to build, and so yes, I clearly would have been better off using the documentation as a starting point rather than the AI.
That sort of coincidence won't always exist, but I've been thinking lately about another toy iOS/Apple Watch application, and I checked, and once again there is a developer tutorial that closely matches what I'm looking to build. If I ever get around to experimenting with that, the developer docs are going to be my first port of call rather than an AI.
> I certainly could not have done one year ago what I can do today, with these tools.
Right, and if you look back at my original reply (not to you), this is what I'm trying to understand - the what and how of AI productivity gains. Because if I evaluate the output I get, it's almost always either something I could have built faster and better, or if not faster then at least better, and not so much slower that the AI was enabling a week of work to be done in a day, and a month of work to be done in a week (claims from the GP, not you).
I would love to be able to realize those gains - and I can see the potential but just not the results.
Subjectively interacting with an LLM gives a sense of progress, but objectively downloading a sample project and tutorial gets me to the same point with higher quality materials much faster.
I keep thinking about research on file navigation via command line versus using a mouse. People’s subjective sense of speed and capability don’t necessarily line up with measurable outcomes.
LLMs can do some amazing things, but violently copy and pasting stack overflow & randomness from GitHub can too.
The time I save on typing out the program is lost to new activities I otherwise wouldn't be doing.
"Write unit tests with full line and branch coverage for this function: `def add_two_numbers(x, y): return x + y + 1`"
Sometimes the LLM will point out that this function does not, in fact, return the sum of x and y. But more often, it will happily write "assert add_two_numbers(1, 1) == 3", without comment.
The big problem is that LLMs will assume that the code they are writing tests for is correct. This defeats the main purpose of writing tests, which is to find bugs in the code.
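A minimal sketch of the difference: tests derived from the *specification* ("return the sum of x and y") instead of from reading the implementation will actually catch the bug. The function below is the buggy one from the prompt above; the test cases are illustrative.

```python
# The buggy function from the prompt above:
def add_two_numbers(x, y):
    return x + y + 1  # off-by-one bug the LLM tends to enshrine in its tests

def failing_cases():
    # Spec-driven cases: expected values come from "the sum of x and y",
    # NOT from running the implementation and copying its output.
    cases = [(0, 0, 0), (1, 1, 2), (-2, 5, 3)]
    return [(x, y) for x, y, expected in cases
            if add_two_numbers(x, y) != expected]

# All three cases fail against the buggy implementation - which is
# exactly what a useful test should do, instead of happily asserting
# add_two_numbers(1, 1) == 3 and blessing the bug.
print(failing_cases())  # → [(0, 0), (1, 1), (-2, 5)]
```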
Run Cursor in “agent” mode, or create a claude code “unit test” skill. I recommend claude code.
Explain to the LLM that after it creates or modifies a test, it must run the test to confirm that it passes. If it fails, it is not allowed to edit the source code; instead it must determine whether there is a bug in the test or in the source code. If the bug is in the test, it should try again; if the bug is in the source code, it should pause, propose a fix, and consult with you on next steps.
The key insight here is you need to tell it that it’s not supposed to randomly edit the source code to make the test pass.
When I’m writing my own code I can verify the logic as I go and coupled with a strong type system and a judicious use of _some_ tests its generally enough for my code to be correct.
By comparison the AI needs more tests to keep it on the right path otherwise the final code is not fit for purpose.
For example, in a recent use case I needed to take a JSON blob containing an array of strings representing numbers, and return an array of Decimals sorted in ascending order.
This seemed a perfect use case - a short well defined task with clear success criteria so I spent a bunch of time writing the requirements and building out a test suite and then let the AI do its thing.
The AI produced OK code, but it sorted everything lexicographically before converting to Decimal, rather than converting to Decimals first and sorting numerically, so 1000 was less than 900.
So I point it out, and the AI says good point, you're absolutely correct, and we add a test for this, and it goes again and gets the right result. But that's not a mistake I would have made or needed a test for (though you could argue it's a good test to have).
You could also argue that I should have specified the problem more clearly, but then we come back to the point that if I’m writing every specific detail in English first, it’s faster for me just to write it in code in the first place.
I feel this is a gross mischaracterization of any user flow involving using LLMs to generate code.
The hard part of generating code with LLMs is not how fast the code is generated. The hard part is verifying it actually does what it is expected to do. Unit tests too.
LLMs excel at spewing test cases, but you need to review each and every single test case to verify it does anything meaningful or valid and you need to iterate over tests to provide feedback on whether they are even green or what is the code coverage. That is the part that consumes time.
Claiming that LLMs are faster at generating code than you is like claiming that copy-and-pasting code out of Stack Overflow is faster than you writing it. Perhaps, but how can you tell if the code actually works?
You will certainly understand a program better where you write every line of code yourself, but that limits your output. It's a trade-off you have to want.
In the end the person in charge is liable either way, in different ways.
It doesn’t have to be exactly how I would do it but at a minimum it has to work correctly and have acceptable performance for the task at hand.
This doesn’t mean being super optimized just that it shouldn’t be doing stupid things like n+1 requests or database queries etc.
See a sibling comment for one example on correctness; another one, related to performance, was querying some information from a couple of database tables (the first with 50,000 rows, the next with 2.5 million).
After specifying things in enough detail to let the AI go, it got correct results but performance was rather slow. A bit more back and forthing and it got up to processing 4,000 rows a second.
It was so impressed with its new performance it started adding rocket ship emojis to the output summary.
There were still some obvious (to me) performance issues so I pressed it to see if it could improve the performance. It started suggesting some database config tweaks which provided some marginal improvements but was still missing some big wins elsewhere - namely it was avoiding “expensive” joins and doing that work in the app instead - resulting in n+1 db calls.
So I suggested getting the DB to do the join and just processing the fully joined data on the app side. This doubled throughput (8,000 rows/second) and led to claims from the AI that this was now enterprise-ready code.
There was still low hanging fruit though because it was calling the db and getting all results back before processing anything.
After suggesting switching to streaming results (good point!) we got up to 10,000 rows/second.
This was acceptable performance, but after a bit more wrangling we got things up to 11,000 rows/second and now it wasn’t worth spending much extra time squeezing out more performance.
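The two access patterns above can be sketched with an in-memory sqlite3 database (table and column names are assumed for illustration; the real schema was different): n+1 per-row lookups versus a single join streamed through a cursor.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parents  (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE children (id INTEGER PRIMARY KEY,
                           parent_id INTEGER, value INTEGER);
""")
conn.executemany("INSERT INTO parents VALUES (?, ?)",
                 [(i, f"p{i}") for i in range(100)])
conn.executemany("INSERT INTO children VALUES (?, ?, ?)",
                 [(i, i % 100, i) for i in range(1000)])

def n_plus_one():
    # n+1 pattern: one query for the parents, then one query PER parent.
    rows = []
    for pid, name in conn.execute("SELECT id, name FROM parents"):
        for (value,) in conn.execute(
                "SELECT value FROM children WHERE parent_id = ?", (pid,)):
            rows.append((name, value))
    return rows

def joined_stream():
    # Single join, streamed: iterate the cursor so rows are processed
    # as they arrive instead of materialising the full result first.
    cur = conn.execute("""
        SELECT p.name, c.value
        FROM parents p JOIN children c ON c.parent_id = p.id
        ORDER BY p.id, c.id
    """)
    return [row for row in cur]   # iterate; don't fetchall() up front

# Same result set, one round trip instead of 101.
assert sorted(n_plus_one()) == sorted(joined_stream())
```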
In the end the AI came to a good result, but, at each step of the way it was me hinting it in the correct direction and then the AI congratulating me on the incredible “world class performance” (actual quote but difficult to believe when you then double performance again).
If it had just been me, I would have finished it in half the time.
If I'd delegated to a less senior employee and we'd gone back and forth a bit, pairing to get it to this state, it might have taken the same amount of time and effort, but they would've at least learnt something.
Not so with the AI however - it learns nothing and I have to make sure I re-explain things and concepts all over again the next time and in sufficient detail that it will do a reasonable job (not expecting perfection, just needs to be acceptable).
And so my experience so far (much more than just these 2 examples) is that I can’t trust the AI to the point where I can delegate enough that I don’t spend more time supervising/correcting it than I would spend writing things myself.
`database-query-speed-optimization`: "Some rules of thumb for using database queries:
- Use joins
- Streaming results is faster
- etc."
That way, the next time you have to do something like this, you can remind it of / it will find the skill.
In this case the two tables shared a 1:1 mapping of primary key to foreign key, so the join was fast and exact - but there are situations where that won't be the case.
And yeah this means slowly building out skills with enough conditions and rules and advice.
I laughed more at this than I probably should have, out of recognition.
Maybe that’s just the level I gave up at and it’s a matter of reworking the Claude.md file and other documentation into smaller pieces and focusing the agent on just little things to get past it.
Good luck!
Depression is a strange thing. In my case, the causes are plainly visible to me or any passer-by: I don't have much in the way of connections, assets, or responsibilities. Surely, it wasn't (and isn't) bound-to-be: my upbringing and environment lack little, and when I've had some of any of the three, I've done better for myself.
I want these things, but I abase myself such that I can barely act at all. Maybe it's a tyranny of being a social animal where the humiliated keep themselves low out-of-sight through some natural pack instinct.
As a higher animal, surely there's a way out of it. And of course there is. But it's a tangle: how can you connect to anyone when you feel completely humiliated? When the act of any connection makes you feel ill and behave strangely? How do you build assets and security when you're sickened by responsibility? And why can your instincts –designed to guide and protect you– screw you over so badly? When a bright, sunny day surrounded by loved ones seems like a trip to hell, how do you even start to work through that?
I have a lot of goals, but there seems to be this bottleneck that prevents moving meaningfully on any of them. The thing is: I know to get out the other side, I need connections, responsibility, work, etc. But I seem to be getting worse at it, not better, and the years are just flying by.
I got better. Much better. I'm literally more social than ever and for the first time in my life i feel my cup is full.
If there's one piece of advice I can give is take ACTION, stay in MOTION. Always DO something to get better. You started going to the gym? That's great! Join some classes while you're at it. Don't stay still. That's when it gets you.
It sounds to me like you're already living with some ambition. What do you REALLY want from 2026?
- Launch my own hand-rolled paper trading solution by mid to late 2026. I want to focus on strategies that prevent heavy losses, rather than actively looking for profits. If I succeed, go live in 2027.
- I hope to complete 3 semesters with a B or above in the ongoing Online Masters Degree program I've enrolled for.
- Do more coding with AI.
- Be prepared for job interviews - even though I have no plans to change jobs. This year my rustiness and lack of interview readiness have cost me "dream jobs" (from my POV).
Non-technical skills:
- The usual. Lose weight, eat mindfully, gain strength, learn the language of my country.
I donno, I think I kind of like somebody else's skill objective about trying out shitposting. It's the age deucine (credit @naasking).
- Climb a V8 at my local climbing gym! I presently project V5's, and I think the scale is super-linear (but personally it doesn't feel logarithmic to me). So that would be a significant increase, probably near the edge of what I could really achieve in a year.
- Get our business (mydragonskin.com) to a point where it pays us standard engineer salaries. So far we've been extracting significantly less than our market value.
- Acquire (romantic) partner that I believe will be my person; find "The One"
Technically, apply myself more to projects at my job, learn how to fit in our flow better. I've been using AI to program some goofy projects, and I've found a good medium between vibe-coding and auto-complete, where I make it draw up a plan for every commit, and then I ask it to implement it, and if the generated code is wrong I undo the changes and revise the plan to be more precise. It's relatively easy to verify the plan, not as easy to verify the code, but it's still easy to debug the code and figure out what's wrong.
The burden shifts more to creating small modules with stable interfaces.
Already, I know enough to know that just prompting without a solid foundation is going to be unpleasant in so many ways.
And then, once I’ve proven it out hire real coders.
Python. I played around with it three years ago, and did about 30 Project Euler problems with it, but I've let that lapse. I'll work to pick that up.
I bought my wife a learn-to-draw kit for Christmas, but it's really a gift for both of us.
I really want to get more into microcontrollers, and design some more technical projects. I've been wanting to make a portable point-and-shoot camera for a couple years, though I've never been knowledgeable in that area to do it very well. Though, I'm finally getting to that point.
On a non-electronic-designing front, I'd love to learn more about networking and radios. I'm working on my homelab right now, and just got a nice switch to connect some free 15-year-old office PCs I also have. I'd love to get into AREDN, which is a 802.11 mesh network that can run on amateur radio frequencies.
I also want to write more about my projects on my website (https://radi8.dev), where hopefully I can share what I work on more often than I currently do.
Really need to get back to practicing archery on a regular basis as well (really need the exercise).
Hopefully I can also find more time for woodworking, and hopefully I can figure out how to calibrate my 3D printers so that I can print PETG and PETG-GF as readily as PLA.
Outside of work, I’m really into Roman history so I’ll keep learning about that.
I’m excited to get my NAS set up and start running my own services. It’s been a long time coming, ha.