The AI Job Title Decoder Ring
Key topics
The article 'The AI Job Title Decoder Ring' attempts to clarify various AI-related job titles, sparking a discussion on HN about the ambiguity and marketing nature of these titles, as well as the field's rapid evolution and potential hype.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
- First comment: 1h after posting
- Peak period: 28 comments in 3-6h
- Avg / period: 7.1 comments
- Based on 71 loaded comments
Key moments
- 01 Story posted: Aug 21, 2025 at 3:22 PM EDT (5 months ago)
- 02 First comment: Aug 21, 2025 at 4:48 PM EDT (1h after posting)
- 03 Peak activity: 28 comments in 3-6h (hottest window of the conversation)
- 04 Latest activity: Aug 23, 2025 at 7:36 AM EDT (5 months ago)
- I'm not a researcher, and I'm not fine-tuning or deploying models on GPUs.
- I have a math/traditional ML background, but my explanation of how transformers, tokenizers, etc. work would be hand-wavy at best.
- I'm a "regular engineer" in the sense I'm following many of the standard SWE/SDLC practices in my org.
- I'm exclusively focused on building AI features for our product, and I wear a PM hat too.
- I'm pretty tuned in to the latest model releases and capabilities of frontier models, and consider being able to articulate that information part of my job.
- I also use AI heavily to produce code, which is helpfully a pretty good way to get a sense for model capabilities.
Do I deserve a special job title...maybe? I think there's definitely an argument that "AI Engineering" really isn't a special thing, and considering how much of my day to day is pure integration work with the actual product, I can see that. OTOH, part of my job and my value at work is very product based. I pay a lot of attention to what other people in the industry are doing, new model releases, and how others are building things, since it's such a new area and there's no "standard playbook" yet for many things.
I actually quite enjoy it since there's a ton of opportunity to be creative. When AI first started becoming big I thought about doing the other direction - leveraging my math/ML background to get deeper into GPUs and MLOps/research-lite kind of work. Instead I went in a more producty direction, which I don't regret yet.
What do you think of the recent MIT news that 95% of gen AI projects don't do anything valuable at all?
Worth noting that a project that ends up “doing nothing” isn’t the same as a project that had/created no value.
That goes even for some projects that were, in hindsight, deterministic lemons.
Assuming compute resources continue scaling up and architectures keep improving, AI-driven change now has an everything, everywhere, all-the-time scope. Failing fast is necessarily going to be a substantial part of that.
There, FTFY
Woah hang on, I think this betrays a severe misunderstanding of what engineers do.
FWIW I was trained as a classical engineer (mechanical), but pretty much just write code these days. But I did have a past life as a not-SWE.
Most classical engineering fields deal with probabilistic system components all of the time. In fact I'd go as far as to say that inability to deal with probabilistic components is disqualifying from many engineering endeavors.
Process engineers for example have to account for human error rates. On a given production line with humans in a loop, the operators will sometimes screw up. Designing systems to detect these errors (which are highly probabilistic!), mitigate them, and reduce the occurrence rates of such errors is a huge part of the job.
Likewise even for regular mechanical engineers, there are probabilistic variances in manufacturing tolerances. Your specs are always given with confidence intervals (this metal sheet is 1mm thick +- 0.05mm) because of this. All of the designs you work on specifically account for this (hence safety margins!). The ways in which these probabilities combine and interact is a serious field of study.
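A minimal sketch of the tolerance stack-up that last point gestures at, with invented numbers (1 mm sheets, ±0.05 mm treated as a 3-sigma band), just to show how the probabilities combine:

```python
# Illustrative tolerance stack-up: how per-part variation (a 1 mm sheet with
# +/- 0.05 mm spread) combines when ten sheets are stacked. All numbers here
# are hypothetical, chosen only to show how the probabilities interact.
import random

NOMINAL_MM = 1.0        # nominal sheet thickness
SIGMA_MM = 0.05 / 3     # treat +/- 0.05 mm as roughly a 3-sigma band
N_SHEETS = 10
TRIALS = 100_000

def stack_height() -> float:
    """Total height of one randomly manufactured stack of sheets."""
    return sum(random.gauss(NOMINAL_MM, SIGMA_MM) for _ in range(N_SHEETS))

heights = [stack_height() for _ in range(TRIALS)]
mean = sum(heights) / TRIALS
std = (sum((h - mean) ** 2 for h in heights) / TRIALS) ** 0.5

# Worst case (every sheet at +0.05 mm) is off by 0.5 mm, but because variances
# add, the statistical 3-sigma spread is only ~sqrt(10) * 0.05 ≈ 0.16 mm.
print(f"mean stack height: {mean:.3f} mm")
print(f"3-sigma spread:    {3 * std:.3f} mm")
```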
Software engineering is unlike traditional engineering disciplines in that, for most of its lifetime, it has had the luxury of purely deterministic expectations. Nearly every other type of engineering has never had that luxury.
If anything the advent of ML has introduced this element to software, and the ability to actually work with probabilistic outcomes is what separates those who are serious about this stuff vs. demoware hot air blowers.
In other engineering fields correctness-related-guarantees can often be phrased in probabilistic ways, e.g. "This bridge will withstand a 10-year flood event but not a 100-year flood event", but underneath those guarantees are hard deterministic load estimates with appropriate error margins.
And I think that's where the core disagreement between you and the parent comment lies. I think they're trying to say that AI-generated code-pushers are often getting fuzzy on speccing out the behavior guarantees of their own software. In some ways the software industry has _always_ been bad at this: despite working with deterministic math, surprise software bugs are plentiful. But vibe-coding takes this to another level.
(This is my best-case charitable understanding of what they're saying, but also happens to be where I stand)
I agree, and I think that's the root of the years-long argument of whether programmers are "real" engineers, where "real engineering" implies a level of rigor about the existence of and adherence to specifications.
My take on this is though that this unseriousness really has little to with AI and entirely to do with the longstanding culture of software generally. In fact I'd go as far as to say that pre-LLM ML was better about this than the rest of the industry at-large.
I've had the good fortune to be working in this realm since before LLMs became the buzzword - most ML teams had well-quantified model behaviors! They knew their precision and recall! You kind of had to, because it was very hard to get models to do what you wanted, plus companies involved in this space generally cared about outcomes.
Then we got LLMs, where you can superficially produce really impressive results easily, and with them the dominance of vibes over results. I can't stand it either, and am mostly just waiting for most of these things to go bust so we can go back to probabilistic systems where we give a shit about quantification.
I think part of the issue with the lack of "real" quantification in the results of LLMs is that the output and problem domain is so ill-defined. With standard neural nets (and other kinds of ML), classifiers, regression models, and reinforcement models all solved very narrow, domain-specific problems. It was a no-brainer to measure directly how your vision classifier performed against a radiologist in determining whether an image corresponds to lung cancer.
Now we've opened up the output to a wide variety of open-ended domains: natural languages, programming languages, images and videos. Since the output domain is inherently subjective, it's hard to get a good handle on their usefulness, let alone getting people to agree on that. Hence the never-ending discourse around them.
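For contrast with the open-ended LLM case, a toy sketch of the narrow-domain quantification being described, e.g. precision and recall for a binary classifier (the labels and predictions below are made up, purely for illustration):

```python
# Toy example of the narrow-domain evaluation described above: precision and
# recall for a binary classifier (say, "cancer" vs "no cancer" on an image).
y_true = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]   # invented ground-truth labels
y_pred = [1, 0, 1, 0, 0, 1, 1, 0, 1, 0]   # invented model predictions

tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))

precision = tp / (tp + fp)   # of everything flagged positive, how much was right
recall = tp / (tp + fn)      # of all true positives, how many were found

print(f"precision={precision:.2f} recall={recall:.2f}")
# There is no equally crisp, agreed-upon counterpart for open-ended LLM output,
# which is the point being made above.
```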
You do this as a process engineer also. You don't have to have a human operator inserting the stator into the motor housing, you could have a robot do it (it would cost a lot more) and be a lot more deterministic.
After the stator is in the housing you don't need to have a human operator close it using a hand tool. You could do it robotically in which case the odds of failure are much lower. That also costs a lot.
You choose to insert probabilistic components into the system because you've evaluated the tradeoffs around it and decided it's worth it.
Likewise you could do sentiment analysis of a restaurant review in a non-probabilistic manner - there are many options! But you choose a probabilistic ML model because it does a better job overall and you've evaluated the failure modes.
These things really aren't that different.
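A toy sketch of that sentiment-analysis tradeoff, with an invented keyword list and a placeholder probability standing in for a real trained classifier:

```python
# Toy contrast between the two approaches in the comment above: a deterministic
# keyword rule vs. a probabilistic classifier for restaurant-review sentiment.
# The word lists and the hard-coded model probability are invented placeholders.
POSITIVE = {"great", "delicious", "friendly"}
NEGATIVE = {"cold", "slow", "bland"}

def rule_based_sentiment(review: str) -> str:
    """Deterministic: same input, same label, every time -- but brittle."""
    words = set(review.lower().split())
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    return "positive" if score >= 0 else "negative"

def model_sentiment(review: str) -> tuple[str, float]:
    """Probabilistic: a stand-in for a trained classifier returning a label
    plus a confidence; its failure rates have to be measured, just like the
    human operator's on the production line."""
    p_positive = 0.87  # placeholder; a real model would compute this
    return ("positive" if p_positive >= 0.5 else "negative"), p_positive

review = "The pasta was delicious but the service was slow"
print(rule_based_sentiment(review))
print(model_sentiment(review))
```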
Unless you’re pushing new firmware onto a drone in Ukraine, FDE is stolen valor.
When you decide titles don't matter and let people choose their own, you get some titles that weren't created in total seriousness.
AI is not totally encapsulated by ML. For example, reinforcement learning is often considered distinct in some AI ontologies. Decision rules and similar methods from the 1970s and 1980s are also included though they highlight the algorithmic approach versus the ML side.
There are certainly many terms used and misused by current marketing (especially the bitcoin bro grifters who saw AI as an out of a bad set of assets), but there actually is clarity to the terms if one considers their origins.
The broad capability areas: classical ML tasks (e.g. classification, regression), perception (vision, speech) and pattern recognition, generative AI capabilities (text, image, audio generation), knowledge representation and reasoning (symbolic AI, logic), decision-making and planning (including reinforcement learning for sequential decisions), as well as hybrid approaches (e.g. neuro-symbolic methods, fuzzy logic).
The capability areas outside of classical ML have been overlapped now to a degree by GPT architectures as well as deep learning, but these architectures aren't the whole game.
Machine Learning - stuff I apply with some understanding.
AI - stuff I apply without understanding.
> Because the field is actively evolving, the language we use keeps changing. Brand new titles appear overnight or, worse, one term means three different things at three different companies.
How can you write that and not realise “maybe this is all made up bullshit and everyone is pulling titles out of their asses to make themselves look more important and knowledgeable than they really are, thus I shouldn’t really be wasting my time giving the subject any credence”? If you’re all in on the field and can’t keep up, why should anyone else care?
ML engineer => knows pytorch
AI engineer => knows huggingface
Researcher => implements papers
I know these heuristics are imperfect but I call myself an MLE because it’s closest to my skillset.
Why did tensor_parallel have output += mod instead of output = output + mod? (The += breaks backprop). Nobody tested it! A user had to notice it was broken and make a PR!
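A minimal PyTorch sketch (not the actual tensor_parallel code) of why an in-place `output += mod` can break backprop where `output = output + mod` would not:

```python
# Minimal sketch (not the actual tensor_parallel code) of the failure mode:
# an in-place add mutates a tensor that autograd saved for the backward pass.
import torch

x = torch.randn(3, requires_grad=True)

# Out-of-place: `out + 1` creates a new tensor, the graph stays intact.
out = x.exp()
out = out + 1
out.sum().backward()
print(x.grad)  # gradients flow fine

# In-place: exp()'s backward reuses its own output, which `+=` just clobbered,
# so autograd detects the version bump and raises a RuntimeError.
x.grad = None
out = x.exp()
out += 1
try:
    out.sum().backward()
except RuntimeError as e:
    print("backprop broke:", e)
```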
So I wonder, trying to learn AI and how to use it, shouldn't the AI itself be the best guide for understanding AI? Maybe not so much with the latest research or latest products, because AI is not yet trained on those, but sooner or later AI should feel as easy a subject as say JavaScript programming.
Their current title: 'overpaid bot therapist' or 'the prompt whisperer'. What a load of bull.
Now, with this article clearly defining each of these roles (AI researcher being the most serious of them), everyone suddenly wants to be one.
"AI" is a vast field which spans beyond deep learning and LLMs. Unless you are very serious and fully interested in actually advancing the field, don't bother.
Why not robotics or electrical engineer? Not cool enough?
Some already became "data scientists" and "ML engineers", I hope this AI wave takes the rest.
I'm tempted to use /s, but then again...