My Workflow Is 70% AI, 20% Copy-Paste, 10% Panic. What's Yours?
Posted 4 months ago · Active 4 months ago
Tech · story
controversial · mixed
Debate: 80/100
Key topics
AI in the Workplace
Productivity
Job Security
As an analyst, I need to research the market and work accordingly. With the help of ChatGPT, Perplexity, and Gemini, I get 70% of my research work done. The remaining 30% is pure brainstorming. If I need graphics, I design them in Canva; I get my images from there too. Sometimes I create PPTs with it as well. If I need videos, I usually use tools like Fliki, Lunabloom AI, or invideo to generate them; these tools give me good-quality AI-generated videos. And nowadays AI is built into social media too, which makes the job easier for me. So basically, most of my work is completed by AI. The one thing I have to do properly is give these tools proper instructions. How do you go about it?
The author shares their workflow as an analyst, relying heavily on AI tools, sparking a debate among commenters about the implications of AI for job security and productivity.
Snapshot generated from the HN discussion
Discussion Activity
Very active discussion
First comment: 8m after posting
Peak period: 54 comments (0-12h)
Avg / period: 9.6
Comment distribution: 67 data points
Key moments
- 01 Story posted: Sep 10, 2025 at 6:11 AM EDT (4 months ago)
- 02 First comment: Sep 10, 2025 at 6:19 AM EDT (8m after posting)
- 03 Peak activity: 54 comments in 0-12h (hottest window of the conversation)
- 04 Latest activity: Sep 15, 2025 at 10:45 AM EDT (4 months ago)
ID: 45195543 · Type: story · Last synced: 11/20/2025, 4:53:34 PM
If I have a question, I can just ask ChatGPT, Perplexity, and Gemini.
Which get their knowledge (training data) on relevant topics from analysts. Which increasingly use ChatGPT and the rest to produce them.
Enough loops of this, and analyst writings and ChatGPT responses on market analysis will soon reach the same "useless bullshit" parity.
People these days do everything to avoid actually programming, but they still wanna call themselves programmers.
What’s the fireable offense? Does the boss want to stitch those tools together themselves?
If the output is crap, regardless of the tool, that’s a different story, and one we don’t have enough info to evaluate.
It depends how mission critical his brainstorming is for the company. LLMs can brainstorm too.
That means OP’s job may be _safer_, because they are getting higher leverage on their time.
It’s their colleague who’s ignoring AI that I see as higher risk.
AI is a tool for you to create better results, not an opportunity to offload thinking to others (as is so often done now).
Previously, we always had the output of office work tightly associated with the accountability that the output implies. Since the output is visible and measurable, but accountability isn't, when it became possible to generate plausible-looking professional output, most people assumed that that's all there is to the work.
We're about to discover that an LLM can't be chastened or fired for negligence.
Been doing sysadmin since the '90s. Why bother with AI? It just slows me down. I've already scripted my life with automation. Anything not already automated probably takes me a few minutes, and if it takes longer, I'll build the automation. Shell scripts and Ansible aren't hard.
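A minimal sketch of the kind of automation meant here; the host list and the `myservice` unit name are hypothetical placeholders:

```sh
#!/usr/bin/env bash
# Hypothetical sketch: check a service on each box and restart it if it
# has died, instead of doing the rounds by hand. The host names and the
# "myservice" unit are placeholders.
set -euo pipefail

for host in web01 web02 db01; do
    ssh "$host" 'systemctl is-active --quiet myservice || sudo systemctl restart myservice'
done
```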
Sometimes when I have time to play around I just ask models what stinks in my code or how it could be better with respect to X. It's not always right or productive but it is fun, you should try it!
just what I want, interactivity in my ansible playbook
> It's not always right or productive but it is fun, you should try it!
yey, introducing bugs for literally no reason!
I asked if you tried it; it sounds like you have, I guess. I'm sorry you did not find another tool for your toolbox. I did.
I’ve been developing professionally since 1996. I started on DEC VAX and Stratus VOS mainframes in Fortran and C, and led the build-out of an on-prem data center (raised floors, etc.) to hold a SAN with a whopping 3 TB of storage, along with other networking gear and server software.
Before I started developing professionally, I did assembly language on 65C02, 68K, PPC and x86 for 10 years.
In between then and now, I’ve programmed professionally in C, C++, VB6, Perl, Python, C#, and JavaScript.
Now all of my work is “cloud native,” from development to infrastructure, and I take advantage of LLMs extensively.
It’s not a mark of honor to brag about not using the latest tools.
Some people aren’t using LLMs to do development. Some people aren’t doing stuff in hyperscaler clouds. Some people don’t work in environments where code is allowed near LLMs. Some people are and some people do. This is perfectly fine and to be expected.
Wait until he needs another job and then comes crying about “ageism” when it’s actually that he didn’t keep up with the latest trends.
As opposed as I am to doing any side work, you better believe if I were in an environment that doesn’t allow me to keep up with latest trends, I would be playing with it on the side.
Before the shit show of the current employment market, I would be looking for another job if I saw I was getting behind technically.
And, more importantly, Beavis and Butthead.
You’re the kind of go getter that has upper management written all over you.
And yet, I've realized that a few research and brainstorming sessions with LLMs I thought were really good and insightful were just the LLM playing "yes and" improv with me, and reinforcing my beliefs, regardless whether I was right or wrong.
“Devil’s advocate”
And you get much better responses
It was a nonstop game of my IDE’s refactoring features, a bunch of `xargs perl -pi -e 's/foo/bar/'` invocations, and repeatedly running `cargo check` and `cargo clippy --fix` until it all compiled. It was a 4000+ line change in the end (net 700 lines removed), and it took me all of that 8.5 hours to finish.
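As a rough sketch, assuming a Rust workspace and with `s/foo/bar/` standing in for the real renames, that loop amounts to something like:

```sh
#!/usr/bin/env bash
# Rough sketch of the mechanical-refactor loop described above.
# Paths are hypothetical; s/foo/bar/ stands in for the real renames.
set -euo pipefail

# 1. Apply the textual rename across all Rust sources.
find src -name '*.rs' -print0 | xargs -0 perl -pi -e 's/\bfoo\b/bar/g'

# 2. Let clippy auto-apply whatever fixes it can make on its own.
cargo clippy --fix --allow-dirty --allow-staged

# 3. Re-check; anything still failing gets fixed by hand, then re-run.
cargo check
```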
Could an AI have done it faster? Who knows. I’ve tried using Cursor with Claude on stuff like this and it tends to take a very long time, makes mistakes, and ends up digging itself further into holes until I clean up after it. With the size of the code base and the long compile times I’m not sure it would have been able to do it.
So yeah, a typical day is basically 70% coding, 20% meetings, and 10% Slack communication. I use AI only to bounce ideas off of, as it seems to do a piss-poor job of maintenance work on a codebase. (I rarely get to write the sort of greenfield code that AI is normally better at.)
I've found the same thing, but I have also found that gen AI is pretty good at creating a script to do this. Generally, for very deterministic, repeated changes, having the LLM write code is way better than having it make a lot of changes itself. As it has to read more files, the context fills up and it starts to get goofy.
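For instance (hypothetical key names), the script the model writes can be as small as a single reviewable transformation:

```sh
#!/usr/bin/env bash
# Hypothetical example of the "have the LLM write a script" approach:
# one deterministic transformation applied across the repo, instead of
# letting the model edit each file inside its context window.
set -euo pipefail

# Rename a config key everywhere (old_key/new_key are placeholders).
grep -rl --include='*.yaml' 'old_key:' . | xargs -r perl -pi -e 's/old_key:/new_key:/g'
```

The script is inspectable and re-runnable, which is what keeps the change deterministic no matter how many files it touches.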
This should be concerning.
The rest is actual coding (where using AI typically slows me down), design, documentation, handling production incidents, monitoring, etc.
- reading papers, blogs, articles, searching google scholar, and chatting with perplexity about them to help find other papers
- writing research proposals based on my reading and previous research
- analysing data; lately this means asking Claude Code to generate a notebook for playing around with the data
- writing code to explore some data or build some model of it, which also involves a lot of Claude Code interaction these days
- meetings, slack, email
- doing paper and proposal reviews which includes any or all of the above tasks plus writing my opinion in whatever format is expected
- travelling somewhere to meet with colleagues at a conference or their workplace to do some collaboration that includes any or all of the above, plus giving talks
- organising events that bring people together to do any to all of the above together
I’m a soft money research scientist with a part time position in industry working as a consultant.
I find that it’s easier to write code than to write English statements describing code I want written.
I can’t phone this work in. It has to be creative and also precise.
I know no way to design useful training experiences using AI. It just comes out as slop.
When I am coding, I use Warp. It often suggests bug fixes, and I do find that these are worth accepting, generally speaking.
So I used Claude 4 to search for a solution. It said to downgrade to 4.04. TL;DR: it worked. The whole process took like 30 seconds, much faster than manually Googling. Yes, this is just one anecdote, but LLMs have sped up my workflow a bit.
I keep a list of "rules of engagement" with AI that I try to follow so it doesn't rob me of cognitive engagement with tasks.
The rest of my time I’m writing Ansible or contemplating architecture enhancements, or working on change implementation.
I don’t do a whole lot of marketing right now. Most of my client base has been built through word of mouth/referrals and we are busy enough for our available time right now. We don’t have a lot of design/artistic or collateral needs. We aren’t using AI in the business, but we are using automated data analysis tools for things like logging, alerting, etc. One of my cofounders has been using LLMs in his personal projects though. He’s having fun with it building a very specific local news aggregator for a local topic of interest.
The analytical work we do isn’t really something an LLM would help us with. We’re already experts in our respective fields and we’ve already written code that does what we need. A lot of the analytical work I do is directly applying technical knowledge to specific situations. The questions I have to ask are mostly answered by running tools to find the answer or asking a human, and then making a context-dependent decision based on the responses.
Still I have this feeling that AI is very close to “doing my work” but yet when I step back I see it may be a rather seductive mirage.
Very unclear. Hard to see with the silicon-colored glasses on.
I spend 20-30% of my week on administrative paperwork. Making sure people are taking their required trainings. Didn't we just do the cybersecurity ones? Yes, we did, but IT got hacked and lost all the records showing that we did, so we need to do it again.
I spend 10-20% of my week trying to write documentation that Security tells me is absolutely required, but it has never gotten me any answers from them on whether they are going to approve any of my applications for deployment. In the last 2 years, I've gotten ONE application deployed, and I had to weaponize my org chart to make it happen.
That leaves me about -10 - 20% of the week to get the vast majority of all of the programming done on our projects. Which I do. If you look at the git log, my name dominates.
I don't use AI to write code because I don't have time to dick around with bad results.
I don't use AI to write any of my documentation or memos. People generally praise my communication skills for being easy to read. I certainly don't have time to edit AI's shitty writing.
The only time I use AI is when someone from corporate asks me to "generate an AI-first strategy for blah blah blah". I think it's a garbage initiative, so I give them garbage work. It seems to make them happy; then they go away and I go back to writing all the code by hand. Even then, I don't copy-paste the response; I type it out longhand while reading it, just in case anyone asks me any questions later. Despite everyone telling me "typing speed isn't important to a software developer," I type around 100 WPM, so it doesn't take too long. Not blazing fast, but a lot faster than every other developer I know.
So, forgive me if I don't have a lot of sympathy for you. You sound like half the people in my company, claiming AI makes them more productive, yet I can't see anywhere in any hard artifacts where that productivity has occurred.
Not sure if the Bazel or AI part is worse. :-D I think Bazel.
So what do you learn?
I've just taken a week off to help extended family with a project, and it's reminded me what a good job is.
12.5% Meetings
12.5% Documentation
50% Requirements engineering, i.e. talking and trying to figure out other people
10% Daily time to learn about various AI tools and improve my workflow.
20% Procrastination (this might be way more than what I'm willing to accept, but this is HN, I want to appear smart)
20% Writing detailed description of features and breaking down task lists, writing acceptance tests.
20% AI Coding (Claude Code)
20% Testing + Production