Agentic Patterns
Key topics
A curated list of "agentic patterns" has sparked debate over their effectiveness and maturity. Some commenters shared positive experiences with specific patterns like agentic search and Tool Use Steering via Prompting, while others pointed out that the concept of agents has been around for decades, predating the current AI boom. Some see the emergence of established patterns as a sign of progress; others are skeptical, labeling the content "slop" or expressing frustration with its lack of depth. The discussion highlights the evolving landscape of agentic approaches and the varying perspectives on their potential.
Snapshot generated from the HN discussion
Discussion Activity
Active discussion
- First comment: 2h after posting
- Peak period: 17 comments (18-24h window)
- Avg / period: 7.8 comments
Based on 47 loaded comments
Key moments
- Story posted: Jan 4, 2026 at 2:24 PM EST (3d ago)
- First comment: Jan 4, 2026 at 4:49 PM EST (2h after posting)
- Peak activity: 17 comments in the 18-24h window, the hottest stretch of the conversation
- Latest activity: Jan 7, 2026 at 5:52 AM EST (12h ago)
Want the full context?
Jump to the original sources
Read the primary article or dive into the live Hacker News thread when you're ready.
1996: https://web.archive.org/web/19961221024144/http://www.acm.or... > Computer-based agents have gotten attention from computer scientists and human interface designers in recent years
https://github.com/nibzard/awesome-agentic-patterns/commits/...
Unfortunately it isn’t possible to detect whether AI was being used in an assistive fashion, or whether it was the primary author.
Regardless, a skim read of the content reveals it to be quite sloppy!
About an hour or so ago I used Opus 4.5 to give me a flat list with summaries. I tried to post it here as a comment but it was too long and I didn't bother to split it up. They all seem to be things I've heard of in one way or another, but nothing really stood out for me. Don't get me wrong, they're decent concepts, and it's clear others appreciate this resource more than I do.
It's something like if I made a list of dev patterns and said:
- caffeinated break for algorithmic thinking improvement
When I'm thinking through some algorithmic logic, take a coffee break, and then go back to the desk and work on it again.
Here, for example, is one of the first "patterns" from the project that I opened:
> There’s definitely a tendency to dress up fairly straightforward concepts in academic-sounding language. “Agentic” is basically “it runs in a loop and decides what to do next.” Sliding window is just “we only look at the last N tokens.” RAG is “we search for stuff and paste it into the prompt.” [...] When you’re trying to differentiate your startup or justify your research budget, “agentic orchestration layer” lands differently than “a script that calls Claude in a loop.”
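That deflationary gloss is easy to make concrete. Below is a minimal sketch of "a script that calls Claude in a loop," using the Anthropic Python SDK's Messages API; the run_shell tool, its dispatcher, and the model id are illustrative assumptions, not anything taken from the repo under discussion.

```python
# A minimal "script that calls Claude in a loop" -- a sketch, not a recommended design.
# Assumptions: the anthropic Python SDK, ANTHROPIC_API_KEY in the environment, and a
# single illustrative run_shell tool (unsandboxed, so do not use it as-is on real tasks).
import subprocess
import anthropic

client = anthropic.Anthropic()

TOOLS = [{
    "name": "run_shell",
    "description": "Run a shell command and return its output.",
    "input_schema": {
        "type": "object",
        "properties": {"command": {"type": "string"}},
        "required": ["command"],
    },
}]

def run_tool(name: str, args: dict) -> str:
    # Hypothetical dispatcher: one tool, no sandboxing.
    if name == "run_shell":
        proc = subprocess.run(args["command"], shell=True, capture_output=True, text=True)
        return proc.stdout + proc.stderr
    return f"unknown tool: {name}"

def agent(task: str, model: str = "claude-sonnet-4-20250514") -> str:
    # "It runs in a loop and decides what to do next": ask the model, run whatever
    # tools it requests, feed the results back, repeat until it answers in plain text.
    messages = [{"role": "user", "content": task}]
    while True:
        response = client.messages.create(
            model=model, max_tokens=4096, tools=TOOLS, messages=messages)
        if response.stop_reason != "tool_use":
            return "".join(b.text for b in response.content if b.type == "text")
        messages.append({"role": "assistant", "content": response.content})
        results = [{"type": "tool_result", "tool_use_id": b.id,
                    "content": run_tool(b.name, b.input)}
                   for b in response.content if b.type == "tool_use"]
        messages.append({"role": "user", "content": results})
```

The loop exits when the model stops requesting tools, which is the entirety of the "decides what to do next" part.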
People are calling if-then cron tasks “agents” now
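For contrast, a hypothetical example of the kind of "if-then cron task" that gets rebranded: a fixed condition and a fixed action, no loop and no decisions, driven by a scheduler rather than a model.

```python
# Hypothetical "agent": a cron job (e.g. "*/5 * * * * python healthcheck.py") that
# checks one condition and fires one action. Nothing here plans, critiques, or decides.
import requests  # assumed dependency; the endpoints below are placeholders

def healthcheck() -> None:
    resp = requests.get("https://example.com/health", timeout=10)
    if resp.status_code != 200:
        requests.post("https://example.com/alerts",
                      json={"text": f"health check failed: HTTP {resp.status_code}"},
                      timeout=10)

if __name__ == "__main__":
    healthcheck()
```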
There are so, so, so many prompt and agentic-pattern repositories out there. I'm pretty turned off by this repo flouting the convention of what awesome-* repos are: it is the work itself, rather than links to the good work that's already out there, for us to choose from.
A few years ago we had GitHub resource-spam about smart contracts and Web3 and AWESOME NFT ERC721 HACK ON SOLANA NEXT BIG THING LIST.
Now we have repos for the "Self-Rewriting Meta-Prompt Loop" and "Gas Town":
https://steve-yegge.medium.com/welcome-to-gas-town-4f25ee16d...
If you haven't got a Rig for your project with a Mayor whose Witness oversees the Polecats who are supervised by a Deacon who manages Dogs (special shoutout to Boot!) who work with a two-level Beads structure and GUPP and MEOW principles... you're not gonna make it.
It is right: “Do not use Gas Town.”
Star-farming anno 2026.
Reading this as an avid Codex CLI user, I find that some things make sense and reflect lessons learned along the way. However, the patterns also get stale fast as agents improve, and they may become counterproductive. One such pattern is context anxiety, which probably reflects a particular model more than a general problem and is likely an issue that will go away over time.
There are certainly patterns that need to be learned, and relearned over time. Learning the patterns is sort of an anti-pattern, since it is the model that should be trained to alleviate its shortcomings rather than the human. Then again, a successful mindset over the last three years has been to treat models as another form of intelligence, not as human intelligence, by getting to know them and being mindful of their strengths and weaknesses. This is quite a demanding task in terms of communication, reflection, and perspective-taking, and it is understandable that this knowledge is being documented.
But models change over time. The strengths and weaknesses of yesterday’s models are not the same as today’s, and reasoning models have actually removed some capabilities. A simple example is giving a reasoning model with tools the task of inspecting logs. It will most likely grep and parse out smaller sections, and may also refuse an instruction to load the file into context to inspect it. The model then relies on its reasoning (system 2) rather than its intuitive (system 1) thinking.
This means that many of these patterns are temporary, and optimizing for them risks locking human behavior to quirks that may disappear or even reverse as models evolve. YMMV.
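The log-inspection example maps directly onto how tools are exposed to the model. A hedged sketch with two hypothetical tool implementations: given both, current reasoning models tend to reach for the grep-style tool and pull small slices into context rather than dump the whole file, even when instructed to load it.

```python
# Hypothetical tool pair illustrating the log-inspection example. Offered both,
# a reasoning model will usually call grep_logs repeatedly and may refuse to
# call read_file on a large log at all.
import re

def grep_logs(path: str, pattern: str, max_lines: int = 200) -> str:
    """Return only matching lines, keeping the model's context footprint small."""
    regex = re.compile(pattern)
    hits = []
    with open(path, errors="replace") as f:
        for line in f:
            if regex.search(line):
                hits.append(line.rstrip("\n"))
                if len(hits) >= max_lines:
                    break
    return "\n".join(hits)

def read_file(path: str) -> str:
    """Return the whole file; the thing the model is reluctant to pull into context."""
    with open(path, errors="replace") as f:
        return f.read()
```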
https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...
For someone looking for a place to start, look for patterns where agents do one or more of the following: plan before acting, critique their own work, escalate to humans, remember what they learned, and/or decompose complex work into parallel subtasks. That will cover most of the actual work you are looking for an agent to assist with.
Some patterns I've used frequently enough to codify into skill/agent combos are (a minimal sketch of the first two follows the list):
- plan then execute
- reflection loop / self critique
- episodic memory / state externalization
- sub-agent spawning + checkpoint/resume
- CI feedback loop
- structured output
- context minimization
- tool compartmentalization
- graceful degradation
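Here is a minimal sketch of the first two items, plan then execute with a reflection loop, assuming only a generic call_model(prompt) helper rather than any particular SDK; the prompts and revision limit are illustrative.

```python
# Sketch of "plan then execute" combined with a "reflection loop / self critique" pass.
# call_model is a hypothetical helper wrapping whatever LLM client you actually use.
from typing import Callable

def plan_then_execute(task: str, call_model: Callable[[str], str],
                      max_revisions: int = 2) -> str:
    # 1. Plan before acting.
    plan = call_model(
        f"Break this task into a short numbered plan. Do not execute it yet.\n\nTask: {task}")

    # 2. Execute against the plan.
    draft = call_model(
        f"Task: {task}\n\nFollow this plan step by step and produce the result:\n{plan}")

    # 3. Reflection loop: critique the output, then revise until the critic approves.
    for _ in range(max_revisions):
        critique = call_model(
            "Critique the following result against the task. "
            f"Reply APPROVED if it is acceptable.\n\nTask: {task}\n\nResult:\n{draft}")
        if critique.strip().startswith("APPROVED"):
            break
        draft = call_model(
            f"Revise the result to address this critique.\n\nTask: {task}\n\n"
            f"Result:\n{draft}\n\nCritique:\n{critique}")
    return draft
```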
Also, if you are really looking to dive in, get familiar with the Claude Agent SDK. Right now you can use an OAuth token generated from the Max plan even for remote agent deployments, as long as you are not serving inference to other users. The OAuth token functionality is NOT mentioned in their documentation, but it works; I'm not sure how long that will be the case.