Why Is AI Search Still Bad at Trust and Context?
Key topics
Accuracy with sources — either the responses don’t cite sources at all, or when they do, the citations don’t hold up.
Bias — answers are often skewed toward a certain narrative instead of presenting a balanced perspective.
Context memory — tools forget what you asked a few prompts ago, which makes complex queries tedious.
Individually, some products do one or two of these well. But I haven’t found anything that consistently delivers all three at once.
Why is this such a hard problem to solve? Is it a technical limitation, a product decision, or something else? Curious to hear what others here think.
Discussion activity: story posted Aug 27, 2025 at 2:10 PM EDT; a single comment followed two hours later, at 4:27 PM EDT.
Microsoft Copilot should do better because it's attached to a search engine, but it's bad at giving citations. It often gives the right answer with a totally wrong citation, which is frustrating.
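A crude way to catch those mismatches mechanically is to fetch the cited page and check whether the answer's key terms actually appear in it. Here's a minimal sketch in Python (the `check_citation` helper and the 0.3 overlap threshold are my own assumptions, not any product's API; a real checker would need semantic matching, this only flags grossly wrong citations):

```python
# Naive citation check: fetch the cited page and test for keyword
# overlap with the answer text. Only catches grossly wrong citations.
import re
import urllib.request

def check_citation(answer: str, cited_url: str, threshold: float = 0.3) -> bool:
    """Return True if enough of the answer's key terms appear on the cited page."""
    with urllib.request.urlopen(cited_url, timeout=10) as resp:
        page = resp.read().decode("utf-8", errors="ignore").lower()
    # Crude keyword extraction: words of 5+ letters from the answer.
    terms = set(re.findall(r"[a-z]{5,}", answer.lower()))
    if not terms:
        return False
    hits = sum(1 for term in terms if term in page)
    return hits / len(terms) >= threshold
```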
Google's AI results in search are so bad that I try not to look at them at all. If I ask a simple question like "Can I use characters in my base to do operations in Arknights?" I get a wrong answer, citation or not.
As for context, my take with agentic coding assistants is that if you let the context get longer, the model eventually gets confused and starts going in circles. It often seems to code brilliantly at the beginning, but pretty soon it loses the thread. The answer is to just start a new session; if there's something you want to carry over from the old session, cut and paste it into the documentation and tell the assistant to look at it.
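That carry-over step is easy to script. A minimal sketch of the workflow, assuming a NOTES.md file and prompt wording of my own invention (not any particular tool's format):

```python
# Sketch of the "start fresh, carry notes forward" workflow: dump what
# you want to keep into a notes file, then seed the new session with it.
from pathlib import Path

NOTES = Path("NOTES.md")

def save_note(text: str) -> None:
    """Append a snippet worth keeping from the old session to NOTES.md."""
    with NOTES.open("a", encoding="utf-8") as f:
        f.write(text.rstrip() + "\n\n")

def fresh_session_prompt(task: str) -> str:
    """Build the opening prompt for a new session, pointing at the notes."""
    notes = NOTES.read_text(encoding="utf-8") if NOTES.exists() else ""
    return (
        "Read the project notes below before doing anything else.\n\n"
        f"--- NOTES.md ---\n{notes}--- END NOTES ---\n\n"
        f"Task: {task}"
    )
```

The point is just that the durable state lives in a file you control, not in the model's context window.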
As for bias, I'd say that truth is the most problematic concept in philosophy; simply introducing the idea of "the Truth" (worse than just "the truth") impairs the truth. See "9/11 Truther."
Look at Musk's misadventures with Grok. I'd love to see an AI trained with a viewpoint like "principled conservative," but that's not what Musk wants. One moment he's BFF with Donald Trump; the next, Trump is one of Epstein's pedophiles. To satisfy Musk, it would have to always know whether we are at war with Eurasia or Eastasia today.