Citations and Trust in LLM Generated Responses
Posted 3 months ago · Active 3 months ago
ojs.aaai.org · Research · story
Key topics
LLM
Citations
Trustworthiness
Artificial Intelligence
A research paper explores the issue of citations and trust in LLM-generated responses, with the HN community showing interest in the topic despite limited discussion.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion

First comment: 30s after posting
Peak period: 1 comment (0-1h)
Avg per period: 1

Key moments
1. Story posted: Oct 17, 2025 at 7:38 AM EDT (3 months ago)
2. First comment: Oct 17, 2025 at 7:38 AM EDT, 30s after posting
3. Peak activity: 1 comment in the 0-1h window, the hottest window of the conversation
4. Latest activity: Oct 17, 2025 at 7:38 AM EDT (3 months ago)
ID: 45615525 · Type: story · Last synced: 11/17/2025, 10:10:21 AM
Interesting results, but their notion of "valid citations" seems off (i.e., what comes out of a search engine is no guarantee of validity, and increasingly so given slop, SEO, and spam in general).
"For the second factor: in the valid citation condition, the search engine result(s) were provided directly to the user. In the random citation condition, the actual citations were recorded, but the citation URL(s) shown to the participant were randomly selected from citations of previous participant’s questions."