LinkedIn Will Use Member Data, Profiles, and Public Posts to Train AI Models
Posted 4 months ago · Active 4 months ago
linkedin.com · Tech · Story
Heated · Negative · Debate · 80/100
Key topics
- LinkedIn
- AI
- Data Privacy
LinkedIn's updated terms allow the platform to use member data to train AI models, sparking concerns about data privacy and the potential for increased AI-generated content on the platform.
Snapshot generated from the HN discussion
Discussion Activity
Light discussionFirst comment
32m
Peak period
4
2-3h
Avg / period
2.6
Comment distribution21 data points
Loading chart...
Based on 21 loaded comments
Key moments
1. Story posted: Sep 19, 2025 at 6:01 AM EDT (4 months ago)
2. First comment: Sep 19, 2025 at 6:33 AM EDT (32m after posting)
3. Peak activity: 4 comments in 2-3h (hottest window of the conversation)
4. Latest activity: Sep 19, 2025 at 4:41 PM EDT (4 months ago)
ID: 45299861 · Type: story · Last synced: 11/20/2025, 3:38:03 PM
Though considering the Reddit creators were pretending to be different users to fake engagement, it wouldn't surprise me if they also led the charge in the Digg exodus.
But I actually wonder if you need the “social graph” at all at first. You could start by making it a place where people post jobs and apply for them. The network effects will come later. All you need is maybe a simple algorithm that matches users to postings they might qualify for.
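Purely as an illustration of what that commenter might mean (not anything LinkedIn actually runs): a minimal sketch, assuming users and postings are reduced to plain skill sets matched by keyword overlap. The `JobPosting`, `Candidate`, `match_score`, and `rank_jobs` names are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class JobPosting:
    title: str
    required_skills: set[str]  # e.g. {"python", "sql"}

@dataclass
class Candidate:
    name: str
    skills: set[str]

def match_score(candidate: Candidate, job: JobPosting) -> float:
    """Fraction of the job's required skills that the candidate covers."""
    if not job.required_skills:
        return 0.0
    return len(candidate.skills & job.required_skills) / len(job.required_skills)

def rank_jobs(candidate: Candidate, jobs: list[JobPosting],
              top_n: int = 5) -> list[tuple[JobPosting, float]]:
    """Return up to top_n postings the candidate plausibly qualifies for."""
    scored = [(job, match_score(candidate, job)) for job in jobs]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return [(job, score) for job, score in scored[:top_n] if score > 0]

# Example: one candidate, two postings
alice = Candidate("alice", {"python", "sql", "etl"})
postings = [
    JobPosting("Data Engineer", {"python", "sql", "airflow"}),
    JobPosting("Frontend Dev", {"typescript", "react"}),
]
for job, score in rank_jobs(alice, postings):
    print(f"{job.title}: {score:.2f}")  # Data Engineer: 0.67
```

Scoring coverage of the posting's requirements, rather than a symmetric Jaccard similarity, fits the "might qualify for" framing: a candidate with extra skills isn't penalized for them. A real matcher would obviously use richer signals than keyword overlap, but it shows the network effects aren't needed on day one.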
You wouldn't cringe at LLM output, because there's no social context to the situation depicted. You could identify the words as cringeworthy, but the emotional subtext would be missing.
Which is why the argument "if the LLM produces good output, why does it matter where it came from?" doesn't land for many. Art is more than its surface content; AI is exposing a split between camps that do and don't see it that way.
The cringe will be just as strong, since most of the existing content is just copypastas that previously got some traction. Once a human hand invokes the LLM, they have taken ownership and "approved this message," so I can still quietly and privately laugh at them.
I would love to see this get litigated.
Is that an appropriate legitimate interest, or do we need to split hairs?
But what kind of content? The fake, AI-generated posts made on my behalf? Probably not. Maybe it's generic LLM training. Or will we be constantly evaluated and profiled by AI for the purpose of automating future hiring, eliminating the need for human intervention? What a dystopian outlook. I want humans back.
Maybe some of this data is gatekept (I've only been able to view other people's profiles after logging in), but I wouldn't trust Meta, the company that used stolen e-book libraries to train its LLMs, not to find ways around it.