Source-Optimal Training Is Transfer-Suboptimal
Posted about 2 months ago · Active about 2 months ago
arxiv.org · Research · story
Tone: calm, neutral · Debate: 10/100
Key topics
Machine Learning
Transfer Learning
Regularization
A research paper on arXiv explores the relationship between source task optimization and transfer performance in machine learning, with commenters discussing the implications of its theoretical findings on regularization techniques.
Snapshot generated from the HN discussion
Discussion Activity
Light discussion
First comment: 27s after posting
Peak period: 1 comment in 0-1h
Avg / period: 1
Key moments
- 01 Story posted — Nov 13, 2025 at 10:27 AM EST (about 2 months ago)
- 02 First comment — Nov 13, 2025 at 10:28 AM EST (27s after posting)
- 03 Peak activity — 1 comment in the 0-1h window, the hottest of the conversation
- 04 Latest activity — Nov 13, 2025 at 10:28 AM EST (about 2 months ago)
ID: 45916020 · Type: story · Last synced: 11/17/2025, 6:03:47 AM
Although the proofs are confined to the setting of L2-SP ridge regression, experiments with an MLP on MNIST and a CNN on CIFAR-10 suggest that the SNR-regularization relationship persists in non-linear networks.
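For readers unfamiliar with L2-SP: unlike standard ridge regression, which shrinks weights toward zero, L2-SP penalizes deviation from the source-task ("starting point") weights. A minimal sketch of the closed-form linear case, using hypothetical synthetic data (not the paper's experiments or code):

```python
import numpy as np

def l2sp_ridge(X, y, w_src, lam):
    """L2-SP ridge regression in closed form.

    Minimizes ||y - Xw||^2 + lam * ||w - w_src||^2, giving
    w = (X^T X + lam I)^{-1} (X^T y + lam * w_src).
    At lam=0 this is ordinary least squares; as lam grows,
    w is pulled toward the source weights w_src.
    """
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d),
                           X.T @ y + lam * w_src)

# Hypothetical target task: true weights differ from the source weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
w_true = rng.normal(size=5)
y = X @ w_true + 0.1 * rng.normal(size=100)
w_src = w_true + 0.5 * rng.normal(size=5)  # imperfect source-task weights

w_weak = l2sp_ridge(X, y, w_src, lam=1e-3)   # ~ ordinary least squares
w_strong = l2sp_ridge(X, y, w_src, lam=1e6)  # ~ pinned to w_src
```

The regularization strength `lam` interpolates between fitting the target data alone and copying the source solution, which is the trade-off the paper's SNR analysis concerns.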