Paper2video: Automatic Video Generation From Scientific Papers
Posted 3 months ago · Active 3 months ago
arxiv.org · Tech story
Sentiment: skeptical / mixed · Debate · 60/100
Key topics
AI-Generated Content
Scientific Communication
Presentation Tools
The Paper2Video project generates automatic videos from scientific papers, sparking debate about the quality and usefulness of such generated content.
Snapshot generated from the HN discussion
Discussion Activity
Moderate engagement
- First comment: 3h after posting
- Peak period: 6 comments in the 14-16h window
- Avg per period: 3 comments
- Comment distribution: 24 data points (based on 24 loaded comments)
Key moments
1. Story posted: Oct 11, 2025 at 7:32 PM EDT (3 months ago)
2. First comment: Oct 11, 2025 at 10:41 PM EDT (3h after posting)
3. Peak activity: 6 comments in the 14-16h window, the hottest period of the conversation
4. Latest activity: Oct 12, 2025 at 5:04 PM EDT (3 months ago)
ID: 45553701 · Type: story · Last synced: 11/20/2025, 7:45:36 PM
(and I generally think AI-produced content is slop).
In all seriousness, there could be more utility in this if it helped explain the figures. I jumped ahead to one of the figures in the example video, and no real attention was given to it. In my experience, this is really where presentations live and die: in the clear presentation of data points, with sufficient detail that you bring people along.
For papers, it doesn't have to go that far, but I imagine a polished AI girl (or guy) reading the summary would be more engaging.
Hah, "SteveGPT, present your PowerPoints like Steve Jobs did!"
Add sex and violence to make your boring paper reading sessions more exciting!
[1] https://store.steampowered.com/app/858260/Until_You_Fall/
[2] https://xkcd.com/1403/
It also works with research papers.
Here is an explainer of the famous "Attention Is All You Need" paper: https://www.youtube.com/watch?v=7x_jIK3kqfA
(You can try it here https://magnetron.ai)
Congratulations on this cool idea and the results.
Where can I follow the progress or get notified?
> Where can I follow the progress or get notified?
I send out product updates once a week or so. Will keep you posted.
1. Using a "painter commenter" feedback loop to make sure the slides are correctly laid out with no overflowing or overlapping elements.
2. Having the audio/subtitles not read word-for-word the detailed contents that are added to the slides, but instead rewording that content to flow more naturally and be closer to how a human presenter would cover the slide.
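The "painter commenter" loop suggested above can be sketched roughly as follows. This is a toy illustration, not the project's actual implementation: the `paint`/`comment`/`refine` names, the bounding-box representation, and the shrink-and-retry repair strategy are all assumptions made up for this sketch.

```python
def paint(slide_spec):
    """Painter stand-in: 'renders' a slide spec into element boxes (x, y, w, h)."""
    return slide_spec["elements"]

def comment(boxes, width=1280, height=720):
    """Commenter: flag elements that overflow the canvas or overlap each other."""
    issues = []
    for i, (x, y, w, h) in enumerate(boxes):
        if x + w > width or y + h > height:
            issues.append(f"element {i} overflows the slide")
        for j, (x2, y2, w2, h2) in enumerate(boxes[:i]):
            if x < x2 + w2 and x2 < x + w and y < y2 + h2 and y2 < y + h:
                issues.append(f"elements {j} and {i} overlap")
    return issues

def refine(slide_spec, max_rounds=3):
    """Alternate painter and commenter until the layout passes, or give up."""
    for _ in range(max_rounds):
        issues = comment(paint(slide_spec))
        if not issues:
            return slide_spec, []
        # Naive repair: shrink every element by 10% and try again. A real
        # system would feed the commenter's critique back to an LLM instead.
        slide_spec["elements"] = [
            (x, y, w * 0.9, h * 0.9) for (x, y, w, h) in slide_spec["elements"]
        ]
    return slide_spec, comment(paint(slide_spec))

# One oversized, overlapping layout that the loop should clean up.
spec = {"elements": [(0, 0, 800, 400), (600, 300, 800, 500)]}
fixed, remaining = refine(spec)
```

In a real pipeline the commenter would be a vision-language model judging a rendered image, but the control flow (generate, critique, revise, repeat) is the same.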
A couple of things might be improved in the prompts for the reasoning features, e.g. in `answer_question_from_image.yaml`:
I'd assume you would likely get better results by asking for the reference first, and then the answer; otherwise you probably have quite a number of answers where the model just "knows" the answer and takes it from its own training rather than from the image, which would bias the benchmark.

Another thing that improved my personal presentation skills was noting down why I liked a presentation or why I didn't: what specific things a person did to make it engaging. Just paying attention to that improved my presentation skills enormously.
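To make the reference-first suggestion above concrete, here is a hypothetical contrast between the two prompt orderings. The actual contents of `answer_question_from_image.yaml` are not reproduced here; both templates and the `build_prompt` helper are invented for illustration.

```python
# Ordering 1 (hypothetical): answer first, evidence as an afterthought.
ANSWER_FIRST = (
    "Look at the slide image and answer the question.\n"
    "Question: {question}\n"
    "Give your answer, then cite the part of the image that supports it."
)

# Ordering 2 (hypothetical): evidence first. Forcing the model to quote the
# image before answering grounds it in what is actually shown, rather than
# letting it answer from its training data.
REFERENCE_FIRST = (
    "Look at the slide image.\n"
    "Question: {question}\n"
    "First, quote the exact text or describe the region of the image that is "
    "relevant to the question. Then, using only that evidence, give the answer."
)

def build_prompt(question, reference_first=True):
    template = REFERENCE_FIRST if reference_first else ANSWER_FIRST
    return template.format(question=question)
```

The change costs nothing at inference time but makes it much harder for a "known answer" to slip through unverified, which matters when the same pipeline doubles as a benchmark.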
example: Geoff Hinton saying "Forward-forward Algorithm" with a long pause after the first "forward".
(first few seconds in the first demo on https://showlab.github.io/Paper2Video/)