Queueing to Publish in AI and CS
Key topics
It turns out that if you accept papers based on a fixed percentage of submissions, increasing the acceptance rate shrinks the pool of unaccepted papers, and that larger percentage of a smaller queue ends up yielding about the same number of accepted papers overall.
I also have this funnel simulation https://i.postimg.cc/gz88S2hY/funnel2.gif
+ Same number of new produced papers per time unit.
+ Different acceptance rates.
+ But... *same number of accepted papers* at equilibrium! With lower rates you just review more.
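The equilibrium is easy to check numerically: at steady state the accepted fraction of the pool (queue plus new arrivals) has to match the submission rate, so the acceptance throughput equals the arrival rate no matter what the acceptance rate is; only the size of the waiting pool, and hence the reviewing load, changes. Below is a minimal Python sketch of that intuition (not the author's funnel simulation; it assumes every rejected paper is resubmitted, and the numbers are purely illustrative).

```python
# Minimal sketch of the queueing intuition: `arrivals` new papers are
# submitted each review cycle, a fixed fraction `accept_rate` of the
# current pool is accepted, and every rejected paper resubmits next cycle.
def simulate(accept_rate, arrivals=100, cycles=500):
    pool = 0.0        # papers under review this cycle
    accepted = 0.0
    for _ in range(cycles):
        pool += arrivals                # new submissions join the pool
        accepted = accept_rate * pool   # fixed fraction gets in
        pool -= accepted                # rejects roll over and resubmit
    return pool, accepted

for rate in (0.05, 0.20, 0.50):
    pool, accepted = simulate(rate)
    print(f"accept rate {rate:.0%}: steady-state pool ~{pool:.0f}, "
          f"accepted per cycle ~{accepted:.0f}")
```

With a 5% acceptance rate the pool settles around 1,900 papers; with 50% it settles around 100. Either way, roughly 100 papers per cycle are accepted, which is exactly the submission rate.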
The author analyzed the conference publication system in AI and CS using queueing theory, revealing that increasing acceptance rates doesn't necessarily lead to more accepted papers, sparking a discussion on the flaws and potential reforms of the current system.
Snapshot generated from the HN discussion
Discussion Activity
- Very active discussion: 54 loaded comments, averaging 7.7 per period
- First comment: 8 minutes after posting
- Peak period: 31 comments in the first 6 hours
Key moments
- Story posted: Sep 29, 2025 at 3:50 AM EDT
- First comment: Sep 29, 2025 at 3:58 AM EDT (8 minutes after posting)
- Peak activity: 31 comments in the 0-6h window, the hottest stretch of the conversation
- Latest activity: Oct 2, 2025 at 3:23 PM EDT
Revise and resubmit is evil. It gives the reviewers a lot of power over papers that ends up being used for coercion, sometimes subtle, sometimes quite overt. For most papers I have submitted to journals (and I'm talking prestigious journals, not MDPI or the like), I have been pressured to cite specific papers that didn't make sense to cite, very likely by the reviewers themselves. And one ends up doing it, because not doing it can result in rejection and losing many months (the journal process is also slower), maybe with the paper even becoming obsolete along the way. Of course, the "revise and resubmit" process can also be used to pressure authors into changing papers in subtler ways (to not question a given theory, etc.)
The slowness of the process also means that if you're unlucky with the reviewers, you lose much more time. There is a fact that we should all accept: the reviewing process always carries a huge random factor due to subjectivity. And being able to "reroll" reviewers is actually a good thing. It means that a paper that a good proportion of the community values highly will eventually get in, as opposed to being doomed because the initial very small sample (n=3) is from a rejecting minority.
Finally, in my experience reviewing quality is the other way around... there is a small minority of journals with good review quality, but in the majority (including prestigious ones) it's a crapshoot, not to mention when the editor desk-rejects for highly subjective reasons. In the conferences I typically submit to (*ACL), the review quality is more consistent than in journals, and the process is more serious, with rejections always being motivated.
However, I think this notion of a paper becoming "obsolete" if it isn't published fast enough speaks to the deeper problems in ML publishing; it's fundamentally about publicizing and explaining a cool technique rather than necessarily reaching some kind of scientific understanding.
>In the conferences I typically submit to (*ACL) the review quality is more consistent than in journals
I've got to say, my experience is very different. I come from linguistics and submit both to *ACL and to linguistics/cognition journals, and I think journals are generally better. One of my reviews for ACL was essentially "Looks great, learnt a lot!" (I'm paraphrasing, but it was about three sentences long; I'm happy to get a positive review, but it was hardly high quality).
Even within *ACL I find TACL better than what I've gotten from the ACL conferences. I just find that with a slower review process a reviewer can actually evaluate claims more closely rather than review in a pretty impressionistic way.
That being said, there are plenty of journals with awful reviewing and editorial boards (cough, cough Nature).
That said, why don't conferences work like journals: if you're rejected you cannot resubmit. Find a new conference. That gets rid of the queuing problem. Yes, you'll have some amazing papers that will not be accepted by a top conference. So what, it happens to everyone. Plenty of influential papers were not published in a top conference in ML/AI/NLP.
There are just too many negative eventualities reinforcing each other in different ways.
The community would be better off working with established journals so that they take reviews from A* conferences as an informal first round, giving authors a clear path to publication. Even though top conferences will always have their appeal, the writing is on the wall that this model is unsustainable.
Submission numbers this year have been absolutely crazy. I honestly don't think it can be solved.
It's like working long years in a family-sized company vs job hopping between megacorps.
The actual science takes the backseat. Nobody has time to just think; you must pump out the next paper and somehow get it through peer review. As a reviewer, you don't get much out of reviewing. It used to be exciting to look at new developments from across the field in one's review stack. Today it's mostly filled with the nth resubmission of something by someone in an anxious hurry to just produce something to tick a box. There is no cost to just submitting massive amounts of papers. So it's not fun as a reviewer either: you get no reward for it and you take time away from your own research. So people now have to be forced to review in order to submit. These forced reviews do as good a job as you'd expect. The better case is if they are just disinterested. The worse case is if they feel you are a dangerous competitor. Or they only try to assess whether you toiled "hard enough" to deserve the badge of a published paper. Intellectual curiosity etc. have taken the back seat. LLMs just make it all worse.
Nobody is truly incentivized to change this. It's a bit of a tragedy of the commons. Just extract as much as you can, and fight like in a war.
It's also like moving from a small village where everyone knows everyone to a big metropolis. People are all just in transit there, they want to take as much as possible before moving on. Wider impacts don't matter. Publish a bunch of stuff then get a well paying job. Who cares that these papers are not quite that scientifically valuable? Nobody reads it anyway. In 6 months it's obsolete either way. But in the meantime it increased the citation count of the PI, the department can put it into their annual report, use the numbers for rankings and for applying for state funding, the numbers look good when talking to the minister of education, it can be also pushed in press releases to do PR for the university which increases public reputation etc. The conferences rise on the impact tables because of the immense cross-citation numbers etc. The more papers, the more citations, the higher the impact factor. And this prestige then moves on to the editors, area chairs etc. and it looks good on a CV.
It mirrors a lot of other social developments where time horizons have shrunk, trust is lower, incentives are perverse and nobody quite likes how it is but nobody has unilateral power to change things in the face of institutional inertia.
Organizing activity at such scale is a hard problem, especially because research is very decentralized by tradition. It's largely independent groups of 10-20 people centered around one main senior scientist, and the network between these groups is informal. It's very different from megacorps. Megacorps can go sclerotic with admin bloat and get paralyzed by middle-manager layers, but in the distributed model there is minimal coordination; it's an undifferentiated soup of tiny groups, each holding on to their similar ideas and rushing to publish first.
Unfortunately, research is not like factory production, even if bureaucrats and bean counters wish it were. Simply throwing more people at it can have a negative impact, analogous to the mythical man-month.
I don't think that the solution has to be that existing conferences and journals accept more.
I think both conferences and journals are broken in this regard. It doesn't help that professors' primary job these days is to be a social media influencer and attract funding. How the funding is used doesn't seem to matter or impact their careers. What we need is more accountability from senior researchers. They should at the very least be assessing their own students' work before stamping their name on it.
On the flip side it isn't untrue that there are major breakthroughs happening daily at this point in many fields. We just don't have the bandwidth to handle all the information overload.
In the end I left my PhD track before actually finishing it. My conclusion is that I like research(ing stuff) as a verb, but I don't like research as an institution.
And who is the arbiter of that? This is an imperfect but easy shorthand. Like valuing grades and degrees instead of what people actually took away from school.
In an ideal world we would see all this intangible worth in people's contributions. But we don't have time for that.
So the PhD committee decides on exactly that measure whether there are enough published articles for a cumulative dissertation and if that's enough. What's exactly the alternative? Calling in fresh reviews to weigh the contributions?
We already know there is some way to do it, because researchers do salami slicing, taking one paper and splitting it up into multiple papers to get more numbers out of the same work. So one might, for example, look at a paper and ask how many papers it could be split into by salami slicing, to get at least an initial rough measure of its contribution.
But for academic or other high-level research jobs, whoever is doing the valuing is going to look at a lot more than just the venue.
Depends on where. In some countries (e.g. mine, Spain), the notion that evaluation should be "objective" leads to it degenerating into a pure bean-counting exercise: a first-quartile JCR-indexed journal paper is worth 10 points, a top-tier (according to a specific ranking) conference paper is worth 8 points, etc. In some calls/contexts there is some leeway for evaluators to actually look at the content and e.g. subtract points for salami slicing or for publishing in journals that are known to be crap in spite of a good quartile, but in others that's not even allowed (you would face an appeal for not following the official scoring scale).
Remember that nobody is a passive actor in the system. Everyone sees the state of conferences and review randomness and the gaming of the system. Senior researchers are area chairs and program chairs. They are well aware and won't take paper counts as the only signal.
Again, papers are needed, but it's really not the only thing.
NeurIPS, CVPR, and ICML are solid brands that took decades to build.
Many papers today represent what would have happened via open-source repos in yesteryear. Meaning there is a lot of work that is useful to someone, and having peer-reviewed benchmarks etc. is useful for understanding whether those people should care. The weakness is that some of this work is the equivalent of shovelware.
The problem is every coauthor wants to increase submissions, LLMs are great at making something that looks OK at first glance, and people have low(er) expectations for a conference paper. A recipe for disaster.
Extrapolate a bit and there are LLM-written papers being peer reviewed by LLMs, but fear not: even if they are accepted, they will not be cited, because LLMs hallucinate citations that better support their arguments! And then there is the poor researcher, just a beginner, writing a draft of simple but honest material, getting lost in all this line noise, or worse yet, feeding it.
First, these models are not good at technical writing at all. They have no sense of the weight of a single sentence, they just love to blather.
Second, they can't keep the core technical story consistent throughout their completions. In other words, they can't "keep the main thing the main thing".
I had an early draft with AI writing, but by the time we submitted our work -- there was not a single piece of AI writing in the paper. And not without trying, I really did some iterations on trying to carefully craft context, give them a sense of the world model in which they needed to evaluate their additions, yada yada.
For clear and concise technical communication, it's a waste of time right now.
People over-correct and feel like they can't use "badly" because of the "feeling badly" discourse [0], but that pertains to "feel" being a linking verb. "Write" is just your bog-standard verb, for which "badly", an adverb, is a totally valid modifier.
[0] https://www.merriam-webster.com/grammar/do-you-feel-bad-or-f...
Thus "I feel badly" ... "ok, what did you do?" vs. "I feel poorly" ... "ok, I'll get a bucket."
I just shared that with several friends, and we are all having a good laugh. Thank you very much.
Or if the problem is bad papers, a fee that is returned unless it’s a universal strong reject.
Or if you don’t want to miss the best papers, a fee only for resubmitted papers?
Or a fee that is returned if your paper is strong accept?
Or a fee that is returned if your paper is accepted.
There’s some model that has to be fair (not a financial burden to those writing good papers) and will limit the rate of submissions.
Thoughts?
Go for the jugular. Impact the career of people putting out substandard papers.
Come up with a score for "citation strength" or something.
Any given bad actor with too many substandard papers to his/her credit begins to negatively impact the "citation strength" of any paper on which they are a co-author. Maybe even negatively impacting the "citation strength" of papers that even cite papers authored or co-authored by the bad actor in question?
If, say, the major journals had a system like this in place, you'd see everyone perk up and get a whole lot less careless.
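If the major journals wanted to prototype something like this, a toy version might look like the hypothetical sketch below. Everything here is invented for illustration: the `Paper` class, the 10% per-substandard-paper penalty, the 0.2 floor, and the very idea of a maintained count of substandard papers per author; no journal runs anything like it.

```python
from dataclasses import dataclass, field

@dataclass
class Paper:
    authors: list[str]              # assumed non-empty
    citations: int                  # raw citation count
    cites: list["Paper"] = field(default_factory=list)  # papers this one cites

def author_penalty(author: str, substandard_count: dict[str, int]) -> float:
    # Each substandard paper attributed to an author shaves 10% off their
    # multiplier, floored at 0.2 so a score never collapses entirely.
    return max(0.2, 1.0 - 0.1 * substandard_count.get(author, 0))

def citation_strength(paper: Paper, substandard_count: dict[str, int]) -> float:
    # Raw citations, discounted by the worst co-author's penalty...
    own = paper.citations * min(author_penalty(a, substandard_count)
                                for a in paper.authors)
    # ...and further discounted if the paper leans on bad actors' work.
    penalties = [min(author_penalty(a, substandard_count) for a in p.authors)
                 for p in paper.cites]
    downstream = sum(penalties) / len(penalties) if penalties else 1.0
    return own * downstream
```

The specific weights are arbitrary; the point is only that a penalty can propagate from an author to their papers and on to papers that cite them, which is what would make carelessness expensive.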
Not sure that enough people understand that the vast vast majority of research papers are written in order to fulfil criteria to graduate with a PhD. It's all PhD students getting through their program. That's the bulk of the literature.
There was a time when nobody went to school. Then everyone did 4 years of elementary school to learn reading, writing and basic arithmetic. Then everyone did 8 years, which included more general knowledge. Then it became the default to do 12 years to get the high school diploma. Then it became the default to do a bachelor's to get even simple office jobs. Then it's a master's. Now, to stand out the way a BSc or MSc once made you stand out, you need a PhD. PhD programmes are ballooning, just as the undergrad model had to change quite a bit when it went from 30 highly motivated nerds starting CS per year to 1,000. These are massive systems: the tens or hundreds of thousands of PhD students must somehow be pushed through like clockwork. For one conference alone you get tens of thousands of authors submitting a similar number of papers, and tens of thousands of reviewers.
You can't simply halt such a huge machine with a few little clever ideas.
The first question is what scientific research is actually for. Is it merely for profitable technological applications? The Greek, humanistic, and Enlightenment ideals weren't just that. Fundamental research can be its own endeavor, simply to understand something more clearly. We don't do astronomy, for example, only in order to build some better contraption, and understanding evolution wasn't only about producing better medicine. But it's much harder to quantify the elegance or aesthetics of an idea and its impact.
And if you say that this should only be a small segment, and most of it should be tech-optimization, I can accept that, but currently science runs also on this kind of aesthetic idealist prestige. In the overall epistemic economy of society, science fills a certain role. It's distinct from "mere" engineering. The Ph in PhD stands for philosophy.
While the growth in the number of new PhDs has been modest, the number of published papers has grown much faster. I would attribute that to changes in administrative culture. Both the government and the universities have become driven by metrics, which means everyone must produce something the administrators can measure.