At this year’s International Conference on Machine Learning (ICML 2025), Jaeho Kim, Yunseok Lee and Seulki Lee won an outstanding position paper award for their work Position: The AI Conference Peer Review Crisis Demands Author Feedback and Reviewer Rewards. We hear from Jaeho about the problems they were trying to address, and their proposed author feedback mechanism and reviewer reward system.
Could you say something about the problem that you address in your position paper?
Our position paper addresses the problems plaguing current AI conference peer review systems, while also raising questions about the future direction of peer review.
The most pressing problem with the current peer review system at AI conferences is the exponential growth in paper submissions driven by rising interest in AI. To put this in numbers, NeurIPS received over 30,000 submissions this year, while ICLR saw a 59.8% increase in submissions in just one year. This huge increase in submissions has created a fundamental mismatch: while paper submissions grow exponentially, the pool of qualified reviewers has not kept pace.
Submissions to some of the major AI conferences over the past few years.
This imbalance has severe consequences. The majority of papers no longer receive adequate review quality, undermining peer review’s essential function as a gatekeeper of scientific knowledge. When the review process fails, inappropriate papers and flawed research can slip through, potentially polluting the scientific record.
Considering AI’s profound societal impact, this breakdown in quality control poses risks that extend far beyond academia. Poor research that enters the scientific discourse can mislead future work, influence policy decisions, and ultimately hinder genuine advancement of knowledge. Our position paper focuses on this critical question and proposes methods for improving the quality of review, thus leading to better dissemination of knowledge.
What do you argue for in the position paper?
Our position paper proposes two major changes to address the current peer review crisis: an author feedback mechanism and a reviewer reward system.
First, the author feedback system enables authors to formally evaluate the quality of the reviews they receive. This system allows authors to assess reviewers’ comprehension of their work, identify potential indicators of LLM-generated content, and establish basic safeguards against unfair, biased, or superficial reviews. Importantly, this is not about penalizing reviewers, but rather about creating minimal accountability to protect authors from the small minority of reviewers who may not meet professional standards.
Second, our reviewer incentive system provides both immediate and long-term professional value for quality reviewing. For short-term motivation, author evaluation scores determine eligibility for digital badges (such as “Top 10% Reviewer” recognition) that can be displayed on academic profiles like OpenReview and Google Scholar. For long-term career impact, we propose novel metrics like a “reviewer impact score” – essentially an h-index calculated from the subsequent citations of papers a reviewer has evaluated. This treats reviewers as contributors to the papers they help improve and validates their role in advancing scientific knowledge.
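To illustrate how such a reviewer impact score could be computed, here is a minimal Python sketch (our own illustration, not code from the paper): given the citation counts of the papers a reviewer has evaluated, it returns the standard h-index over those papers.

```python
def reviewer_impact_score(citation_counts):
    """Reviewer impact score as an h-index over reviewed papers.

    citation_counts: citation counts of the papers the reviewer evaluated.
    Returns the largest h such that at least h of those papers
    have at least h citations each.
    """
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, citations in enumerate(counts, start=1):
        if citations >= rank:
            h = rank
        else:
            break
    return h


# Example: a reviewer whose six reviewed papers were later cited
# 25, 8, 5, 3, 3 and 0 times would have a reviewer impact score of 3.
print(reviewer_impact_score([25, 8, 5, 3, 3, 0]))  # -> 3
```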
Could you tell us more about your proposal for this new two-way peer review method?
Our proposed two-way peer review system makes one key change to the current process: we split review release into two phases.
The authors’ proposed modification to the peer-review system.
Currently, authors submit papers, reviewers write full reviews, and all reviews are released at once. In our system, authors first receive only the neutral sections – the summary, strengths, and questions about their paper. Authors then provide feedback on whether reviewers properly understood their work. Only after this feedback do we release the second part containing the weaknesses and scores.
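To make the two-phase split concrete, here is a minimal Python sketch under our own assumptions (the field names and the feedback flag are illustrative, not from the paper): phase one exposes only the neutral sections, and phase two is withheld until the authors have submitted feedback.

```python
from dataclasses import dataclass


@dataclass
class Review:
    summary: str
    strengths: str
    questions: str
    weaknesses: str
    score: float


def release_phase_one(review: Review) -> dict:
    """Phase 1: authors see only the neutral sections of the review."""
    return {
        "summary": review.summary,
        "strengths": review.strengths,
        "questions": review.questions,
    }


def release_phase_two(review: Review, author_feedback_submitted: bool) -> dict:
    """Phase 2: weaknesses and scores, released only after author feedback."""
    if not author_feedback_submitted:
        raise ValueError("Authors must first give feedback on phase 1.")
    return {"weaknesses": review.weaknesses, "score": review.score}
```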
This approach offers three main benefits. First, it is practical – we don’t need to change existing timelines or review templates, and the second phase can be released immediately after the authors give feedback. Second, it protects authors from irresponsible reviews, since reviewers know their work will be evaluated. Third, since reviewers typically review multiple papers, we can track their feedback scores to help area chairs identify (ir)responsible reviewers.
The key insight is that authors know their own work best and can quickly spot when a reviewer hasn’t properly engaged with their paper.
Could you talk about the concrete reward system that you suggest in the paper?
We propose both short-term and long-term rewards to address reviewer motivation, which, however enthusiastically reviewers start out, naturally declines over time.
Short-term: Digital badges displayed on reviewers’ academic profiles, awarded based on author feedback scores. The goal is to make reviewer contributions more visible. While some conferences list top reviewers on their websites, these lists are hard to find. Our badges would be prominently displayed on profiles and could even be printed on conference name tags.
Example of a badge that could appear on profiles.
Long-term: Numerical metrics to quantify reviewer impact at AI conferences. We suggest tracking measures like an h-index over reviewed papers. These metrics could be included in academic portfolios, much as we currently track publication impact.
The core idea is to create tangible career benefits for reviewers while establishing peer review as a professional academic service that rewards both authors and reviewers.
What do you think could be some of the pros and cons of implementing this approach?
The benefits of our system are threefold. First, it is a very practical solution: our approach doesn’t change current review schedules or review burdens, making it easy to incorporate into existing systems. Second, it encourages reviewers to act more responsibly, knowing their work will be evaluated. We emphasize that most reviewers already act professionally – however, even a small number of irresponsible reviewers can severely damage the peer review system. Third, with sufficient scale, author feedback scores will make conferences more sustainable. Area chairs will have better information about reviewer quality, enabling them to make more informed decisions about paper acceptance.
However, there is strong potential for gaming by reviewers, who might optimize for rewards by giving overly positive reviews. Measures to counteract these problems are definitely needed, and we are currently exploring solutions to address this issue.
Are there any concluding thoughts you’d like to add about the potential future of conferences and peer review?
One emerging trend we’ve observed is the growing discussion of LLMs in peer review. While we believe current LLMs have several weaknesses (e.g., prompt injection, shallow reviews), we also think they will eventually surpass humans. When that happens, we’ll face a fundamental dilemma: if LLMs provide better reviews, why should humans be reviewing? Just as the rapid rise of LLMs caught us unprepared and created chaos, we cannot afford a repeat. We should start preparing for this question as soon as possible.
About Jaeho
Jaeho Kim is a postdoctoral researcher at Korea University, working with Professor Changhee Lee. He received his Ph.D. from UNIST under the supervision of Professor Seulki Lee. His primary research focuses on time series learning, particularly developing foundation models that generate synthetic and human-guided time series data to reduce computational and data costs. He also contributes to improving the peer review process at major AI conferences, and his work was recognized with the ICML 2025 Outstanding Position Paper Award.
Read the work in full
Position: The AI Conference Peer Review Crisis Demands Author Feedback and Reviewer Rewards, Jaeho Kim, Yunseok Lee, Seulki Lee.
AIhub
is a non-profit dedicated to connecting the AI community to the public by providing free, high-quality information in AI.