Transparency in Peer Review: Conference Proceedings

The Source
By: Guest contributor, Tue Sep 12 2017


Peer Review Week 2017 celebrates the importance of peer review in maintaining the quality and accuracy of science. Today we shed light on the peer review process in conference proceedings.

Written by Aliaksandr Birukou

Conference proceedings can be a great format for publishing important and valuable research and for communicating new results much faster than journals. Did you know that conference proceedings are not just a simple compilation of conference papers, but also go through a rigorous, often even stricter, peer review process?

Let’s look at an example. The proceedings of the 18th International Conference on Agile Software Development, XP 2017, were published here. As the preface shows, there were 46 submissions, out of which 14 full and 6 short papers were selected for presentation at the conference. This translates to a 30% acceptance rate for full papers, meaning that only one in three papers made it to the conference – plus, each paper received at least three reviews!

So, where’s the transparency?

Peer Review Indicators

Through the PEERE project, Dr. Mario Malički of the University of Split text-mined 10,000 prefaces of conference proceedings to extract any information that might pertain to peer review. In particular, he searched for the terms used to describe conference peer review processes. Building upon his work, in late 2015 the Springer Computer Science Editorial staff started gathering such information from conference chairs in a systematic manner. This was done for all conferences publishing in the Computer Science proceedings series, including the Lecture Notes in Computer Science (LNCS), which recently celebrated its 10,000th volume.

This information about the peer review process takes into account the following parameters:

  • Type of peer review (single-blind, double-blind, open, other);
  • Conference management system used to run the peer review process;
  • Number of submissions received, accepted, and rejected;
  • The acceptance rate;
  • The average number of reviews per paper and papers per reviewer, as well as whether external peer reviewers, beyond the program committee, were involved;
  • Any other information about the peer review process the conference organizers would like to share (as it is hard to cover all aspects in a small number of standard fields).
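As a rough illustration, several of these indicators can be derived from the raw submission counts. The sketch below is illustrative only – the function name and record layout are our own, not Springer’s actual schema – and uses the XP 2017 numbers from the example above (14 full papers accepted out of 46 submissions):

```python
def acceptance_rate(accepted, submitted):
    """Acceptance rate as a percentage of total submissions."""
    return 100 * accepted / submitted

# XP 2017 (from the example above): 14 full papers out of 46 submissions.
rate = acceptance_rate(14, 46)
print(f"{rate:.0f}%")  # prints "30%" – roughly one in three full papers accepted
```

The same calculation applied to any conference’s submission and acceptance counts makes the indicator directly comparable across venues.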

Acceptance rate and other indicators in action

Such indicators show how the process of one conference differs from another, and how strict and competitive the peer review is. For instance, the 13th International Conference on Machine Learning and Data Mining in Pattern Recognition, MLDM 2017, had 150 submissions and 31 full papers accepted (an acceptance rate of about 20%). By contrast, at the 9th Mexican Conference on Pattern Recognition, MCPR 2017, 29 papers were accepted out of 55 submissions (an acceptance rate of about 52%).

These differences reflect the various cultures prevalent within communities: some conferences are more closed, with shared quality values, and may not take in many submissions from outside; others are more international and well known, and therefore sometimes also attract more dubious papers.

Making review indicators explicit enables better comparison of peer review processes across conferences and sub-disciplines. One can now answer research questions such as: is it true that pattern recognition uses single-blind review, while the AI community goes for double-blind? Is the acceptance rate in HCI higher than in machine learning? Interestingly enough, such differences are often not explicitly known within a given community – our analysis has shown that many conferences refer to the peer review process as “THE peer-review process,” assuming it is known to everyone.

Transparency through Description

Describing the peer review process contributes further to transparency. Staff members from the Springer CS Editorial team are discussing the parameters for describing conference peer review processes within the Conference and Project PIDs group started by CrossRef and DataCite.

Since the group includes major conference publishers and other relevant stakeholders, the goal is to develop a new industry standard for peer review transparency in conference proceedings. Such a standard would then most likely be implemented within CrossMark – allowing everyone to see which peer review process a paper was subject to just by clicking the CrossMark icon.

More information about the study can be found in the abstract “Peer Review in Computer Science Conferences Published by Springer” by Mario Malički, Martin Mihajlov, Aliaksandr Birukou, and Volha Bryl, to be presented as a poster at the Peer Review Congress in Chicago. This work was also presented at the APE panel about peer review.

We welcome your thoughts on peer review in conference proceedings. Find out more about conference proceedings at Springer here.


Author: Guest contributor

Guest Contributors include Springer Nature staff and authors, industry experts, society partners, and many others. If you are interested in being a Guest Contributor, please contact us via email:
