The Business Times

Flawed feedback: The problem with peer reviews

While transparency can lead to strategic behaviour and potential manipulation, it can also promote accountability and fairness within organisations.

- BY HELGE KLAPPER, HENNING PIEZUNKA AND LINUS DAHLANDER. Helge Klapper is an assistant professor of Strategy at Purdue University; Henning Piezunka is an associate professor of Entrepreneurship and Family Enterprise at Insead and a visiting professor at The

WHEN it comes to performance reviews, managers have traditionally held the reins, assessing employee contributions based on their observations and insights. But this approach can be flawed, as managers may harbour biases or lack a complete picture of each team member's efforts.

To address these limitations, some companies have adopted peer evaluations, where colleagues provide feedback on one another. Many are likely familiar with the concept of providing 360-degree feedback. This practice is popular in flat organisations such as GitLab, Spotify and ING Bank, but has also gained traction in traditional hierarchical organisations.

While peer evaluations can provide a broader perspective and a more holistic assessment of individual performance, they put individuals in a position where they both evaluate their colleagues and are evaluated by them. When peer evaluations are transparent, individuals may use them strategically to present a certain image of themselves or shape how others perceive them.

Aware of the potential transparency of peer reviews, individuals tend to adjust the evaluations they make. They cannot simply give everyone glowing reviews, as they need to portray themselves as critical evaluators with high standards. However, they must also be careful not to offend anyone, as this could lead to retaliation.

Our recent research on peer evaluations reveals that individuals on the verge of being evaluated by others carefully select the colleagues they evaluate.

People are less likely to review others when their feedback may offend someone, or when their evaluation holds weight and could significantly affect the individual's overall assessment. Instead, they choose to negatively evaluate colleagues in cases where the outcome is already obvious.

Gaming peer reviews to gain an advantage

We explored this behaviour within Wikipedia, where a transparent peer-evaluation process determines which members become administrators, who have greater authority to restrict page edits, block users or delete pages. Members evaluate candidates based on factors such as their past contributions and evaluations.

Our study covered 3,434 evaluation processes from 2003 to 2014, including more than 187,800 evaluations from 10,660 members. We focused on three key factors: whether the member was about to be evaluated themselves, how pivotal an evaluation was (its potential impact on a candidate's chances) and the candidate's activity level (their participation in other evaluations). We also interviewed 24 active members of the community.

Our findings revealed that individuals facing their own upcoming evaluations tended to participate in more peer evaluations. However, they were less likely to evaluate someone when their feedback might offend, or if their review could significantly affect the candidate's overall assessment.

One interviewee commented that many "don't want to go against the majority", adding: "So, you tend to get herd behaviour." Another, reflecting on the period before his nomination, explained that he often waited until he could better understand what others were thinking: "(It's) helpful to vote later… you already see other people's rationales."

In general, our interviews confirmed that members were cautious about pivotal evaluations. As one remarked: "I will only put myself in a position that I'm confident of and my reasoning would be sound when I make that final decision, especially a pivotal decision that requires the highest levels of impartiality, balance, fairness and objectivity."

However, this does not mean that members avoided providing negative evaluations altogether. We found that they minimised the risk of a backlash by evaluating only inactive members. Interestingly, we found no evidence that they concentrated their positive evaluations on active peers, suggesting that they avoided negative reciprocity, but did not attempt to invoke reciprocal positive evaluations.

When asked whether candidates would evaluate active members negatively, one interviewee responded: "I think they avoid conflict. I think they avoid pissing anyone off who might be influential."

This strategic use of peer evaluations proved effective, making members more likely to receive positive evaluations themselves. Specifically, we found that candidates who behaved strategically (by doing more evaluations, avoiding negative reviews of active candidates, and steering clear of pivotal evaluations) significantly increased their chances of becoming an administrator.

This suggests that individuals can leverage the feedback they provide to shape their image and boost their chances of success.

Designing fair peer evaluations

While Wikipedia's fully transparent approach has been shown to influence evaluation behaviour, other forms of transparency may have similar effects. Even in double-blind evaluations, where the identity of reviewers is concealed, individuals may still adjust their evaluations strategically, aware that their past assessments may be known by others when they are evaluated.

Even when feedback is not made public, there exists a degree of transparency in the peer review process. Informal networks within organisations facilitate the spread of information, rumours and gossip, making it challenging to maintain complete anonymity. For instance, a colleague overseeing the evaluation process may share gossip about how one person evaluated another, and this information can circulate quickly. As long as there is some degree of transparency, whether intentional or not, individuals may feel compelled to tailor their evaluations to protect their reputation.

However, transparency can also have positive consequences. It can increase engagement and allow colleagues to monitor one another, potentially detecting dishonest behaviour. In the case of Wikipedia, the transparent evaluation process inspired members to consider their assessments and justify their decisions carefully. Moreover, if members perceived an evaluation as unfair, they could directly address the evaluator to discuss the issue.

While transparency can lead to strategic behaviour and potential manipulation, it can also promote accountability and fairness within organisations. Whether to implement transparent peer reviews ultimately depends on the specific context and goals of the organisation.

Organisations need to recognise that peer evaluations are not just mechanisms for providing honest feedback; they are also platforms for individuals to position themselves and exert influence. By acknowledging this strategic aspect, organisations can implement safeguards to mitigate biases, encourage constructive feedback and promote a culture of accountability.


ILLUSTRATION: PIXABAY. People are less likely to review others when their feedback may offend someone or when their evaluation holds weight and could significantly affect the individual's overall assessment.
