What causes performance reviews to fail?
Performance reviews fail primarily because manager ratings are unreliable. Research shows the idiosyncratic rater effect accounts for up to 62% of rating variance, meaning ratings reflect the manager more than the employee. Annual cadence compounds this with recency bias, and forced rankings create competitive rather than collaborative cultures. Here are the five specific failures:
- Manager ratings are biased: 60%+ of ratings reflect managerial idiosyncrasies, not actual employee output.
- Self-reflections go unread: Most managers skim or skip them entirely, missing key context.
- Talent follows power law, not bell curve: Forcing normal distribution misidentifies who's actually performing.
- Advocacy matters more than output: Career advancement depends heavily on who advocates for you in the room.
- Relationships outweigh impact: Manager relationships shape ratings more than documented contributions.
Traditional performance reviews are hurting, not helping, companies and their employees.
Employees hate traditional performance reviews for a number of reasons. Office politics. Subjectivity. Manager ratings that don’t accurately reflect true performance. According to People Matters, 64% of employees see traditional annual reviews as a waste of time, but continuous feedback changes that. Learn more about performance reviews at Confirm.
What’s really going on? There are certain sad realities with performance reviews that don’t get talked about enough. It's time to shed light on a few of these hidden truths that make reviews superficial instead of productive. A modern performance management approach fixes most of them.
Secret #1
Your manager ratings are wrong, and bell curve thinking makes it worse
Manager ratings don’t accurately reflect employee performance. The College of Management at NCSU found that more than 60% of a manager's rating of an employee can be attributed to the manager's own idiosyncrasies. For example, an employee may receive a lower rating because the manager evaluates them based on the manager's perception of their own ability to do the same work. In addition to idiosyncrasies, research conducted by Confirm found that managers under- or overrate direct reports about half the time.
With the advent of remote and hybrid work and tools like Slack and Zoom, managers simply don’t have the visibility they used to into the true impact their direct reports make at work. Relying on manager ratings, or cherry-picked 360s, means you’re making talent decisions based on an incomplete view of employee performance. That's why structured review processes matter.
Secret #2
Managers don’t actually read self-reflections
Whether in Confirm or any other performance tool, we routinely see managers breeze through their direct reports’ self-reflections. Managers miss a valuable opportunity to identify where they can make the most impact as a mentor and coach. But this is only part of the problem.
According to our data, while the average employee spends about 7.5 minutes answering a single long-form self-review question, their manager spends an average of 8 seconds reading the response. Why do we force employees to complete long self-reflections? While well-intentioned, they’re a waste of time.
Secret #3
Talent follows a power law, not a bell curve
Most companies measure employee performance using bell curves (forced distributions). But research shows talent follows a power law: a small group of exceptional performers (10-15%) produce the majority of impact. Bell curves were designed for industrial-era work, not today's collaborative, creative environment.
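The difference between the two distributions is easy to see in a quick simulation. The sketch below is purely illustrative (the numbers are not Confirm data): it draws per-employee "impact" from a heavy-tailed Pareto distribution and checks what share of total impact the top 15% account for, something a bell curve would never predict.

```python
import random

random.seed(42)

# Illustrative only: simulate per-employee impact under a heavy-tailed
# (power-law) distribution rather than a normal one.
n = 10_000
impact = sorted(
    (random.paretovariate(1.5) for _ in range(n)),  # alpha=1.5: heavy tail
    reverse=True,
)

# Share of total impact produced by the top 15% of performers.
top_15 = impact[: int(n * 0.15)]
share = sum(top_15) / sum(impact)
print(f"Top 15% of performers produce {share:.0%} of total impact")
```

Under a normal distribution, the top 15% would hold only a modest edge over everyone else; under a power law, they account for roughly half of all impact, which is why forcing ratings onto a bell curve misidentifies who is actually performing.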
Secret #4
Your career advancement relies on your manager’s ability to advocate for you
Employees may think their performance leads to promotions. But what they may not know is promotions are largely based on their manager’s ability to advocate for them in calibrations.
An employee with a vocal or influential manager stands a better chance of getting their promotion pushed through. When there’s a limited number of promotions to give out, the employee with a manager who isn’t a great advocate will miss out on advancement opportunities.
Secret #5
Your relationship with your manager will often matter more than your actual impact
An employee can be crushing it at work, but if they don’t have a great relationship with their manager, guess what? They’re likely not getting promoted.
This is why Sally from marketing who's skilled at managing up always seems to be getting ahead. Or why Joe from accounting who’s best buds with his manager seems to be climbing the ladder quickly.
Is it possible that Sally and Joe are climbing the corporate ladder because they’re performing well and not because of the relationship they have with their manager? Absolutely. But can it also be true that they get ahead despite poor performance because their manager likes them? Yes. Therein lies the problem.
Let’s face it: Traditional performance reviews are riddled with problems. Employees hate them and HR leaders don’t like their heavy administration, among other reasons.
The world of work has changed. It's time we measure performance with a focus on fairness and impact, leaving behind the subjectivity, bias, and office politics that have plagued the process, for good.
What to Do If a Manager Lied on Your Performance Review
One of the most damaging forms of performance review failure is when a manager misrepresents or fabricates performance data. If a manager lied on your performance review, it creates real consequences: blocked promotions, inflated termination justifications, and damaged professional reputation.
This is more common than HR admits. Research on "idiosyncratic rater bias" (the #1 dirty secret above) shows that performance ratings reflect the rater's personal style as much as the employee's actual performance. But there's a meaningful difference between biased ratings and falsified ones.
Signs a manager may have misrepresented your performance
- Specific incidents cited in your review that didn't happen or were significantly distorted
- Ratings that contradict documented feedback given during the cycle
- Performance issues raised for the first time in the review (no prior coaching or feedback)
- Your review differs materially from peer feedback you received during the same period
What employees can do
- Document the discrepancy in writing. Respond to the review formally with a factual rebuttal, citing specific evidence (emails, deliverables, third-party feedback).
- Request the evidence behind the claims. Ask your manager or HR: "Can you show me the specific examples that support this rating?" Vague claims don't hold up under scrutiny.
- Involve HR or skip-level management. A legitimate discrepancy between documented performance and a rating warrants escalation. HR's job is to ensure consistency and fairness.
- Request a 360 or multi-rater process. Single-manager ratings are the weakest signal. If your organization has multi-rater capability, ask for it to be applied retroactively.
What organizations can do to prevent this
The structural fix for manager-driven review manipulation is multi-rater evidence with calibration. Systems like Confirm's Organizational Network Analysis make it harder to misrepresent performance because the data includes peer and cross-functional signals — not just one manager's account. When ratings are calibrated across managers, outliers stand out immediately.
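One reason calibration makes outliers stand out is that it corrects for each manager's rating style before scores are compared. A minimal sketch of that idea, using hypothetical ratings and per-manager z-score standardization (one common calibration approach, not necessarily how Confirm implements it):

```python
from statistics import mean, stdev

# Hypothetical ratings on a 1-5 scale from two managers with different
# styles: one lenient, one strict. Raw scores aren't directly comparable.
ratings = {
    "lenient_mgr": {"ana": 4.8, "ben": 4.5, "cara": 4.9},
    "strict_mgr": {"dev": 3.1, "eli": 2.8, "fay": 3.9},
}

def calibrate(by_manager):
    """Standardize each manager's ratings to z-scores so employees are
    compared against their own manager's baseline, not raw numbers."""
    calibrated = {}
    for scores in by_manager.values():
        mu = mean(scores.values())
        sigma = stdev(scores.values())
        for employee, rating in scores.items():
            calibrated[employee] = (rating - mu) / sigma
    return calibrated

calibrated = calibrate(ratings)
```

After calibration, fay (a 3.9 from the strict manager) ranks above ben (a 4.5 from the lenient manager), because each score is read relative to the rater's own baseline. That is the kind of outlier a raw-score comparison would hide.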
Frequently Asked Questions
Q: Why do traditional performance reviews fail?
A: Traditional performance reviews fail for five key reasons: (1) Ratings reflect manager bias more than actual performance, (2) Annual timelines create recency bias: you're rated on the last 4–6 weeks, not the full year, (3) Forced rankings pit employees against colleagues, destroying collaboration, (4) Reviews feel like judgment, not development, so employees game them rather than grow, and (5) Calibration is done by the people with the least visibility into actual work.
Q: What does research say about performance reviews?
A: Research consistently shows traditional reviews are broken: 95% of employees are dissatisfied with their review process (Adobe), 45% of HR leaders say reviews don't accurately assess performance (Deloitte), and self-ratings and manager ratings agree only 50% of the time on average. The #1 finding from organizational psychology: ratings tell us more about the rater than the ratee: a phenomenon called idiosyncratic rater effect.
Q: What is the idiosyncratic rater effect in performance reviews?
A: The idiosyncratic rater effect is the finding that performance ratings reflect the rater's personality, biases, and preferences more than the actual performance of the person being rated. Research by Scullen, Mount, and Goff found that 62% of variance in performance ratings is attributable to the rater, not the person being rated. This is why uncalibrated manager ratings are an unreliable basis for compensation, promotion, and development decisions.
Q: How can companies fix performance reviews?
A: Fix performance reviews with these six changes:
- Move to continuous feedback: not merely annual cycles
- Add 360 peer input to reduce single-manager bias
- Calibrate ratings across managers before locking scores
- Separate development conversations from comp decisions
- Use ONA to surface performance signals managers miss
- Train managers on behavioral, evidence-based feedback
Q: Should companies get rid of performance reviews entirely?
A: No: eliminating reviews without a replacement creates accountability and development gaps. The goal is to fix reviews, not remove them. Top-performing organizations are modernizing: replacing annual-only reviews with continuous feedback loops, adding peer and network data, and using AI-assisted calibration to reduce bias. The review process should feel like a trusted mirror, not an arbitrary judgment.
Want to see how Confirm handles this? Request a demo — we'll walk you through the platform in 30 minutes.
If you're looking for calibration software to standardize ratings across your organization, see how Confirm approaches it.
