Part 1 of 5 in our Modern Performance Management series.
Every year, organizations spend roughly 210 hours per manager on annual performance reviews. And most of the people involved (managers, employees, and HR alike) walk away feeling it was a waste of time.
That's not a management failure. That's a system failure.
Traditional performance reviews were designed in the 1950s for manufacturing environments where jobs were simple, repetitive, and easy to measure. Knowledge work is none of those things. When you run an outdated system on modern work, you get predictably bad results.
The core problems with annual reviews
Recency bias distorts the whole picture
Ask a manager to rate an employee's performance for the past 12 months. What they'll actually recall, with any clarity, is the last 6–8 weeks. The project that went sideways in October? The stretch assignment completed in April? Those are largely gone.
This isn't a character flaw; it's how human memory works. Annual reviews build a structural recency bias into the process.
Ratings measure managers more than employees
Research consistently shows that performance ratings tell you more about the manager doing the rating than the employee being rated. One oft-cited study found that up to 62% of variance in performance ratings was driven by idiosyncratic rater effects, meaning the same employee would receive substantially different scores from different managers.
If you're using ratings to make compensation and promotion decisions, you're largely rewarding people for having a generous manager.
The feedback is too late to be useful
Imagine learning after a full season that you'd been batting wrong. That's what annual feedback does to employees. By the time they hear what wasn't working, months of opportunities to correct it have passed. Feedback that can't be acted on in time isn't developmental; it's just a record.
The process creates anxiety that blocks honesty
When reviews are tied directly to compensation and promotion, both parties start gaming the conversation. Managers soften bad news to avoid conflict. Employees downplay weaknesses to protect their ratings. The result is a polite exchange of half-truths that helps no one.
By the numbers: In a CEB survey, 95% of managers said they were dissatisfied with their company's performance management process. 90% of HR heads said it didn't accurately reflect employee contributions.
What companies have tried, and the results
| Approach | What happened |
|---|---|
| Eliminating ratings entirely (Adobe, 2012) | 30% reduction in voluntary turnover within one year |
| Moving to quarterly reviews (Netflix) | Higher feedback frequency, better alignment, but still too infrequent for fast-moving teams |
| Continuous check-in systems (Microsoft) | Improved employee engagement scores, faster course-correction |
| Calibration-first reviews (Goldman Sachs, others) | Reduced manager bias, more consistent decisions across departments |
What actually works instead
Decouple feedback from compensation decisions
When the conversation is connected to a raise or promotion, neither party can be fully honest. Separate developmental feedback conversations from compensation discussions. They serve different purposes and require different mindsets.
Move to frequent, lightweight check-ins
Weekly or bi-weekly conversations with direct reports (what's going well, what's in the way, what support is needed) surface issues while there's still time to fix them. They also build the manager-employee relationship that makes hard feedback easier to give and receive.
Use calibration to catch manager bias
Before finalizing any performance decisions, bring managers together to compare notes. When a manager realizes their entire team is rated "Exceeds Expectations" while other managers' teams show a normal distribution, it prompts useful self-reflection. Calibration sessions are the best tool available for catching rating inflation, deflation, and halo effects before they become pay decisions.
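The mechanics of that calibration check are simple enough to sketch. Here's a minimal, hypothetical example (invented manager names and ratings, an assumed 1–5 scale and a 0.75-point threshold) that flags managers whose team average drifts far from the org-wide average — the kind of signal that kicks off a calibration conversation:

```python
from statistics import mean

# Hypothetical 1-5 ratings grouped by manager (illustrative data only).
ratings_by_manager = {
    "manager_a": [5, 5, 4, 5, 5],  # possible inflation
    "manager_b": [3, 4, 2, 3, 4],
    "manager_c": [3, 3, 4, 3, 2],
}

def flag_outlier_managers(ratings_by_manager, threshold=0.75):
    """Flag managers whose team's average rating deviates from the
    org-wide average by more than `threshold` points."""
    all_ratings = [r for rs in ratings_by_manager.values() for r in rs]
    org_avg = mean(all_ratings)
    flags = {}
    for manager, rs in ratings_by_manager.items():
        delta = mean(rs) - org_avg
        if abs(delta) > threshold:
            flags[manager] = round(delta, 2)
    return flags

print(flag_outlier_managers(ratings_by_manager))
# manager_a's 4.8 average sits well above the org average of ~3.67
```

A flag like this isn't a verdict — a team really can outperform — but it tells you exactly which conversations the calibration session needs to have.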
Related reading: How to Run a Performance Calibration Session
Bring data into the room
Gut feel is unreliable. Organizational network analysis (ONA) data shows how employees actually collaborate: who the real connectors are and who's mentoring others. It gives managers a more complete picture than what's visible in a typical reporting relationship.
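One simple version of "who are the real connectors" is degree centrality: count each person's distinct collaborators in the network. The sketch below assumes a hypothetical edge list (invented names) that in practice might come from meeting, chat, or code-review metadata:

```python
from collections import Counter

# Hypothetical collaboration edges (who works with whom).
edges = [
    ("ana", "ben"), ("ana", "cho"), ("ana", "dev"),
    ("ben", "cho"), ("dev", "eli"), ("ana", "eli"),
]

def top_connectors(edges, k=2):
    """Rank people by degree centrality: the number of distinct
    collaborators each person has in the network."""
    neighbors = {}
    for a, b in edges:
        neighbors.setdefault(a, set()).add(b)
        neighbors.setdefault(b, set()).add(a)
    degree = Counter({person: len(ns) for person, ns in neighbors.items()})
    return degree.most_common(k)

print(top_connectors(edges))  # "ana" tops the list with 4 collaborators
```

Real ONA tooling goes well beyond this (weighted edges, mentoring direction, betweenness), but even this crude count surfaces people whose contribution the org chart hides.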
Related reading: How to Give Great Performance Feedback
The framework that's replacing annual reviews
- Continuous check-ins: Weekly or bi-weekly conversations on goals, blockers, and development
- Mid-year review: A structured look at progress against goals — not a rating, a conversation
- Calibration sessions: Cross-manager alignment before any compensation decisions
- Annual summary: Used for compensation and development planning, informed by the data gathered all year
This structure isn't perfect for every company. But it's materially better than the "one big conversation per year" model for any organization running knowledge work.
Where to start
You don't have to tear down everything at once. Pick one change:
- Add calibration to your next review cycle
- Start weekly check-ins with your team
- Decouple your next developmental feedback conversation from comp discussions
Each of those moves, on its own, makes the process more accurate and more useful than the baseline.
Next in this series: Continuous Feedback vs. Annual Reviews, The Data That Changes Everything
See how Confirm runs modern performance reviews
Confirm helps companies run calibration-based performance reviews with built-in ONA data, so decisions are faster, fairer, and grounded in evidence. Book a demo →
