
Busting the Myth of "Meets Expectations"

Most employees get a "Meets Expectations" rating. But ONA data reveals that hidden high performers and problem employees are both buried inside that same rating.


2023 was declared the "Year of Efficiency." Companies were trying to do more with less. That meant getting really good at managing the 70-80% of their budgets that pay for talent.

But there's a problem I've heard over and over from leaders: "In our latest performance review, everybody meets or exceeds expectations, and nobody needs improvement. And I know that simply isn't true."

They're right. "Meets Expectations" is a myth. Let's bust it.

Where Does "Meets Expectations" Come From?

When companies run performance reviews, they produce a bell curve of manager ratings. Usually, about two-thirds of employees receive a "Meets Expectations" rating.

But the bell curve isn't perfect. Managers avoid low ratings (1s and 2s) because HR will get involved. They also avoid high ratings (5s), which might make them look like easy graders.

These tendencies are no surprise: manager ratings are about 60% rater bias. The result is a negatively skewed bell curve with a lot of 3s and 4s.

A Better Way to Measure Performance

We used organizational network analysis (ONA) to pull apart that "Meets Expectations" rating. Employees in a large, global enterprise answered several questions, including:

  • Who do you consider to be a "Gold Star" contributor, making an outstanding impact? Why?
  • Who are you concerned needs additional support or attention? Why?

These questions produced a data set of over 20,000 connections covering 100% of employees. That allowed us to sort the company by its most impactful and concerning employees, without relying on managers.1

The company had also completed a performance cycle six months prior. Adding those manager ratings, we got a full picture of how ONA data compares to traditional ratings.

Unlike the negatively skewed bell curve of manager ratings, responses to these ONA questions follow power-law distributions. In other words, the most impactful employees create impact at 10X the rate of the next tier, and the most concerning employees create concern at 10X the next tier.

This power law distribution is supported by academic research. Rob Cross, Heidi Gardner, and Alia Crocker found that 3-5% of employees accounted for 20-35% of the most value-added collaborations, and only half of those contributors had been identified using traditional manager reviews. Ernest O'Boyle and Herman Aguinis found similar productivity results across five industries.

Our results support this. We found that 12.7% of employees generated 50% of the impact nominations. We also found that just 6.4% of employees generated 50% of the concern.
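For readers curious how a concentration figure like "12.7% of employees generated 50% of the impact nominations" is derived, here is a minimal sketch in Python. The nomination counts below are illustrative, not the study's data: sort employees by nominations received and count how many it takes to reach half the total.

```python
def share_generating_half(nominations):
    """Return the fraction of people who together generate 50% of all nominations."""
    counts = sorted(nominations, reverse=True)  # most-nominated first
    half = sum(counts) / 2
    running = 0
    for i, count in enumerate(counts, start=1):
        running += count
        if running >= half:
            return i / len(counts)

# A power-law-like sample: a few people collect most of the nominations.
sample = [40, 20, 10, 5, 5, 3, 2, 2, 1, 1, 1, 1, 1, 1, 1, 1]
print(f"{share_generating_half(sample):.1%} of employees generate half the nominations")
```

The flatter the distribution, the larger the share this returns; a power law pushes it toward a small minority of employees.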

Busting the Myth of "Meets Expectations"

Here's where it gets surprising. Let's see how both the top contributors and employees of concern were rated by their managers six months prior.

Five of the top contributors received the same "Meets Expectations" (3) rating as six of the most concerning employees. The top ten contributors received only one "concern" nomination among them, yet none received the highest "Outstanding" (5) rating from their managers.

The most concerning employees received only two positive "impact" nominations among them. None received an "Unacceptable" (1) rating.

Nominations were substantiated by employee feedback. Impact included stellar performance and subject matter expertise. Concerns included everything from serious procedure violations to absenteeism.

These results are surprising. But they shouldn't be. Most employees know who around them is making an outstanding impact or falling short. Their leaders are the last to know.

Shifting the Bell Curve

Next, managers assigned their new ratings, this time with ONA data to inform their decisions.

The 90% of employees in "Meets" or "Exceeds Expectations" dropped 12 percentage points to 78%. And the employees in "Needs Improvement" and "Unacceptable" jumped from 5% to 13%.

Even better, the number of "Outstanding" employees more than doubled.

Manager Rating              Previous Cycle    New Cycle w/ONA    Change
Outstanding (5)             2%                5%                 +150%
Exceeds Expectations (4)    27%               21%                -22%
Meets Expectations (3)      63%               57%                -10%
Needs Improvement (2)       5%                11%                +120%
Unacceptable (1)            0%                2%                 n/a
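The Change column is the relative change between the two cycles (the "Unacceptable" row is undefined because it grew from zero). As a quick arithmetic check, here is a sketch using the percentages from the table:

```python
# Verify the Change column: relative change between the two review cycles.
previous = {"Outstanding (5)": 2, "Exceeds Expectations (4)": 27,
            "Meets Expectations (3)": 63, "Needs Improvement (2)": 5,
            "Unacceptable (1)": 0}
new_cycle = {"Outstanding (5)": 5, "Exceeds Expectations (4)": 21,
             "Meets Expectations (3)": 57, "Needs Improvement (2)": 11,
             "Unacceptable (1)": 2}

for band, prev in previous.items():
    if prev == 0:
        print(f"{band}: n/a (grew from 0%)")  # relative change undefined from zero
    else:
        change = (new_cycle[band] - prev) / prev
        print(f"{band}: {change:+.0%}")  # e.g. Outstanding (5): +150%
```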

At enterprise scale, the 2% of "Unacceptable" performers represent an eight-figure annual cost.

We see at least three reasons for this ratings shift:

  1. With concerns visible to leaders and HR, managers can't ignore or shield underperformers.
  2. Managers have "air cover" to assign top ratings to their best performers instead of hedging.
  3. ONA increases managers' visibility, especially in hybrid and remote settings.

Why It All Matters

The cost of losing top performers is well-documented. So is the cost of keeping underperformers. Organizations leave huge amounts of money on the table by not getting good performance data and acting on it.

But giving people the right manager ratings, and the right feedback, isn't just about efficiency. It's about fairness.

It's unfair to keep a star performer from getting the "Outstanding" rating they deserve. It's equally unfair to keep an underperformer from receiving the critical feedback they need to improve. And it's unfair to the people around them who bear the cost of their underperformance.

We often think of efficiency and fairness as being opposed. They aren't. If you want to build a high-performance organization, you have to build a fair one, too.

See How Confirm Reveals Your Real Top Performers

Traditional performance reviews leave your best people rated "Meets Expectations", right alongside your biggest problems. Confirm's organizational network analysis surfaces who's really driving impact and who needs support, without relying on manager bias.

Request a Demo to see how we help companies build performance systems that are both more accurate and more fair.

1 Two other measurements of influence were also taken. While they corroborate our findings, we're leaving them out of this article for conciseness.

Frequently Asked Questions

What does 'meets expectations' mean in performance reviews?

'Meets expectations' should mean the employee is performing at the expected level for their role and tenure, which is a solid and sustainable performance level. In practice, it is often interpreted as average or disappointing. This misinterpretation drives rating inflation as managers avoid giving 'meets expectations' to avoid demoralizing strong contributors who are performing exactly as expected.

Why do managers avoid giving 'meets expectations' ratings?

Managers avoid 'meets expectations' ratings because the label has been culturally coded as negative or mediocre, employees often receive it as criticism, and it creates awkward conversations. This drives systematic rating inflation where 'exceeds expectations' becomes the de facto standard rating, which devalues truly exceptional performance and distorts compensation decisions.

