How ONA Eliminates Recency Bias in Performance Reviews

Recency bias skews performance ratings toward the last 6 weeks. Learn how Organizational Network Analysis fixes this with year-round collaboration data.

Last updated: March 2026

Your performance review cycle opens. You sit down to rate someone on their full year. You have 52 weeks to evaluate, and you realize you're mostly pulling from the last six.

That's recency bias, and it's the default operating mode for most performance reviews.

The problem isn't laziness or bad intent. It's how memory works. Human brains prioritize recent, vivid events over older, diffuse ones. When you're managing 8-12 people and making dozens of decisions a week, accurately recalling a team member's contribution from February is genuinely hard.

Performance reviews built on manager memory measure the last sprint, not the full year.

What Recency Bias Does to Your Talent Decisions

The damage compounds in predictable ways.

Someone ships critical infrastructure work in Q1 and Q2, then slows down in Q4 to support a new hire. You remember the slow Q4. They get a 3 instead of a 4. They miss a raise and quietly decide the system isn't fair.

Someone else coasts for 10 months but nails a visible presentation in November. You remember the presentation. They get promoted. Six months later, they're struggling in the role.

Neither person did anything wrong. The measurement system failed them.

12%
The final six weeks of a review period carry disproportionate weight in manager ratings: roughly a 12% slice of a 52-week year drives the majority of the evaluation.

High performers doing steady, less-visible work (maintaining critical systems, building relationships across teams, unblocking colleagues) get systematically underrated. Those who can manufacture visibility near review time get systematically overrated.

Over time, this distorts your talent decisions. Promotions go to the visible, not the valuable. High performers who feel underrated leave. You're not making bad decisions on purpose. You're using a bad measurement system.

What ONA Is (And Why It Fixes This)

Organizational Network Analysis (ONA) maps how information, decisions, and collaboration actually flow through your organization, based on what employees report rather than what managers observe.

Here's how it works in practice.

Once a year, employees fill out an 8-10 minute survey. Not a 360-degree review with leading questions. A network survey: Who do you turn to for expertise? Who helps you accomplish your goals? Who drives important decisions on your team?

Individual responses stay confidential. Answers get aggregated into a network map showing real patterns: who enables others, who's a hub for knowledge, who's becoming isolated, who's an informal leader the org chart doesn't show.
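To make the aggregation step concrete, here's a minimal sketch in Python using the networkx library. The names and nominations are hypothetical, and a real pipeline would pull responses from your survey tool while keeping individual answers confidential; the point is simply that counting independent nominations turns survey answers into a network signal.

```python
# Hypothetical sketch: aggregate peer nominations into a network map.
# Survey data here is made up; real responses stay confidential and
# are only ever reported in aggregate.
import networkx as nx

# Each tuple is (respondent, person they named) for one question,
# e.g. "Who do you turn to for expertise?"
nominations = [
    ("ana", "elena"), ("ben", "elena"), ("carl", "elena"),
    ("dina", "ben"), ("elena", "ben"), ("fay", "elena"),
]

G = nx.DiGraph()
G.add_edges_from(nominations)

# In-degree = how many colleagues independently named this person.
# This is the simplest "hub" signal in the network map.
for person, count in sorted(G.in_degree(), key=lambda p: -p[1]):
    print(f"{person}: named by {count} colleague(s)")
```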

Why peer nomination beats manager memory: When five people on a 12-person team independently name the same person as their go-to expert, that's not a function of what happened last month. That's 12 months of collaboration speaking. ONA data is longitudinal by nature — the network doesn't forget Q1.

A Concrete Example: What ONA Reveals

Take a 150-person software company. The engineering team has a senior engineer, call her Elena, who works on infrastructure. She's not in product meetings. She rarely presents to leadership. She doesn't write Slack posts about her wins.

But she's the person three other teams ping when infrastructure issues arise. She mentors two junior engineers. She solved a critical on-call problem in March that only the people directly involved remember.

In a traditional review, Elena's manager knows her work. But the manager has 10 direct reports, a quarterly planning cycle that dominated Q4, and imperfect recall of a March incident that's now seven months old. Elena gets a 3. She's solid, not standout.

With ONA data, the picture looks different. Elena appears in seven separate responses to "Who do you turn to for technical expertise?" She's named by people across three different teams. Her network footprint shows her as a connector between two groups that otherwise don't collaborate.

| Signal source | What it shows | Elena's result |
| --- | --- | --- |
| Manager memory | Recent quarters, direct observations | Rating: 3 (quiet Q4) |
| ONA peer data | Year-round collaboration, cross-team impact | Top-quartile network centrality |
| Calibration result | Both signals reviewed together | Rating revised to 4 |

ONA doesn't make the rating decision. It ensures the decision gets made with a fuller picture of what someone contributed across the whole year, not just the last six weeks.
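In network terms, the "connector between two groups" pattern is what betweenness centrality measures: it's highest for people who sit on the paths between teams that otherwise don't interact. A hypothetical sketch of Elena's footprint, with illustrative names and edges:

```python
# Hypothetical sketch: spotting a connector with betweenness centrality.
import networkx as nx

G = nx.Graph()
# Two teams that mostly collaborate internally...
G.add_edges_from([("ana", "ben"), ("ben", "carl"), ("ana", "carl")])
G.add_edges_from([("dina", "fay"), ("fay", "gus"), ("dina", "gus")])
# ...and Elena, named by members of both.
G.add_edges_from([("elena", "ben"), ("elena", "fay")])

centrality = nx.betweenness_centrality(G)
for person, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{person}: {score:.2f}")
# Elena scores highest: every path between the two teams runs through her.
```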

Why This Matters More for Remote and Hybrid Teams

Remote work made recency bias worse.

In an office, you passively absorb information about your team. You overhear conversations. You see who's helping whom. You notice the person who shows up in the war room when things go sideways.

Remote work eliminates passive absorption. You see what gets explicitly surfaced to you: meetings, messages, deliverables sent directly. Everything else stays invisible.

For remote workers doing important but low-visibility work (infrastructure maintenance, documentation, mentorship over direct messages), the visibility gap widens. They get rated on a fraction of their actual contribution. ONA captures what remote managers can't directly observe, because peer nomination doesn't require the manager to have seen the collaboration.

Three Actionable Takeaways for HR Leaders

1. Run the ONA survey before ratings are finalized, not after

Time it 3-4 weeks before manager ratings are due. Give managers the aggregated network results alongside their own assessments. The goal is for every rating decision to happen with both perspectives visible at the same time.

Network data doesn't override manager judgment. It gives that judgment more to work with.

2. Use network data to flag review gaps, not just confirm them

When someone has high network centrality (named frequently, connected across teams) but a low manager rating, that gap is worth a conversation. It might mean the employee's most visible work was a rough Q4 that masks a strong year. Or it could reveal a real performance issue. Either way, you want the conversation to happen with the data in the room.

Set a concrete threshold: if someone ranks in the top quartile for network mentions but their manager rating falls below the median, require written documentation of reasoning before ratings are locked.
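A minimal sketch of that rule in pandas follows; the column names and numbers are made up, so swap in your own survey export and ratings file:

```python
# Hypothetical sketch: flag high-centrality, low-rating gaps for review.
import pandas as pd

df = pd.DataFrame({
    "employee": ["elena", "marco", "priya", "sam"],
    "network_mentions": [7, 1, 4, 2],   # times named in the ONA survey
    "manager_rating": [3, 4, 4, 2],     # e.g. a 1-5 scale
})

top_quartile = df["network_mentions"].quantile(0.75)
median_rating = df["manager_rating"].median()

# High peer signal + below-median rating -> written reasoning required
# before ratings are locked.
df["needs_documentation"] = (
    (df["network_mentions"] >= top_quartile)
    & (df["manager_rating"] < median_rating)
)
print(df[df["needs_documentation"]])  # flags elena
```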

3. Commit to running ONA annually, not once

ONA data becomes more valuable year over year. Are the same people consistently showing up as hubs? Is someone's network centrality dropping sharply (an early flight risk signal)? Are certain teams becoming siloed from the rest of the org? These patterns only emerge over time. A one-time snapshot is interesting. Annual ONA data is a talent intelligence system.
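Once you have two cycles of data, the year-over-year comparison is simple to automate. A hypothetical sketch, with an assumed 50% drop threshold you would tune to your own distribution:

```python
# Hypothetical sketch: flag sharp year-over-year centrality drops.
import pandas as pd

scores = pd.DataFrame({
    "employee": ["elena", "marco", "priya"],
    "centrality_2025": [0.42, 0.18, 0.30],
    "centrality_2026": [0.40, 0.05, 0.33],
})

# Relative drop since the last cycle; the 0.5 cutoff is an assumption.
drop = 1 - scores["centrality_2026"] / scores["centrality_2025"]
print(scores[drop > 0.5])  # marco: 0.18 -> 0.05, an early flight-risk flag
```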

The Business Case

Annual performance reviews built on manager memory will always carry recency bias. That's not a fixable human flaw. It's how memory works under cognitive load.

The fix isn't trying harder to remember. It's adding a measurement system that doesn't forget.

ONA captures year-round collaboration data, surfaces invisible contributors, and gives HR leaders something concrete to push back on when ratings don't reflect actual impact. It doesn't replace manager judgment. It gives that judgment a better foundation.

Your best people aren't always the most visible ones. They deserve to be measured on more than the last six weeks.

Want to see how Confirm builds ONA into the performance review cycle? Schedule a demo and we'll walk you through what the network data reveals about your organization, and how it changes calibration conversations.

If you're looking for calibration software to standardize ratings across your organization, see how Confirm approaches it.

FAQ

What is recency bias in performance reviews?

Recency bias is the tendency to weigh recent events more heavily than older ones. In performance reviews, it means the final weeks of a review period influence ratings far more than the full year of work. It's a natural cognitive shortcut that distorts otherwise fair assessments.

How does ONA reduce bias in performance reviews?

ONA gathers peer nomination data that reflects collaboration across the whole year, giving managers a data set that isn't filtered through their own memory or proximity. It captures contributions that happened in January just as accurately as what happened last week, because it's based on colleague-reported patterns rather than manager recall.

Is ONA the same as a 360-degree review?

No. A 360 gathers narrative feedback about individual performance. ONA maps relationship and collaboration patterns across the organization. 360s are about "what is this person like as an employee?" ONA is about "who does this person help, connect with, and influence?" They complement each other but measure different things.

How long does an ONA survey take to complete?

Typically 8-10 minutes per employee. It asks a short set of network questions (who you turn to for advice, who helps you get things done) rather than comprehensive evaluation questions. Confirm integrates the survey into Slack to reduce friction further.

See Confirm in action

See why forward-thinking enterprises use Confirm to make fairer, faster talent decisions and build high-performing teams.
