
Why "Performance-Based" Layoffs Keep Hitting the Wrong People

Most companies don't actually know who their best people are. When you run a layoff with broken performance data, you get broken results—and Meta's 2026 cuts prove it.

March 6, 2026

When Meta announced layoffs in January 2026 as "performance-based," the message was clear: only underperformers would go. What actually happened was different. Reports emerged of employees who had received 120% of their performance bonuses—who had, by every official measure, done their jobs well—losing their positions anyway.

This wasn't a scandal. It was a symptom.

The real story is simpler and more uncomfortable: most companies don't actually know who their best people are. They have performance ratings. They have manager evaluations. They have KPI dashboards. What they don't have is a reliable signal for who is genuinely creating value—and who would take meaningful capability with them when they leave.

When you run a layoff with broken data, you get broken results.

The Three Ways Performance Data Fails

Manager ratings are the primary input for most performance decisions. They're also systematically unreliable in ways that compound at exactly the moments you need them most—during calibration sessions, layoff planning, and high-stakes promotion decisions.

The visibility problem. What managers observe is a small, filtered sample of what employees actually do. An engineer who quietly unblocks five colleagues a week, who gets pulled into architectural decisions because people trust their judgment, who holds critical institutional knowledge—none of that shows up in a 1:1. The person who excels at doing visible work in front of managers gets rated higher than the person who creates invisible leverage. Over time, your ratings drift away from performance and toward performance of performance.

The recency problem. Annual reviews compress twelve months of work into whatever the last sixty days looked like. A strong Q3 gets swamped by a rough Q4; one bad quarter before review season can override a year of strong output. When you're making layoff decisions in January, you're largely working from a December snapshot.

The political alignment problem. Managers rate favorably the people they like and the people they trust. Those two things often correlate with performance—but not always. Someone who openly disagrees with their manager, or who is managed by someone who doesn't understand their technical domain, or who works in a subunit that lost influence in a recent reorg—that person's rating carries noise that has nothing to do with their output.

None of this is intentional. It's structural. The data collection method produces unreliable results, and no amount of training or calibration fully corrects for it.

What the Data Misses Entirely

Even if manager ratings were perfectly accurate within their domain, they'd still miss something critical: how an employee functions inside the organization's network.

Knowledge work doesn't happen in silos. Value gets created at connection points—when someone bridges information across teams, when a developer is the person five other people come to before making a technical decision, when a manager's informal coaching keeps three people from making expensive mistakes. The person who plays this role is often not the most visible person on a team. They're frequently not the highest-rated.

When that person leaves—through layoff or attrition—the effect is invisible for weeks. Then, gradually, decisions slow down. Bugs that used to get caught early make it to production. Cross-functional projects stall. The institutional knowledge was distributed through the organization's informal network, and now that node is gone.

You can't see this in an annual review. You can only see it after the fact, when the capability has already left.

Companies have known this problem exists for decades. The tool to address it—organizational network analysis, which maps how information and collaboration actually move through an organization—has been around since the 1990s. The challenge has always been cost and operationalization. Running a manual ONA survey across a 500-person company is a multi-month project. The data goes stale immediately. And most HR teams don't have the analytical capacity to act on what they find.

That's changing. Collaboration data that already exists—meeting patterns, communication flows, who connects with whom across team boundaries—can now surface the informal network automatically. The people who are genuine connectors, genuine knowledge brokers, genuine force multipliers for the people around them become visible without anyone having to run a survey.
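To make the idea concrete, here is a minimal sketch of how broker-style signal can fall out of collaboration metadata. Everything here is illustrative—the names, teams, and the idea of counting cross-team contact as a crude stand-in for the centrality measures a full ONA pass would compute; real pipelines would draw on calendar or messaging metadata at much larger scale.

```python
from collections import defaultdict

# Illustrative interaction records: (employee_a, team_a, employee_b, team_b).
# In practice these would come from meeting or messaging metadata.
interactions = [
    ("ana", "platform", "ben", "payments"),
    ("ana", "platform", "chris", "mobile"),
    ("ana", "platform", "dana", "payments"),
    ("ana", "platform", "fay", "data"),
    ("ben", "payments", "chris", "mobile"),
    ("eli", "mobile", "chris", "mobile"),
]

def broker_scores(records):
    """Count how many distinct *other* teams each person interacts with.

    People who bridge many team boundaries are candidate knowledge
    brokers -- a rough proxy for the network centrality a full
    organizational network analysis would measure.
    """
    teams_touched = defaultdict(set)
    for a, team_a, b, team_b in records:
        if team_a != team_b:          # only cross-boundary contact counts
            teams_touched[a].add(team_b)
            teams_touched[b].add(team_a)
    return {person: len(teams) for person, teams in teams_touched.items()}

scores = broker_scores(interactions)
# Rank people by how many other teams they connect to.
for person, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(person, score)
```

Even this toy version surfaces the pattern the prose describes: the person who quietly connects several teams tops the list, while someone whose contact stays inside their own team doesn't register at all.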

What This Looks Like in Practice

Here's the gap this creates in how most CEOs approach talent decisions:

In a typical performance-based layoff, a company pulls its bottom 10% by rating, adds managers' informal assessments of who seems engaged, and tries to protect certain critical roles. Senior leaders then run through the list in a calibration session, advocating for their teams. The people without advocates, or with lower visibility, or who had a rough recent quarter—those are the names that end up on the list.

What's missing is any signal about organizational impact. Who does this person's work flow through? Who comes to them when they're stuck? What would break—or slow, or degrade—if this person were gone?

Companies that have run ONA alongside traditional performance data consistently find meaningful divergence. Some low-rated employees sit at critical network nodes; removing them would create organizational bottlenecks that don't show up in any spreadsheet. Some high-rated employees are largely isolated—their individual output is solid, but they're not creating leverage or amplifying anyone else.
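The divergence described above is essentially a quadrant analysis. The sketch below is hypothetical—ratings, network scores, and thresholds are invented for illustration—but it shows the two groups a side-by-side comparison would flag.

```python
# Illustrative records: (employee, manager_rating_1to5, network_score),
# where network_score is any centrality-style measure from an ONA pass.
employees = [
    ("ana",   2, 9),   # low-rated, highly connected
    ("ben",   5, 1),   # high-rated, isolated
    ("chris", 4, 8),   # strong on both axes
    ("dana",  2, 2),   # weak on both axes
]

RATING_CUT, NETWORK_CUT = 3, 5  # illustrative thresholds

# Low-rated but central: removing them creates bottlenecks no
# spreadsheet predicts. High-rated but isolated: solid individual
# output, little organizational leverage.
hidden_connectors = [e for e, r, n in employees if r < RATING_CUT and n >= NETWORK_CUT]
isolated_stars    = [e for e, r, n in employees if r >= RATING_CUT and n < NETWORK_CUT]

print("low-rated but central:", hidden_connectors)
print("high-rated but isolated:", isolated_stars)
```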

Both types of employee exist in your organization right now. Your current performance data probably can't tell you which is which.

What CEOs Should Actually Do

This isn't an argument against performance accountability. The goal is accurate performance accountability—which requires better data.

A few things that make a real difference:

Make network visibility part of how you evaluate senior leaders. Ask not just "how is this person performing?" but "what is this person's organizational multiplier?" A VP of Engineering who has built a team where knowledge is broadly distributed is more durable than one who has become the single point of failure for architectural decisions. This distinction almost never shows up in reviews.

Before any significant workforce reduction, map your informal network. Understand which employees are connectors. Understand which roles are critical nodes that aren't labeled as such in the org chart. Make that data available to the people making cut decisions, alongside ratings. You don't have to let it override everything—but you should at least see it.

Calibrate your performance system against outcomes, not against itself. If your ratings are accurate, then employees who leave should have lower average ratings than employees who stay. You should be able to look back and find that high-rated employees consistently outperformed low-rated ones. Most companies have never checked this. Those that do often find the correlation is weaker than expected.
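The check itself is simple enough to sketch in a few lines. The records below are invented for illustration; the point is only the shape of the comparison: if ratings carry signal, the two averages should be meaningfully apart.

```python
from statistics import mean

# Illustrative history: (rating_1_to_5, left_within_year).
history = [
    (5, False), (4, False), (4, True), (3, False),
    (3, True), (2, True), (2, False), (1, True),
]

stayer_avg = mean(r for r, left in history if not left)
leaver_avg = mean(r for r, left in history if left)

print(f"avg rating, stayers: {stayer_avg:.2f}")
print(f"avg rating, leavers: {leaver_avg:.2f}")
# A small or inverted gap means the rating system is not
# distinguishing the people you keep from the people you lose.
```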

Treat institutional knowledge as a structural risk, not a people problem. When a senior employee retires, companies plan for transition. They document. They arrange handoffs. That same care rarely extends to mid-level employees who hold informal organizational knowledge. Mapping that knowledge—identifying it before it walks out the door—is a basic risk management practice that most organizations skip.
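One simple form this mapping can take is a bus-factor check: flag every system or domain that only one person understands. The data below is hypothetical—in practice it might come from code ownership, ticket history, or a lightweight survey—but the risk query is the same.

```python
from collections import defaultdict

# Illustrative map of who holds working knowledge of each area.
knowledge = [
    ("billing-pipeline", "ana"),
    ("billing-pipeline", "ben"),
    ("legacy-auth",      "chris"),   # single holder
    ("vendor-contracts", "dana"),    # single holder
    ("mobile-release",   "ana"),
    ("mobile-release",   "eli"),
]

holders = defaultdict(set)
for area, person in knowledge:
    holders[area].add(person)

# Areas with exactly one knowledge holder are structural risks:
# that capability leaves the building if one person does.
at_risk = sorted(area for area, people in holders.items() if len(people) == 1)
print("bus-factor-1 areas:", at_risk)
```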

The Real Cost of Getting This Wrong

The financial argument is straightforward: a misidentified layoff costs you twice. Once to exit the employee. Again when you discover what they took with them and have to rehire or rebuild. Research from the 1990s found that after significant layoffs at a Fortune 500 tech company, remaining high performers became measurably less creative and filed fewer patents. The visible exit masked the invisible capability loss that spread through the organization.

The strategic argument is sharper. In a period where CEOs are making increasingly consequential talent decisions—who to retain, who to develop, where AI agents replace work versus where human judgment becomes more valuable—the quality of your performance data is a competitive variable. Companies that know who their real performers are can make faster, more confident bets on people. Companies that don't will keep discovering their mistakes after the fact.

The first step is just acknowledging the problem. Most performance management systems were built to create documentation and manage compliance, not to give executives reliable signal about who is creating value. They've been repurposed for decisions they were never designed to support.

You can keep using broken data and refining how you interpret it. Or you can fix the data source.

The companies getting workforce decisions right in 2026 are doing the second thing.


Confirm helps companies identify who's actually creating value through AI-powered organizational network analysis. If you're making significant talent decisions this year, see how it works.
