Your company probably has plenty of data about employee performance. Calibration scores, rating distributions, goal completion rates, manager assessments. What most HR teams don't have is a clean process for connecting that data to pay decisions.
The result is comp season that feels more art than science. Managers advocate loudly for whoever's top of mind. Calibration results from six months ago don't make it into merit increase spreadsheets. Pay equity gaps accumulate quietly until someone leaves or complains.
This guide is for the VP of HR or comp leader who wants to change that. It's about building a closed loop between your performance management system and your compensation philosophy: make a pay decision, and be able to explain it.
Why performance data and comp decisions stay siloed
Most companies run calibrations. They spend hours getting managers into a room, debating ratings, reaching consensus. Then those calibration outputs sit in a spreadsheet (or worse, an HR system that nobody pulls data from), while compensation planning happens separately in a budget worksheet.
The disconnect isn't accidental. It's structural:
- Performance systems track ratings, goals, competencies
- Comp planning systems track current salary, bands, budget
- These systems rarely talk to each other
So when a manager opens their merit increase template in April, they're working from memory about who performed well in Q3 and Q4. The formal calibration results aren't in front of them. Neither is context about where each person sits within their salary band.
That's how you end up with a 4% merit increase for someone in the top quartile of performance who's already at the top of their band, and a 3.8% increase for someone who underperformed but has a manager who negotiates well.
Start with calibration data as your source of truth
The cleanest foundation for data-driven comp is a completed calibration process where outputs are actually recorded and accessible.
That means moving beyond "we talked about everyone" to having a structured record of:
- Final calibration rating for each employee
- Where they fell in the performance distribution
- Manager-submitted evidence or notes that supported the rating
- Any flight risk flags or promotion readiness signals
If you've run good calibrations, you already have this. The question is whether you've made it usable downstream.
The practical step: Before comp planning begins, pull a single table with every employee, their calibration outcome, their current salary, their band midpoint, and their compa-ratio (current salary ÷ band midpoint). This one view immediately shows you where calibration ratings and compensation levels are misaligned.
You'll see people rated high who are underpaid relative to their band. You'll see people rated average who are near the top of their range. Both matter.
| Calibration outcome | Position in band | Compensation signal |
|---|---|---|
| Top performer | Below midpoint | Retention risk: prioritize for larger merit or equity refresh |
| Top performer | At or above midpoint | Recognize with merit; consider spot bonus or promotion |
| Meets expectations | Below midpoint | Standard merit; may need band review |
| Meets expectations | Above midpoint | Smaller merit; red circle flag for next cycle |
| Below expectations | Any position | Hold or minimal merit; no equity refresh |
This table shouldn't be novel. But most HR teams are rebuilding it from scratch every cycle because the data lives in different places.
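As a concrete sketch of that single view, here is one way the join and the signal logic from the table above could look. All names, salaries, and midpoints are made-up illustration data, not figures from any real system.

```python
# Hypothetical illustration: build the single comp-planning view from a
# calibration export and a payroll export, then classify each employee
# using the signals from the table above. All data is invented.

calibration = {  # employee -> final calibration outcome
    "avery": "top", "blake": "meets", "casey": "meets", "devon": "below",
}
payroll = {  # employee -> (current salary, band midpoint)
    "avery": (95_000, 110_000),
    "blake": (88_000, 95_000),
    "casey": (104_000, 95_000),
    "devon": (70_000, 80_000),
}

def comp_signal(rating, compa_ratio):
    """Map a (calibration outcome, position in band) pair to a signal."""
    if rating == "below":
        return "hold or minimal merit"
    if rating == "top":
        return "retention risk" if compa_ratio < 1.0 else "merit + promo review"
    return "standard merit" if compa_ratio < 1.0 else "red circle flag"

view = []
for name, rating in calibration.items():
    salary, midpoint = payroll[name]
    ratio = round(salary / midpoint, 2)  # compa-ratio = salary / midpoint
    view.append((name, rating, ratio, comp_signal(rating, ratio)))

for row in view:
    print(row)
```

The point is not the code itself but that this join is cheap once both exports exist, which is why rebuilding it from scratch every cycle is wasted effort.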
How to actually reduce pay inequity with data
Pay equity analysis sounds like a compliance exercise. It's really a data quality problem.
If you can't answer "do people in similar roles, at similar performance levels, with similar tenure get paid similarly?" then you have a data problem, not just a fairness problem.
The fix starts with defining what "similar" means:
- Role and level: Are you comparing people in the same band, or are levels inconsistent across teams?
- Performance: Are calibration ratings comparable across departments, or does one manager rate everyone "exceeds expectations"?
- Tenure in role: Two people at the same level with three years' tenure difference will naturally land differently in band
Once you have clean definitions, you can run a basic regression: given role, level, performance, and tenure, is pay explained by those factors, or do demographic variables show up as explanatory?
The goal isn't statistical perfection. It's finding the unexplained gaps and creating a plan to close them before they become retention losses or legal exposure.
A useful benchmark: if your highest-performing employees in a role have a compa-ratio below 0.90, you have a structural underpayment problem that merit budgets alone won't solve. Equity adjustments need to be a separate line item.
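A full regression is the right tool, but even a simplified cohort check surfaces the structural cases. The sketch below is a stand-in for the analysis described above, not a statistical method recommendation: it groups people into (role, level, tier) cohorts and flags top performers sitting below the 0.90 compa-ratio benchmark. The data and thresholds are illustrative assumptions.

```python
# Simplified stand-in for the regression described above: flag any top
# performer whose compa-ratio is below 0.90, the benchmark for structural
# underpayment that merit budgets alone won't fix. Data is invented.

employees = [
    # (role, level, calibration tier, salary, band midpoint)
    ("engineer", "L4", "top",    98_000, 115_000),
    ("engineer", "L4", "top",   101_000, 115_000),
    ("engineer", "L4", "meets", 112_000, 115_000),
    ("designer", "L3", "meets",  86_000,  90_000),
]

flags = []
for role, level, tier, salary, mid in employees:
    ratio = salary / mid
    if tier == "top" and ratio < 0.90:
        # Candidate for the equity-adjustment line item, not just merit
        flags.append((role, level, round(ratio, 2)))

print(flags)
```

Anything this check catches belongs in the separate equity adjustment budget; routing it through merit just hides the gap for another cycle.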
Building a transparent compensation framework
"Transparent comp" has become a loaded term. Some companies publish salary bands publicly. Others share ranges only at offer time. The specifics matter less than having a framework that employees can understand and that managers can actually explain.
Here's what a workable transparency framework looks like in practice:
Clear band structure with defined entry, midpoint, and ceiling. Employees should know where they sit in band. Not everyone needs to know exactly what the band is, but they should understand whether they're at 80%, 100%, or 120% of midpoint, and what that means for their merit opportunity.
Explicit link between performance and merit. Your comp guidelines should say something like: employees calibrated in the top 20% are eligible for merit increases of X-Y%, employees calibrated at standard are eligible for A-B%. Not "we take performance into consideration."
A defined process for equity adjustments. Merit increases aren't the right vehicle for fixing structural underpayment. Have a separate equity adjustment budget with clear criteria for eligibility. That keeps merit decisions clean and prevents the "I'll just give them more merit" workaround.
Manager enablement, not just policy. The most transparent comp framework fails if managers can't explain it. That means giving managers access to their team's compa-ratios, their merit budget, and the calibration context before comp season opens. Not after.
The calibration-to-compensation workflow
Here's what the actual workflow looks like when performance data and comp planning are connected:
Step 1: Close calibrations with recorded outputs. Every employee has a calibration rating, documented evidence, and any relevant context (promotion readiness, flight risk, below-bar documentation) stored in a system that HR can query.
Step 2: Generate the comp planning view. Before managers open their merit worksheets, HR generates a pre-populated view with: name, role, level, calibration rating, current salary, compa-ratio, and merit eligibility range based on calibration outcome.
Step 3: Managers allocate within their budget. They have context and constraints. They can see where their top performers sit in band. They can't give 7% to everyone, but they can make defensible choices.
Step 4: HR audits for equity. Before anything is finalized, run an equity check. Are merit increases correlating with calibration outcomes? Are any demographic patterns visible in the distribution?
Step 5: Approvals with context. When a manager proposes a larger merit increase or an equity adjustment, approvals happen with the full data picture, not just the dollar amount.
Step 6: Communicate with employees. When employees get their comp decisions, they get context. "You were in the top calibration tier, your merit increase reflects that. You're currently at X% of your band midpoint."
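One way to make the Step 4 audit concrete: before anything is finalized, check that average proposed merit actually rises with calibration tier. The sketch below uses invented tiers and percentages; the shape of the check, not the numbers, is the point.

```python
# Sketch of the Step 4 audit: verify that average proposed merit %
# increases monotonically with calibration tier. A violation means budget
# is leaking toward the wrong ratings. All figures are illustrative.

proposed = {  # employee -> (calibration tier, proposed merit %)
    "a": ("top", 6.0), "b": ("top", 5.5),
    "c": ("meets", 3.0), "d": ("meets", 3.5),
    "e": ("below", 0.0),
}

order = ["below", "meets", "top"]
totals = {t: 0.0 for t in order}
counts = {t: 0 for t in order}
for tier, merit in proposed.values():
    totals[tier] += merit
    counts[tier] += 1
avg = {t: totals[t] / counts[t] for t in order}

# Each tier's average should be >= the tier below it
monotonic = all(avg[a] <= avg[b] for a, b in zip(order, order[1:]))
print(avg, monotonic)
```

Demographic pattern checks layer on top of this, but the monotonicity check alone catches the "loud manager" failure mode where an underperformer out-earns a calibrated top performer.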
The common failure modes to avoid
Running calibrations too far from comp season. If there are five months between your November calibration and your April merit cycle, a lot changes. Recency bias fills the gap. Either run calibrations closer to comp, or build a process to update them.
Manager rating inflation. If every manager rates 70% of their team as "exceeds expectations," calibration data is worthless for comp differentiation. Enforce distribution guidance and hold calibration sessions where ratings are defended.
Treating equity adjustments as a favor. If pay equity gaps exist, fixing them isn't discretionary. Building a formal equity adjustment process with budget and criteria removes the politics.
Comp decisions without merit budgets by band. If your overall merit budget is 3.5%, but you're trying to give 6% to top performers, that money has to come from somewhere. Know your distribution before comp planning starts.
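The budget arithmetic above can be checked in a few lines: weight each tier's target merit by its headcount and payroll, and see whether the effective rate fits the overall budget. Headcounts, salaries, and targets below are illustrative assumptions.

```python
# The arithmetic behind "that money has to come from somewhere": check
# whether tiered merit targets fit the overall budget, given headcount
# and average salary per tier. All figures are invented for illustration.

budget_pct = 3.5
tiers = [  # (tier, headcount, avg salary, target merit %)
    ("top",   20, 120_000, 6.0),
    ("meets", 70, 100_000, 3.0),
    ("below", 10,  90_000, 0.0),
]

payroll = sum(n * s for _, n, s, _ in tiers)
spend = sum(n * s * pct / 100 for _, n, s, pct in tiers)
effective_pct = round(100 * spend / payroll, 2)

print(effective_pct, effective_pct <= budget_pct)
```

In this invented distribution, 6% for the top tier only fits inside a 3.5% pool because the bottom tier gets zero; shift the headcount mix and it stops fitting, which is exactly why the distribution has to be known before planning starts.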
What this looks like when it works
When the performance data and comp data actually talk to each other, a few things happen.
Manager conversations get easier. Instead of "I'm advocating for more merit for this person because they're really good," it's "they're in the top calibration tier, they're at 88% of band midpoint, the guidelines put them at 5-7%, I'm proposing 6%." That's a different kind of conversation.
Pay equity gaps shrink. Not because anyone mandated fairness, but because the data makes disparities visible before they compound. It's harder to underinvest in a high performer when their compa-ratio is sitting in front of the decision-maker.
Retention improves for the right people. When top performers are recognized with comp decisions that reflect their calibration, they have fewer reasons to look around.
And HR has an answer when a comp decision gets questioned. Not "we take many factors into account," but "here's where they calibrated, here's where they sit in band, here's the process we followed."
FAQ
What data do I need to start connecting performance and comp?
At minimum: calibration ratings, current salaries, salary band midpoints, and employee level/role. You don't need a sophisticated system. A well-maintained spreadsheet that pulls from both processes is enough to start. The goal is having both data sets in one view before merit planning begins.
How do I handle performance data that's outdated?
If your most recent calibration is more than four months old, run a lighter-weight check-in before comp season: manager-submitted updates on any significant performance changes (either direction). This keeps the calibration data current without a full re-calibration.
What if our salary bands haven't been updated in years?
Outdated bands make compa-ratio analysis unreliable. Before connecting performance to comp, audit whether your bands still reflect market rates. A single market data purchase (Radford, Mercer, Levels.fyi for tech roles) can give you the benchmark you need to reset bands.
How do we handle top performers who are already at the top of their band?
This is the retention risk scenario. Options include: promote them to the next level if scope justifies it, provide a spot bonus for the cycle, increase the band ceiling if market data supports it, or be direct about the constraint and invest in non-cash recognition. What doesn't work: small merit increases that signal you're not paying attention.
Want to see how Confirm connects calibration data directly to compensation workflows? Book a demo to see how the platform ties performance decisions to pay decisions in one place.
