Confirm vs. Lattice · Talent Calibration

Lattice calibration cleans up ratings. Confirm calibration builds on evidence.

Lattice gives HR teams a table view and a 9-box to align ratings after managers submit them. Confirm gives calibrators ONA evidence, AI profiles, and real-time bias detection before the first discussion starts. Same outcome on paper. Very different results.

  • 1 session to complete calibration (vs. 2-4 rounds)
  • 40% reduction in rating bias with ONA evidence
  • 50% less time spent in calibration sessions

The short version

Lattice includes a talent calibration workflow. It works. Admins can run 9-box sessions, manage calibration groups, and adjust ratings in table view. For companies under 200 employees that need process and compliance, it's fine. The problem shows up when calibration involves cross-functional work, distributed teams, or contested ratings where managers disagree. Lattice has no objective evidence to resolve those moments. Confirm does: ONA data shows who actually drove impact, real-time bias detection flags advocacy and recency bias, and AI profiles give calibrators a prepared view of every employee before the meeting starts. The same calibration session that takes Lattice teams 2-3 weeks takes Confirm teams one afternoon.

Why talent calibration fails — and why Lattice doesn't fix it

Every calibration session starts with the same problem: managers walk in with different standards, different biases, and very little objective evidence. They rely on what they remember, who they're closest to, and how good they are at advocating in a room. The result is predictable: the loudest manager's team gets the best ratings. The most remote employees, the most cross-functional contributors, the people who changed teams mid-year — they get the worst deal.

Lattice's calibration tools are built around process management, not evidence management. The platform tracks who's been calibrated, lets HR adjust outlier ratings, and surfaces distribution views so teams can see where they're stacked. That's useful. But it doesn't change what drives the decisions inside the session.

"Calibration can also be a place where bias is more prominent because people are making decisions together and can fall back on their personal prejudices." — Lattice's own content on running calibration sessions

They're right. And they don't fix it. Lattice publishes guidance on how to run fairer sessions. Confirm detects bias in real time during the session and counters it with data.

The deeper problem: Lattice calibration is a post-review adjustment layer. Managers submit ratings. HR adjusts outliers. The baseline is still what managers wrote and remembered. For companies with 200+ employees running multiple calibration groups across departments, that baseline is full of visibility gaps — work that happened outside a manager's direct view, contributions that never made it into any review, impact that's genuinely invisible without network data.

Lattice talent calibration: what it does well, what it doesn't

Lattice's talent calibration is a real feature, not vaporware. Here's an honest breakdown of where it delivers and where it falls short.

9-Box and table view calibration

Lattice includes both a 9-box performance/potential grid and a table view for calibration sessions. Admins can position employees across the matrix, drag to adjust, and see the full distribution. The redesigned calibration UI launched in 2024 made this cleaner and more usable.

✓ Works well. Solid interface for standard calibration workflows.

Calibration group management

Admins can create calibration groups, assign facilitator roles, and control who can view and adjust ratings for each group. Useful for running parallel calibration sessions across departments without cross-contamination of data.

✓ Works well for process management and role-based access.

Talent review integration

Lattice's calibration step is built into the talent review workflow by default. HR can see performance ratings alongside potential assessments in the same session. The 2024 talent review update added top-down assessments for performance, potential, risk, and impact.

✓ Good integration with the review cycle. One workflow for review and calibration.

Evidence for calibration discussions

Lattice pulls review text, engagement data, and goal completion into calibration views. What it doesn't have: behavioral data on how employees actually work. Cross-functional impact, collaboration patterns, and network contributions are invisible to Lattice unless a manager explicitly wrote them into a review.

✗ Critical gap. Contested ratings have no objective evidence to resolve them.

Bias detection

Lattice does not detect bias during calibration sessions. Their platform acknowledges the risk ("calibration is where bias is more prominent") and publishes guidance on mitigation. But there's no automated detection of recency bias, affinity bias, or advocacy bias happening in real time.

✗ No in-session bias detection. Outcome depends on facilitator skill.

Calibration session preparation

Managers prepare their own calibration materials in Lattice. The platform doesn't auto-generate evidence profiles or summarize an employee's contribution before the session. Calibrators arrive with whatever their managers assembled — or didn't.

✗ No automated pre-session profiles. Prep quality varies by manager.

Confirm talent calibration: purpose-built for the hard part

Confirm was designed specifically for the problem Lattice doesn't solve: making calibration decisions on objective evidence instead of manager advocacy.

ONA-powered evidence profiles

Before any calibration session starts, Confirm generates an evidence profile for every employee. It draws from Organizational Network Analysis (ONA) data — who they collaborated with, what cross-functional impact they drove, where they were a critical node in the organization — and combines it with review history and AI-generated summaries. Managers arrive informed. Discussions run on facts.
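
To make that concrete, here's a minimal sketch of one network signal this kind of analysis can surface: betweenness centrality over a collaboration graph. The edge data, the weighting, and the networkx-based approach are illustrative assumptions, not Confirm's actual pipeline.

```python
# Minimal sketch of one ONA-style signal: betweenness centrality over a
# collaboration graph. Edge data and weighting are illustrative
# assumptions, not Confirm's actual pipeline.
import networkx as nx

# Hypothetical collaboration events: (employee_a, employee_b, interactions)
collab_events = [
    ("ana", "raj", 42), ("ana", "mei", 31), ("raj", "mei", 8),
    ("mei", "tom", 27), ("tom", "ana", 5), ("raj", "lee", 19),
]

g = nx.Graph()
for a, b, count in collab_events:
    # Invert interaction counts so frequent collaborators are "closer".
    g.add_edge(a, b, weight=1.0 / count)

# People who sit on many shortest paths act as bridges between groups,
# one proxy for cross-functional impact a direct manager may never see.
centrality = nx.betweenness_centrality(g, weight="weight")
for person, score in sorted(centrality.items(), key=lambda kv: -kv[1]):
    print(f"{person}: {score:.3f}")
```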

Real-time bias detection

Confirm monitors calibration sessions for the three bias patterns that corrupt ratings most: recency bias (weighting recent events too heavily), affinity bias (rating employees similar to yourself higher), and advocacy bias (ratings driven by which manager argues loudest). Each flag includes the ONA data to reset the discussion on objective ground.
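
As a toy illustration of what one such flag could check, here's a recency-bias heuristic: does a proposed rating track only the most recent quarter while diverging from the full year? The data shape, threshold, and scoring below are assumptions made for the sketch, not Confirm's detection logic.

```python
# Toy recency-bias check; thresholds and data shape are illustrative
# assumptions, not Confirm's detection logic.
from statistics import mean

def recency_flag(quarterly_scores: list[float], proposed_rating: float,
                 tolerance: float = 0.5) -> bool:
    """Flag when a proposed rating matches the latest quarter's evidence
    but diverges from the full-year average."""
    full_year = mean(quarterly_scores)
    latest = quarterly_scores[-1]
    tracks_latest = abs(proposed_rating - latest) <= tolerance
    ignores_year = abs(proposed_rating - full_year) > tolerance
    return tracks_latest and ignores_year

# A strong year with one weak quarter should not justify a 2 on a 1-5 scale.
print(recency_flag([4.5, 4.2, 4.4, 2.0], proposed_rating=2.0))  # True -> flag
```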

Cross-functional impact visibility

Traditional calibration misses work that crosses team boundaries. The engineer who mentored five people on other teams. The PM who unblocked three departments. ONA surfaces these contributions as objective data, so employees who drive cross-functional impact get credit for it even when their direct manager doesn't know about it.

Demographic disparity detection

After calibration closes, Confirm analyzes rating distributions for unexpected demographic patterns — gender, ethnicity, tenure, location. Disparities get flagged before decisions are finalized, giving HR the data to investigate before ratings become comp and promotion decisions.
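
One standard statistical approach to this kind of check is a chi-square test of independence between a demographic attribute and the rating distribution. The counts and significance threshold below are illustrative; this is a sketch of the general technique, not Confirm's method.

```python
# Sketch: chi-square test of independence between demographic group and
# rating bucket. Counts and threshold are illustrative.
from scipy.stats import chi2_contingency

# Rows: demographic groups; columns: rating buckets (low, meets, exceeds).
contingency = [
    [12, 40, 18],  # group A
    [25, 38,  7],  # group B
]

chi2, p_value, dof, expected = chi2_contingency(contingency)
if p_value < 0.05:  # illustrative threshold
    print(f"Flag for review: p = {p_value:.4f}")
else:
    print(f"No significant disparity: p = {p_value:.4f}")
```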

Single-session calibration

A typical Lattice calibration spans 2-3 weeks: pre-work, multiple sessions, post-session debates, revisions. Confirm compresses this into a single 2-4 hour session. Evidence is pre-built. Bias flags resolve debates faster. Decisions are documented with rationale in real time.

Audit trail for every decision

Every calibration decision in Confirm is documented with rationale and evidence, creating a legally defensible record. When employees ask why they received a particular rating, HR has a traceable answer based on objective criteria — not a reconstructed memory from a meeting.
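
The shape of such a record might look like the sketch below. The field names and structure are hypothetical, not Confirm's actual schema.

```python
# Hypothetical shape of a calibration audit record; field names are
# assumptions, not Confirm's schema.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class CalibrationDecision:
    employee_id: str
    prior_rating: str
    final_rating: str
    rationale: str                  # captured during the session
    evidence_refs: tuple[str, ...]  # pointers to ONA and review evidence
    decided_by: str
    decided_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

record = CalibrationDecision(
    employee_id="E-1042",
    prior_rating="Meets",
    final_rating="Exceeds",
    rationale="Cross-team mentorship confirmed by network data.",
    evidence_refs=("ona:bridge-score", "review:2024-H2"),
    decided_by="facilitator-7",
)
print(record)
```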

Feature-by-feature: Lattice vs. Confirm for talent calibration

A direct comparison on the capabilities that determine calibration quality at scale.

Capability | Confirm | Lattice

Calibration workflow
9-box talent matrix: performance × potential placement | With ONA data overlay | Standard 9-box view
Calibration group management: multi-team, role-based access | ✓ | ✓
Single-session completion: finish calibration in one meeting | Typical: 2-4 hours | Usually 2-3 week cycle
Rating adjustment tracking: log who changed what and when | With rationale | Change log only
Export and reporting: post-calibration data export | ✓ | ✓

Evidence and data quality
Organizational Network Analysis (ONA): behavioral data on collaboration and impact | Core differentiator | Not available
AI evidence profiles per employee: pre-generated before sessions start | GPT-4 powered | Not available
Cross-functional impact visibility: work outside direct manager's view | Via ONA network data | Only what managers wrote
Performance and potential data: review history, goal completion, ratings | ✓ | ✓

Bias detection and fairness
Real-time bias detection: during the calibration session | Recency, affinity, advocacy | Not available
Demographic disparity analysis: post-calibration distribution review | Gender, ethnicity, tenure, location | Basic distribution views
Audit trail with rationale: legally defensible record of each decision | Rationale captured per decision | Change log only, no rationale

Scale and mid-market fit
Designed for 200–2,000 employees: mid-market sweet spot | Core focus | Better for 500+ with IT support
Implementation time: contract to first calibration cycle | ~5 weeks | 6-12 weeks typical
Pricing: all-in cost per user per month | $8/user, all features included | $11+ base; modules add $4-6 each

Who should switch — and who shouldn't

Honest answer: Lattice calibration is adequate for a certain company profile. Here's how to tell which profile describes you.

Switch to Confirm if...

  • Your calibration sessions run multiple rounds because managers can't agree on contested ratings
  • You have cross-functional teams where direct managers have limited visibility into real contributions
  • You've seen the same employees win calibration debates year after year — and suspect it's advocacy, not performance
  • Remote or distributed employees consistently rate below co-located peers despite similar output
  • You want HR to be able to defend every calibration decision with documented evidence
  • You're paying $25+/user for Lattice add-ons and want all-in pricing for comparable features

Lattice may be fine if...

  • You're under 150 employees with mostly co-located teams where manager visibility is high
  • Calibration sessions rarely produce contested ratings — managers align quickly on standards
  • You already use Lattice's full platform (engagement, goals, compensation) and the switching cost outweighs the calibration gap
  • You have enterprise IT and HRIT staff to manage a complex deployment
  • Your primary calibration need is distribution tracking and compliance documentation, not session quality

How switching from Lattice calibration works

Moving from Lattice to Confirm doesn't require ripping out your whole stack. Most teams run their first Confirm calibration cycle while still on Lattice, then decide.

1. Export your Lattice data

Confirm imports historical review data, ratings, employee records, and competency frameworks from Lattice. Your performance history transfers with full fidelity. No rebuilding from scratch.
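
If you want to sanity-check an export before handing it off, a normalization pass might look like this. The column names are assumptions about what a Lattice CSV export contains, not a documented schema; Confirm's own import handles this step for you.

```python
# Sketch: normalize a hypothetical Lattice ratings export. Column names
# are assumptions, not a documented Lattice schema.
import pandas as pd

raw = pd.read_csv("lattice_review_export.csv")  # hypothetical file
normalized = raw.rename(columns={
    "Employee Email": "employee_email",
    "Review Cycle": "cycle",
    "Overall Rating": "rating",
    "Manager Email": "manager_email",
})[["employee_email", "cycle", "rating", "manager_email"]]
normalized.to_csv("confirm_import.csv", index=False)
```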

2. Connect your HRIS and tools

Confirm integrates with Workday, BambooHR, Rippling, and most major HRIS platforms. ONA data pulls from Slack, Teams, GitHub, Jira, and Salesforce — the tools your team already uses.
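
For a sense of what pulling collaboration signal from a developer tool can look like, here's a minimal sketch that derives (author, reviewer) edges from GitHub pull-request reviews using GitHub's public REST API. It illustrates the raw material for an ONA graph; it is not Confirm's connector.

```python
# Sketch: derive (author, reviewer) collaboration edges from GitHub
# pull-request reviews via the public REST API. Illustrative only.
import os
from collections import Counter
import requests

OWNER, REPO = "your-org", "your-repo"  # placeholders
headers = {"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}"}
base = f"https://api.github.com/repos/{OWNER}/{REPO}"

edges = Counter()
prs = requests.get(f"{base}/pulls", params={"state": "closed", "per_page": 30},
                   headers=headers, timeout=30).json()
for pr in prs:
    author = pr["user"]["login"]
    reviews = requests.get(f"{base}/pulls/{pr['number']}/reviews",
                           headers=headers, timeout=30).json()
    for review in reviews:
        reviewer = review["user"]["login"]
        if reviewer != author:
            edges[(author, reviewer)] += 1

# Each weighted edge feeds a collaboration graph like the earlier sketch.
for (a, b), n in edges.most_common(10):
    print(f"{a} <- reviewed by {b}: {n}")
```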

3. Run a pilot calibration cycle

Most customers evaluate Confirm by running one calibration cycle with one team or department while staying on Lattice. See the evidence profiles, the bias flags, the session speed. Compare the outcomes side-by-side.

4. Migrate when ready

Confirm's implementation team handles the full migration. Average time from signed contract to first live calibration cycle: 5 weeks. No IT overhead required from your side.

Common questions

Does Lattice support talent calibration?

Yes. Lattice includes calibration as a step in its talent review workflow — 9-box, table view, calibration groups, facilitator roles. What it lacks is behavioral evidence: no ONA data, no real-time bias detection, no pre-built profiles. Lattice surfaces the ratings. Confirm surfaces what drove them.

What's the main difference for talent calibration?

Lattice calibration is a post-review adjustment process. Confirm calibration is a decision session grounded in behavioral evidence. Before the Confirm session starts, AI profiles are generated for every employee. During the session, bias detection runs in real time. Most teams finish in one afternoon instead of 2-3 weeks.

Why do mid-market companies outgrow Lattice calibration?

As companies scale past 200-500 employees, calibration complexity grows: more cross-functional work, more distributed teams, more managers with inconsistent standards. Lattice handles process and compliance. It wasn't built to surface cross-functional impact, detect bias patterns, or generate objective evidence for every employee. Companies that outgrow Lattice describe the same pattern: sessions get longer, politics replace data, loudest advocates win.

Can I run Confirm calibration alongside Lattice?

Yes. Confirm imports Lattice review data and adds ONA evidence on top. Most teams run one pilot cycle — one team, one round — before deciding. Confirm's implementation team handles the import. You can compare outcomes side-by-side before making any platform decision.

Does Lattice detect bias in calibration sessions?

No. Lattice publishes guidance on running fairer calibration sessions but has no automated bias detection. Confirm detects recency bias, affinity bias, and advocacy bias in real time during the session, with ONA data to redirect discussions.

How long does talent calibration take in Confirm vs. Lattice?

Lattice customers typically report 2-3 week calibration cycles with multiple sessions. Confirm customers complete calibration in a single 2-4 hour session. Evidence profiles are pre-built, bias flags resolve debates faster, and decisions are documented in real time.

See calibration done on evidence, not politics

One demo shows the difference. Most HR teams that see Confirm calibration in action never go back to unstructured rating sessions.