🧠 TL;DR This Week

  • You bought Qualtrics to fix QA. But you didn’t fix QA. You bought a microscope to measure vibes—and forgot to tell your team how to use it.

💬 The Hot Take

The real reason QA feels broken? It’s not because your tech stack is outdated. It’s because your process is.

Most contact centers don’t need enterprise-grade analytics. They need:

  • Managers who calibrate regularly

  • Scorecards that reflect the real job

  • Coaching that doesn’t feel like parole hearings

Qualtrics is great. So is Google Forms. The difference isn’t the tool—it’s how you use it.


📉 Metric of the Week

⚠️ 60–70% of “QA improvement” investments go toward reporting, not behavior change.

That’s why your dashboard looks beautiful and your agents still hate QA sessions.

📚 From the Queue: “The Survey Said... Nothing”


Let’s talk about the current QA overcorrection: ditching scorecards and coaching in favor of dashboards and data exhaust.

A few years ago, leaders realized their QA was inconsistent—low sample sizes, calibration chaos, and managers spending hours arguing about tone instead of building trust. So they pivoted. Enter Qualtrics, Medallia, XM Discover. Full-text analysis. Auto-coded emotion. Heatmaps that make it look like you understand the customer.

Except... you don’t.

Here’s the thing: these platforms are excellent at showing trends. They can surface broad friction themes and identify common agent behaviors worth investigating. But they do not replace QA. And they definitely don’t replace a manager who knows how to coach.

Traditional QA—when done well—isn’t glamorous. It’s a team lead with call access, listening for nuance, using a scorecard as a conversation starter. It’s calibrating with other leaders so feedback is consistent. It’s acknowledging when the customer was wrong—but the rep still had options.

You don’t get that from an aggregate trendline or a color-coded tile that says “Agent was 12% less emotionally resonant.”

Here’s where it really falls apart:
Executives love these platforms because they’re clean, scalable, and promise objectivity. But to the agent? It feels like surveillance. They don’t know where the scores come from. They don’t trust them. And worse, the manager delivering the feedback often doesn’t know either.

You can’t coach someone based on a system you barely understand.

Qualtrics isn’t the problem. XM Discover isn’t the villain. But they’re not magic either. Insight without accountability is just expensive noise. If no one’s using the data to drive real conversations—if the only feedback loop is a score and a shrug—you haven’t solved QA.

You’ve just made it prettier.

🛠️ Ops Corner: 🧩 The “No BS QA” Checklist

  • Scorecards reviewed every 6 months

  • Calibrations with reps in the room

  • Every QA session ends with one next action (not a score)

  • Tech stack supports coaching, not just compliance

🔗 The Forward Queue

  1. “Why traditional QA won’t fix your CX gaps”
    Manual QA reviews just 1–2% of interactions. AI platforms can cover 100%, but leaders still fail if coaching doesn’t follow. Too many tools deliver visibility without accountability.

  2. “How to keep the human touch in AI QA”
    ARC advocates ‘human-verified AI.’ You need bots for scale—and humans to interpret tone, emotion, and context.

  3. “Why coverage alone isn’t enough”
    Moving from 2% to 100% helps, but without action-driven coaching, QA still feels like surveillance.

✉️ One Ask

What’s your most over-engineered QA fail? Bonus points if it involved Tableau.
