Build a Customer Health Score That Predicts Churn Before It Happens
Every customer success team has a health score. Most of them are useless.
The typical health score is a CSM's subjective gut feel mapped to green/yellow/red. It gets updated quarterly (if at all), reflects the CSM's optimism bias, and has zero predictive power. When you compare health scores to actual churn outcomes, the correlation is embarrassingly weak.
A real health score is a data-driven model that predicts future behavior — specifically, whether a customer will renew, expand, or churn. It's built on signals, not sentiment.
Why Most Health Scores Fail
Problem 1: Subjectivity
When a CSM marks an account "green," what does that mean? They had a good QBR? The champion seems happy? There are no open escalations? Different CSMs apply different criteria, making the score inconsistent and unactionable.
Problem 2: Lagging indicators
Most health scores rely on lagging signals — CSAT surveys taken quarterly, NPS collected annually, or contract renewal dates. By the time these indicators turn negative, the decision to churn is already made.
Problem 3: No outcome validation
How often do you compare health scores to actual outcomes? If 30% of your "green" accounts churn and 40% of your "red" accounts renew, your model is broken. Most teams never run this analysis.
Problem 4: Binary thinking
Green/yellow/red is a classification, not a score. It doesn't distinguish between "slightly concerning" and "about to churn." You need granularity to prioritize.
Building a Predictive Health Score
Step 1: Define the outcome you're predicting
Be specific. "Health" is vague. Pick a concrete outcome:
- Churn probability — will this customer cancel within the next 90 days?
- Expansion probability — will this customer increase ARR within the next 90 days?
- Renewal probability — will this customer renew at their upcoming renewal date?
Start with churn prediction. It's the highest-impact outcome and the easiest to validate.
Step 2: Identify your signal categories
A robust health score draws from five signal categories:
Product engagement (weight: 35%)
- Daily/weekly active users as % of licensed seats
- Feature adoption breadth (what % of key features are used)
- Usage trend (increasing, stable, or declining over 30/60/90 days)
- Time spent in product per user
- Key workflow completion rates
Relationship health (weight: 25%)
- Days since last CSM contact
- QBR attendance and engagement
- Executive sponsor engagement level
- Champion risk (role change, departure, reduced engagement)
- Number of active stakeholders (multi-threading depth)
Support experience (weight: 15%)
- Open ticket count and severity
- Average resolution time
- Escalation frequency
- Support satisfaction scores
- Feature request volume (high requests can signal engagement or frustration)
Outcome achievement (weight: 15%)
- Progress against stated goals from onboarding
- ROI metrics tracked (if applicable)
- Business review sentiment
- Adoption milestones achieved
Financial signals (weight: 10%)
- Payment timeliness
- Discount level (heavily discounted accounts churn at higher rates)
- Contract length (annual vs. multi-year)
- Expansion history
Step 3: Score each signal
Convert raw signals into normalized scores (0-100):
| Signal | Score: 0-25 (Critical) | Score: 25-50 (At Risk) | Score: 50-75 (Healthy) | Score: 75-100 (Thriving) |
|---|---|---|---|---|
| DAU/licensed seats | <10% | 10-30% | 30-60% | >60% |
| Feature adoption | <2 features | 2-4 features | 4-7 features | >7 features |
| Usage trend (30d) | Declining >30% | Declining 10-30% | Stable ±10% | Growing >10% |
| Days since CSM contact | >60 days | 30-60 days | 14-30 days | <14 days |
| Open escalations | 2+ critical | 1 critical | Minor only | None |
Apply the category weights to get a composite score.
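The scoring table and category weights above can be sketched directly as code. The band values and weights come from this article; the function names, field names, and the choice of band midpoints are illustrative assumptions, not a prescribed implementation.

```python
# Weights from the five signal categories defined in Step 2.
CATEGORY_WEIGHTS = {
    "product_engagement": 0.35,
    "relationship_health": 0.25,
    "support_experience": 0.15,
    "outcome_achievement": 0.15,
    "financial_signals": 0.10,
}

def score_dau_ratio(dau_pct: float) -> float:
    """Map DAU / licensed seats (%) to a 0-100 score per the table above.

    Each raw value lands in a band; here we return the band midpoint
    (an assumption -- you could also interpolate within the band).
    """
    if dau_pct < 10:
        return 12.5   # Critical
    if dau_pct < 30:
        return 37.5   # At Risk
    if dau_pct < 60:
        return 62.5   # Healthy
    return 87.5       # Thriving

def composite_score(category_scores: dict[str, float]) -> float:
    """Weighted average of per-category scores (each already 0-100)."""
    return sum(CATEGORY_WEIGHTS[c] * s for c, s in category_scores.items())

# Usage: an account that is Healthy on product signals but strong elsewhere.
score = composite_score({
    "product_engagement": 62.5,
    "relationship_health": 80.0,
    "support_experience": 90.0,
    "outcome_achievement": 70.0,
    "financial_signals": 85.0,
})
```

In practice each category score would itself be a weighted blend of the signals listed under it (DAU ratio, adoption breadth, usage trend, and so on), normalized the same way.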
Step 4: Validate against historical outcomes
This is the step most teams skip — and it's the most important.
Pull your last 12-24 months of customer data:
- Calculate what the health score would have been for each account 90 days before their renewal/churn date
- Compare predicted scores to actual outcomes
- Measure the model's accuracy: what % of customers who scored <40 actually churned? What % who scored >70 actually renewed?
If the correlation is weak, adjust your signal weights. Drop signals that don't predict outcomes. Add signals you're missing.
Target accuracy: a good health score should catch more than 70% of eventual churners (true positive rate) while incorrectly flagging fewer than 30% of accounts that go on to renew (false positive rate).
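The backtest described above can be sketched in a few lines. The `<40` churn-prediction threshold follows this article; the data shape (one `(score, churned)` pair per account, scored as of 90 days before the renewal/churn date) is an assumption for illustration.

```python
def validate(history: list[tuple[float, bool]], churn_threshold: float = 40.0):
    """Backtest the health score against historical outcomes.

    history: (score_90_days_before_renewal, actually_churned) per account.
    Returns (true_positive_rate, false_positive_rate): the share of
    churners the model flagged, and the share of renewers it wrongly flagged.
    """
    churned = [s for s, c in history if c]
    renewed = [s for s, c in history if not c]
    tpr = sum(s < churn_threshold for s in churned) / len(churned)
    fpr = sum(s < churn_threshold for s in renewed) / len(renewed)
    return tpr, fpr

# Illustrative data: 4 churners, 2 renewers.
history = [(25, True), (35, True), (55, False), (72, False), (30, True), (65, True)]
tpr, fpr = validate(history)
# 3 of 4 churners scored below 40 (TPR 0.75); neither renewer did (FPR 0.0)
```

If the TPR falls short of the 70% target, that is the signal to reweight or swap out signals, as described above.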
Step 5: Set thresholds and triggers
| Score Range | Classification | Action |
|---|---|---|
| 0-30 | Critical | Immediate intervention — executive escalation, save play activated |
| 31-50 | At Risk | Proactive outreach — CSM schedules check-in, reviews account plan |
| 51-70 | Needs Attention | Monitor closely — address specific declining signals |
| 71-85 | Healthy | Standard cadence — continue regular engagement |
| 86-100 | Thriving | Expansion opportunity — explore upsell/cross-sell |
Each threshold should trigger automated workflows: Slack notifications, task creation in CRM, email alerts to account owners.
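A minimal sketch of the threshold-to-action mapping, using the ranges from the table above. The band labels and actions are from this article; wiring the returned action into Slack, CRM tasks, or email is left to whatever automation your stack provides.

```python
# Upper bound of each band, with its classification and triggered action.
BANDS = [
    (30, "Critical", "Immediate intervention"),
    (50, "At Risk", "Proactive outreach"),
    (70, "Needs Attention", "Monitor closely"),
    (85, "Healthy", "Standard cadence"),
    (100, "Thriving", "Expansion opportunity"),
]

def classify(score: float) -> tuple[str, str]:
    """Map a 0-100 composite score to (classification, action)."""
    for upper, label, action in BANDS:
        if score <= upper:
            return label, action
    raise ValueError(f"score out of range: {score}")

label, action = classify(28)   # ("Critical", "Immediate intervention")
```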
Operationalizing the Score
Make it visible
The health score should be on every account record in your CRM, visible in every CSM's daily workflow. Display it as a number (not just a color) with the trend arrow showing 30-day direction.
Review weekly
In your CS team standup, review:
- Accounts that dropped >15 points in the past week (why?)
- Accounts in Critical that haven't had intervention started
- Accounts in Thriving that should be queued for expansion conversations
Update continuously
Health scores should refresh daily, not quarterly. The whole point is early warning — a score that updates monthly misses the window for intervention.
Iterate the model
Every quarter, re-validate your model against actual outcomes. Your product changes, your customer base evolves, and your signals' predictive power shifts. A static model degrades over time.
Common Pitfalls
Over-weighting NPS. NPS is a lagging, infrequent signal. A customer can give you a 9 and still churn because their champion left. Don't let one survey score dominate your model.
Ignoring negative signals in "good" accounts. A high-usage account with a departing champion is at serious risk. Your model needs to catch this — single critical signals should be able to override an otherwise healthy composite.
Not accounting for contract timing. An account 11 months into a 12-month contract with declining usage is far more urgent than the same account 2 months in. Weight urgency by renewal proximity.
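Renewal proximity can be folded in as an urgency multiplier on the risk side of the score. The linear ramp and the 180-day horizon below are illustrative choices, not a prescribed formula.

```python
def urgency(score: float, days_to_renewal: int, horizon: int = 180) -> float:
    """Priority for intervention: higher = more urgent.

    Risk (100 - score) is scaled up as the renewal date approaches;
    far outside the horizon, risk counts at half weight (an assumption).
    """
    proximity = max(0.0, 1 - days_to_renewal / horizon)  # 0 far out, 1 at renewal
    return (100 - score) * (0.5 + 0.5 * proximity)

# Same declining account (score 45): 30 days from renewal vs. 300 days out.
near = urgency(45, 30)    # more urgent
far = urgency(45, 300)    # less urgent
```

This lets the weekly review queue sort by urgency rather than raw score, surfacing the 11-months-in account first.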
Building in a vacuum. Your CSMs have real knowledge. Use data to start, but calibrate against CSM input. If the model says an account is healthy but the CSM says something's off, investigate. The model should augment human judgment, not replace it.
A predictive health score is one of the highest-ROI investments a CS team can make. Build it with real data, validate it against real outcomes, and use it to drive action — not just reporting.