How to Calculate Lead Score (2026 Guide)

Learn how to calculate lead score with a step-by-step weighted formula, real examples, and threshold benchmarks. Start scoring leads Sales trusts.

6 min read · Prospeo Team

How to Calculate a Lead Score That Sales Actually Uses

You built a scoring model. Marketing celebrated. Sales ignored it within two weeks and went back to gut feel.

That's not a people problem - it's a model design problem. Companies using lead scoring see 138% ROI versus 78% without, yet only 44% of organizations bother. The gap isn't effort or tooling. It's that most models collapse company fit, person fit, and buying timing into one number Sales doesn't trust.

You don't need AI. You need 8-12 well-chosen criteria and a baseline close rate.

Three Questions Every Lead Score Must Answer

Think of lead scoring like a restaurant rating - incredible food, terrible service, and closed when you arrive still averages out to three stars. That tells you nothing.

Every score should answer three distinct questions. Is this the right company? Firmographics, tech stack, headcount. Is this the right person? Title, seniority, department. Is it the right time? Engagement recency, intent signals, content consumption.

When you collapse these into a single number, a perfect-fit company with the wrong contact and zero activity looks identical to a mediocre-fit company with a VP who just attended your webinar. That ambiguity is exactly why reps revert to gut feel - and honestly, we can't blame them.

Step-by-Step Lead Scoring Formula

Step 1 - Find Your Baseline Close Rate

Pull your overall lead-to-customer close rate. Let's say it's 2%. Now segment by source. Webinar attendees close at 15% - that 13-percentage-point lift over baseline tells you webinar attendance deserves significant points. Social followers close at 5%, only a 3-point lift, so fewer points. This baseline close rate method turns point assignment from guesswork into math.
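The baseline-lift math above can be sketched in a few lines. This is an illustrative snippet using the numbers from the text, not a prescribed implementation:

```python
# Baseline-lift method: compare each source's close rate to the overall baseline.
# Rates below are the illustrative figures from the text.
BASELINE_CLOSE_RATE = 0.02  # overall lead-to-customer close rate

source_close_rates = {
    "webinar_attendee": 0.15,
    "social_follower": 0.05,
}

def lift_over_baseline(close_rate: float, baseline: float = BASELINE_CLOSE_RATE) -> float:
    """Percentage-point lift a source delivers over the overall baseline."""
    return round((close_rate - baseline) * 100, 1)

for source, rate in source_close_rates.items():
    print(source, lift_over_baseline(rate))
# webinar_attendee shows a 13.0-point lift; social_follower only 3.0
```

Sources with bigger lifts earn proportionally more points in your scorecard.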

Step 2 - Assign Weighted Points

Here's the formula structure we've found works well, adapted from Coefficient's weighted model:

Lead Score = (Industry_Match x 0.20) + (Title_Match x 0.15) + (Email_Engagement x 0.25) + (Intent_Signals x 0.25) + (Negative_Adjustments x 0.15)
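As a minimal sketch, the weighted formula translates directly into code. The weights come from the formula above; the assumption that each component is pre-normalized to a 0-100 sub-score is ours:

```python
# Weighted lead score, per the formula above. Assumes each component is
# already normalized to a 0-100 sub-score (an assumption, not from the text).
WEIGHTS = {
    "industry_match": 0.20,
    "title_match": 0.15,
    "email_engagement": 0.25,
    "intent_signals": 0.25,
    "negative_adjustments": 0.15,  # pass a negative sub-score here to deduct
}

def lead_score(components: dict) -> float:
    return round(sum(WEIGHTS[k] * components.get(k, 0.0) for k in WEIGHTS), 1)

lead_score({
    "industry_match": 100, "title_match": 100,
    "email_engagement": 60, "intent_signals": 70,
    "negative_adjustments": 0,
})  # → 67.5
```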

Walk through a sample lead - "Sarah Chen, VP Marketing at a 200-person SaaS company."

Criteria             | Max Pts | Sarah | Notes
Industry match       | 20      | 20    | SaaS = ICP
Company size         | 15      | 12    | In ICP range
Title/seniority      | 15      | 15    | VP = decision-maker
Email engagement     | 15      | 10    | Opened 3 of 5
Content downloads    | 10      | 10    | Pricing guide
Webinar attendance   | 10      | 10    | Last week
Intent signals       | 10      | 7     | Researching category
Subtotal             | 95      | 84    |
Negative adjustments | -30     | 0     | No deductions
Final Score          |         | 84    | Sales-ready

Sarah scores 84. Clear handoff. A lead scoring 40 with a matching industry but no engagement and a manager-level title goes to nurture. The math takes five minutes. Choosing the right criteria takes a conversation with Sales leadership.

Step 3 - Add Negative Scoring

Positive signals get all the attention. Negative scoring does the real work. Deduct points aggressively for:

  • Competitor domain email
  • No activity in 90+ days
  • Unsubscribed from emails
  • Unverified or bounced email address
  • Student or intern title
  • 365+ days inactive: remove from scoring entirely

Look, if a big chunk of your list bounces, a big chunk of your scores are fiction. We've seen teams run elaborate models on databases where 30% of emails were dead. Prospeo verifies emails at 98% accuracy on a 7-day refresh cycle, which keeps your scoring model grounded in contacts who actually exist at the companies you think they do.

Step 4 - Set Your MQL Threshold

Your threshold is a function of Sales capacity, not a magic number. If your team can work 50 leads per week and you're sending 200, raise the threshold. The benchmark: MQL-to-SAL conversion should run 70-90% (see more funnel metrics if you want to track this cleanly). Below 70% means your model is too loose and Sales wastes time disqualifying leads you should've filtered.

One non-negotiable: if someone requests a demo, trial, or contact from Sales, send them regardless of score. A hand-raiser with a score of 12 still gets a human conversation.

For enterprise leads at 1,000+ employees with a score above 70, skip the SDR queue entirely and route straight to an AE via Slack. One team we worked with tightened their title filters and lowered their activity threshold - their MQL-to-meeting rate jumped 13%.
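The routing rules in Step 4 compose into a simple decision function. The threshold value and route names below are illustrative assumptions; the hand-raiser override and enterprise fast-track come from the text:

```python
# Routing sketch for Step 4. Threshold and route names are assumptions;
# tune MQL_THRESHOLD until inbound volume matches Sales capacity.
MQL_THRESHOLD = 70

def route_lead(score, hand_raiser=False, employees=0):
    if hand_raiser:
        return "sales"       # demo/trial/contact request overrides any score
    if employees >= 1000 and score > 70:
        return "ae_direct"   # enterprise fast-track: skip the SDR queue
    if score >= MQL_THRESHOLD:
        return "sdr_queue"
    return "nurture"
```

Note the ordering: the hand-raiser check runs first, so a score of 12 with a demo request still reaches a human.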

Prospeo

Your scoring model just proved Sarah Chen is sales-ready. But if her email bounces, that 84-point score is worthless. Prospeo verifies emails at 98% accuracy on a 7-day refresh cycle - so every lead you score is a lead that actually exists at the company you think they do. At $0.01 per email, cleaning your scoring pipeline costs less than one bad handoff.

Stop scoring dead contacts. Verify your leads before you score them.

Fit vs Intent Weighting by Sales Motion

Not every go-to-market motion should weight fit and intent equally. Getting this right is what separates a model that works from one that gets ignored.

Sales Motion | Fit Weight | Intent Weight
Inbound      | 40%        | 60%
Outbound     | 70%        | 30%
PLG          | 30%        | 70%
ABM          | 60%        | 40%

Outbound over-indexes fit because you're choosing who to contact - you need the right company and person before intent matters. PLG flips this because product usage signals are your strongest conversion predictor. Teams that separate fit from intent and weight by motion report 20-40% higher MQL-to-SQL conversion.
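Keeping fit and intent as separate sub-scores and blending them per motion, per the table above, can be sketched as:

```python
# Motion-specific blend of fit and intent sub-scores (weights from the table).
# Assumes fit and intent are each scored 0-100 before blending.
MOTION_WEIGHTS = {
    "inbound": (0.40, 0.60),
    "outbound": (0.70, 0.30),
    "plg": (0.30, 0.70),
    "abm": (0.60, 0.40),
}

def blended_score(fit, intent, motion):
    w_fit, w_intent = MOTION_WEIGHTS[motion]
    return round(w_fit * fit + w_intent * intent, 1)

blended_score(90, 30, "outbound")  # → 72.0: great-fit, low-intent lead still ranks
blended_score(90, 30, "plg")       # → 48.0: same lead ranks much lower in PLG
```

The same lead lands on opposite sides of a 70-point threshold depending on motion, which is exactly why the weighting matters.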

5 Mistakes That Tank Your Model

1. Scoring email opens. Spam filters and security bots auto-open and auto-click links. An "engaged" lead might be a Barracuda server. Score replies and meaningful actions instead.

2. Letting low scores block hand-raisers. Demo request? Goes to Sales. Period. No formula should override explicit buying intent.

3. No score decay. A webinar from 2024 isn't a buying signal in 2026. Cut behavioral points sharply after about 90 days, and zero them after 180.

4. Scoring on data you don't have. If your "Industry" field is blank for a large share of records, those leads get default points for nothing. Audit data completeness before building rules on it (a quick lead status audit helps here too).

5. Building the model without Sales feedback. The consensus on r/b2bmarketing is clear on this one: scoring built in a marketing vacuum gets ignored. Run your first model past three senior reps before launch. Their objections will save you months of wasted effort.
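The decay rule in mistake 3 can be sketched as a simple taper. The linear shape between day 90 and day 180 is an assumption; the article only specifies "cut sharply after ~90 days, zero after 180":

```python
from datetime import date

# Behavioral-point decay: full value through day 90, zero from day 180.
# The linear taper in between is an illustrative assumption.
def decayed_points(points, event_date, today=None):
    days = ((today or date.today()) - event_date).days
    if days <= 90:
        return points
    if days >= 180:
        return 0
    return round(points * (180 - days) / 90)
```

Run this at score time rather than storing decayed values, so old events fade automatically without batch updates.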

Manual vs Predictive Scoring

Here's our honest take: most teams should stay manual for at least a year.

Use a rules-based lead scorecard if you have fewer than 1,000 leads per year or want full transparency into why a lead qualifies. Gartner found that 62% of AI sales initiatives fail due to excessive expectations and thin data. Predictive works best as a prioritization layer with human guardrails, not as an autonomous oracle. Some teams run scoring logic as SQL views in their data warehouse - a solid middle ground if you have an analytics engineer but aren't ready for enterprise ML.

Cost context: manual scoring runs about $4.17/account in research time at $50/hour, enrichment tools like Clay cost around $229/mo for 3,000 credits, and enterprise predictive platforms start around $10K/mo.

Skip predictive scoring if you don't have at least 12 months of closed-won data and a clean CRM. Without that foundation, you're training a model on noise.

Your model is only as good as your underlying data. If job titles are two years old and emails bounce, no formula will save you. Prospeo enriches CRM records with 50+ data points on a 7-day refresh cycle at 98% email accuracy, so your scores reflect current reality rather than last year's org chart (more on choosing vendors in data enrichment services).

FAQ

What's a good lead score threshold for MQLs?

There's no universal number - set it based on Sales capacity. If your team can work 50 leads per week, adjust the cutoff until inbound volume matches. Aim for a 70-90% MQL-to-SAL acceptance rate. Below 70%, your threshold is too low and reps waste time disqualifying.

How many criteria should a scoring model include?

Start with 8-12 criteria split across company fit, person fit, and behavioral timing. More than 15 adds complexity without improving accuracy. Fewer than 6 usually means you're missing an entire dimension - typically negative scoring or intent signals.

How often should you recalibrate lead scores?

Review your model quarterly by comparing scored predictions against actual closed-won outcomes. If your top-scored leads aren't converting at 2-3x your baseline close rate, your weights need adjustment. Also recalibrate whenever your ICP, product line, or sales motion changes significantly.

How do I keep scoring data accurate?

Stale records are the top reason models fail. Use an enrichment tool with frequent refresh cycles - the industry average sits around six weeks, which means your scores can drift badly between updates. Weekly refreshes and verified emails keep phantom leads from inflating your pipeline.

Prospeo

Step 4 says your MQL-to-SAL conversion should hit 70-90%. That's impossible when your database has gaps in title, industry, and company size - the exact fields your scoring formula depends on. Prospeo enriches leads with 50+ data points at a 92% match rate, filling the blanks that silently tank your model.

Fill every scoring field. Enrich your CRM with 50+ data points per contact.

The best scoring model isn't the most complex one - it's the one Sales trusts. Start with three questions, ten-ish criteria, and clean data. Iterate from there.

B2B Data Platform

Verified data. Real conversations. Predictable pipeline.

Build targeted lead lists, find verified emails & direct dials, and export to your outreach tools. Self-serve, no contracts.

  • Build targeted lists with 30+ search filters
  • Find verified emails & mobile numbers instantly
  • Export straight to your CRM or outreach tool
  • Free trial — 100 credits/mo, no credit card
Create Free Account · 100 free credits/mo · No credit card

300M+ Profiles · 98% Email Accuracy · 125M+ Mobiles · ~$0.01 Per Email