Key takeaways:
- Only 27% of leads passed to sales are qualified; lead scoring exists to fix this problem.
- Effective B2B scoring combines firmographic fit criteria and behavioral engagement signals in roughly equal weight.
- Data quality is the hidden variable: bad data means even perfect scoring criteria produce unreliable results.
Nobody wants to admit their sales team is wasting time on bad leads. But the numbers don't lie. Only 27% of leads passed from marketing to sales are actually qualified, according to McKinsey. That means roughly three out of every four leads your reps touch shouldn't even be there.
The difference between teams that crush quota and those that don't often comes down to one thing: knowing exactly which leads are worth pursuing, and having a real system for identifying them. That system is lead scoring.
But lead scoring only works if you're scoring the right things. Bad B2B lead scoring criteria mean bad leads. Bad leads mean wasted pipeline. The good news: there are proven criteria you can adapt to your business starting today.
Think of B2B lead scoring as having two separate brains making decisions about your prospects. The first brain asks: "Is this company the right fit for us?" The second brain asks: "Are they actually interested?" Both matter. Both should drive whether you call someone.
Your fit score (sometimes called a firmographic score) measures whether someone's company is worth your time. Is the company the right size? The right industry? In the right geography? Do they use tools you integrate with? These are firmographic criteria and they're largely binary. Either a prospect matches your ICP or they don't.
Engagement score is the behavioral half of the equation. It tracks what your prospect is actually doing. Visiting your website? Opening your emails? Clicking on pricing? Downloading case studies? How many times have they engaged in the last 30 days? This is where actual buying signals show up.
A company can be a perfect fit but show zero engagement. Good long-term play, not a sales-ready lead right now. Flip it and you get someone highly engaged but at a company too small for your product to make sense. Both paths waste conversations.
The problem is that most of this engagement is invisible. A company visits your pricing page three times in a week and you have no idea who they are. Tools like Warmly change that by de-anonymizing up to 65% of company visitors and 15% of individuals. That turns anonymous site traffic into scorable engagement signals, letting you act on intent before a prospect ever fills out a form.
The teams that outperform don't just score leads. They weigh both dimensions appropriately. What percentage of your scoring model should be fit? What percentage engagement? There's no universal answer, but 50/50 or 60/40 (engagement heavier) tends to work for most B2B SaaS companies.
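As a sketch, the two-brain model reduces to a weighted blend of the two sub-scores. The 40/60 fit/engagement split and the example numbers below are illustrative assumptions, not benchmarks:

```python
def blended_score(fit, engagement, fit_pct=40, eng_pct=60):
    """Combine two 0-100 sub-scores into a single 0-100 lead score.

    fit_pct and eng_pct are the weights as percentages; 40/60 puts
    slightly more emphasis on engagement, per the discussion above.
    """
    return (fit * fit_pct + engagement * eng_pct) / 100

# Perfect-fit company showing almost no engagement: nurture, not sales-ready.
print(blended_score(fit=90, engagement=10))   # 42.0

# Highly engaged contact at a poor-fit company: conversation likely wasted.
print(blended_score(fit=20, engagement=85))   # 59.0
```

Tuning the two weights is the whole game here: shift toward fit and you prioritize ICP match; shift toward engagement and you prioritize buying signals.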
Here are actual criteria you can use or adapt. These are the building blocks of a real scoring model.
Company size is usually the most predictive firmographic signal. If your product is built for mid-market, a company with 100-500 employees might be worth 20 points. A company with 501-1,000 employees could be worth 30 points. Too small? Zero points.
Revenue matters similarly. If you only work with companies doing $10M+ annually, sub-threshold companies score zero. No exceptions. On a target account list? That's 50 points automatically. You've already pre-qualified these accounts, so leads from them get massive credit.
Job title is surprisingly predictive. A VP of Sales looking at your solution gets 30 points. An SDR looking at it might get 5 (interested, not a buyer). This is where job change tracking becomes gold. Someone just promoted to VP of Sales? That's a trigger event. That's 40 points.
Technology stack matters if you can track it. Using Salesforce? 10 points if you integrate with it. Using a competitor's product? 15 points because they're already in a buying conversation, just with someone else. Having enriched, accurate data on their tech stack puts you ahead of most teams on this signal.
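Pulled together, the firmographic criteria above can be sketched as a rule-based fit function. The point values mirror the examples in this section; the field names (`on_target_account_list`, `tech_stack`, and so on) are hypothetical stand-ins for whatever your CRM actually stores:

```python
def fit_score(company: dict, contact: dict) -> int:
    score = 0

    # Hard disqualifier: below the revenue floor, nothing else matters.
    if company.get("annual_revenue", 0) < 10_000_000:
        return 0

    # Company size bands (mid-market product in this example).
    employees = company.get("employees", 0)
    if 100 <= employees <= 500:
        score += 20
    elif 501 <= employees <= 1000:
        score += 30

    # Target account list: already pre-qualified, large automatic credit.
    if company.get("on_target_account_list"):
        score += 50

    # Job title: buyers score high, individual contributors low.
    title = contact.get("title", "").lower()
    if "vp of sales" in title:
        score += 30
    elif "sdr" in title:
        score += 5  # interested, but not a buyer

    # Trigger event: recently promoted into the buying role.
    if contact.get("recent_promotion_to_buyer_role"):
        score += 40

    # Technology stack signals.
    stack = company.get("tech_stack", [])
    if "salesforce" in stack:
        score += 10   # we integrate with it
    if "competitor_x" in stack:
        score += 15   # already in a buying conversation, just elsewhere

    return score

example_company = {"annual_revenue": 25_000_000, "employees": 300,
                   "on_target_account_list": True, "tech_stack": ["salesforce"]}
print(fit_score(example_company, {"title": "VP of Sales"}))  # 110
```

Note the early return: a hard disqualifier like the revenue floor should zero out the lead no matter how many other boxes it checks.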
Website visits: 5 points per visit, capped at 20. Someone views your pricing page? That's 20 points by itself. Pricing page visits are strong buying signals. Downloading a top-of-funnel guide? 5 points. Downloading a comparison guide that includes a competitor's name? 25 points.
If you're using Warmly to de-anonymize visitors, you can assign these scores to identified accounts even when there's been no form fill. That's a significant expansion of your scorable universe.
Email engagement: opens alone are a weak signal. But someone who opens five emails in a row shows intent. Give them 15 points. Click-through to a product page? 20 points. Unsubscribe? Negative 30 points. They're telling you they're not interested.
Demo request or free trial signup: 75+ points. They're basically asking to talk. Same with chatbot conversations. Three separate chats with your bot in a week? 40 points. These are people actively investigating.
Content consumption: time on site matters more than page views. Spend 3+ minutes on your product tour? 20 points. Watch a customer testimonial video all the way through? 15 points. They're getting social proof.
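The engagement criteria above might look like this as one function. The point values come from the examples in this section; the activity field names are hypothetical, and you'd feed them from your marketing automation or visitor-identification tooling:

```python
def engagement_score(activity: dict) -> int:
    score = 0

    # Website visits: 5 points each, capped at 20.
    score += min(activity.get("site_visits", 0) * 5, 20)
    if activity.get("viewed_pricing_page"):
        score += 20   # strong buying signal

    # Content downloads: comparison guide outranks a top-of-funnel guide.
    if activity.get("downloaded_comparison_guide"):
        score += 25
    elif activity.get("downloaded_tofu_guide"):
        score += 5

    # Email: streaks and clicks matter; unsubscribes score negative.
    if activity.get("email_open_streak", 0) >= 5:
        score += 15
    if activity.get("clicked_to_product_page"):
        score += 20
    if activity.get("unsubscribed"):
        score -= 30

    # High-intent actions.
    if activity.get("requested_demo_or_trial"):
        score += 75
    if activity.get("chatbot_sessions_last_week", 0) >= 3:
        score += 40

    # Content consumption: depth over page views.
    if activity.get("product_tour_seconds", 0) >= 180:
        score += 20
    if activity.get("watched_testimonial_fully"):
        score += 15

    return score

signals = {"site_visits": 6, "viewed_pricing_page": True,
           "requested_demo_or_trial": True}
print(engagement_score(signals))  # 115
```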
There are two ways to build a B2B lead scoring model: build it by hand (rule-based) or let machine learning figure it out (predictive).
Rule-based is what we've been talking about. You decide "company size 100-500 employees equals 20 points" and "email click equals 10 points." You make the rules. The benefit: total control and transparency. Your reps understand exactly why someone scored 65 instead of 45. You can adjust quickly. The downside: you're guessing based on instinct, not data. You might miss which combinations of signals actually predict a win.
Predictive lead scoring uses historical data to figure out which prospects actually became customers. The algorithm looks at your closed deals, reverse-engineers the patterns, and tells you what actually predicted a win. Did companies with 50-99 employees convert better than 100-500? The model will weigh accordingly.
The catch with predictive: you need historical data. At least 100 closed deals, preferably more. Accurate CRM records. And there's a transparency problem. If the model says someone scores 78, your rep wants to know why. A machine learning model might not give you a clean answer.
Most mature teams use both. Rule-based scoring for your most obvious signals (target account, job title, industry). Predictive scoring layered on top to catch subtle patterns.
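To make the predictive idea concrete, here's a toy sketch of the reverse-engineering step: instead of hand-assigning points, compute each segment's historical win rate and let the data set the weights. The deal records are made-up examples, and a real predictive model would look at many signals jointly rather than one band at a time:

```python
from collections import defaultdict

deals = [
    {"size_band": "50-99",   "won": True},
    {"size_band": "50-99",   "won": True},
    {"size_band": "50-99",   "won": False},
    {"size_band": "100-500", "won": True},
    {"size_band": "100-500", "won": False},
    {"size_band": "100-500", "won": False},
    {"size_band": "100-500", "won": False},
]

def win_rates_by_band(deals):
    """Historical win rate per company-size band from closed deals."""
    totals, wins = defaultdict(int), defaultdict(int)
    for d in deals:
        totals[d["size_band"]] += 1
        wins[d["size_band"]] += d["won"]
    return {band: wins[band] / totals[band] for band in totals}

rates = win_rates_by_band(deals)
print(rates)
# In this sample, 50-99 employees converts at ~67% vs 25% for 100-500,
# so the data would weight the smaller band higher than instinct might.
```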
Here's where it gets real. You've got your scores. Now what's the threshold? At what score does a lead become an MQL and get handed to sales?
This depends on your sales capacity and conversion rates. If you have ten sales reps who can work about 200 qualified leads a month, work backwards: at a 30% MQL-to-SQL conversion rate, you need roughly 667 MQLs monthly (200 ÷ 0.30). Adjust your threshold to produce that volume.
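The backwards math is a one-liner. Assuming roughly 200 sales-ready leads of monthly rep capacity and a 30% MQL-to-SQL rate (both numbers are just the example from above):

```python
def required_mqls(sql_capacity_per_month: int, mql_to_sql_rate: float) -> int:
    """Work backwards from rep capacity to the MQL volume your threshold must produce."""
    return round(sql_capacity_per_month / mql_to_sql_rate)

# 200 sales-ready leads of capacity at a 30% conversion rate:
print(required_mqls(200, 0.30))  # 667
```

If your threshold currently produces far more MQLs than this, raise it; far fewer, lower it or widen your criteria.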
Responding to an inbound request in five minutes or less increases your odds of booking a meeting by 100x, Salesforce found. Speed matters more than being overly picky. A lead that scores 45 but gets called immediately beats a lead that scores 90 and sits in limbo for two days.
Most teams set their MQL threshold between 40 and 60 points, depending on the scoring model. But what actually matters is whether your sales reps are happy with the quality of leads they're getting. If they're complaining about garbage leads, the threshold is too low. If they never get enough leads, it's too high.
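One way to sanity-check the threshold is to bucket historical leads by score band and compare conversion rates: if the band just below your cutoff converts as well as the band above it, the cutoff is in the wrong place. A minimal sketch, with made-up (score, converted) records:

```python
from collections import defaultdict

def conversion_by_band(leads, band_width=10):
    """Conversion rate per score band, e.g. 50-59, 60-69."""
    totals, conversions = defaultdict(int), defaultdict(int)
    for score, converted in leads:
        band = (score // band_width) * band_width
        totals[band] += 1
        conversions[band] += converted
    return {f"{b}-{b + band_width - 1}": conversions[b] / totals[b]
            for b in sorted(totals)}

leads = [(55, True), (52, False), (58, True),
         (64, True), (67, False), (61, False),
         (72, True)]
print(conversion_by_band(leads))
```

Run this over a quarter of closed-loop data and the right threshold usually makes itself obvious.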
Product-Qualified Leads convert at 20-30% rates, two to three times higher than traditional MQLs. That's the bar you're trying to hit.
You can build the perfect set of B2B lead scoring criteria and the model will still fail if your data is garbage.
You're scoring leads on job title? If half your job titles say "unknown" or are misspelled, your model is running on blanks. Scoring on company size? If your CRM has duplicate company records with different employee counts, you're getting different scores for the same prospect. Scoring on technology stack? If your enrichment data is six months old, you're basing decisions on stale signals.
This is where data quality becomes your biggest lever. Companies using lead scoring see a 77% increase in lead generation ROI. But that number only materializes when your prospect data is accurate and current.
What does your current lead scoring model assume about data quality? Have you actually checked?
What's the difference between a fit score and an engagement score?
Fit score measures whether their company matches your ICP (size, industry, revenue, geography). Engagement score measures their actual interest (website visits, email opens, content downloads, demo requests). Both predict conversion, but they measure different things.
How often should you review your lead scoring model?
At least quarterly. Track what's actually converting. If your leads scoring 50-60 convert at 30% and leads scoring 60-70 convert at 25%, your model needs adjustment. More frequently if you're launching new products or entering new markets.
Can you use predictive lead scoring without much historical data?
Not effectively. You need at least 100 closed deals to train a predictive model properly. If you don't have that, build rule-based scoring first. Use it for 6-12 months, gather data, then layer in predictive.
Is account-based lead scoring different from traditional lead scoring?
Yes. Account-based lead scoring focuses on whether a company is strategic (TAL match, high revenue potential). Traditional lead scoring focuses on whether a specific person is engaged. Best practice is doing both.
What MQL threshold should you start with?
Start at 50. Track conversion rates by score band for 90 days. If your 50-60 band converts at 20%+ and your sales team can handle the volume, keep it there. Adjust based on actual results.
Building a solid lead scoring model is one of the highest-ROI projects marketing and sales can tackle together. When it works, your reps spend time on prospects that actually want to buy. Your pipeline gets cleaner. Your close rate goes up.
But it only works if your criteria make sense for your business and your data is accurate. Look at your closed wins and reverse-engineer the patterns. Invest in keeping your data fresh and enriched. That's where the competitive advantage lives.
Start using LeadIQ for free to see how accurate contact data and trigger signals improve your lead scoring model. Or book a demo to see how it fits your existing scoring setup. And if you want to feed more first-party behavioral signals into your scoring, check out the Warmly + LeadIQ bundle: de-anonymizing your site visitors is one of the fastest ways to expand your scorable pipeline without waiting for a form fill.