Benchmark Database

What the Data Actually Shows

Reference ranges derived from independently sourced B2B research data: not what vendors report, but what observed project outcomes show across methodology types and markets.

Methodology Note

All benchmark ranges below are derived from independently verified B2B research project data collected 2023–2025. Sample sizes per metric are noted. These figures represent observed distributions, not prescriptive targets. A project performing within range is not validated; one outside range warrants investigation, not automatic rejection.

Module 01

Panel & Sample Quality

Metrics describing the quality and representativeness of B2B respondent panels. Ranges reflect variation across online, telephone, and hybrid methodologies.

Completion Rate — B2B Online Panel

n = 186 projects
| Audience Segment | P25 | Median | P75 | Note |
|---|---|---|---|---|
| C-Suite / VP-level | 28% | 38% | 51% | High abandonment on long surveys (>12 min) |
| Mid-level Manager | 44% | 56% | 67% | Most consistent cross-industry |
| Technical / Specialist | 41% | 53% | 64% | Higher with domain-relevant screeners |
| SMB Owner / Decision-maker | 36% | 48% | 60% | Wide variance by industry vertical |

Definition: Completes ÷ Survey starts (post-screener). Excludes quota-closed terminations. Vendor-reported figures not included in this dataset.
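The completion-rate definition above can be expressed as a small calculation. This is a minimal sketch with a hypothetical helper name, not part of any HYMBS tooling; the quota-closed exclusion follows the definition given here.

```python
def completion_rate(completes: int, starts: int, quota_closed: int = 0) -> float:
    """Completes ÷ survey starts (post-screener), excluding quota-closed terminations."""
    eligible_starts = starts - quota_closed
    if eligible_starts <= 0:
        raise ValueError("no eligible post-screener starts")
    return completes / eligible_starts

# Hypothetical project: 192 completes from 420 post-screener starts,
# of which 20 were quota-closed terminations
rate = completion_rate(completes=192, starts=420, quota_closed=20)
print(f"{rate:.0%}")  # 48% -- at the SMB median in the table above
```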

Incidence Rate — Narrowly Defined B2B Audiences

n = 94 projects
| Screener Complexity | P25 | Median | P75 | Note |
|---|---|---|---|---|
| Single qualifier (industry only) | 18% | 28% | 41% | Panel-dependent |
| Two qualifiers (industry + role) | 9% | 16% | 24% | Vendor IR claims frequently overstated by 30–60% |
| Three+ qualifiers (niche audience) | 3% | 7% | 13% | High CPI risk zone; always request IR guarantee |

Definition: Qualified starts ÷ Total panel invitations. Pre-screened panels excluded from this dataset as IR is artificially inflated.
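The incidence-rate definition, and the vendor-overstatement check mentioned in the table note, can be sketched as follows. Function names and figures are illustrative assumptions, not HYMBS tooling.

```python
def incidence_rate(qualified_starts: int, invitations: int) -> float:
    """Qualified starts ÷ total panel invitations (pre-screened panels excluded)."""
    if invitations <= 0:
        raise ValueError("no invitations")
    return qualified_starts / invitations

def vendor_ir_overstatement(claimed_ir: float, observed_ir: float) -> float:
    """Fractional overstatement of a vendor's claimed IR versus observed IR."""
    return (claimed_ir - observed_ir) / observed_ir

# Hypothetical two-qualifier audience: vendor claims 25% IR,
# fieldwork observes 160 qualified starts from 1,000 invitations
observed = incidence_rate(qualified_starts=160, invitations=1000)
overstated_by = vendor_ir_overstatement(0.25, observed)
print(f"observed IR {observed:.0%}, claim overstated by {overstated_by:.0%}")
```

In this example the 25% claim is overstated by roughly 56%, near the top of the 30–60% overstatement range noted above.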

Module 02

Data Integrity Metrics

Rates of low-quality, fraudulent, or otherwise unusable responses detected through independent quality-check protocols.

Disqualification Rate Post-Fieldwork (QC Removals)

n = 212 projects
| QC Method | P25 | Median | P75 | Note |
|---|---|---|---|---|
| Speeder detection only | 3% | 6% | 11% | Baseline method; misses sophisticated fraud |
| Speeder + attention checks | 5% | 9% | 16% | Standard HYMBS minimum |
| Full QC battery (6+ methods) | 8% | 14% | 22% | Higher removal reflects more rigorous detection, not worse panels |

A high QC removal rate is not inherently negative — it indicates the detection protocol is functioning. A consistently low rate (<3%) with basic QC methods warrants scrutiny.
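The scrutiny rule above can be encoded as a simple check. This is a sketch under two assumptions: the removal rate is computed against total collected completes (pre-removal), and "basic QC" means fewer than three detection methods; the function name is hypothetical.

```python
def qc_removal_check(removed: int, collected: int, qc_methods: int) -> str:
    """Flag a suspiciously low removal rate when only basic QC methods are used.

    Assumed denominator: total collected completes before removals.
    """
    rate = removed / collected
    if rate < 0.03 and qc_methods < 3:
        return f"{rate:.0%} removal with basic QC: scrutinise the detection protocol"
    return f"{rate:.0%} removal rate"

# Hypothetical: 4 removals from 200 completes using speeder detection only
print(qc_removal_check(removed=4, collected=200, qc_methods=1))
```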

Median Interview Length vs. Designed Length

n = 158 projects
| Designed LOI | Actual Median | Variance | Note |
|---|---|---|---|
| 10 min | 8.4 min | −16% | Typical speeders compress by 20–35% |
| 15 min | 13.1 min | −13% | Most representative range for B2B |
| 20 min | 16.8 min | −16% | Abandonment increases sharply above 18 min for online |
| 25+ min | 18.2 min | −27% | Strong speeder signal; high removal rate expected |

Actual median LOI consistently below designed LOI is expected. Concern arises when median falls below 50% of designed LOI or when the distribution is bimodal (indicating two distinct response-behaviour groups).
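The variance calculation and the 50%-of-designed-LOI threshold can be sketched as below (the bimodality check is omitted, as it needs a distributional test beyond this scope). The helper name and sample durations are illustrative assumptions.

```python
import statistics

def loi_flags(designed_loi_min: float, interview_lois: list[float]) -> dict:
    """Compare observed median LOI against the designed length.

    Variance is (median − designed) ÷ designed; a negative value is expected.
    The concern threshold is a median below 50% of the designed LOI.
    """
    median_loi = statistics.median(interview_lois)
    variance = (median_loi - designed_loi_min) / designed_loi_min
    return {
        "median_loi": median_loi,
        "variance": variance,
        "below_half_designed": median_loi < 0.5 * designed_loi_min,
    }

# Hypothetical 15-minute survey with five recorded interview lengths
flags = loi_flags(15, [12.0, 13.1, 14.5, 13.1, 12.8])
print(flags)  # median 13.1 min, variance ≈ −13%, not below the 50% threshold
```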

Module 03

Cost-per-Interview Reference Ranges

CPI ranges observed in independently sourced B2B fieldwork engagements. Excludes agency mark-up and study design fees.

B2B Online — USD CPI by Audience Difficulty

n = 147 projects
| Audience Difficulty | Low (P25) | Typical (P50) | High (P75) |
|---|---|---|---|
| General business (broad) | $12 | $22 | $38 |
| Industry-specific (single sector) | $28 | $48 | $75 |
| Role-specific (Director+) | $55 | $90 | $140 |
| Niche technology buyer | $80 | $135 | $220 |
| C-Suite, enterprise only | $120 | $200 | $380+ |

CPI at P25 does not imply lower quality — it may reflect efficient panel access or lower incidence cost in specific geographies. CPI at P75+ does not guarantee quality; verify QC protocols independently.
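One way to use these bands is to place a quoted CPI against the observed percentiles, echoing the caveats above for quotes below P25 or above P75. The dictionary keys and function name are hypothetical; the figures come from the table above.

```python
# P25/P50/P75 CPI bands (USD) from the table above, keyed by audience difficulty
CPI_BANDS = {
    "general": (12, 22, 38),
    "industry_specific": (28, 48, 75),
    "role_specific": (55, 90, 140),
    "niche_tech": (80, 135, 220),
    "c_suite_enterprise": (120, 200, 380),
}

def classify_cpi(audience: str, quoted_cpi: float) -> str:
    """Place a quoted CPI relative to the observed P25–P75 band."""
    p25, _p50, p75 = CPI_BANDS[audience]
    if quoted_cpi < p25:
        return "below P25: verify panel access and QC protocols"
    if quoted_cpi <= p75:
        return "within observed range"
    return "above P75: high price alone does not guarantee quality"

print(classify_cpi("role_specific", 95))  # within observed range
```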

Data Submission

Have you observed data that falls outside these ranges?

HYMBS accepts anonymised project-level data contributions to improve benchmark coverage. All submissions are processed under confidentiality protocol and attributed only in aggregate.