
How platforms were tested

The five pay-per-call-specific scoring dimensions and the testing approach behind every review on this site.

Each pay-per-call platform was evaluated through three channels.

  1. A self-serve account where available, with a test campaign in one of four pay-per-call verticals: legal, insurance, home services, or healthcare.
  2. A sales-led trial where self-serve was not on offer.
  3. Operator interviews with 18 pay-per-call operators across the publisher and buyer sides. The cohort spans networks running anywhere from 100 to 5,000-plus tracking numbers.

The five scoring dimensions

Each platform was scored on five pay-per-call-specific dimensions, equally weighted.

Per-number economics: 20%
Ringing-tail flexibility: 20%
Payout sync depth: 20%
Marketplace placement: 20%
Offer management: 20%
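
For clarity, the roll-up is a plain weighted average across the five dimensions. A minimal sketch, with made-up dimension scores on a 0 to 10 scale:

```python
# Equal-weight roll-up of the five dimensions listed above.
# The dimension scores below are illustrative, not real platform scores.
WEIGHTS = {
    "per_number_economics": 0.20,
    "ringing_tail_flexibility": 0.20,
    "payout_sync_depth": 0.20,
    "marketplace_placement": 0.20,
    "offer_management": 0.20,
}

def overall_score(scores: dict[str, float]) -> float:
    """Weighted average of the five dimension scores (0 to 10 scale)."""
    return sum(WEIGHTS[d] * s for d, s in scores.items())

print(overall_score({
    "per_number_economics": 8.0,
    "ringing_tail_flexibility": 6.5,
    "payout_sync_depth": 7.0,
    "marketplace_placement": 5.5,
    "offer_management": 9.0,
}))  # 7.2
```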

Per-number economics

The cost of provisioning and keeping tracking numbers at pay-per-call network scale: 50, 200, 500, and 1,000 numbers. The dominant variable for most independent operators.

What we actually measured. The published per-number rate when available. The quoted rate when published rates were not on offer. The blended monthly cost at each network size with one full month of typical call volume layered in. Hidden floors, minimum spend rules, and tier-only discounts were also captured.
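
The blended-cost math is simple enough to sketch. The rates, minimum spend, and per-number call volume below are placeholders, not any platform's published pricing:

```python
# Blended monthly cost at each tested network size: number fees plus one
# month of call-minute charges, respecting any minimum spend floor.
# All figures here are hypothetical.
def blended_monthly_cost(numbers: int, per_number_rate: float,
                         minutes: int, per_minute_rate: float,
                         platform_minimum: float = 0.0) -> float:
    cost = numbers * per_number_rate + minutes * per_minute_rate
    return max(cost, platform_minimum)

for size in (50, 200, 500, 1000):
    # Assume roughly 300 minutes of monthly call volume per tracking number.
    monthly = blended_monthly_cost(size, 2.00, size * 300, 0.04,
                                   platform_minimum=500.0)
    print(f"{size} numbers: ${monthly:,.2f}/month")
```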

Ringing-tail flexibility

How deep the routing tree can go. Time-of-day rules, caller-area-code conditionals, fallback routing, callback handling, weighted distribution, tag-based routing, and ringback retries all sit inside this dimension.
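
To make the kind of routing logic we pushed on concrete, here is a toy routing-tree node covering a time-of-day window, a caller-area-code conditional, and a fallback target. The field names are invented for the sketch, not any platform's API:

```python
# Toy routing rule: time-of-day window + area-code conditional + fallback.
from dataclasses import dataclass
from datetime import time

@dataclass
class RoutingRule:
    open_from: time        # time-of-day window start
    open_until: time       # time-of-day window end
    area_codes: set[str]   # caller-area-code conditional
    target: str            # buyer endpoint when the rule matches
    fallback: str          # where the call goes when it does not

    def route(self, caller_area_code: str, now: time) -> str:
        in_window = self.open_from <= now <= self.open_until
        if in_window and caller_area_code in self.area_codes:
            return self.target
        return self.fallback

rule = RoutingRule(time(8, 0), time(18, 0), {"212", "917"},
                   "buyer_ny", "overflow_queue")
print(rule.route("917", time(10, 30)))  # buyer_ny
print(rule.route("305", time(10, 30)))  # overflow_queue
```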

What we actually measured. Real-time bidding latency on platforms that support RTB. We sent test calls and timed how long the bid loop took to close. Sub-second latency cleared the bar. Anything north of 1.5 seconds got marked down. Network operators who run buyer auctions live on this number.
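
The latency check reduces to a threshold on the measured bid-loop close times. A sketch with made-up sample latencies, mirroring the thresholds above:

```python
# Threshold check on measured bid-loop close times (seconds).
# Sample latencies are invented; thresholds mirror the text above.
from statistics import median

def score_bid_latency(latencies_s: list[float]) -> str:
    m = median(latencies_s)
    if m < 1.0:
        return "clears the bar"   # sub-second
    if m <= 1.5:
        return "borderline"
    return "marked down"          # north of 1.5 seconds

print(score_bid_latency([0.62, 0.71, 0.58, 0.93, 0.66]))  # clears the bar
```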

Payout sync depth

How cleanly call outcomes (qualified, disqualified, paid) sync back to the publisher-side dashboard and back into ad-platform conversion events.

What we actually measured. The lag time between a call ending and the outcome showing up on the publisher dashboard. We logged 50 test calls per platform and timed each sync. We also tracked dispute frequency: the number of test calls where the outcome had to be manually adjusted by the operator after sync. High dispute rates signal weak sync logic and burn operator hours.
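
A minimal sketch of how the sync-lag and dispute-rate numbers come together, with invented call records standing in for the 50 logged test calls:

```python
# Per-call sync lag and dispute rate over a batch of logged test calls.
# The records below are invented examples, not real measurements.
from statistics import median

test_calls = [
    # (seconds from call end to dashboard sync, outcome manually adjusted?)
    (42, False), (55, False), (610, True), (48, False), (95, False),
    # ... remaining records from the full 50-call run
]

lags = [lag for lag, _ in test_calls]
dispute_rate = sum(1 for _, adjusted in test_calls if adjusted) / len(test_calls)

print(f"median sync lag: {median(lags)} s")
print(f"dispute rate: {dispute_rate:.0%}")
```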

Marketplace placement

For platforms that include a marketplace, how strong the offer discovery and bid matching are.

What we actually measured. The number of active offers in the marketplace at the time of test. The freshness of the offer feed. The match rate between a publisher search and a relevant offer. Marketplaces with stale or thin feeds got marked down.
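
A rough sketch of how those three marketplace numbers were tallied. The offer records, field names, and the 30-day freshness window are assumptions for illustration:

```python
# Active offer count, feed freshness, and search-to-offer match rate.
# Offer data and the freshness window are hypothetical.
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)
offers = [
    {"vertical": "legal", "updated": now - timedelta(days=2)},
    {"vertical": "insurance", "updated": now - timedelta(days=45)},
    {"vertical": "home_services", "updated": now - timedelta(days=5)},
]
searches = ["legal", "healthcare", "home_services"]

active = len(offers)
fresh = sum(1 for o in offers if now - o["updated"] <= timedelta(days=30))
matches = sum(1 for s in searches if any(o["vertical"] == s for o in offers))

print(f"active offers: {active}")
print(f"fresh within 30 days: {fresh}/{active}")
print(f"search match rate: {matches / len(searches):.0%}")
```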

Offer management

How easy it is to create, modify, and route offers across publishers. This includes dynamic payout adjustments based on call quality signals.

What we actually measured. Time to create a new offer. Time to modify an existing offer mid-campaign. Whether dynamic payout sync runs near-real-time or in nightly batches. Offer management that runs in batches feels rigid in production.
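
To illustrate what near-real-time dynamic payout sync has to do on every call, here is a toy payout adjustment driven by call-quality signals. The signals and multipliers are assumptions, not any platform's rules:

```python
# Toy dynamic payout adjustment based on call-quality signals.
# Thresholds and multipliers are illustrative assumptions.
def adjusted_payout(base_payout: float, duration_s: int, qualified: bool) -> float:
    if not qualified:
        return 0.0
    payout = base_payout
    if duration_s >= 120:      # longer calls tend to convert; bump the payout
        payout *= 1.15
    elif duration_s < 30:      # very short calls get trimmed
        payout *= 0.50
    return round(payout, 2)

print(adjusted_payout(45.0, 180, qualified=True))   # 51.75
print(adjusted_payout(45.0, 20, qualified=True))    # 22.5
print(adjusted_payout(45.0, 180, qualified=False))  # 0.0
```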

What was not scored

Conversation intelligence depth was not scored. It matters for enterprise pay-per-call buyers, not the audience here. Generic CRM integration count was not scored. Brand recognition was not scored. None of these correlate with operator fit for the audience this site serves.

Refresh cadence

Annual report with quarterly updates when major releases shift the rankings.

Further reading: schema.org Review markup specification · Wikipedia entry on software review