Research Methodology

How we research and what we hold ourselves to.

The research we publish is only valuable if you can trust it. That requires knowing exactly how it was produced, what standards it was held to, and what our relationship to the vendors we cover actually is.

This page describes our research process, our editorial standards, and the structural separation between our commercial and editorial operations. It is not a marketing document. It is a commitment we are making publicly so you can hold us to it.

The Research Process

Six steps from topic selection to publication.

A thorough product comparison takes between three and six weeks from initial scoping to publication. We could publish faster by cutting corners. We choose not to, because the entire value of what we produce depends on the depth of the research behind it.

The process below applies to our comparison and review content — the research that requires the most rigor. Buyer guides and category overviews follow a similar process with adjustments for content type.

01

Category scoping

Before any product research begins, we define the comparison's audience, decision context, and evaluation dimensions. Who is this comparison for? What stage of the buying process are they in? What are the most consequential differences between products in this category? Scoping takes longer than you might expect. A comparison written for a 50-person company evaluating their first HRIS is a different piece of research than a comparison written for a 500-person company replacing an existing platform.

02

Product access and research

We gather information through multiple channels in parallel: direct product access through free trials and demo environments where available, vendor briefings with product teams (not sales teams), public documentation and release notes, user community discussion in forums and Slack groups, and interviews with practitioners who have implemented and actively used the products. We disclose in our methodology section which of these inputs we had access to for any given comparison. If we could not get direct access to a product, we say so.

03

Dimensional evaluation

Each product is assessed against the dimensions we defined in scoping. We write assessments in narrative form with specific supporting evidence — not scores or rankings, but substantive analysis that explains what we found and why it matters. We do not use rating scales (4.2/5) because they imply false precision and obscure the reasoning behind the judgment. A reader needs to understand why we came to a conclusion, not just what the conclusion was.

04

Fit analysis

For each product, we develop a clear characterization of the buyer it is best suited for — the operational contexts, company sizes, team structures, and use cases where it is the strongest choice. This is the most valuable part of a comparison for most readers, and the hardest to write well. It requires genuine category knowledge, not just feature-level familiarity with the products.

05

Editorial review

Every comparison goes through an editorial review by a domain expert who did not write the piece. The reviewer looks for factual errors, oversimplifications, missing context, and conclusions that are not adequately supported by the evidence. This review frequently sends pieces back for additional research. We publish less than we could because of this step. That is the right trade-off.

06

Publication and maintenance

After publication, we monitor comparisons for product updates that affect their accuracy. When a vendor releases a significant feature update, changes their pricing structure, or is acquired, we review and update the relevant content. We date-stamp all content with the last review date. We do not silently update content — significant changes are noted in the article.

Content Types

What we publish and the standard each type is held to.

Different content types serve different buyer needs. Each has its own research standard, and we hold ourselves to those standards consistently.

Category Overviews

Help readers understand the solution landscape before evaluating specific products.

What it covers

What types of products exist in this category? What are the major architectural approaches? What does a buyer need to understand about the category before evaluating specific options?

Editorial standard

Written by analysts with operational experience in the vertical. Updated annually or when the category undergoes significant change.

Product Comparisons

Side-by-side analysis of competing products within a defined category or sub-category.

What it covers

Detailed evaluation of multiple products against a consistent set of dimensions. Includes fit analysis for different buyer types. Does not include products we have not been able to adequately research.

Editorial standard

Direct product access obtained wherever vendors make it available. Vendor briefings conducted for all included products. Editorial review by a second domain analyst before publication.

Individual Product Reviews

Deep-dive analysis of a single platform for readers who have narrowed their shortlist.

What it covers

More granular than what appears in a comparison: specific use case performance, integration ecosystem depth, implementation complexity, pricing dynamics and total cost of ownership.

Editorial standard

Direct product access required. User interviews with current customers conducted where possible. Vendor briefing required. More frequently updated than comparisons.

Buyer Guides

Framework for approaching a software decision in a given category.

What it covers

How to structure an evaluation, what criteria matter most, common mistakes, and how to build a shortlist. Not product-specific. Useful regardless of which products a buyer ends up evaluating.

Editorial standard

Based on practitioner research and analyst judgment. Updated when market conditions or best practices change materially.

Editorial Standards

The commitments we make to our readers.

These are not aspirational values statements. They are specific commitments about how we operate, written so that you can point to them if we fall short.

We write for buyers, not for vendors.

Every piece of content we publish is written to help a professional make a better purchasing decision. It is not written to generate favorable coverage for any vendor, to satisfy an advertising relationship, or to rank for high-volume keywords at the expense of usefulness. When these goals conflict — and sometimes they do — the buyer's interest wins.

We name limitations explicitly.

Every product we cover has weaknesses. We describe them specifically. A comparison that only describes strengths is not useful research — it is a marketing document. If a product has a weak mobile experience, poor customer support, or limited API documentation, we say so directly, not with euphemisms.

We disclose what we don't know.

If we could not get direct access to a product, we say so. If our analysis of a particular dimension is based primarily on public documentation rather than direct testing, we note it. If a vendor declined to brief us, we disclose it. Readers need to know what weight to put on our conclusions, and that requires knowing the limits of our research.

We correct mistakes publicly.

When we are wrong — about a feature, a pricing structure, a company's market position — we correct it and note the correction in the article. We do not quietly revise content and pretend the error did not happen. Getting something wrong is recoverable. Covering it up is not.

We separate advertising from editorial structurally, not just philosophically.

Our commercial and editorial operations are independent. Advertising and sponsorship decisions do not involve editorial team input, and editorial decisions do not involve commercial team input. A vendor can be an advertiser on our platform and still receive critical coverage in our research. We have declined advertising relationships that came with editorial influence as an expectation. We will continue to.

Editorial Independence

Advertising does not influence editorial. Here is how we enforce that.

NorthRadar Media generates revenue through advertising and sponsorships across our properties. We are transparent about this. We are also transparent about the fact that advertising revenue has no influence over editorial content, and we enforce this through structure rather than good intentions.

Our commercial and editorial operations run as separate functions. The editorial team does not know which vendors are advertising on our properties at any given time. The commercial team does not participate in editorial decisions. There are no internal escalation paths where an advertiser can flag concerns about coverage and expect editorial changes in response.

We have turned down advertising relationships where the vendor's expectation was that advertising spend would result in favorable coverage. We will continue to. The commercial value of our advertising products depends entirely on the audience trust we build through editorial quality. Those two things are not in conflict — but only if we protect the editorial process unconditionally.

If you believe we have published content that reflects commercial influence rather than independent analysis, we want to know. Email hello@northradarmedia.com with your concern and the specific content in question. We will investigate and respond.

Content Currency

How we keep content current.

Software categories change. Products add features, change pricing, get acquired, or exit the market. A comparison published 18 months ago may not reflect the current competitive landscape. We take content currency seriously because outdated research actively misleads buyers.

Significant feature update to a covered product: review and update relevant sections within 30 days.
Pricing structure change: update pricing section within 14 days.
Product acquisition or discontinuation: update immediately with a notice at the top of the article.
New entrant in a covered category: evaluate for inclusion in comparison at next scheduled review.
Reader-submitted factual correction: investigate within 5 business days; update and note correction if valid.
Scheduled review cycle: all comparisons reviewed and updated every 12 months at minimum.

Questions about our methodology?

If you have questions about how a specific piece of research was conducted, or if you believe something we have published is inaccurate, we want to hear from you.