Published January 5, 2026

The Right Way to Judge a New Product Launch (And Why Most Brands Get It Wrong)

“The new chicken sandwich sold 50,000 units in its first month.”

Cool. What does that actually tell you?

Not much.


Trial vs. Repeat

50,000 units sold to 50,000 different customers is a completely different animal than 10,000 customers coming back 5 times each.

High trial with low repeat? Your marketing worked. The product didn’t. You’ve got an expensive lesson, not a winner.

Modest trial with exceptional repeat? You’ve got a sleeping giant. The product is proven—you just need more people to discover it.

These are opposite problems requiring opposite solutions. But if all you’re looking at is “50,000 units sold,” you’ll never know which problem you’re solving.
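The decomposition is simple once you have customer-level transaction data. Here's a minimal sketch (data shapes and names are illustrative, not from any particular POS system) showing how the same 50,000 units splits into trial and repeat:

```python
from collections import Counter

def trial_and_repeat(transactions):
    """Decompose unit sales into trial vs. repeat.

    `transactions` is a list of customer IDs, one entry per unit sold.
    Returns (unique triers, repeat units, share of triers who came back).
    """
    purchases = Counter(transactions)
    trial = len(purchases)                                   # unique customers who tried it
    repeat_units = sum(purchases.values()) - trial           # units beyond each first purchase
    repeat_rate = sum(1 for c in purchases.values() if c > 1) / trial
    return trial, repeat_units, repeat_rate

# Same 50,000 units, two very different stories:
broad = ["c%d" % i for i in range(50_000)]              # 50,000 one-time triers
loyal = ["c%d" % (i % 10_000) for i in range(50_000)]   # 10,000 customers, 5 visits each

print(trial_and_repeat(broad))  # high trial, zero repeat
print(trial_and_repeat(loyal))  # modest trial, exceptional repeat
```

The top-line number is identical in both cases; the decomposition is what tells you whether you're fixing the product or scaling the marketing.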


The Incrementality Question

This is where I see sophisticated operators separate themselves from everyone else.

When those chicken sandwiches sold, where did the sales come from?

New demand: Customers who wouldn’t have visited otherwise came specifically for the new item.

Wallet expansion: Existing customers added it to their usual order. Check size went up.

Substitution: Existing customers ordered the chicken sandwich instead of their usual burger. Total revenue unchanged.

The first two are wins. The third means you added menu complexity, training requirements, supply chain complications, and kitchen execution challenges… for zero incremental revenue.

The frustrating part? Most brands have no idea which scenario they’re in. They see 50,000 units and assume it’s all incremental. Often, a significant portion is just reshuffling existing demand.
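A first-pass classification can come from comparing each launch-window order against the customer's pre-launch behavior. The sketch below is a deliberately crude heuristic (the field names and decision rules are assumptions; a real incrementality estimate needs a holdout or matched control group):

```python
def classify_sale(customer, order, history):
    """Rough incrementality bucket for one launch-window order.

    `history` maps customer -> pre-launch stats (visit frequency, average check).
    Unknown customers are treated as new demand; a bigger-than-usual check
    suggests wallet expansion; otherwise we assume substitution.
    """
    pre = history.get(customer)
    if pre is None or pre["visits_per_month"] == 0:
        return "new_demand"        # wouldn't have visited otherwise
    if order["check_total"] > pre["avg_check"]:
        return "wallet_expansion"  # added the item on top of the usual order
    return "substitution"          # swapped it in; revenue roughly flat

history = {"alice": {"visits_per_month": 4, "avg_check": 12.50}}

print(classify_sale("alice", {"check_total": 18.00}, history))  # wallet_expansion
print(classify_sale("bob",   {"check_total": 9.00},  history))  # new_demand
print(classify_sale("alice", {"check_total": 12.00}, history))  # substitution
```

Even a rough bucketing like this beats assuming all 50,000 units were incremental.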


Who’s Actually Buying It?

Customer composition matters more than the top-line number.

Is the new item attracting your most valuable customers—deepening loyalty with the people who matter most? Is it bringing in new customer segments you weren’t capturing before? Or is it primarily attracting one-time experimenters who try it once and never return?

The same 50,000 units can represent any of these scenarios. Without customer-level analysis, you’re flying blind.
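Answering the composition question starts with a simple tally: who bought, mapped against the segments you already track. A minimal sketch (segment labels are illustrative assumptions):

```python
from collections import Counter

def buyer_composition(buyers, segments):
    """Tally unique launch buyers by pre-existing customer segment.

    `segments` maps customer -> a label like "top_decile" or "occasional";
    anyone not in the map counts as "new_to_brand".
    """
    return Counter(segments.get(c, "new_to_brand") for c in set(buyers))

segments = {"a": "top_decile", "b": "occasional", "c": "occasional"}
result = buyer_composition(["a", "b", "c", "c", "d"], segments)
print(result)  # occasional: 2, top_decile: 1, new_to_brand: 1
```

The interesting follow-up is crossing this tally with repeat behavior: a launch that skews "new_to_brand" with strong repeat is acquisition; one that skews "top_decile" with no check growth may be pure substitution.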


The Time Trap

“Let’s look at the first 30 days” sounds reasonable until you realize:

  • The item launched mid-week with no marketing support
  • A major holiday fell in that window
  • You’re comparing it to products that launched with national advertising campaigns
  • Severe weather impacted traffic during week two
  • A competitor launched something similar the same week

Comparing raw numbers across different 30-day windows is comparing apples to oranges. Smart product analysis uses time-in-market normalization—comparing items at equivalent points in their lifecycle, adjusted for known external factors.

And you need to wait for stabilization. New products follow a pattern: launch spike from novelty and promo support, trough as casual triers move on, then stabilization as true demand emerges. Judging a product during the launch spike is like judging a relationship on the first date.
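The normalization itself is mechanically simple: re-index each product's sales onto a days-since-launch axis so you compare day 30 against day 30, not March against September. A minimal sketch (dates and volumes are made up for illustration):

```python
from datetime import date

def align_by_days_in_market(daily_units, launch_date):
    """Re-index a daily units series to days-since-launch (day 0 = launch day)."""
    return {(d - launch_date).days: units for d, units in daily_units.items()}

# Two launches, six months apart:
item_a = {date(2025, 3, 3): 900, date(2025, 3, 4): 400}
item_b = {date(2025, 9, 15): 500, date(2025, 9, 16): 480}

a = align_by_days_in_market(item_a, date(2025, 3, 3))
b = align_by_days_in_market(item_b, date(2025, 9, 15))

# Now compare at the same lifecycle point, e.g. day 1:
print(a[1], b[1])
```

Adjusting for the external factors (holidays, weather, marketing support) layers on top of this; the days-in-market alignment is just the prerequisite that makes any comparison fair. And the stabilization rule still applies: only the post-trough portion of each aligned series tells you about true demand.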


What Generic AI Gets Wrong

A generic analytics tool will tell you how many units sold and maybe graph the trend. It won’t decompose trial from repeat. It won’t measure cannibalization. It won’t profile the buyers or normalize for launch context.

It might even tell you “strong launch performance indicates healthy customer demand”—which is exactly the kind of confident-sounding-but-possibly-wrong conclusion that makes undifferentiated AI dangerous for product decisions.

This is why ROGER breaks down product launches into their component parts: trial velocity, repeat rates, customer composition, incrementality estimates, and trajectory over time. Because anyone can tell you how many units sold. The question is what that number actually means.


What’s the most misleading product launch metric you’ve seen celebrated? We’ve got stories for days.