
How to run a 30-day pricing experiment that proves customers will pay 30% more for your SaaS feature

I recently ran a 30-day pricing experiment to test a bold hypothesis: customers would willingly pay 30% more for a specific SaaS feature. I want to walk you through exactly how I set it up, what I measured, and the practical lessons that came out of it so you can run your own experiment with confidence. This is hands-on, no-nonsense guidance based on real experience—what I did, what worked, and where I would refine things next time.

Why test a 30% increase for a single feature?

Price is one of the easiest levers to tweak, but it’s also the one with the most risk. Rather than raise the entire subscription price (which can trigger churn), we focused on a single, high-value feature that users repeatedly asked for and that notably reduced friction for power users. My goal was to isolate willingness to pay for that feature without disrupting baseline subscription revenue.

Define the hypothesis and success metrics

Before any code changes, I wrote a crisp hypothesis: heavy users of Feature X will still convert at no less than 60% of the baseline purchase rate when the price is increased by 30%. I chose 60% because even with a reduction in conversion, the higher price could still leave overall ARPU ahead if the drop wasn’t too steep; the arithmetic below makes the trade-off explicit.
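
To make the floor concrete, here is the break-even arithmetic as a minimal Python sketch (the figures mirror the hypothesis above, not production data). Note that feature revenue alone breaks even only when conversion retains about 77% of baseline, so a 60% floor implicitly accepts a short-term dip in feature revenue while watching overall ARPU:

```python
# Break-even check for a +30% price test on a single feature (illustrative numbers).
baseline_conversion = 0.12   # assumed baseline purchase rate for the feature
price_increase = 0.30        # price change under test
retention_floor = 0.60       # hypothesis: conversion stays at >= 60% of baseline

# Conversion retention needed to keep feature revenue exactly flat:
breakeven_retention = 1 / (1 + price_increase)  # ~= 0.769

# Lowest acceptable absolute conversion rate under the hypothesis:
conversion_floor = baseline_conversion * retention_floor  # 0.072

# Feature revenue per exposed user at the floor, indexed to baseline = 1.0:
revenue_index_at_floor = retention_floor * (1 + price_increase)  # 0.78

print(f"Break-even conversion retention: {breakeven_retention:.1%}")
print(f"Hypothesis floor: {conversion_floor:.1%} absolute conversion, "
      f"feature-revenue index {revenue_index_at_floor:.2f}")
```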

Primary metrics I tracked:

  • Conversion rate on the upsell (feature purchase or upgrade)
  • Average revenue per user (ARPU) during the test window
  • Churn and downgrade rate among users exposed to the test
  • Average order value (AOV) for the feature
  • Activation/usage of the feature post-purchase (to ensure perceived value)
Segment your audience deliberately

Segmentation is essential. I didn’t show the new price to everyone. I used three segments:

  • Power users: used the feature category at least 3 times/week in the last 30 days.
  • Potential users: visited the feature page or trialed it but hadn’t purchased.
  • Control group: a random sample of similar users who saw the original price.

Targeting power users reduces false negatives: the people most likely to pay are the ones who’ll reveal true willingness to pay.
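
Here is a minimal sketch of that bucketing in Python; the field names and thresholds are assumptions standing in for whatever your analytics warehouse exposes:

```python
from dataclasses import dataclass

@dataclass
class UserActivity:
    user_id: str
    weekly_feature_uses: float   # average uses/week of the feature category, last 30 days
    visited_feature_page: bool   # viewed the feature page or trialed it
    purchased: bool              # already bought the feature

def assign_segment(u: UserActivity) -> str:
    """Bucket a user with the rules above (thresholds are assumptions)."""
    if u.purchased:
        return "excluded"        # already paying; not part of the upsell test
    if u.weekly_feature_uses >= 3:
        return "power_user"
    if u.visited_feature_page:
        return "potential_user"
    return "general"             # pool for the random control sample

print(assign_segment(UserActivity("u_42", 4.0, True, False)))  # -> power_user
```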

Design the experiment

I ran a parallel A/B test with equal-sized groups for the target segments. The variants were:

  • Control: current price (baseline).
  • Variant: baseline price + 30% for the same feature.

Key design considerations I implemented:

  • Run the experiment for at least 30 days to capture weekly and monthly billing cycles and to smooth out short-term noise.
  • Ensure randomization at the user level (not the session) so each user sees a consistent price; a bucketing sketch follows this list.
  • Log all touchpoints—emails, in-app modals, billing events, cancellations—to correlate behavior with exposure.
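
User-level consistency is easy to get wrong with session-based tooling, so it is worth spelling out. A deterministic hash of the user ID plus an experiment salt keeps every user in the same variant across sessions; this is a sketch, not any particular vendor’s implementation:

```python
import hashlib

def price_variant(user_id: str, experiment: str = "feature_x_plus30", split: float = 0.5) -> str:
    """Deterministically assign a user to control or variant.

    Hashing user_id with an experiment salt means the same user always
    sees the same price, and renaming the experiment reshuffles buckets.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return "variant_plus30" if bucket < split else "control"

assert price_variant("user_123") == price_variant("user_123")  # stable across sessions
```
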
Craft the right messaging

Price reactions are strongly influenced by how you present the change. I used three messaging tactics concurrently:

  • Value-first copy: focused on outcomes enabled by the feature—time saved, revenue gained, errors avoided.
  • Anchoring: showed the feature price next to a higher legacy bundle price to make the 30% increase feel reasonable.
  • Transparency: for existing customers, we clearly communicated what they receive and offered a grandfathered option for a short window.

Example in-app CTA copy: “Unlock prioritized exports—cut reporting time from hours to minutes. Add Feature X for $12/month (was $9).”

Implementation checklist

Here’s the minimal technical stack I used:

  • Feature flagging tool (LaunchDarkly, Split, or homemade) to toggle pricing per user.
  • Analytics (Mixpanel or Segment plus a data warehouse) to track events and cohort behavior.
  • Billing integration (Stripe) with a separate SKU for the standalone feature to attribute revenue precisely.
  • Experiment tracking sheet (or GrowthBook) to log start/end dates, cohorts, traffic split, and daily metrics.
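
On the analytics side, the discipline that matters most is logging an explicit exposure event the moment a user sees a price, plus a purchase event tied to the dedicated SKU. Here is a sketch using Segment’s analytics-python library; the event names, property schema, and experiment label are assumptions, not a prescribed taxonomy:

```python
import analytics  # Segment's analytics-python package

analytics.write_key = "YOUR_SEGMENT_WRITE_KEY"  # placeholder

def log_price_exposure(user_id: str, variant: str, price_usd: float) -> None:
    # Fire the first time a user sees the tested price in-app or in email.
    analytics.track(user_id, "Pricing Experiment Exposed", {
        "experiment": "feature_x_plus30",  # assumed experiment label
        "variant": variant,                # "control" or "variant_plus30"
        "price_usd": price_usd,
    })

def log_feature_purchase(user_id: str, variant: str, sku: str, amount_usd: float) -> None:
    # Fire from the billing webhook for the feature's dedicated SKU.
    analytics.track(user_id, "Feature Purchased", {
        "experiment": "feature_x_plus30",
        "variant": variant,
        "sku": sku,                        # the standalone feature's price ID
        "amount_usd": amount_usd,
    })
```
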
Daily hygiene and monitoring

Every day I checked:

  • Traffic distribution remained balanced.
  • The conversion funnel for the upsell (impression → click → purchase).
  • Any unexpected spikes in support tickets or cancellations.

I also set automated alerts for abnormal churn spikes and a daily digest with key KPIs so I didn’t need to stare at dashboards all day.
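
The alerting does not need to be sophisticated; a daily job comparing today’s churn against a trailing baseline catches most problems. A minimal sketch, with a threshold you would tune to your own volatility:

```python
from statistics import mean, stdev

def churn_alert(daily_churn_rates: list[float], threshold_sigmas: float = 2.0) -> bool:
    """Flag today's churn if it exceeds the trailing mean by N standard deviations.

    daily_churn_rates: trailing window of daily churn rates, today's value last.
    """
    *history, today = daily_churn_rates
    if len(history) < 7:
        return False  # not enough history to judge a spike
    return today > mean(history) + threshold_sigmas * stdev(history)

# A quiet week, then a jump on the final day triggers the alert.
print(churn_alert([0.0030, 0.0032, 0.0031, 0.0029, 0.0033, 0.0030, 0.0031, 0.0065]))  # True
```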

Example results table

Metric                     | Control (baseline) | Variant (+30%) | Result
---------------------------|--------------------|----------------|------------------
Conversion rate to feature | 12%                | 9%             | -25% relative
AOV for feature            | $10                | $13            | +30% nominal
ARPU (overall)             | $22                | $24            | +9% overall
Churn (30-day)             | 3.2%               | 3.5%           | +0.3pp (monitor)

Interpretation and statistical confidence

Even though conversion dropped, revenue per active user rose enough to justify the price bump in the short term. But you must test statistical significance: I calculated confidence intervals on conversions and revenue uplift and used a two-proportion z-test for conversion changes. If you’re not comfortable with stats, use an A/B testing tool that reports significance automatically. A minimal version of the z-test is sketched below.
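
For reference, the two-proportion z-test is a few lines of standard-library Python. This sketch plugs in the conversion rates from the table above with assumed sample sizes of 1,000 users per arm:

```python
from math import erfc, sqrt

def two_proportion_ztest(conv_a: int, n_a: int, conv_b: int, n_b: int) -> tuple[float, float]:
    """Two-sided z-test for a difference between two conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided, normal approximation
    return z, p_value

# 12% of 1,000 control users vs 9% of 1,000 variant users (sample sizes assumed).
z, p = two_proportion_ztest(120, 1000, 90, 1000)
print(f"z = {z:.2f}, p = {p:.4f}")  # p ~= 0.029 here, so the drop is unlikely to be noise
```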

Be wary of short-term wins due to selection bias or novelty. A 30-day window is often enough to detect clear signals, but I always run a follow-up cohorted analysis at 60 and 90 days to check for delayed churn or regression.

    Lessons I learned

  • Segment matters: Power users tolerated the increase better than casual users. Don’t generalize across your entire base.
  • Messaging changes everything: Users who saw explicit value statements converted at higher rates even at the higher price.
  • Grandfathering reduces backlash: Offering existing customers legacy pricing for a limited time prevented support escalations.
  • Measure usage, not just purchases: If people buy but don’t use the feature, the price increase will only hurt retention later.
Next steps to scale the result

If the experiment shows a sustainable uplift, I recommend:

  • Roll out the new price to the most receptive segments first (power users, enterprise trials).
  • Introduce the feature price as part of bundles with clear value anchors.
  • Monitor cohorts monthly for 6 months to ensure long-term retention holds.
  • Test incremental price points (e.g., +10%, +20%, +40%) to find an optimal elastic point; a quick comparison sketch follows this list.
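
Once several price points have data, revenue per exposed user is just conversion times price at each point, and comparing those figures shows where elasticity starts to bite. A sketch with placeholder conversion numbers loosely anchored to the $9 baseline used earlier:

```python
# Hypothetical results from testing several price points for Feature X.
# Each entry: (price_usd, observed_conversion_rate) -- placeholder data.
price_points = [
    (9.00, 0.120),   # baseline
    (9.90, 0.115),   # +10%
    (10.80, 0.105),  # +20%
    (11.70, 0.090),  # +30%
    (12.60, 0.065),  # +40%: conversion falls off sharply here
]

# Revenue per exposed user at each point; the max suggests the sweet spot.
for price, conv in price_points:
    print(f"${price:5.2f} -> revenue per exposed user ${price * conv:.3f}")

best = max(price_points, key=lambda pc: pc[0] * pc[1])
print(f"Revenue-optimal tested point: ${best[0]:.2f}")
```
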
Running a focused 30-day experiment like this is one of the most efficient ways to discover hidden value in your product without risking your entire customer base. It forces you to tighten messaging, measure rigorously, and put customer value at the center of pricing decisions. If you want a starter spreadsheet template for segmentation, metrics, and power calculations to plan your own experiment, reach out and I’ll share it.
