Experiments - Winner Recommender

Know When to Finish Your Experiment

Written by Aldo Peña
Updated this week

Winner Recommender helps you determine when your experiment has enough data to make a decision.

Instead of guessing whether a test is ready to finish, Vidalytics analyzes your results in real time and provides a clear recommendation — showing which variant is leading, how confident the data is, and whether it’s safe to call a winner.

This removes uncertainty and helps you make faster, more confident decisions.

What It Does

Winner Recommender evaluates your experiment continuously and provides:

  • A leading variant (Control or Variant)

  • The performance lift between variants

  • A confidence level indicating how reliable the result is

  • A status recommendation telling you whether to keep running or finish the test

All recommendations are powered by a Bayesian statistical model, which continuously updates as new data comes in.

Experiment Status Indicators

Each experiment includes a status that reflects how ready it is for decision-making:

  • Too Early – Not enough data collected yet

  • Too Close – No meaningful difference between variants

  • Gaining Confidence – A leading variant is emerging

  • Ready to Finish – Enough data to safely declare a winner

These statuses update automatically as new data is collected.
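To make the progression concrete, the statuses above can be thought of as thresholds on sample size and on the Bayesian probability that the leader is truly better. The sketch below is purely illustrative — the threshold values and function names are hypothetical, not Vidalytics' actual rules:

```python
def experiment_status(viewers, best_prob, min_viewers=500, decision_prob=0.95):
    """Illustrative status mapping (hypothetical thresholds).

    best_prob is the probability that the leading variant is truly
    better than the other, as estimated by a Bayesian model.
    """
    if viewers < min_viewers:
        return "Too Early"          # not enough data collected yet
    if best_prob >= decision_prob:
        return "Ready to Finish"    # safe to declare a winner
    if best_prob <= 0.60:
        return "Too Close"          # no meaningful difference yet
    return "Gaining Confidence"     # a leading variant is emerging

print(experiment_status(200, 0.55))   # Too Early
print(experiment_status(2000, 0.97))  # Ready to Finish
```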

Confidence-Based Decisioning

Winner Recommender uses a Bayesian statistical approach to evaluate:

  • Sample size

  • Conversion events

  • Performance differences

Unlike traditional methods that rely on fixed thresholds, Bayesian analysis continuously updates the probability that one variant is better than the other as more data is collected.

This allows for:

  • More intuitive confidence scoring

  • Better decision-making with smaller or evolving datasets

  • Continuous insight instead of “pass/fail” results
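As a general illustration of this idea (a minimal Beta-Binomial sketch, not Vidalytics' exact model), the probability that one variant beats the other can be estimated by sampling each variant's conversion rate from its posterior distribution:

```python
import random

def prob_variant_beats_control(conv_a, views_a, conv_b, views_b, samples=100_000):
    """Monte Carlo estimate of P(variant B's true conversion rate > control A's).

    Each rate gets a Beta(1 + conversions, 1 + non-conversions) posterior,
    i.e. a uniform prior updated by the observed data. As more data arrives,
    the posteriors tighten and this probability becomes more decisive.
    """
    random.seed(42)  # fixed seed for a reproducible estimate
    wins = 0
    for _ in range(samples):
        rate_a = random.betavariate(1 + conv_a, 1 + views_a - conv_a)
        rate_b = random.betavariate(1 + conv_b, 1 + views_b - conv_b)
        if rate_b > rate_a:
            wins += 1
    return wins / samples

# Example: control converts 50/1000 viewers, variant converts 65/1000.
print(prob_variant_beats_control(50, 1000, 65, 1000))
```

Because the probability updates with every new sample, there is no single "pass/fail" moment — the recommendation simply strengthens as evidence accumulates.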

📚 Learn more about the Bayesian statistical approach HERE

Metrics Timeline (Trend View)

You can view how your experiment evolves over time:

  • Track performance trends for each variant

  • Identify when a winner starts to emerge

  • Understand stability vs fluctuation in results

This helps avoid decisions based on short-term spikes.

Custom Guardrails (Optional)

You can define thresholds to control when a test is considered “ready”:

  • Minimum test duration

  • Minimum number of viewers

  • Minimum conversions/events

  • Confidence threshold

  • Minimum detectable effect (MDE)

Guardrails apply at the experiment level and help standardize decision-making.
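A guardrail check amounts to requiring that every threshold be met before a test is flagged as ready. The sketch below uses hypothetical field names and default values for illustration; the actual guardrails are configured in the Vidalytics UI:

```python
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class Guardrails:
    """Hypothetical experiment-level guardrails (illustrative names/defaults)."""
    min_duration: timedelta = timedelta(days=7)
    min_viewers: int = 1000
    min_conversions: int = 50
    confidence_threshold: float = 0.95   # required probability the leader is better
    min_detectable_effect: float = 0.05  # smallest relative lift worth acting on

def ready_to_finish(g, elapsed, viewers, conversions, confidence, observed_lift):
    """An experiment is 'ready' only when every guardrail is satisfied."""
    return (
        elapsed >= g.min_duration
        and viewers >= g.min_viewers
        and conversions >= g.min_conversions
        and confidence >= g.confidence_threshold
        and abs(observed_lift) >= g.min_detectable_effect
    )

g = Guardrails()
print(ready_to_finish(g, timedelta(days=10), 2500, 120, 0.97, 0.08))  # True
print(ready_to_finish(g, timedelta(days=3), 2500, 120, 0.97, 0.08))   # False: too short
```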

Benefits

  • Eliminates Guesswork: Know when your test has enough data to make a reliable decision.

  • Faster Decision-Making: Avoid running tests too long or ending them too early.

  • Higher Confidence Outcomes: Make decisions based on continuously updated statistical confidence.

  • Better Experiment Completion: Helps teams consistently finish tests instead of abandoning them.

  • Clear Visibility Over Time: Understand how results evolve—not just the final outcome.

Notes & Limitations

  • Available on the Pro plan and above (same as Experiments).

  • The system provides recommendations only — final decisions are always made by the user.

  • Guardrails apply to the entire experiment (not per variant).

  • Data may take up to 2 hours to fully update after new activity.


For additional questions, feedback, or assistance, please feel free to reach out directly to our Customer Happiness Team at [email protected]. 😊
