

Know When to Finish Your Experiment

Written by Aldo Peña
Updated over 3 weeks ago

Winner Recommender helps you determine when your experiment has enough data to make a decision.

Instead of guessing whether a test is ready to finish, Vidalytics analyzes your results in real time and provides a clear recommendation — showing which variant is leading, how confident the data is, and whether it’s safe to call a winner.

This removes uncertainty and helps you make faster, more confident decisions.

What It Does

Winner Recommender evaluates your experiment continuously and provides:

  • A leading variant (Control or Variant)

  • The performance lift between variants

  • A confidence level indicating how reliable the result is

  • A status recommendation telling you whether to keep running or finish the test

All recommendations are powered by a Bayesian statistical model, which continuously updates as new data comes in.

Experiment Status Indicators

Each experiment includes a status that reflects how ready it is for decision-making:

  • Too Early – Not enough data collected yet

  • Too Close – No meaningful difference between variants

  • Gaining Confidence – A leading variant is emerging

  • Ready to Finish – Enough data to safely declare a winner

These statuses update automatically as new data is collected.
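As a rough illustration, the statuses above behave like a simple decision ladder: each rung must be cleared before the next status applies. The thresholds and rules below are hypothetical examples, not Vidalytics' internal logic:

```python
def experiment_status(viewers, min_viewers, confidence, lift, mde,
                      confidence_threshold=0.95):
    """Map current experiment data to a status label.

    Illustrative rules only -- the real model weighs more signals.
    `lift` is the observed difference between variants; `mde` is the
    minimum detectable effect considered meaningful.
    """
    if viewers < min_viewers:
        return "Too Early"            # not enough data collected yet
    if abs(lift) < mde:
        return "Too Close"            # no meaningful difference
    if confidence < confidence_threshold:
        return "Gaining Confidence"   # a leading variant is emerging
    return "Ready to Finish"          # safe to declare a winner
```

As more viewers and conversions arrive, an experiment naturally climbs this ladder, which is why the status updates on its own over time.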

Confidence-Based Decisioning

Winner Recommender uses a Bayesian statistical approach to evaluate:

  • Sample size

  • Conversion events

  • Performance differences

Unlike traditional methods that rely on fixed thresholds, Bayesian analysis continuously updates the probability that one variant is better than the other as more data is collected.

This allows for:

  • More intuitive confidence scoring

  • Better decision-making with smaller or evolving datasets

  • Continuous insight instead of “pass/fail” results

📚 Learn more about the Bayesian statistical approach HERE
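To make the Bayesian idea concrete, here is a minimal sketch of how the probability that one variant beats another can be estimated from raw counts, using Beta posteriors and Monte Carlo sampling. This is a standard Beta-Binomial technique for illustration, not Vidalytics' exact model:

```python
import random

def prob_b_beats_a(conv_a, views_a, conv_b, views_b, samples=20000, seed=42):
    """Estimate P(variant B's true conversion rate > variant A's)
    by sampling from each variant's Beta posterior (uniform prior)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(samples):
        # Draw a plausible conversion rate for each variant
        # from its posterior distribution.
        rate_a = rng.betavariate(1 + conv_a, 1 + views_a - conv_a)
        rate_b = rng.betavariate(1 + conv_b, 1 + views_b - conv_b)
        if rate_b > rate_a:
            wins += 1
    return wins / samples
```

With 120/1,000 conversions for Control and 150/1,000 for the Variant, this returns a probability around 0.97 that the Variant is genuinely better. Because the posterior updates with every new viewer, the confidence score evolves continuously instead of waiting for a fixed sample size.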

Guardrails

You can define thresholds to control when a test is considered “ready”.

Minimum test duration: The minimum amount of time your experiment must run before it can be considered for a reliable decision.

Minimum number of viewers: The minimum number of viewers required to ensure your test has enough data to produce meaningful insights.

Minimum conversions/events: The minimum number of conversion actions needed before the results can be considered statistically relevant.

Confidence threshold: The required level of statistical confidence needed before recommending a winner.

Minimum detectable effect (MDE): The smallest performance difference between variants that the system considers meaningful enough to act on.

Note: Guardrails apply at the experiment level and help standardize decision-making. Each guardrail comes with a recommended default value, which is what you'll see set initially. We don't recommend editing these defaults unless it is absolutely necessary.
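Conceptually, the guardrails combine as an all-or-nothing gate: a winner is only recommended once every threshold is met. A minimal sketch, using made-up default values rather than Vidalytics' actual ones:

```python
from dataclasses import dataclass

@dataclass
class Guardrails:
    """Illustrative guardrail thresholds (hypothetical defaults)."""
    min_days: int = 14                     # minimum test duration
    min_viewers: int = 1000                # minimum number of viewers
    min_conversions: int = 100             # minimum conversions/events
    confidence_threshold: float = 0.95     # required confidence level
    min_detectable_effect: float = 0.02    # smallest lift worth acting on

def ready_to_finish(days, viewers, conversions, confidence, lift,
                    g=Guardrails()):
    """A test is only 'ready' when every guardrail is satisfied."""
    return (days >= g.min_days
            and viewers >= g.min_viewers
            and conversions >= g.min_conversions
            and confidence >= g.confidence_threshold
            and abs(lift) >= g.min_detectable_effect)
```

Failing any single guardrail keeps the experiment running, which is why raising one threshold (say, the confidence level) can delay a recommendation even when all the other conditions are already met.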

Benefits

  • Eliminates Guesswork: Know when your test has enough data to make a reliable decision.

  • Faster Decision-Making: Avoid running tests too long or ending them too early.

  • Higher Confidence Outcomes: Make decisions based on continuously updated statistical confidence.

  • Better Experiment Completion: Helps teams consistently finish tests instead of abandoning them.

  • Clear Visibility Over Time: Understand how results evolve—not just the final outcome.

Notes & Limitations

  • Available on the Pro plan and up (same as Experiments).

  • The system provides recommendations only — final decisions are always made by the user.

  • Data may take up to 2 hours to fully update after new activity.


For additional questions, feedback or assistance please feel free to reach out directly to our Customer Happiness Team at [email protected]. 😊
