
Vidalytics Experiments

Run Video-Native A/B Tests Inside Vidalytics

Written by Aldo Peña
Updated this week

The Experiments feature lets you run video-native A/B tests directly inside Vidalytics to identify which video performs better, based on a single business metric tied to revenue.

Instead of relying on external CRO tools or manually splitting traffic, you can test different video versions (hooks, offers, Smart Vid flows) directly within your existing embed.

Launch tests in minutes. Finish them fast. Roll out winners with confidence.

🚫 Restrictions: Available on our Pro Plan and above.

Who It’s For

Experiments is built for:

  • High-traffic marketers running VSL-heavy funnels

  • Agencies managing multiple campaigns

  • Teams optimizing view → click → opt-in → purchase performance

  • Businesses that want structured experimentation instead of random edits

Especially useful when funnels plateau and incremental improvements matter.


What It Enables

  • Launch a video A/B test using your existing Control embed

  • Optimize toward a single primary metric (e.g., Click Rate, Opt-In Rate, Conversion Rate)

  • Get a clear directional read without statistical jargon

  • Finish a test and either keep the current experience or declare and roll out a winner

  • Build a repeatable, organized testing habit

How to Create an Experiment

Quick video walkthrough on how to create an Experiment in Vidalytics!

  1. Go to Experiments → Create Experiment

  2. Set experiment basics:

    • Name (recommended: “What + where” e.g., Hook Test — Homepage VSL)

    • Optional description or hypothesis

  3. Select variants:

    • Control (existing video)

    • Variant (challenger video)

  4. Choose one success metric:

    • Click Rate

    • Opt-In Rate

    • Conversion Rate

    • Avg % Watched

    • Unmute Rate

    • Bounce Rate

  5. Set traffic allocation (default 50/50; see the allocation sketch after these steps)

  6. Save experiment

  7. Use the Control video embed on your page

If the Control video is already embedded, you can start immediately.
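
Under the hood, a 50/50 allocation is typically implemented by deterministically bucketing each visitor, so the same person sees the same video on every page load. Vidalytics doesn't publish its implementation; the sketch below is a generic illustration, and every name in it (fnv1a, assignVariant, visitorId) is hypothetical.

```typescript
// Generic sketch of deterministic 50/50 traffic allocation.
// Not Vidalytics' actual code; all names are illustrative only.

// A tiny string hash (FNV-1a) so the same visitor always lands
// in the same bucket across page loads.
function fnv1a(input: string): number {
  let hash = 0x811c9dc5;
  for (let i = 0; i < input.length; i++) {
    hash ^= input.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193);
  }
  return hash >>> 0; // force unsigned 32-bit
}

type Variant = "control" | "variant";

// controlShare = 0.5 reproduces the default 50/50 split.
function assignVariant(
  visitorId: string,
  experimentId: string,
  controlShare = 0.5
): Variant {
  const bucket = fnv1a(`${experimentId}:${visitorId}`) / 0xffffffff;
  return bucket < controlShare ? "control" : "variant";
}

// The same visitor/experiment pair always gets the same answer.
console.log(assignVariant("visitor-123", "hook-test")); // stable across calls
```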

Managing the Experiment Lifecycle

Start experiment

  • Begins traffic splitting

  • Collects results where embed is installed

Pause experiment

  • Stops splitting traffic

  • 100% of visitors see Control

  • Data collected remains available

Resume experiment

  • Restarts traffic splitting

  • May take a few minutes to propagate across embeds

Finish experiment

  • Ends the run

  • Locks results

  • Prompts you to select an outcome
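
Taken together, these four actions behave like a small state machine. Here's a minimal sketch of the transitions described above, assuming the state names Draft, Running, Paused, and Finished (Running and Paused appear later in this article; Draft and Finished are our labels, not Vidalytics' API):

```typescript
// Illustrative state machine for the experiment lifecycle.
// State names are inferred from this article, not an official API.
type State = "Draft" | "Running" | "Paused" | "Finished";

const transitions: Record<State, State[]> = {
  Draft: ["Running"],              // Start experiment
  Running: ["Paused", "Finished"], // Pause / Finish
  Paused: ["Running", "Finished"], // Resume / Finish
  Finished: [],                    // results are locked
};

function canTransition(from: State, to: State): boolean {
  return transitions[from].includes(to);
}

console.log(canTransition("Paused", "Running"));   // true  (Resume)
console.log(canTransition("Finished", "Running")); // false (results locked)
```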

Metrics

Timeline (Trend View)

You can view how your experiment evolves over time:

  • Track performance trends for each variant

  • Identify when a winner starts to emerge

  • Understand stability vs fluctuation in results

This helps avoid decisions based on short-term spikes.

Video engagement

You can view and compare both videos side by side, analyzing their performance frame by frame — including audience behavior, conversions, and any available segments within the selected metrics.

This is the default visualization you’ll see in your video analytics, presented as a graph. It’s the most effective way to quickly identify where viewers are dropping off, where they’re taking action, and where conversions are happening.

Declaring a Winner

When finishing the experiment, choose:

No winner

  • Keeps current Control experience live

  • Ends experiment

Declare winner

  • Select the winning variant (Control or Variant)

  • Optional: Apply winner to 100% of traffic

Embed swap behavior

If you apply the Variant as the winner to 100% of traffic:

  • Control embed placements now show Variant

  • Any other placements using Variant will now show Control

This embed swap behavior is expected.
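
One way to picture the swap: applying the Variant as the winner exchanges which video each placement renders. A minimal sketch, with all identifiers hypothetical:

```typescript
// Illustration of the embed swap described above (names are made up).
type EmbedId = "controlEmbed" | "variantEmbed";

const beforeFinish: Record<EmbedId, string> = {
  controlEmbed: "control-video",
  variantEmbed: "variant-video",
};

// After applying the Variant as the 100% winner, the assignments swap:
const afterWinnerApplied: Record<EmbedId, string> = {
  controlEmbed: "variant-video", // Control placements now show Variant
  variantEmbed: "control-video", // Variant placements now show Control
};
```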

Winner Recommender

Winner Recommender helps you determine when your experiment has enough data to make a decision.

Instead of guessing whether a test is ready to finish, Vidalytics analyzes your results in real time and provides a clear recommendation — showing which variant is leading, how confident the data is, and whether it’s safe to call a winner.

Each experiment includes a status that reflects how ready it is for decision-making:

  • Too Early – Not enough data collected yet

  • Too Close – No meaningful difference between variants

  • Gaining Confidence – A leading variant is emerging

  • Ready to Finish – Enough data to safely declare a winner

These statuses update automatically as new data is collected. This removes uncertainty and helps you make faster, more confident decisions.
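
Vidalytics doesn't document the statistics behind these statuses, but conceptually they track two things: whether you have enough traffic, and how confident the observed difference between variants is. The sketch below maps a standard two-proportion z-test onto the four statuses; the thresholds and every identifier are invented for illustration, not the product's actual method.

```typescript
// Generic sketch of how readiness statuses could be derived from data.
// The z-test and all thresholds are illustrative assumptions, not
// Vidalytics' published method.

type Status = "Too Early" | "Too Close" | "Gaining Confidence" | "Ready to Finish";

interface Arm {
  visitors: number;    // visitors who saw this variant
  conversions: number; // visitors who hit the success metric
}

// Standard-normal CDF (Abramowitz & Stegun approximation 26.2.17).
function normCdf(z: number): number {
  const t = 1 / (1 + 0.2316419 * Math.abs(z));
  const d = 0.3989423 * Math.exp((-z * z) / 2);
  const p =
    d * t * (0.3193815 + t * (-0.3565638 + t * (1.781478 + t * (-1.821256 + t * 1.330274))));
  return z > 0 ? 1 - p : p;
}

function recommend(control: Arm, variant: Arm): Status {
  const minSample = 100; // invented per-arm floor
  if (control.visitors < minSample || variant.visitors < minSample) return "Too Early";

  const p1 = control.conversions / control.visitors;
  const p2 = variant.conversions / variant.visitors;
  const pooled =
    (control.conversions + variant.conversions) / (control.visitors + variant.visitors);
  const se = Math.sqrt(pooled * (1 - pooled) * (1 / control.visitors + 1 / variant.visitors));
  if (se === 0) return "Too Close";

  const z = Math.abs(p2 - p1) / se;
  const confidence = 2 * normCdf(z) - 1; // two-sided confidence in the observed gap

  if (confidence >= 0.95) return "Ready to Finish";   // invented threshold
  if (confidence >= 0.8) return "Gaining Confidence"; // invented threshold
  return "Too Close";
}

// 8% vs 10.5% conversion on 1,000 visitors per arm: about 94.6% confidence,
// just under the invented 95% cutoff.
console.log(recommend({ visitors: 1000, conversions: 80 },
                      { visitors: 1000, conversions: 105 })); // "Gaining Confidence"
```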

📚 Learn More about Winner Recommender HERE

Best Practices

  • Run experiments only on high-traffic pages

  • Optimize one variable at a time (e.g., hook, offer, CTA)

  • Let tests gather meaningful data before finishing

  • Document learnings to build a testing habit

System Rules & Limitations

  • A/B only (2 variants).

  • One success metric per experiment.

  • Variants locked after start.

  • Traffic allocation locked after start.

  • Safe fallback: if the experiment fails to apply, Control shows (see the sketch after this list).

  • Cannot delete videos used in Running/Paused experiments.

  • Vid Conditions restricted during active test.

  • Email embed restricted.

  • Embed swapping restricted.

  • Resetting stats erases variant data.

  • After Finish: videos can be edited or deleted normally.
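
The safe-fallback rule above means a delivery failure never blanks the page. A minimal sketch of the idea, assuming a hypothetical fetchAssignment call; none of these names are Vidalytics' real API:

```typescript
// Hypothetical illustration of the safe-fallback rule.
// If the variant assignment can't be fetched or applied,
// the embed falls back to the Control video.
type Variant = "control" | "variant";

async function resolveVideo(
  fetchAssignment: () => Promise<Variant> // hypothetical network call
): Promise<string> {
  try {
    const assigned = await fetchAssignment();
    return assigned === "variant" ? "variant-video-id" : "control-video-id";
  } catch {
    return "control-video-id"; // experiment failed to apply: Control shows
  }
}
```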

Videos used in active tests are highlighted in:

  • My Vids

  • Vid Settings

  • Vid Stats

🤔 Have any questions regarding Experiments? Join our Live Q&A session! Register HERE


If you have questions, feedback, or need assistance, contact our Customer Happiness Team at [email protected].
