
Vidalytics Experiments

Run Video-Native A/B Tests Inside Vidalytics

Written by Aldo Peña
Updated today

The Experiments feature allows you to run video-native A/B tests directly inside Vidalytics to identify which video performs better — based on a single business metric tied to revenue.

Instead of relying on external CRO tools or manually splitting traffic, you can test different video versions (hooks, offers, Smart Vid flows) directly within your existing embed.

Launch tests in minutes. Finish them fast. Roll out winners with confidence.


Who It’s For

Experiments is built for:

  • High-traffic marketers running VSL-heavy funnels

  • Agencies managing multiple campaigns

  • Teams optimizing view → click → opt-in → purchase performance

  • Businesses that want structured experimentation instead of random edits

Especially useful when funnels plateau and incremental improvements matter.


What It Enables

  • Launch a video A/B test using your existing Control embed

  • Optimize toward a single primary metric (e.g., Click Rate, Opt-In Rate, Conversion Rate)

  • Get a clear directional read without statistical jargon

  • Finish a test and either keep the current experience or declare and roll out a winner

  • Build a repeatable, organized testing habit

Availability

  • Plans: Available on the Pro Plan and up.

  • Required Permissions:

    • View Experiment

    • Add/Edit Experiment

    • View Video Analytics (required to view experiment analytics)

What Experiments Is (Mental Model)

Experiments allows you to:

  1. Choose a Control video (existing embed)

  2. Create a Variant video (challenger)

  3. Select one success metric

  4. Split eligible traffic between both (see the sketch after this list)

  5. Compare performance side-by-side

  6. Decide whether to:

    • Keep current experience

    • Declare and roll out a winner
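
Step 4 is worth a closer look. Vidalytics handles the split for you, but as a mental model, a deterministic 50/50 split typically hashes a stable visitor ID so the same visitor always sees the same version across page loads. Here is a minimal sketch in TypeScript, assuming a generic hash-based bucketing approach; this is illustrative only, not Vidalytics' internal code:

```typescript
// Illustrative only: not Vidalytics' internal code.
// A deterministic split hashes a stable visitor ID so the same
// visitor lands in the same bucket on every visit.

type Variant = "control" | "variant";

// Simple string hash (FNV-1a); any stable hash works here.
function hash(id: string): number {
  let h = 2166136261;
  for (let i = 0; i < id.length; i++) {
    h ^= id.charCodeAt(i);
    h = Math.imul(h, 16777619);
  }
  return h >>> 0; // force unsigned 32-bit
}

// allocation = fraction of traffic sent to the Variant (0.5 = 50/50)
function assignBucket(visitorId: string, allocation = 0.5): Variant {
  const fraction = hash(visitorId) / 0xffffffff; // map to [0, 1]
  return fraction < allocation ? "variant" : "control";
}

// The same visitor ID always returns the same bucket:
console.log(assignBucket("visitor-123")); // stable across calls
```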

How to Create an Experiment

Quick video walkthrough on how to create an Experiment in Vidalytics!

  1. Go to Experiments → Create Experiment

  2. Set experiment basics:

    • Name (recommended: “What + where” e.g., Hook Test — Homepage VSL)

    • Optional description or hypothesis

  3. Select variants:

    • Control (existing video)

    • Variant (challenger video)

  4. Choose one success metric (common definitions are sketched after this list):

    • Click Rate

    • Opt-In Rate

    • Conversion Rate

    • Avg % Watched

    • Unmute Rate

    • Bounce Rate

  5. Set traffic allocation (default 50/50)

  6. Save experiment

  7. Use the Control video embed on your page

If the Control video is already embedded, you can start immediately.
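
Vidalytics computes the success metrics for you, and the exact formulas are internal. As a rough mental model only, here is a sketch using common industry definitions, with each rate expressed as a share of plays. Treat the field names and formulas as assumptions, not the product's exact math:

```typescript
// Assumed, common-sense definitions for illustration only.
// Vidalytics computes these internally; exact formulas may differ.

interface VariantStats {
  plays: number;       // video plays recorded for this variant
  clicks: number;      // CTA clicks
  optIns: number;      // opt-in form submissions
  conversions: number; // purchases
}

const rate = (events: number, plays: number) =>
  plays === 0 ? 0 : events / plays;

function summarize(s: VariantStats) {
  return {
    clickRate: rate(s.clicks, s.plays),           // Click Rate
    optInRate: rate(s.optIns, s.plays),           // Opt-In Rate
    conversionRate: rate(s.conversions, s.plays), // Conversion Rate
  };
}

console.log(summarize({ plays: 1000, clicks: 180, optIns: 90, conversions: 25 }));
// → { clickRate: 0.18, optInRate: 0.09, conversionRate: 0.025 }
```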

Managing the Experiment Lifecycle

Start Experiment

  • Begins traffic splitting

  • Collects results wherever the embed is installed

Pause Experiment

  • Stops splitting traffic

  • 100% of visitors see Control

  • Data collected remains available

Resume Experiment

  • Restarts traffic splitting

  • May take a few minutes to propagate across embeds

Finish Experiment

  • Ends the run

  • Locks results

  • Prompts you to select an outcome
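
One way to picture these four controls is as a small state machine. The sketch below encodes only the transitions described above; the ExperimentState names and canTransition helper are hypothetical, not a Vidalytics API:

```typescript
// A sketch of the lifecycle rules above; not a Vidalytics API.

type ExperimentState = "draft" | "running" | "paused" | "finished";

const transitions: Record<ExperimentState, ExperimentState[]> = {
  draft: ["running"],              // Start Experiment
  running: ["paused", "finished"], // Pause / Finish
  paused: ["running", "finished"], // Resume / Finish
  finished: [],                    // results locked; read-only
};

function canTransition(from: ExperimentState, to: ExperimentState): boolean {
  return transitions[from].includes(to);
}

console.log(canTransition("running", "paused"));   // true
console.log(canTransition("finished", "running")); // false: Finish locks results
```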

Declaring a Winner

When finishing the experiment, choose:

No Winner

  • Keeps current Control experience live

  • Ends experiment

Declare Winner

  • Select the winning variant (C = Control, V = Variant)

  • Optional: Apply winner to 100% of traffic

Critical Gotcha: Embed Swap Behavior

If you apply the Variant as the winner to 100% of traffic:

  • Placements that used the Control embed now show the Variant

  • Any other placements that used the Variant embed now show the Control

For example, if the Control was embedded on your homepage and the Variant on a thank-you page, applying the Variant as the winner means the homepage now plays the Variant and the thank-you page now plays the Control. This embed swap is expected behavior.

Viewing Experiment Results

Inside the Experiment Profile you can see:

Summary

  • Key metrics per variant

  • Difference between them
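
The "difference" is typically reported as an absolute gap and a relative lift between the two variants' rates. Here is a back-of-envelope sketch of that math; this is an assumption about the arithmetic, and the dashboard may present it differently:

```typescript
// Back-of-envelope comparison; assumed math, not the dashboard's exact output.

function compareRates(controlRate: number, variantRate: number) {
  const absoluteDiff = variantRate - controlRate;
  const relativeLift = controlRate === 0
    ? Infinity
    : absoluteDiff / controlRate;
  return { absoluteDiff, relativeLift };
}

// Control converts 10% of plays, Variant converts 12%:
console.log(compareRates(0.10, 0.12));
// → absoluteDiff ≈ 0.02, relativeLift ≈ 0.2 (a 20% relative lift)
```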

Engagement Graph

  • Audience size

  • Conversions

  • Skips / rewinds

Not Available (Yet)

  • Timeline view

  • Funnel view

  • Filters

  • Segment breakdown

Important note:
Vid Stats uses the same data. During active tests, stats may appear “mixed” because traffic is split between two videos.

Editing Experiments

Draft

Everything editable.

Running / Paused

Only safe fields are editable:

  • Name

  • Notes

  • Goal metric

Core test setup is locked.

Finished / Archived

Read-only.

System Rules & Limitations

  • A/B only (2 variants)

  • One success metric per experiment

  • Variants locked after start

  • Traffic allocation locked after start

  • Safe fallback: if the experiment fails to apply, visitors see the Control

  • Cannot delete videos used in Running/Paused experiments

  • Vid Conditions restricted during active test

  • Email embed restricted

  • Embed swapping restricted

  • Resetting stats erases variant data

  • After Finish: videos can be edited or deleted normally

Videos used in active tests are highlighted in:

  • My Vids

  • Vid Settings

  • Vid Stats

Not Supported

  • Settings-only experiments

  • A/B/n (3+ variants)

  • Multi-goal optimization

  • Guardrail auto-stop

  • Slack notifications

  • Auto-finish rules

  • Advanced statistical modes (Bayesian, CUPED, etc.)

Edge Cases (Expected Behavior)

  • If the Control embed is not live → no data is collected

  • If Control is embedded on multiple pages → experiment runs on all

  • Paused → 100% traffic to Control

  • Resume may take minutes to propagate

  • To change variants mid-run → duplicate experiment

  • Deleting a video used in a Draft experiment makes the draft unstartable

  • Deleting a video used in a Running/Paused experiment is restricted

  • Applying winner may flip embeds elsewhere

  • Cannot run two experiments on the same video simultaneously

Best Practices

  • Run experiments only on high-traffic pages

  • Optimize one variable at a time (e.g., hook, offer, CTA)

  • Let tests gather meaningful data before finishing (see the sketch after this list)

  • Document learnings to build a testing habit
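
"Meaningful data" depends on your baseline rate and the lift you hope to detect. If you want a rough pre-finish check, here is a sketch using the standard two-proportion sample-size approximation. This is an editorial assumption for illustration; Vidalytics does not expose this calculation:

```typescript
// Rough per-variant sample size for detecting a relative lift,
// using the standard two-proportion approximation with
// alpha = 0.05 (two-sided) and 80% power (z values hard-coded).
// An illustrative assumption, not a Vidalytics feature.

function roughSampleSize(baselineRate: number, relativeLift: number): number {
  const zAlpha = 1.96; // two-sided 95% confidence
  const zBeta = 0.84;  // 80% power
  const p1 = baselineRate;
  const p2 = baselineRate * (1 + relativeLift);
  const pBar = (p1 + p2) / 2;
  const numerator =
    (zAlpha * Math.sqrt(2 * pBar * (1 - pBar)) +
     zBeta * Math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2;
  return Math.ceil(numerator / (p2 - p1) ** 2);
}

// Detecting a 20% relative lift on a 5% baseline conversion rate
// needs roughly 8,000+ plays per variant:
console.log(roughSampleSize(0.05, 0.2));
```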


If you have questions, feedback, or need assistance, contact our Customer Happiness Team at [email protected].
