How to Add AV1 to Your Video Pipeline Without Disrupting What Already Works

Published April 6, 2026

Better compression, lower delivery costs, and a safer path to modern codec adoption with Qencode and Wasabi

At a Glance

Topic: Adopting AV1 encoding in production video workflows

Challenge: Introducing a new codec without risking existing playback or creating operational overhead

Solution: Qencode for AV1 transcoding via API, Wasabi for versioned S3-compatible object storage

Outcome: 41–57% bitrate reduction vs. H.264 at equivalent Video Multimethod Assessment Fusion (VMAF) scores, safe parallel rollout, predictable storage costs with no egress fees

Every video team hits this moment

Your H.264 pipeline is stable, but your catalog is expanding fast. As libraries move into 4K and 8K, the bandwidth bill keeps climbing: at 8K, you’re encoding and delivering 16 times the pixels of 1080p. Across a full resolution ladder, the cost multiplies quickly and becomes very hard to absorb without better compression.

AV1 addresses this challenge on multiple fronts. It’s royalty-free, widely supported, and provides better compression than H.264 at the same quality. These advantages grow as resolution increases: the more your catalog moves toward 4K and 8K, the more costly it becomes not to adopt AV1.

Most teams that have considered AV1 run into the same concerns: high encoding costs, playback gaps on older devices, and the risk of disrupting a stable pipeline. Those concerns were valid when the tooling was less mature, but the economics have since improved enough that migration is now a practical choice for most teams.

What is AV1 and why does it matter?

AV1 is an open, royalty-free video codec developed by the Alliance for Open Media. It was built with a specific goal: deliver better compression than H.264 and High Efficiency Video Coding (HEVC) without the licensing costs and patent fragmentation that slowed HEVC adoption.

What is a video codec?

A codec (coder-decoder) is a standard that defines how to encode and decode video data. The encoder compresses raw video by removing redundant information: for example, it stores only what changed between frames rather than re-encoding every pixel, and uses mathematical transforms to represent visual detail more compactly. The decoder reverses this process to reconstruct frames for playback.

Codecs balance three variables: bitrate, visual quality, and computational complexity. A more efficient codec delivers the same perceived quality at a lower bitrate, or better quality at the same bitrate, usually at the cost of heavier encoding computation. 

The codec is the specification that defines the rules for the bitstream’s structure. However, the encoder is a specific implementation of that specification. Two different encoders for the same codec can vary significantly in speed and compression quality, because the specification defines what a valid bitstream looks like but not how the encoder must arrive at it. 

How does AV1 differ from H.264 and HEVC?

H.264 has been the dominant codec for over a decade and is widely supported across devices, players, and encoding pipelines. HEVC improved on H.264 with roughly 40% better compression at equivalent quality, but fragmented licensing and royalty requirements limited its adoption in practice. AV1 improves on both, delivering up to 30–50% better compression efficiency than H.264 (depending on content and encoding settings) at equivalent visual quality, with meaningful gains over HEVC as well, which translates into lower infrastructure costs without the licensing overhead.

Why does compression efficiency matter at scale?

At 1080p, better compression is a useful cost saving. At 4K and 8K, the differences compound because a single source file is already large, and a full resolution ladder can include six or more renditions. Running AV1 alongside existing codecs also means more total renditions in storage. Every percentage point of compression efficiency translates directly into storage and egress costs at that scale.
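To make "compounds at scale" concrete, here is a back-of-the-envelope delivery calculation. Every number in it (the H.264 bitrate, the view-hours, the per-GB CDN rate) is an illustrative assumption, not a quoted price; only the ~52% 4K reduction comes from the figures cited later in this post:

```python
# Back-of-the-envelope delivery math. All inputs are illustrative assumptions.

H264_4K_MBPS = 16.0            # assumed average H.264 bitrate for one 4K rendition
AV1_REDUCTION = 0.52           # ~52% lower bitrate at 4K (see the figures below)
VIEW_HOURS_PER_MONTH = 100_000 # hypothetical audience
CDN_PRICE_PER_GB = 0.02        # hypothetical CDN rate, USD

def monthly_delivery_gb(mbps, hours):
    # Mbps -> GB/hour: megabits/s * 3600 s / 8 bits-per-byte / 1000 MB-per-GB
    return mbps * 3600 / 8 / 1000 * hours

h264_gb = monthly_delivery_gb(H264_4K_MBPS, VIEW_HOURS_PER_MONTH)
av1_gb = monthly_delivery_gb(H264_4K_MBPS * (1 - AV1_REDUCTION), VIEW_HOURS_PER_MONTH)

print(f"H.264: {h264_gb:,.0f} GB -> ${h264_gb * CDN_PRICE_PER_GB:,.0f}/month")
print(f"AV1:   {av1_gb:,.0f} GB -> ${av1_gb * CDN_PRICE_PER_GB:,.0f}/month")
```

Under these assumptions a single 4K rung drops from 720,000 GB to roughly 345,600 GB of monthly delivery; multiply that across a full ladder and the gap is the budget line item.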

Key components of an AV1 production workflow

Proper AV1 adoption can touch every layer of a video pipeline. Understanding the requirements and trade-offs can help teams plan an incremental, testable, and operationally clean rollout. 

Encoding infrastructure: AV1 is significantly more compute-intensive to encode than H.264. Achieving its compression gains requires more processing power per job, which makes infrastructure choices consequential. Cloud-based transcoding scales compute on demand and avoids the need for dedicated hardware investment. Teams running large libraries or high-resolution content should benchmark encoding times and costs before committing to a rollout timeline. 

Transcoding software and APIs: The transcoding layer converts source video into AV1 output renditions. API-based approaches are better suited for teams processing large libraries or maintaining consistent job definitions across many assets because job parameters can be versioned and applied programmatically rather than managed per-job. Automated pipelines can trigger transcoding in response to upstream events (such as a new file upload) and improve turnaround time. A well-designed API reduces the operational burden of large-scale transcoding and gives engineering teams precise control over how their media is processed. 

Storage layer: AV1 adoption means more renditions per asset. Running AV1 in parallel with H.264 or HEVC is the safest approach, but it increases the total storage footprint. At 4K and above, where individual renditions are already large, that increase is significant. The storage layer needs to support versioned output structures, S3-compatible access for integration with encoding and delivery tooling, and pricing that does not scale unpredictably with egress or API call volume.

Packaging and manifest logic: After encoding, video renditions are packaged into segments and organized using manifest files that conform to adaptive bitrate streaming protocols (most commonly HTTP Live Streaming [HLS] and Dynamic Adaptive Streaming over HTTP [DASH]). The manifest lists the available renditions, including their properties such as resolution, bitrate, and codec. During playback, the client-side player reads the manifest and uses its adaptive bitrate algorithm to select the appropriate rendition based on network conditions, device capabilities, and any business rules implemented in the player. When supporting AV1 alongside existing codecs such as H.264 or HEVC, each codec requires its own set of renditions listed in the manifest with the correct codec identifiers, so that players and devices that do not recognize AV1 will automatically fall back to a supported codec. 

Playback and device compatibility: AV1 support is broader than ever; however, older devices, certain smart TV platforms, and some legacy players do not support AV1 decoding. Before expanding AV1 as the default for any audience segment, map your device distribution against current AV1 support data and run targeted playback testing. After rollout, monitoring buffering rates, startup times, and error rates for AV1 segments provides the signal needed to validate quality and catch edge cases early. 

What benefits does AV1 actually deliver?

Most AV1 articles cite compression improvements as a vague range. Here are specific numbers. 

Qencode ran an objective analysis across codecs at production settings, targeting perceptual quality equivalent to VMAF scores in the mid-90s. Here is what they found:

Compression efficiency of AV1 vs H.264

  • ≈ 41% lower bitrate at 1080p
  • ≈ 52% lower bitrate at 4K
  • ≈ 57% lower bitrate at 8K

Compression efficiency of AV1 vs H.265

  • ≈ 30% average bitrate reduction across the same resolutions

It’s worth noting that these are single points on the rate-distortion curve, not full Bjøntegaard Delta (BD)-rate studies. Fast-motion sports and noisy user-generated content will show different numbers than clean studio content or animation. That being said, AV1 still objectively uses fewer bits at equivalent quality, with the gap widening at higher resolutions (where it matters most).

AV1 encoding still costs more per minute of output than H.264 encoding; however, the delivery savings compound with every viewing hour. For content with even moderate view counts, the encoding cost increase is recovered quickly, especially for high-traffic content at 4K and above. The break-even point depends on your content delivery network (CDN) pricing and view-count distribution, but with AV1 encoding costs dropping at providers like Qencode, it is now reasonable to add AV1 to most of your encoding catalog.
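The break-even logic above is a one-line calculation once you plug in your own rates. Every number here is a placeholder assumption; the per-view-hour savings figure is derived from the illustrative 4K bitrates used earlier, not from any vendor's pricing:

```python
# When do AV1 delivery savings cover the extra encoding cost for one title?
# All inputs are placeholder assumptions; substitute your own rates.

EXTRA_ENCODE_COST = 5.00       # assumed extra cost to add AV1 renditions, USD
GB_SAVED_PER_VIEW_HOUR = 3.744 # e.g. 16 Mbps H.264 vs. ~52%-smaller AV1 at 4K
CDN_PRICE_PER_GB = 0.02        # hypothetical CDN rate, USD

break_even_hours = EXTRA_ENCODE_COST / (GB_SAVED_PER_VIEW_HOUR * CDN_PRICE_PER_GB)
print(f"Break-even after about {break_even_hours:,.0f} view-hours for this title")
```

Under these assumptions the extra encoding spend pays for itself after fewer than 70 view-hours of a single 4K title; long-tail content with near-zero views never recoups it, which is why rollouts usually start with high-traffic assets.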

Device support is now much better than you think

The concern that AV1 will break playback on an important device class is understandable, but the ecosystem has matured to the point where a multi-codec strategy makes AV1 safe to deploy without sacrificing compatibility. 

  • Mobile: On Android, software-based AV1 decoding has been available for several years, and hardware decoding is now appearing widely. Beginning with Android 14, Google requires AV1 hardware decode on devices that meet certain performance thresholds. On iOS, Safari added AV1 playback starting with iOS 17, with hardware decode on devices equipped with the A17 Pro chip and later. Across both platforms, a large and growing share of phones and tablets can now play AV1 efficiently.
  • Desktop browsers: Chrome, Firefox, Edge, and Safari all support AV1 playback, though actual performance depends on the underlying operating system and hardware. Where hardware decode is available, the experience is seamless; on older systems, browsers may fall back to software decoding or defer to a supported codec. If your audience skews toward desktop, your effective AV1 coverage is already strong.
  • Mobile and desktop chips: Recent-generation processors from major chipmakers include dedicated AV1 hardware decode. Older hardware will continue to rely on H.264 or HEVC, but the installed base of AV1-capable silicon is growing with each device refresh cycle.
  • VR headsets: AV1 support is emerging in the virtual reality (VR) space, as the codec’s efficiency gains are especially valuable given VR video’s high resolutions and bitrates. Recent standalone headsets have begun incorporating AV1 hardware decode, and as the category matures, this is likely to become a baseline expectation. As with other device classes, a multi-codec approach ensures that VR users receive the best quality their hardware can support without excluding devices that have not yet adopted AV1.
  • The gap: Older smart TVs, certain set-top boxes, and legacy embedded players lack AV1 support entirely. These devices have long replacement cycles and will remain in the field for years, which is precisely why AV1 should be deployed alongside existing codecs rather than as a replacement for them.

Architecting AV1 as a parallel layer

The safest approach is to treat AV1 as an additive output, so your existing codec strategy mostly stays the same. AV1 renditions are generated alongside them, stored in a versioned structure, validated independently, and promoted on your schedule.

This workflow becomes even more efficient when you combine Qencode for the encoding and Wasabi for the storage. Here is what it looks like in practice:

  • Qencode job triggered via API → declarative job definition includes baseline codecs and AV1
  • Qencode transcodes in parallel → H.264/HEVC and AV1 renditions generated from the same source
  • Outputs written to Wasabi in versioned prefixes → each codec version stored separately, nothing overwritten
  • Delivery logic serves the right codec per device → AV1 where supported, baseline codecs as fallback

Encoding with Qencode

Qencode is a cloud-native transcoding platform with AV1 support via API. You define your outputs declaratively, both your baseline H.264/H.265 renditions and AV1 renditions across the resolution tiers you need, and Qencode processes everything in parallel from the same source.

Because job definitions are declarative and API-driven, you update your encoding ladder once and apply it consistently across your library. Qencode also supports WebM containers for AV1 and can generate thumbnails, captions, and other assets as part of the same job.
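A declarative multi-codec job might be submitted along these lines. This is a hedged sketch, not Qencode's documented API: the endpoint URL, token handling, and field names (`source`, `format`, `video_codec`, and so on) are illustrative placeholders, so consult the Qencode API documentation for the real schema:

```python
import json
import urllib.request

# Illustrative job definition: baseline H.264 plus AV1 outputs from one source.
# Field names are placeholders, not Qencode's documented schema.
job = {
    "source": "s3://my-bucket/masters/asset-123.mov",
    "format": [
        {"output": "mp4",  "video_codec": "h264", "height": 1080},
        {"output": "webm", "video_codec": "av1",  "height": 1080},
        {"output": "webm", "video_codec": "av1",  "height": 2160},
    ],
}

def submit_job(endpoint, token, job):
    """POST the job JSON to a (hypothetical) transcoding endpoint."""
    req = urllib.request.Request(
        endpoint,
        data=json.dumps({"token": token, "query": job}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# submit_job("https://api.example.com/v1/start_encode", "MY_TOKEN", job)
```

The point of the sketch is the shape, not the endpoint: one versioned job object lists every rendition, so updating the ladder means editing one definition and re-running it across the library.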

Storage on Wasabi

Running AV1 alongside existing codecs means more renditions per asset, and that is the trade-off for a safe, parallel rollout. At 4K and above, each additional rendition set is substantial. This stage is where storage economics matter.

Wasabi connects as an output destination in Qencode with a standard S3-compatible configuration:

"destination": {
  "url": "s3://s3.us-east-1.wasabisys.com/my-bucket/av1-outputs",
  "key": "myWasabiAccountName",
  "secret": "myWasabiAccountKey",
  "permissions": "public-read"
}

Organize outputs with codec-specific prefixes to keep versioning clean and avoid overwrites:

/assets/{asset_id}/h264/1080p/
/assets/{asset_id}/h264/4k/
/assets/{asset_id}/av1/1080p/
/assets/{asset_id}/av1/4k/
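Because Wasabi is S3-compatible, a standard boto3 client pointed at the Wasabi endpoint from the configuration above can manage these prefixes directly. This is a minimal sketch; the bucket name, asset ID, and credentials are placeholders carried over from the example configuration:

```python
def rendition_key(asset_id: str, codec: str, tier: str, filename: str) -> str:
    """Build a codec-specific key so AV1 outputs never overwrite H.264 ones."""
    return f"assets/{asset_id}/{codec}/{tier}/{filename}"

def upload_rendition(bucket: str, local_path: str, key: str) -> None:
    """Upload via boto3 pointed at the Wasabi S3-compatible endpoint."""
    import boto3  # third-party: pip install boto3
    s3 = boto3.client(
        "s3",
        endpoint_url="https://s3.us-east-1.wasabisys.com",
        aws_access_key_id="myWasabiAccountName",       # placeholder credentials
        aws_secret_access_key="myWasabiAccountKey",
    )
    s3.upload_file(local_path, bucket, key)

key = rendition_key("asset-123", "av1", "4k", "video.webm")
print(key)  # assets/asset-123/av1/4k/video.webm
# upload_rendition("my-bucket", "local/video.webm", key)
```

Keeping key construction in one helper is what makes the layout enforceable: every writer goes through it, so a new codec is a new prefix rather than an overwrite.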

Wasabi offers flat-rate pricing with no egress fees. When your CDN pulls AV1 segments for delivery, you are not paying per-request in addition to storage. For teams maintaining multiple codec versions at high resolutions, that predictability directly affects how you plan and budget a rollout. 

One thing worth noting: Wasabi has a 90-day minimum storage duration policy, so if you are encoding short-lived or ephemeral content that you would want to delete quickly, factor that into your cost model. 

Packaging and manifest logic

This layer hides multi-codec complexity from end users, so a seamless, well-executed implementation is essential. The recommended approach is to package your renditions in the Common Media Application Format (CMAF) and generate HLS or DASH manifests that list multiple codec variants per resolution tier. The player’s capability detection selects the best-supported codec at runtime: AV1 on capable devices, H.264/H.265 as a fallback on everything else.

In practice, there are a few ways to structure this. You can use a single HLS master playlist with codec-specific variant streams (using the CODECS attribute so players can filter), or maintain separate master playlists per codec and route at the CDN or application layer. DASH handles this through separate AdaptationSets per codec within the same manifest. The right choice depends on your player stack and CDN configuration, but the key principle is the same: the viewer never has to think about it, and you have not split your catalog. 
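For illustration, a single HLS master playlist with codec-specific variants might look like the sketch below. The CODECS strings follow the standard RFC 6381 format (avc1.* for H.264, av01.* for AV1), but the exact profile and level values, bandwidths, and playlist paths depend on your own encodes:

```
#EXTM3U
#EXT-X-VERSION:7

# H.264 1080p variant -- widest compatibility
#EXT-X-STREAM-INF:BANDWIDTH=5000000,RESOLUTION=1920x1080,CODECS="avc1.640028,mp4a.40.2"
h264/1080p/playlist.m3u8

# AV1 1080p variant -- capable players prefer this lower-bitrate rung
#EXT-X-STREAM-INF:BANDWIDTH=2950000,RESOLUTION=1920x1080,CODECS="av01.0.08M.08,mp4a.40.2"
av1/1080p/playlist.m3u8
```

Players that cannot decode the av01 codec string skip that variant automatically, which is exactly the fallback behavior the parallel-rollout strategy relies on.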

Beyond cost: the UX case for lower bitrates

It is tempting to frame AV1 purely as cost optimization, but lower bitrates at equivalent quality also improve the viewing experience, directly affecting engagement metrics.

  • Startup and time-to-first-frame: When the first segments are smaller, users on congested or high-latency connections see the video sooner. That affects session start rates and bounce.
  • Rebuffering: When your encoded bitrates better match the actual throughput distribution of your audience, Adaptive Bitrate (ABR) algorithms spend less time bouncing between rungs, segments are less likely to miss download deadlines, and mid-session stalls become rarer.
  • Session length: Streams that start quickly and rarely buffer encourage users to keep watching. That compounds into higher ad impressions or perceived subscription value over time.

The compression gains from AV1 make your experience more robust across a wider range of network conditions, and that argument resonates beyond the infrastructure team.

Before

  • Single codec ladder with rising delivery costs at scale
  • Codec changes feel high-risk and all-or-nothing
  • Storage costs scale unpredictably with multiple renditions
  • Manual processes for reprocessing and version management
  • New codecs require dedicated engineering effort

After

  • AV1 runs alongside baseline codecs, with better compression efficiency across the resolution ladder
  • Parallel outputs make adoption incremental and reversible
  • Flat-rate storage with no egress fees on Wasabi
  • API-driven jobs with versioned, automated output structure
  • Declarative job definitions apply changes across the library

Validating before you promote

Because AV1 outputs live alongside your baseline codecs in a versioned structure, validation is low-risk. While production delivery continues on your existing codecs, you can:

  • Run VMAF or other perceptual quality comparisons between AV1 and existing renditions at each resolution tier
  • Test playback across your target devices, players, and browser versions
  • Monitor buffering rates and startup times for AV1 segments
  • Compare bandwidth per session between codec versions
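The VMAF comparison in the first bullet can be scripted with ffmpeg's libvmaf filter, assuming an ffmpeg build compiled with libvmaf. This sketch only assembles the command; the file paths are placeholders matching the prefix layout used earlier:

```python
# Build an ffmpeg command that scores an AV1 rendition against an existing
# reference rendition using the libvmaf filter (requires ffmpeg built with
# libvmaf). File paths are placeholders.

def vmaf_command(distorted, reference, log_path="vmaf.json"):
    return [
        "ffmpeg",
        "-i", distorted,             # the AV1 rendition under test
        "-i", reference,             # the existing reference rendition
        "-lavfi", f"libvmaf=log_fmt=json:log_path={log_path}",
        "-f", "null", "-",           # decode and score, write no output file
    ]

cmd = vmaf_command("av1/1080p/video.webm", "h264/1080p/video.mp4")
print(" ".join(cmd))
# To execute: subprocess.run(cmd, check=True), then parse vmaf.json
```

Running this per resolution tier on the sample set gives you the per-rung VMAF deltas before any manifest change goes live.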

Once you are satisfied, update your manifest logic to prefer AV1 where supported. If something looks off, reverting just means updating the manifest or routing logic. If you have also updated CDN cache rules, player configurations, or edge routing, a rollback affects each of those layers as well. If your team can easily map and plan for that, the risk remains extremely manageable. 

Scaling across your library

Once validated on a representative sample, scaling is a matter of running more jobs through the API. Start with your highest-traffic content or your highest resolution tiers, where AV1’s compression advantage over existing codecs is most pronounced, then work through the long tail as encoding budget allows.

Wasabi flat-rate pricing means adding AV1 renditions across a growing library does not create unpredictable storage cost spikes. Plan your rollout based on encoding budget and engineering priorities, and ship on your own schedule.

Getting started

  1. Define a transcoding job that includes your baseline codec outputs and AV1 renditions across the resolution tiers that matter most. 
  2. Start with 1080p and 4K, as these are likely to be most representative of the tradeoffs you may need to make. 
  3. Set your storage destination to the Wasabi Bucket you’d like to use to store AV1 outputs. 
  4. Run the job against a representative sample of your content in your library. 
  5. Compare the outputs against your existing renditions on perceptual quality, file size, and playback performance. 
  6. Once validated, expand to additional resolutions such as 720p, 1440p, and 8K. 

This way, you keep full control of the timeline with infrastructure that supports you as you continue to grow and scale.

Try it with your content

Qencode is a cloud-native video processing platform and a member of the Alliance for Open Media, with full AV1 support via API at roughly 20x lower encoding cost. Wasabi provides S3-compatible cloud object storage with flat-rate pricing and no egress fees. Together, they give video teams a production-ready foundation for modern codec adoption at any resolution.

Start adding AV1 to your pipeline with Qencode.


Co-authored by the teams at Qencode and Wasabi.
