Adaptive Bitrate in Avalanche – Configuration & Stats

Starting with version 4.10, Spirent Avalanche officially supports Apple HLS (AHLS) as a server for Adaptive Bitrate (we supported the technology in a client-only fashion before then). That means you can emulate Apple HLS servers with Avalanche, generating random content. You can choose which bitrates to advertise in the Manifest, the Media length (duration of the video) as well as the Fragment length (how many seconds of video each fragment holds).

Server-side Setup (1 minute)

The server-side setup is very straightforward. Since ABR is based on HTTP, the tab to configure it is under Server/Transactions (where HTTP content is set up). You will notice a checkbox at the bottom (“HTTP ABR”) that you need to enable in order to start streaming.

Server Type: You can only choose Apple (HLS) here. In the future we will support Microsoft Smooth and Adobe Zeri.

Stream Type: VOD is for Video On Demand. The streams have a fixed duration and you can use Trick mode on the client side. Live emulates a live broadcast, which by definition has no fixed duration, so you cannot use Trick mode.

Resource Name: The name of your playlist. If “Sample” is the name, our server will accept “Sample.m3u8” in the Manifest requests.

VOD Media Length: For VOD streams, the duration of the media. If you tell your clients to play until EOS (End of Stream), this setting determines the maximum time they spend watching a video. The minimum value is 2 seconds. There is no maximum value.

Max Video Fragment Length: How many seconds of video each fragment holds. It says “Max.” because the last fragment can be shorter (for example, with 15 seconds of media and 10-second fragments, the first fragment holds 10 seconds and the second only 5). The value cannot be longer than the Media Length; the minimum value is 2 seconds (or the Media Length, if that is shorter).
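
To make the fragment math concrete, here is a minimal sketch in Python, assuming the last fragment simply holds whatever media time is left (the function is mine, for illustration, not Avalanche code):

  def fragment_lengths(media_length_s, max_fragment_length_s):
      """Cut a VOD stream of media_length_s seconds into fragments."""
      lengths = []
      remaining = media_length_s
      while remaining > 0:
          lengths.append(min(max_fragment_length_s, remaining))
          remaining -= max_fragment_length_s
      return lengths

  print(fragment_lengths(15, 10))  # [10, 5]: a 10-second fragment, then a 5-second one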

Then on the right side you can see a list of bitrates. These are the bitrates that the server will announce in the Manifest. In the example above I have the maximum supported (1.2 Mbps), then half (640 Kbps) and then the minimum (64 Kbps). I used large differences so that the shifting is easy to see.
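
For reference, this is roughly the kind of HLS master playlist such a configuration implies. The snippet below is only an illustration of the format, not Avalanche’s actual output, and the variant playlist names are made up:

  bitrates_bps = [1_200_000, 640_000, 64_000]  # the three example bitrates

  lines = ["#EXTM3U"]
  for bw in bitrates_bps:
      lines.append(f"#EXT-X-STREAM-INF:BANDWIDTH={bw}")
      lines.append(f"Sample_{bw // 1000}k.m3u8")  # hypothetical variant playlist name
  print("\n".join(lines))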

To finish the setup, create an HTTP Server as usual (Server/Profiles) and choose the appropriate Transaction from the dropdown list. Then create an association that uses that Profile.

Client-side Setup (3 minutes)

The client side is also pretty easy to set up. Simply create an Action List, and then an Adaptive Streaming Profile. The profile exposes the following settings:

Server Type: Must match what you have on the other end. In our case it must be Apple because this is what we are emulating. But if you are in a client-only mode, we fully support Microsoft and Adobe’s solutions.

Stream Type: The actions and handling of the fragments are slightly different between VOD and Live streams in Avalanche, so specify which type you are connecting to (VOD in our case).

Reload Playlist Depth: This applies only to Apple HLS. With this technology, the client must download an updated playlist on a regular basis, because the original manifest does not contain the full list of fragments (unlike Microsoft Smooth). This parameter tells the client how many fragments before the end of the current playlist it should request the manifest again. If you set this to 2, for instance, the client reloads the playlist before fetching the next-to-last fragment of the current playlist. The maximum value is 5.
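
A minimal sketch of that reload decision, assuming the client simply counts how many fragments of the current playlist are left to fetch (names are illustrative, not Avalanche internals):

  def should_reload_playlist(fragments_fetched, playlist_size, reload_depth):
      """Return True when it is time to request the manifest again."""
      fragments_left = playlist_size - fragments_fetched
      return fragments_left <= reload_depth

  # With a depth of 2 and a 10-fragment playlist, the reload is triggered
  # just before the client fetches the next-to-last fragment (fragment 9).
  print(should_reload_playlist(8, 10, 2))  # True
  print(should_reload_playlist(7, 10, 2))  # False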

Manifest File Download Timeout: This is the maximum allowed time (in seconds) to retrieve the Manifest. The timer starts when the client issues the request. If the Manifest has not been downloaded in the specified amount of time, the session is aborted. The default value is 10, the maximum is 60. Using a value of 0 disables the timeout.

Starting Bitrate: This allows you to choose what the clients do when they are offered the Manifest. They can go for the Maximum available bitrate, the Median, the Minimum, or the one closest to a value you define (“User Defined”).
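
Here is a small sketch of what those four choices amount to when applied to an advertised bitrate list (the function and mode names are mine, for illustration only):

  import statistics

  def starting_bitrate(advertised_kbps, mode, user_defined_kbps=None):
      rates = sorted(advertised_kbps)
      if mode == "maximum":
          return rates[-1]
      if mode == "minimum":
          return rates[0]
      if mode == "median":
          return statistics.median_low(rates)  # pick an actual advertised rate
      if mode == "user_defined":
          return min(rates, key=lambda r: abs(r - user_defined_kbps))
      raise ValueError(mode)

  print(starting_bitrate([1200, 640, 64], "user_defined", user_defined_kbps=500))  # 640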

Bitrate Shift Algorithm: This defines how the clients will shift. “Constant” means that no shifting will occur. “Normal” is based on the time it takes to download a fragment. For instance, if the next fragment finished downloading after 80% of the current fragment had been played, a downshift will occur. If less than 50% had been played, an upshift will occur. Between these bounds, no shifting occurs. These thresholds are customizable. “Smart” does pretty much the same, except that it tries to calculate the available bandwidth and goes to the closest bitrate. Refer to the online help for more details.
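
A minimal sketch of the “Normal” decision described above, using the default thresholds mentioned (remember they are customizable in the profile):

  def normal_shift_decision(played_fraction, downshift_at=0.80, upshift_below=0.50):
      """played_fraction: share of the current fragment already played back
      when the next fragment finished downloading."""
      if played_fraction >= downshift_at:
          return "downshift"
      if played_fraction < upshift_below:
          return "upshift"
      return "no shift"

  print(normal_shift_decision(0.85))  # downshift
  print(normal_shift_decision(0.40))  # upshift
  print(normal_shift_decision(0.65))  # no shift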

Once we are happy with the profile, we can just use it in the action list. The syntax is very simple:

as://192.168.1.1/Sample.m3u8 profile="AdaptiveStreaming_0001"

Note that the IP address or the Playlist name can be a variable. The profile name needs to match what you created under Adaptive Streaming. We can now add the association and start the test!

Runtime Stats

There are a lot of very interesting stats for ABR. We will cover them in this section:

Channels per Interval: How many Channels are generated per second. A channel is not a stream; a channel is part of a stream, for instance the video track. If there is also an audio track, that is a different channel, so a stream can have multiple channels. If your streams only carry video, this number will match the number of streams.

Buffering Wait Time: You know that annoying time when you need to wait for your player to download enough video before it starts playing? That is the buffering wait time. We measure the min/max/average time spent waiting for buffering, aggregated across all users over the last 4 seconds.

Avg. Fragment Response and Download Time: This one is tricky. For each fragment, it measures the response time (the time between the GET request and the first byte of data) and the download time (the time between the first and last byte of data), and adds them up. It then checks which bitrate the fragment belongs to, puts the value in that “bucket,” and averages the values of each bucket. It basically tells you the average time it takes to request and fully download fragments in the specified bitrate buckets.
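
A minimal sketch of that bookkeeping, with illustrative bucket boundaries (the real buckets are the ones Avalanche uses in its stats):

  from collections import defaultdict

  def avg_fragment_time_by_bucket(samples):
      """samples: (bitrate_kbps, response_time_s, download_time_s) per fragment."""
      buckets = defaultdict(list)
      for bitrate, response, download in samples:
          if bitrate < 200:
              bucket = "<200 Kbps"
          elif bitrate < 500:
              bucket = "200-499 Kbps"
          else:
              bucket = "500+ Kbps"
          buckets[bucket].append(response + download)  # request-to-last-byte time
      return {bucket: round(sum(times) / len(times), 3) for bucket, times in buckets.items()}

  print(avg_fragment_time_by_bucket([(64, 0.02, 0.30), (640, 0.02, 0.15), (640, 0.03, 0.25)]))
  # {'<200 Kbps': 0.32, '500+ Kbps': 0.225}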

Manifest per Interval: The number of Manifests retrieved every second.

Fragments per Interval: The number of Fragments retrieved every second.

Active Video Channels: Shows how many clients are connected to specific bitrate buckets (<200 Kbps, 200-499 Kbps, 2000+ Kbps, etc.). Very cool to see them move around as they shift 🙂

Standalone Data: This one is a table, not a graph. It shows several stats:

  • Active Sessions: How many streams are currently active (should match the total amount of Active Video Channels)
  • Buffer Underruns: Shows how many times the users had to wait for the buffer to fill up (people had to “wait for their video”). You want this to be zero.
  • Total Active Channels: Tells you how many channels are active. As explained above, this number can differ from the number of streams, since a stream can contain multiple channels (audio, video and subtitles, for instance)
  • Total Upshifts/Downshifts: How many times users had to Upshift or Downshift.
  • Total Rate Maintaining: Very interesting stat! This tells you how many fragments were retrieved, amongst all users, before any kind of shifting happened.

Summary and Real-time Stats

As we have seen, the Runtime stats are pretty good. As usual, the Summary and Real-Time ones are based on them. The Summary will show the totals. When you think summary, think “tables.” You will see how many upshifts and downshifts occurred, how many 200/OK status codes were received, and tons of other things.

Real-Time stats are something you should visualize as “graphs.” They are very close to the Runtime stats, as they show everything “per second.” Some of these stats are very interesting and are not shown at runtime for performance reasons.

One very interesting statistic is the “Adaptivity Score.” There are a lot of reasons why measuring Mean Opinion Score (MOS) is not relevant for ABR, but that is outside the scope of this article. Since MOS is irrelevant, we came up with our own scoring mechanism. We take the total current bandwidth across all users and compare it to the maximum potential bandwidth (if they were all connected to the highest-bitrate stream). We then normalize this to a percentage, which gives you a fairly good measure of achieved bandwidth.

The exact formula is: (current bitrate / maximum potential bitrate) * 100%.
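
As a quick sanity check, the same calculation written out in Python (the per-user bitrates are made up for the example):

  def adaptivity_score(per_user_bitrates_kbps, top_bitrate_kbps):
      """Achieved aggregate bitrate vs. the theoretical maximum, as a percentage."""
      current = sum(per_user_bitrates_kbps)
      maximum = top_bitrate_kbps * len(per_user_bitrates_kbps)
      return 100.0 * current / maximum

  # Three users currently on the 1200/640/64 Kbps profiles, top profile 1200 Kbps:
  print(round(adaptivity_score([1200, 640, 64], 1200), 1))  # 52.9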

Summary

That’s pretty much it for this topic. ABR is a really cool technology, well thought-out and perfectly in line with current needs. Our support is pretty great (as I believe this post illustrates) and I’m really excited that we can cover pretty much any test case where this technology is needed.

Trivia: I was talking with the BBC about ABR. They use it for streaming the Olympic Games, and they expected 3 Tbps of throughput at peak on their CDNs. Friggin’ awesome 🙂
