Avalanche Now Supports ETag and If-None-Match

With the release of version 4.37 this week, Avalanche now supports the ETag and If-None-Match headers. This is highly useful for Content Delivery Network (CDN) testing, as well as for testing caching devices that implement this mechanism.

Let's start with a quick recap of the concept.

What are ETag and If-None-Match?

Modern browsers help speed up the loading of previously visited pages by caching as much static content as possible. If the header image, the CSS template or the JavaScript resources don't change between two visits, it would be wasteful to download them each time: it makes pages slower to load, generates more traffic on the network and puts more load on the servers.

Initially, browsers would "remember" when they first downloaded a resource (like an image) and let the server know by sending an "If-Modified-Since: <some date>" header (which Avalanche also supports). The server would then compare that date with the local resource's modified date (an attribute of the file on the file system). If the file was modified later than the client's version (implying it's a newer version of the same file), the server would return the resource; otherwise it would return a message saying the resource was not modified (and the browser would use the version in its cache).

If-None-Match (INM) does something similar, but in a smarter way. For INM to work, the server needs to “tag” the resources it sends. The first time you visit a website, the browser will retrieve the resources from the server. The server will return the content with an “ETag: <some value>” header. The browser stores the resource in its cache, along with the ETag value. The next time the resource is needed (for instance, the next time you visit the website), the browser will check against the server if the file is still the same: it will send the request to retrieve the resource along with an “If-None-Match” header with the same value as the server sent initially. When the server receives the request, it checks that value against its own. If it matches, it means the files are the same, so it will return a response “304 Not Modified” to let the browser know it can load the file from the cache. If the tags don’t match, it will return a “200 OK” and the new version of the file.

For instance, Bob requests the resource “spirent.png.” The server will return the image with an ETag header, for instance ETag: “123456789”. The next time Bob visits the page, the browser will send a request to the server to download “spirent.png”, with the header “If-None-Match: 123456789”. If the file hasn’t changed on the server, the server will return “304 Not Modified” and Bob’s browser will load the file from cache. Otherwise, the server will return the whole new image, with a different ETag, that the browser will save in its cache. Etc. ad nauseam.

Note: There’s no defined standard describing how to generate ETags. Each implementation can have its own method of doing so. Usually people will implement a collision-resistant hashing algorithm but it’s not required.
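Since the method is left open, a hash of the content is one common way to do it. Here's a minimal sketch of a hash-based server-side check (the `make_etag` and `handle_request` helpers are hypothetical, not any real server's API):

```python
import hashlib

def make_etag(content):
    # Hash the content so the tag changes whenever the file does.
    # (Hashing is a common choice; no standard mandates any method.)
    return '"%s"' % hashlib.sha256(content).hexdigest()[:16]

def handle_request(content, if_none_match=None):
    # Return (status, body) the way an ETag-aware server would.
    etag = make_etag(content)
    if if_none_match == etag:
        return 304, b""      # client's cached copy is still valid
    return 200, content      # send the full body (with the new ETag)

png = b"...spirent.png bytes..."
etag = make_etag(png)
assert handle_request(png, etag) == (304, b"")   # tags match: 304 Not Modified
assert handle_request(png)[0] == 200             # no INM header: full response
```

Any scheme works as long as the tag changes whenever the content does; collision resistance just makes accidental false matches unlikely.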

How does it work in Avalanche?

First, a caveat, to get that out of the way. Being a very high-performance test tool, Avalanche doesn't "remember" ETags. Each Simulated User (SimUser) is like a fresh visit; there's no simulated local browser cache. You can pre-define the ETags in your Action List, as I will show below, but it's not automatic. For CDN and caching-device vendors, this explicit control is actually more practical.

Let’s take a look at the server-side under “Transactions.” This is where we’ll define how we want to send the ETags. There are two ways (besides “None”): Incremental and Circulatory. The documentation is pretty clear on what they do, so allow me to quote it:

  • Incremental ETag: The server includes an ETag header in HTTP response messages, which starts with 1 and increments every number of milliseconds that you specify in the Interval field.
  • Circulatory ETag List: The server includes an ETag header in HTTP response messages, which gets assigned from the circulatory ETag List of strings every number of milliseconds that you specify in the Interval field.

In other words, when you choose "Incremental", Avalanche generates the ETag value itself, starting at 1 (i.e. the first header would be ETag: "1") and incrementing by 1 every 1000 milliseconds (one second). Note that the clock starts at the beginning of the test, so if there is some delay at the start of your test (for ARPing, for instance), you won't get a value of "1" in the first response (in my back-to-back tests it starts at 3).

When you choose “Circulatory” you can choose your own ETag values, and Avalanche will cycle through them. Using the default 1000 ms value, the returned header value will change every second.
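A toy model of the two modes (this is an illustration of the documented behavior, not Avalanche's actual code) makes the difference obvious: both derive the tag from elapsed test time, one by counting intervals, the other by cycling a list.

```python
def incremental_etag(elapsed_ms, interval_ms=1000):
    # Incremental mode: starts at 1, +1 per interval since test start.
    return str(1 + elapsed_ms // interval_ms)

def circulatory_etag(elapsed_ms, tags, interval_ms=1000):
    # Circulatory mode: steps through the user-supplied list, wrapping around.
    return tags[(elapsed_ms // interval_ms) % len(tags)]

assert incremental_etag(0) == "1"
assert incremental_etag(2500) == "3"          # third interval -> "3"
assert circulatory_etag(0, ["aaa", "bbb"]) == "aaa"
assert circulatory_etag(3000, ["aaa", "bbb"]) == "bbb"
```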

Some Examples

Let's create a few requests (with no If-None-Match yet) and see how the server behaves. I configured the clients to make 3 requests, at a 2-second interval.

Avalanche server-side Transaction configuration

This is the resulting PCAP:

Avalanche HTTP server returns the ETag values as configured.

It's incrementing as expected! Now let's see what happens when we do match. For this I need to add an "If-None-Match" header. To keep it simple I'll make it static and the same for all requests, but it could of course be a variable or random, and different for each request. The server side stays the same, and this is the action list:

1 get <ADDITIONAL_HEADER="If-None-Match: \"7\"">
THINK <2000>
1 get <ADDITIONAL_HEADER="If-None-Match: \"7\"">
THINK <2000>
1 get <ADDITIONAL_HEADER="If-None-Match: \"7\"">

Note the escaping backslash characters. They are required because INM values are enclosed in double quotes, but so is the "ADDITIONAL_HEADER" value, so we need to tell the backend parser not to close the quote at that point.

According to the previous PCAP, only the 3rd GET should match since the first response’s ETag is “3”, the second is “5” and the third one is “7.” Let’s see:

If-None-Match headers are sent, and the one we expected to match returns the correct 304 status code.

Looks fine! We can see (in blue) that the clients are now sending the "If-None-Match" value we configured in the Action List. The first two responses are a 200 OK since the ETags don't match. The third request, however, gets a 304 Not Modified since the ETags match (and we can see no content is returned, so we saved bandwidth and time).

There could be many more examples, so I’ll stop here. If you have specific questions, as usual feel free to contact me!


Spirent Launches Avalanche NEXT

TL;DR: Go here.

A Brief History of Avalanche

When you look back at the history of Avalanche, you realize it was the first stateful web test tool (that could scale, anyway). It really picked up because it was the only way to test the first stateful firewalls of the time, mostly from major vendors such as Cisco and Check Point. The tool started in R&D labs, with people who look at the bits and bytes and need a super-flexible tool because they are either developers or dedicated test engineers. It then quickly spread to quality assurance and to many other customers such as Service Providers.

As it grew, Avalanche became probably the most widely used stateful test tool in the world. Some very large enterprises with dedicated testing teams (banks or major car manufacturers, for instance) were also customers, but in much smaller numbers than R&D teams from Network Equipment Manufacturers. These days, however, we see the need for Avalanche-like testing rise a lot in the Enterprise market.

We started to see that trend become more prevalent in the last couple of years. More and more Enterprise customers now want to test. The flurry of attacks we’ve seen in the past years might have something to do with that. Enterprises are starting to realize that you need to test as much as possible for many reasons – before buying, before changing the configuration, before upgrading, etc. This is a Good Thing ™. But it comes with challenges.

Challenges of an R&D GUI

When your daily job is to manage a corporate network, deal with security alerts, set up BGP peers and all the other tasks Network Administrators have to handle, you are not expected to be an expert at testing. And when the GUI is made for developers or expert performance testing engineers, it can get tricky. Do we really need to put the IP Fairness Threshold or the HTTP Transactions per TCP connection values in your face? Probably not.

Another important question is "How do I test, anyway?" Again, is it the job of a Network Administrator to keep up to date with the latest test methodologies? Probably not. Companies like NSS Labs, ICSA, UNH or EANTC (kind of) share theirs, but there are a lot of them, and they don't tell you how to configure an Avalanche to match these advanced testing methodologies. When you buy from us you'll always get somebody on the phone to help you, but you shouldn't have to pick up the phone in the first place, should you?

There are more points, but these are the two main ones. In short, the GUI needs to be methodology- or test-based, not a blank slate; and it should show only the settings that matter to non-test experts. The GUI should be simple, but not simplistic, and still accommodate the needs of everyone. Easy, right?

Enter Avalanche NEXT

But we like a challenge at Spirent, and we took that one on. A few days ago, we announced Avalanche NEXT. It's a web-based, test case-oriented GUI following the tenets of Responsive Web Design. You browse to the GUI, load a test case (we plan to provide a lot of those, and I wish I could share some already in the works!), pick where you want to send the traffic to and from, what kind of traffic and how to mix it, set some thresholds for pass/fail criteria, and then just run. At the end you get a score based on how far off (or not) the test performed relative to your pass/fail criteria. You can also download the latest Applications and Cyber Security threats twice a month, for the latest in testing!

My good friend Ankur Chadda has a ~30-minute presentation of all this, available on YouTube. If you are interested in a shorter, less technical video, visit the Avalanche NEXT product page.

I will post more articles about Avalanche NEXT in the future, as this one is more of an explanation of why we initiated the project in the first place.


Using the Action List To Create More Dynamic Tests

One of the key features of Avalanche is the "Simulated User" concept, coupled with Action Lists. An Action List represents exactly that: a list of actions that each user assigned to it will execute. You can mix protocols, so it's easy to simulate a user doing a DNS query, then some HTTP, SMTP and peer-to-peer traffic.

The Action List comes with a macro scripting language. This gives users even more flexibility. I’ll give an example.

This Is What I Want, Do It

Last week I was in Germany working with a Next Generation Firewall vendor for an event we're running this week. The idea was to have a given number of source IP addresses (500) randomly going to a pool of 200 servers. The assignment was that the protocol should be HTTP, each user should retrieve 10 pages, and half of the responses should be 64 bytes while the other half should be 25 KBytes. This would give us a nice mix of throughput and connection rate.

First, I didn't really want to type 200 IP addresses, since only the last byte of the IP would change. The source:destination IP pairs should be random, so we first enabled the "Random" option in the Client/Subnets. Suddenly, the source IPs were random. Good.

Boom! Random source IP addresses. Just like that.

I was still left with 200 IPs to assign. Fortunately, I didn’t have to, thanks to these few lines of script:

ASSIGN VARIABLE <myFirstOctets "10.1.0.">
ASSIGN VARIABLE <myLastOctet SEQUENTIAL 5 204>
ASSIGN VARIABLE <myDstIp myFirstOctets myLastOctet>

The first variable (“myFirstOctets”) defines the first three bytes of the IP address. The second variable (“myLastOctet”) assigns a value starting from 5 all the way up to 204. This value is incremented for each SimUser that gets assigned to that Action List. Then the last variable (“myDstIp”) is a concatenation of the first two variables. We then applied it like so:

1 GET http://<APPLY myDstIp>?Transaction-Profile=64B

On the wire this goes out like this:

Wireshark view of the request

(If you’re wondering why the IP layer is in red, read this article)

We have a random source, but an incremental destination. The reason for this is that the firewall vendor needed to ensure each and every server IP was used. Using a random value would not ensure that, but we could have made it random quite easily:

ASSIGN VARIABLE <myFirstOctets "10.1.0.">
ASSIGN VARIABLE <myLastOctet UNIFORM 5 204>
ASSIGN VARIABLE <myDstIp myFirstOctets myLastOctet>

See what I did there? “SEQUENTIAL” was changed to “UNIFORM”, and that’s it. Now the destination IP will be random between the two given bounds.
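Outside the tool, the same logic is easy to sketch in Python (the 10.1.0.x prefix and the 5-204 bounds come from the scenario above; SEQUENTIAL maps to a per-user increment, UNIFORM to a random pick within the bounds):

```python
import random

PREFIX, LOW, HIGH = "10.1.0.", 5, 204

def sequential_ips():
    # SEQUENTIAL-style: one destination per SimUser, last octet incrementing.
    return [PREFIX + str(last) for last in range(LOW, HIGH + 1)]

def uniform_ip():
    # UNIFORM-style: any last octet between the bounds, chosen at random.
    return PREFIX + str(random.randint(LOW, HIGH))

ips = sequential_ips()
assert len(ips) == 200                      # every server IP gets covered
assert ips[0] == "10.1.0.5" and ips[-1] == "10.1.0.204"
assert LOW <= int(uniform_ip().rsplit(".", 1)[1]) <= HIGH
```

The assertions show the trade-off from the article: the sequential list covers all 200 servers by construction, while the uniform pick cannot guarantee coverage.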


MATCH(NOT) Now 100% More Flexible

Avalanche is really good at testing firewalls, server load balancers, UTMs and what have you. But it's often overlooked how good it is at "one-arm testing", where only the clients are simulated. This is especially true for Web Application testing because, believe you me, if you can handle 1.2 million new connections per second, you can hurt websites quite a lot.

To keep improving this aspect of the product, Spirent introduced a new feature in the Avalanche 4.30 release: The ability to MATCH or MATCH_NOT against a variable.

TL;DR: Get the SPF and figure out for yourself how it works. You'll need the Application Testing feature for this to work, though.

But let’s first recap what MATCH is, because not many people use it.

Was ist Match?

MATCH and MATCHNOT are functions (sort of) that you can use in the Action List. They work only after level 1 HTTP requests (regardless of the method) and allow you to look for a specific text string in the response of that HTTP request. It’s useful to make sure the action you just did worked as expected. A typical use is something like this:

1 POST http://www.somewebsite.com/account/logon username=john password=weaksauce
MATCH <"Welcome, ",1,BODY>

The above code is simple, but let's review it. I'm doing a level 1 HTTP POST against the /account/logon URL of the http://www.somewebsite.com host. I'm posting two variables (username and password) with hard-coded values. In a real scenario these values would be variable (coming from a Forms Database, for instance), but let's keep it simple.

Now, the tester is supposed to know what happens when this log-on action succeeds: you get, somewhere in the page, a "Welcome, <username>" (John, in our example). The string that'll be common to all users is "Welcome, ", so this is what should be matched. If it doesn't match, we'll count this as "Match Failed" in the Avalanche statistics. You can't match against a regular expression or anything like that; we do performance, and that comes with limitations. In this case, you can match only against a literal string.
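As a mental model, the check is nothing more than a literal substring test. A two-line Python equivalent (the response text here is made up for illustration):

```python
def match(body, needle):
    # Avalanche-style MATCH: a plain substring test, no regular expressions.
    return needle in body

response = "<html><body>Welcome, John! Last login: yesterday.</body></html>"
assert match(response, "Welcome, ")            # would count as a successful Match
assert not match(response, "Wrong password")   # absent string: a MATCH on this would fail
```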

The other function is the opposite: MATCHNOT. In that case, you don’t want to match the value, so a “Failed Match Not” means that you, in fact, matched what you didn’t want to. Let’s take the same example again:

1 POST http://www.somewebsite.com/account/logon username=john password=weaksauce
MATCHNOT <"Wrong password or username",1,BODY>

In the same scenario as before, we know that if we provide incorrect credentials we get the "Wrong password or username" error message. So we can check against that string, indicating we don't want to see it.

Note that you can act on a failed MATCH/MATCHNOT. Your options are to stop the current user (if the credentials are wrong, there's no need to even try moving down the action list) or to stop the whole test. Simply add this:

1 POST http://www.somewebsite.com/account/logon username=john password=weaksauce
MATCH <"Welcome, ",1,BODY,STOP USER>

Or this:

1 POST http://www.somewebsite.com/account/logon username=john password=weaksauce
MATCHNOT <"Wrong password",1,BODY,STOP TEST>

In the first example, we’ll “kill” the Simulated User in case we can’t match “Welcome” in the response following the POST. In the second example, we’ll stop the whole test in case we do match “Wrong password” in the response. STOP USER and STOP TEST are triggered only if the MATCH or MATCHNOT fail.

New in 4.30

One of the new features introduced by the 4.30 release is the ability to use a variable as the value to MATCH against. OK, that phrase is a bit convoluted, so let's give an example:

ASSIGN VARIABLE <myMatchString matchStrings.$1>
1 get
MATCH <<APPLY myMatchString>,1,BODY>
NEXT ROW <matchStrings>

In that example I'm defining a variable called "myMatchString", coming from the "matchStrings" Forms Database (you can tell it's a forms db because of the ".$"). I take whatever value is in the current row of the forms db, make an HTTP GET, and match the response against that variable. Then I increment the row of the forms db and go through the loop again.

This is great because it means that now, all from a Forms Database, you can specify a list of URLs along with the text their responses should MATCH. For instance, we could have a Forms Database like this:

URI,Match Value
/index.html,Welcome
/login,Sign in

We could then make a different MATCH for each page, while still keeping the same action list:

ASSIGN VARIABLE <myUri myFormDb.$1>
ASSIGN VARIABLE <myMatchValue myFormDb.$2>
1 GET http://www.somewebsite.com/<APPLY myUri>
MATCH <<APPLY myMatchValue>,1,BODY>
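The same table-driven check can be prototyped outside the tool. In this sketch, both the forms-db rows and the canned responses are invented for illustration:

```python
import csv
import io

# Hypothetical forms database: one expected string per URI.
forms_db = io.StringIO("URI,Match Value\n/index.html,Welcome\n/login,Sign in\n")

# Canned response bodies standing in for the real HTTP server.
responses = {
    "/index.html": "<h1>Welcome</h1>",
    "/login": "<form>Sign in</form>",
}

failed = 0
for row in csv.DictReader(forms_db):
    body = responses[row["URI"]]          # the "GET"
    if row["Match Value"] not in body:    # the MATCH check
        failed += 1

assert failed == 0   # every page contained its expected string
```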

This is just an example but the possibilities are endless! I created a demo SPF showing some of these examples, feel free to play with it. Note that you’ll need the Application Testing license to use MATCH.


Installing Avalanche Anywhere on 64-bit CentOS

A couple of months ago, Spirent released Avalanche Anywhere (as well as STC Anywhere). The goal of these products is to give our customers the ability to put the Avalanche (or STC) backend on any Linux distribution. For support purposes, we officially support Red Hat Enterprise Linux (RHEL) and Fedora.

This blog isn't here for me to make a sales pitch, but I will say that it's great being able to put Avalanche… well… anywhere! Our marketing team is so good at naming things it's unsettling!

For this article we'll install Avalanche Anywhere (AvA) on CentOS 6.3. I'm using CentOS because it's basically RHEL, only free.

Preparing System

Installing CentOS itself is outside the scope of this article. This video right here should be more than enough. Linux got confusingly easy to install in the last few years (hey, don’t laugh Systems people, this is a Networks people blog!).

Install CentOS (the 6.3 minimal install is used in this document). The system needs at least two NICs: one for administration of Anywhere (this can be the same as the main IP of the system) and one dedicated to load generation. You need at least 2 GB of RAM (4 GB is recommended) and one core.

The very first thing to do is to disable SELinux by editing its configuration:

[root@ava-1 /]# vi /etc/selinux/config

In that file, set its mode to "disabled", like so:

# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=disabled
# SELINUXTYPE= can take one of these two values:
#     targeted - Targeted processes are protected,
#     mls - Multi Level Security protection.

(Knowing how to use vi is outside the scope of this article)

We also have to disable iptables. We can do it temporarily like so (recommended):

[root@ava-1 ~]# /etc/init.d/iptables stop

Or permanently (not recommended, unless the system will be dedicated to load testing):

[root@ava-1 ~]# chkconfig iptables off

Next, several extra packages need to be added:

Note: If you didn’t install CentOS minimal, some of these packages are probably already present on the system.

  1. kernel-devel, or kernel-PAE-devel if you use a Physical Address Extension-enabled kernel (if you're not sure, you probably don't). If your OS is sitting in a Xen-like VM, you will need kernel-xen-devel instead.
  2. gcc
  3. make
  4. openssh-clients
  5. expect
  6. xinetd
  7. ntp
  8. glibc
  9. libstdc++
  10. perl

First we’ll install the packages. Type this:

[root@ava-1 ~]# yum install gcc make openssh-clients expect xinetd ntp glibc.i686 libstdc++-devel.i686 perl

This will check for all these packages and their dependencies. The final summary should look something like this:

Transaction Summary
=================================================================
Install      27 Package(s)
Upgrade       2 Package(s)

Total download size: 56 M
Is this ok [y/N]: y

Once this is done, we can move on to installing the kernel-devel package.

First upgrade the current kernel:

[root@ava-1 ~]# yum upgrade kernel

Then reboot (“reboot”) and install the kernel-devel:

[root@ava-1 ~]# yum install kernel-devel

Activating Avalanche Anywhere

We now need to transfer the Avalanche Anywhere file to the server. There are many ways to do this, but the easiest is over SSH. FileZilla supports file transfer over SSH (SFTP), and you don't need to set up anything else for it to work:


Transfer the file (the name should be something like "ava-4.10.2196-fs-1.1.tgz.run") to the /root directory, for instance, if you are logged in as root as in the example. Once the file is transferred we need to make it executable:

[root@ava-1 ~]# chmod +x ava-4.10.2196-fs-1.1.tgz.run

Then execute it:

[root@ava-1 ~]# ./ava-4.10.2196-fs-1.1.tgz.run
 Verifying archive integrity... All good.
 Uncompressing build/il/packages/fsimg/stca-4.10.2196-fs-1.1.tgz....
 Install STC Anywhere in your PC

 Stopping xinetd:                                         [  OK  ]
 Starting xinetd:                                         [  OK  ]


Please run admin.py before your first running.

[root@ava-1 ~]#

Avalanche Anywhere is now installed on the OS!

Configuring Avalanche Anywhere

We now need to configure it by running the admin.py script:

[root@ava-1 /]# /mnt/spirent/chassis/bin/admin.py

Do you want to reconfigure the admin.conf? y

Current config:


Please input license server address: []
Please input ntp server address: []
Please input port speed (100M/1G/10G): [100M]1G
Please input the port group size (1~1): [1]
Here is the list of ethernet interfaces:
1) eth0
2) eth1

Please choose the interface for admin (1-2): 1
1) eth0
2) eth1

Please choose the interface for test port No.1 (1-2): 2
Do you want to reconfigure the stca cpu affinity?n
Please restart stca to make the change take effect.

[root@ava-1 /]#

And now we just need to start stca to turn the daemon on!

[root@ava-2 ~]# stca start

Please make sure firewall (iptables) is disables.
Install sasm.ko ...OK
<quite a lot of stuff here, including error messages you can discard>

[root@ava-2 ~]#

That's about it, really! Now we simply need to add this new load generator to our GUI ("Administration/Spirent Test Center Chassis"):


Avalanche Anywhere in STC Administration tab.

Now, using this backend is the same as usual: reserve it and add it into a test. The nice thing is that since test cases are 100% compatible between any Avalanche load generator (Virtual, STC-based, Appliance, etc.), Anywhere can be a practical way to design tests on a local PC (or remote VMs). Then you only need to pass the test case to the team or person who'll do the actual Big Bad Load Test, and adjust the load appropriately.


HTML5 Is Now A Thing

After many years of debate and development (not sure if more development or debate happened), the W3C finally announced that the HTML5 specification is now feature complete. If you are interested (and that'd be a little sad), the full specification is available here. HTML5 is not a W3C Standard just yet, as it still needs to pass all the interoperability and performance testing, but the hardest part is done. Work on HTML 5.1 has already started.

This news doesn't mean a lot for the average user. HTML5 was a draft for several years, and most browsers more or less implement it. Some of the not-so-implemented features are actually part of CSS 3.0, like border-radius: you can see -moz-border-radius, -webkit-border-radius and -ms-border-radius. See, when an element (or style, in this case) is not standardized yet, some vendors implement their own version. But since they can't use the 'real' name, they prepend it with their rendering engine's prefix (Firefox's, Safari/Chrome's or Internet Explorer's in the previous examples). This is why some pages work better with some browsers than others. This is also why I will end up with an ulcer.

In any case, what does it mean for Avalanche? Well, not much. HTML5 is application data, and Avalanche emulates protocols. Protocols are used to carry application data from one end to another. Avalanche couldn't care less about the HTML version you're using. So yes, you can test HTML5-enabled websites with Avalanche, no problem. Just like you can "test Ajax": Ajax calls are just HTTP requests sent asynchronously by your browser, and we can reproduce that. We don't care if you use jQuery or Node.js or Prototype; in the end, what goes on the wire goes over HTTP, and we can do that.

Now, HTTP 2.0 or SPDY 2/3.0 we would really care about, but those are still drafts, so we'll have time to talk about them later.


No MOS in Avalanche Adaptive Bitrate – why?

Spirent has always been on the cutting edge of video and audio testing. When the hot technology was Microsoft's MMS, we added that. When RTSP rose, we implemented it with a myriad of player emulations (Microsoft, Apple, Real, BitBand…). When RTMP (Flash) video was the next best thing, we added that too, and so on and so forth.

When presenting the solutions to the customers, we always make it clear that in video/voice testing, you want to look not only at how many streams the system you test can handle, but also the quality of the streams. If your customers are plagued with bad video quality, they will not use your Video on Demand service any more. If they can’t understand what the person on the other side of the IP phone is saying, they will switch to a different VoIP provider.

This is why Avalanche (as well as Spirent TestCenter and some of our other products) has always implemented Quality of Experience (QoE) metrics. There are network-layer metrics, the Media Delivery Index (MDI)-related stats, and "human-level" metrics, the Mean Opinion Score (MOS). These are pretty much industry-standard metrics that are totally relevant when testing RTSP and SIP.

But now we support Adaptive Bitrate (ABR) and… we don't provide MOS or MDI. People are surprised, with reason. I was surprised too, at the beginning, until a discussion on our internal mailing list got me to think more about it. Let's explore the reasons why we didn't implement MOS and MDI for ABR, but first let's recap what MOS means in the context of load testing.

What is MOS, again?

MOS is a score on a scale of 5 that reflects the quality of a video as a human would evaluate it. A score of 5 is “perfect” (and not achievable by design). A score below 1 is “please make it stop, my eyes are bleeding.” A typical good score is somewhere between 4.1 and 4.5.

As soon as a video is encoded, you can use tools to calculate its MOS. A good encoder (usually one you have to pay for) will give you a high score. A bad one (that you sometimes also pay for) will give you a bad score. I will not get into the details of how the score is computed, but in short it depends on your test methodology. Some people will compare the source and retrieved files (PEVQ). If you use R-Factor, you'll use codec and bitrate information, and so on. There are other ways to calculate video quality, even when sticking to MOS.

When a video is streamed, the MOS on the receiving end cannot be higher than the source video. At best, in perfect network conditions, the MOS scores will be equal between the source and retrieved media. This is what you look for when looking at MOS during load tests: you’re looking at the evolution of the MOS (it shouldn’t get lower), not only its absolute value. If the source video MOS score is 2, it’s pretty bad, but if it’s still 2 when it reaches the client, your network is not degrading the quality: your network is good.

What makes MOS go down, then? Typically, it's packet loss. RTSP and multicast video streaming typically use RTP/UDP for the data stream (RTSP, which is TCP-based, is only used for control). If you're reading this blog, you know that UDP is an unreliable transport protocol: there's no re-transmit feature, among other things. (People have tried to work around that using RTCP, but it adds overhead, which is probably why RTP wasn't based on TCP in the first place, so it's not an ideal solution.)

Why is it irrelevant then?

As we have just seen, in a live network a MOS score will decrease due to bad network conditions because the underlying transport protocol (UDP) is unreliable. But Adaptive Bitrate is based on HTTP, which itself is based on TCP! And we know that TCP is a reliable protocol. There will be no packet loss: TCP's retransmit mechanisms will kick in to make sure you get that lost packet.

This means your clients' video quality score will always be the same as the source's, because ABR relies on TCP to make sure there's no lost data. Therefore, measuring it is irrelevant.

But retransmits bring other problems. First, there is the overhead. Not much can be done about that: ABR is a technology that favors quality at the cost of some extra verbosity.

Then, it takes time to re-transmit packets: there's an extra round trip each time. On a fairly lossy network the re-transmissions will multiply and slow down the network. How will this manifest (pun intended) for the users? They will not have enough data (fragments) to keep playing the video without interruption. This is known as Buffering Wait Time. You don't want that.

When this threatens to happen, the ABR client will tend to downshift to a lower bitrate. This is what makes this technology brilliant. As the name implies, it will adapt to the network conditions. This is what you want to look at. As we’ve seen, the video quality is a given. What is not a given, and a very good metric to look at, is the total number of downshifts. Or the total number of buffer underruns. Or the average Buffering Wait Time. And guess what, Avalanche measures all that!

One Metric To Rule Them All

People like to have one metric to simplify results analysis, and they are right. While such a metric cannot be as precise as looking in detail at all the stats, it's important to have it.

In Avalanche we call it the Adaptivity Score. We look at the total bandwidth used by the users and compare it to the potential maximum bandwidth (the maximum available bitrate multiplied by the number of users). We then normalize it to a 100-point scale.

Let's take an example: 10 users connect to an ABR server serving streams at bitrates of 1 Mbps and 500 Kbps. That's a maximum potential bandwidth of 10 Mbps. If all 10 users are on the 1 Mbps stream, the score will be 100:

(current aggregate bandwidth / maximum potential bandwidth) * 100

((10×1 Mbps) / 10 Mbps) * 100 = 100

Now let’s pretend that half of the users go to the 500 Kbps stream.

(((5×0.5 Mbps) + (5x1Mbps)) / 10 Mbps) * 100 = 75

And since we do this calculation at every result sampling interval, you can analyze this after the test has been executed and get a nice graph.
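The calculation reduces to a few lines. This sketch (an illustration of the formula, not Avalanche's internal code) reproduces the two worked examples above:

```python
def adaptivity_score(user_bitrates_mbps, max_bitrate_mbps):
    # Aggregate bandwidth relative to every user streaming at the top bitrate,
    # normalized to a 100-point scale.
    potential = max_bitrate_mbps * len(user_bitrates_mbps)
    return sum(user_bitrates_mbps) / potential * 100

# All 10 users on the 1 Mbps stream: a perfect score.
assert adaptivity_score([1.0] * 10, 1.0) == 100.0

# Half the users downshift to 500 Kbps: the score drops to 75.
assert adaptivity_score([0.5] * 5 + [1.0] * 5, 1.0) == 75.0
```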

In the example below I used Avalanche to emulate both the client and server of the Apple HLS implementation of ABR. I have a bunch of bitrates in the Manifest, and enabled a bitrate shift algorithm. The users are configured to start at the minimum possible bitrate and work their way up. The video lasts 5 minutes (to allow enough time to shift all the way up).

The first graph shows the Adaptivity Score. The second graph shows which bitrate “buckets” the users are streaming from. We can see that as the users go to higher bitrate channels the score goes higher.

Adaptivity Score over Time

This graph shows the evolution of the Adaptivity Score over time.

Adaptive Bitrate "Buckets"

This graph shows that not all users start at the maximum bitrate – it takes some time for them to all shift up. This reflects on the Adaptivity Score.

And just for fun, here’s a screenshot of the throughput in that test. That’s almost 30 Gbps on a single device emulating both the clients and servers 🙂

Overall ABR Throughput (Back to Back)

This shows the overall ABR Throughput of a back to back test on a Spirent C-100 appliance.


If there is one thing to take from this article, it’s that in HTTP Adaptive Bitrate we know that thanks to TCP, all of the video data will reach the clients. There will be no lost data. We know that the quality of the video as viewed by the clients will be equal to the source. But the cost of this is that you might have an increased buffering time as packets are potentially re-transmitted.

The second part is that if a Service Provider wants to make sure its customers have the best possible experience, it needs to make sure those clients can smoothly stream from the highest available bitrate. That's your "quality of experience" measurement in ABR: how close your clients can get to the maximum available bandwidth.
