IPSEC Remote Access Testing using Avalanche and Fortigate – The Sequel

A few years ago I wrote an article on how to test IPSEC on a Fortigate using Avalanche. That article is now largely outdated and only covers pre-shared key authentication. It so happens that I recently needed to configure a similar test, except this time the authentication mechanism had to be digital (RSA) certificates. So let’s get to it.

Test Bed

Let’s recap what we have:

  • Load Generator is a Spirent Avalanche C100-S3 running version 4.75.
  • DUT is a Fortigate 1500D running FortiOS 5.2.2 (I know it’s not the latest version, but it’s the one I have)
  • Devices are connected through an MRV switch and a Velocity topology using 2x 10GbE fibre.

Test Requirements

  • IPSEC in Tunnel mode
  • Digital Certificate Authentication
  • Phase 1: IKEv1, DH-Group 14, AES-256 encryption and SHA-256 hash, Aggressive Mode
  • Phase 2: AES-256 and SHA-256; no PFS needed (customer’s requirement; Avalanche supports Phase 2 PFS just fine).

The Test

I’m going to post screenshots for most of the configuration and comment on the relevant bits.

Most of the Phase 1 options are accessible directly on the subnet tab. In this case I’m using the second row (highlighted in yellow) but as you can see I also ran some tests using PSK authentication. Make note of the ISAKMP ID Type (KEY_ID) and value (“demo”) as we’ll need this later.

The rest of the IKEv1 configuration, where we specify the IKE mode (Aggressive in our case; Main mode is also supported).

Make note of the Phase 2 options here. We’re telling Avalanche to send the current user’s IP address as its Initiator’s ID Type, and this must match on the server side (see below). The Responder’s ID Type tells Avalanche what identity the gateway will answer with (this can be arbitrary). This is critical because, as with everything in IPSEC, all the settings have to match 100% for anything to work.

Let’s have a look at the Fortigate GUI for a bit.

The Phase 1 options are here. They match the ones in Avalanche, of course. We’ll get to the Certificate later.

Here we get the Proposal options. I kept the defaults except for 3DES (lol). It’s important that the Key Lifetime matches what’s configured in Avalanche (see below).

Make sure the Phase 1 and Phase 2 key lifetimes match. This is a very common configuration error in Avalanche (each device, whether it’s a load generator or an IPSEC gateway, uses different default values). Note that this is the area of the GUI where we actually specify the client certificate we want to use, as well as the certificate the gateway will offer (“CA Cert” field).

The Phase 2 options on the Fortigate. The “Local Address” and “Remote Address” fields are critical. They are the values corresponding to Avalanche’s “Responder’s ID Type” and “Initiator’s ID Type”, respectively.

Let me make that clear:

Avalanche Initiator’s ID Type = Fortinet’s Remote Address
Avalanche’s Responder’s ID Type = Fortinet’s Local Address.

Yes, it’s very annoying that nobody in the industry uses the same terms for the same options everywhere. That’s always going to be your challenge when testing IPSEC. But there you have it.

We are basically telling the Fortigate which subnet clients will come from, and which network it should claim to be behind. It doesn’t have to be true, it just has to match.
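For reference, the same settings can also be checked or set from the FortiOS CLI. This is a sketch from memory: the object names, interface, and subnets are made-up placeholders, and the exact field names can vary between FortiOS versions.

```
config vpn ipsec phase1-interface
    edit "avalanche-p1"
        set interface "port1"
        set ike-version 1
        set mode aggressive
        set proposal aes256-sha256
        set dhgrp 14
        set authmethod signature
        set certificate "avalanche-client"
        set peertype one
        set peerid "demo"
    next
end
config vpn ipsec phase2-interface
    edit "avalanche-p2"
        set phase1name "avalanche-p1"
        set proposal aes256-sha256
        set pfs disable
        set src-subnet 10.0.0.0 255.255.255.0
        set dst-subnet 10.1.0.0 255.255.255.0
    next
end
```

Note how `peerid` matches Avalanche’s KEY_ID value (“demo”) and how `src-subnet`/`dst-subnet` play the role of the “Remote” and “Local” addresses discussed above.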


All of this was fairly quick to set up because it’s all pretty straightforward once you get the terminology down. It’s really just a lot of options. Things got interesting when I moved from PSK authentication to certificates, mostly because I wasn’t sure how the Fortigate handles them, so it was a learning experience.

It seems the Fortigate will accept a certificate only if it’s been signed by a CA known to the device and explicitly listed as accepted. There’s probably a smarter way to do this (e.g. using the subjectName or subjectAltName, or something like that) but, again, I’m not a Fortinet expert and my time for investigation was limited.

I could have used OpenSSL to generate all the keys, but instead I used X Certificate and Key Management (XCA) because I couldn’t remember all the OpenSSL CLI commands to save my life. First I created a Certificate Authority (Spirent_CA):
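For those who do remember their OpenSSL, here is a rough CLI equivalent of the XCA steps. The names (Spirent_CA, the “demo” subject, file names) are just this lab’s values; adjust to taste.

```shell
# Create the Certificate Authority (self-signed, valid 10 years)
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -keyout ca.key -out ca.pem -subj "/CN=Spirent_CA"

# Create the client key and a certificate signing request
openssl req -newkey rsa:2048 -nodes \
    -keyout client.key -out client.csr -subj "/CN=demo"

# Sign the client certificate with the CA
openssl x509 -req -in client.csr -CA ca.pem -CAkey ca.key \
    -CAcreateserial -days 3650 -out client.pem

# Sanity check: the client cert should verify against the CA
openssl verify -CAfile ca.pem client.pem
```

Both `ca.pem` and the `client.pem`/`client.key` pair can then be imported on the FGT as described below.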

It was then exported as a PEM file:

Note that the extension is .crt. The FGT GUI refused to import it, so at first I thought it was a format problem. It turned out to be just the extension: after renaming the file to .pem, the certificate was accepted. I’m not sure if this is by design or a bug, but be aware of it (alternatively, I’m pretty sure importing from the device’s CLI would work better).

Once imported it shows up in the GUI:

For the next step I need to generate the IPSEC clients’ certificate. I’ll do this from XCA. Right-click on your CA Certificate in the GUI, pick “New”, and you should see this:

Make sure you sign it with the CA you uploaded on the FGT. On the “Subject” tab, fill as appropriate:

Once that’s done we can upload the certificate onto the FGT. There’s a little trick to importing it, though. I’m not sure why, but the FGT wants both the public and private key, otherwise the certificate is refused later on; I couldn’t get it to work with the certificate alone. So export both your public key (certificate) and private key from XCA in the PEM format (and change the extension from .crt to .pem).

To be 100% clear, this is what a public key looks like:


And the private key looks something like this (yes, I modified it before posting a private key online, even if it’s just for a lab 🙂):

-----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: DES-EDE3-CBC,588E237559B626B7

[base64-encoded key material]
-----END RSA PRIVATE KEY-----

If your keys look like this, you’re good to go. Now just make sure that in your IPSEC Tunnel Policy you configured the appropriate certificate (go set it now if you just imported the certificate).

You also need a policy to allow users to VPN in. Here’s mine below, it’s not very secure but good enough for the lab:

Now let’s run a test and see what happens!


Well, that seems to be working 🙂

On the Avalanche side we can see the Attempted vs Open tunnels:

Some throughput statistics:

And so on.

Hope this article helps; if you have any questions feel free to comment or contact me offline. I won’t answer Fortinet-related questions though, because I claim no expertise whatsoever in that vendor’s technology.


restic is probably the best backup solution for nerds

Backup (and Backup Management) is a Big Deal and should not be taken lightly.

There are many paid-for solutions of varying quality. There are also various backup strategies. I won’t cover those, but I’ll just say that you should have both an on-site and an off-site solution: on-site for frequent backups (and high-speed transfers), and off-site in case, you know, your site burns down (or, less dramatically, somebody steals your backup drive). In all cases the data should be encrypted before it even leaves your machine, with a key known only to you.

For my off-site solution I use an external drive that I back up to monthly and hand over to a friend but, while it’s one of the best solutions, it’s a bit of a hassle. So instead I wanted to back up into the proverbial Cloud and started looking for programs (free and/or open source, but also paid-for). I tried these:

  • S3 Backup: AWS kept returning an error message about the cipher suite used. I couldn’t start any backup.
  • Cloud Berry: Great features and a reasonable price ($30), but a horrible UI, and it didn’t seem to run in the background or minimize to the tray, so no go.
  • Arq: Looks horrible, especially for the price they ask ($50).
  • Duplicati: I tried the 2.0 beta and kept running into errors. That’s expected from a beta, but I wanted a stable solution.
  • Carbonite: It ignored my audio files and was generally too picky about which files it accepted to back up. I couldn’t find a way to specify “grab everything”, so I didn’t like that.
  • restic: The one I picked.

restic is great

I tried restic because a friend I trust a lot on such matters insisted several times that it’s really good. The fact that it’s all CLI bothered me because I wanted built-in scheduling. But I found a workaround. Let’s first introduce the software.

restic is free, open source software written in Go. Its goal is to be fast, efficient and secure. As far as I know, no code audit has been performed on the source, but the References page of the documentation shows that a great deal of thought went into the design. It’s worth reading but, in short, restic makes heavy use of strong encryption, signatures and hashes. There can always be implementation errors leading to security flaws, I suppose, but at least the design seems solid.

restic is all CLI. You grab the binary from their GitHub repo (since it’s written in Go, there are native binaries for tons of OS and CPU architectures), throw it in your OS’s PATH, and off you go.


The workflow is pretty simple. You initialize a “repository” (a target for the backup) and provide an encryption password for it. restic creates the file hierarchy it needs to operate and the repo is ready. The repo can be a local drive (however you mount it: SATA, USB, SMB/CIFS, iSCSI, …) or a remote service (Amazon S3 or S3 API-compatible services, OpenStack Swift services, SFTP, or BackBlaze B2). It makes no difference as far as you’re concerned; restic operates the same.
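Only the repository string changes between backends. A few init examples using restic’s repository syntax (the hostnames, buckets, and paths below are placeholders):

```
# Local directory
restic init --repo /mnt/backup/restic

# SFTP
restic -r sftp:user@backuphost:/srv/restic-repo init

# Amazon S3 (credentials come from the usual AWS environment variables)
restic -r s3:s3.amazonaws.com/my-bucket init

# BackBlaze B2
restic -r b2:my-bucket:restic init
```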

Let’s init a repo on a local drive. I’m using PowerShell on Windows 10 Pro but the commands are identical on all OSes:

PS E:\restic> restic init --repo E:\restic\
enter password for new backend: ****
enter password again: ****
created restic backend 4a37b462c0 at E:\restic\

Please note that knowledge of your password is required to access
the repository. Losing your password means that your data is
irrecoverably lost.

This took less than a second to execute. Remote repositories take longer because of the network round trips, but not more than 2-3 seconds.

Now we can backup some files and target that repo:

PS E:\restic> restic -r E:\restic\ backup C:\Users\Arnaud\bin\
enter password for repository:
scan [C:\Users\Arnaud\bin]
scanned 2 directories, 7 files in 0:00
[0:01] 100.00% 28.929 MiB/s 28.929 MiB / 28.929 MiB 9 / 9 items 0 errors ETA 0:00
duration: 0:01, 19.54MiB/s
snapshot b18aa7fa saved

I’ll add a file to my bin directory and run the command again.

PS E:\restic> restic -r E:\restic\ backup C:\Users\Arnaud\bin\
enter password for repository:
using parent snapshot b18aa7fa
scan [C:\Users\Arnaud\bin]
scanned 2 directories, 8 files in 0:00
[0:00] 100.00% 0B/s 28.934 MiB / 28.934 MiB 10 / 10 items 0 errors ETA 0:00
duration: 0:00, 215.77MiB/s
snapshot 47481576 saved

We can see that the extra file has been grabbed. But what’s really interesting is that restic tells us it’s using a “parent snapshot.” What’s that?


According to restic’s documentation, this is what a snapshot is:

A Snapshot stands for the state of a file or directory that has been backed up at some point in time. The state here means the content and meta data like the name and modification time for the file or the directory and its contents.

restic hashes the content of each file and uses this as a basis for comparison. But it doesn’t handle a whole file at a time; files are sliced into (encrypted) blocks. When restic compares snapshots it easily detects which bits of a file changed (essentially a diff). That means the integrity of the files is checked, but also that only the blocks that changed need to be sent, all while keeping everything encrypted even before transit. This is fantastic.
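The principle is easy to demonstrate with standard tools. This toy sketch uses fixed-size blocks and plain sha256sum (restic actually uses content-defined chunking and encrypts every blob, so this only illustrates the idea, not the implementation):

```shell
# Slice the file into 1 MiB blocks and fingerprint each one
split -b 1048576 -d mydata chunk_
sha256sum chunk_* > manifest.old

# ... the file changes somewhere in the middle ...

# Re-slice and re-hash: only the blocks that changed get new hashes,
# so a diff of the manifests pinpoints what needs to be re-sent
split -b 1048576 -d mydata chunk_
sha256sum chunk_* > manifest.new
diff manifest.old manifest.new
```

A one-byte change in a 400 GB file only invalidates the block containing it, which is why incremental backups stay small.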

As the documentation explains there are snapshots for files and directories but also for the whole repository. It’s possible to list them like so:

PS E:\restic> restic -r E:\restic\ snapshots
enter password for repository:
ID Date Host Tags Directory
b18aa7fa 2017-08-24 14:34:42 CLAVAIN C:\Users\Arnaud\bin
47481576 2017-08-24 14:35:50 CLAVAIN C:\Users\Arnaud\bin

We can see all the snapshots for that specific repo. Of course you can roll back to previous snapshots etc. This is very much like Git or maybe even Docker. I should add that it’s entirely possible to save multiple source directories into one common repository. If I wanted to add my Desktop I could:

PS E:\restic> restic -r E:\restic\ backup C:\Users\Arnaud\Desktop\
enter password for repository:
scan [C:\Users\Arnaud\Desktop]
scanned 1 directories, 3 files in 0:00
[0:00] 100.00% 0B/s 9.264 KiB / 9.264 KiB 4 / 4 items 0 errors ETA 0:00
duration: 0:00, 0.09MiB/s
snapshot 2389bc9c saved

PS E:\restic> restic -r E:\restic\ snapshots
enter password for repository:
ID Date Host Tags Directory
b18aa7fa 2017-08-24 14:34:42 CLAVAIN C:\Users\Arnaud\bin
47481576 2017-08-24 14:35:50 CLAVAIN C:\Users\Arnaud\bin
2389bc9c 2017-08-24 14:45:12 CLAVAIN C:\Users\Arnaud\Desktop

And you can also back up both sources at once by simply appending the extra directories:

restic -r E:\restic\ backup C:\Users\Arnaud\Desktop\ C:\Users\Arnaud\bin\

My “real” set of files to back up is a little over 400 GB. Even when nothing has changed between two backups, restic still has to process all those files (precisely to check whether anything has changed). This takes about two and a half minutes on my fairly decent CPU (Intel i5-4690K @ 3.5 GHz), and that’s the time needed to split, re-hash and compare everything. I feel 2:30 for 400 GB is pretty good.


You might have the best backup solution in the world; it’s useless if restores don’t work. As a reminder, you should simulate a data loss and test your restore process on a regular basis (I myself am guilty of not doing this nearly enough).

I deleted two files from my desktop and will now ask restic to restore them.

PS E:\restic> restic -r E:\restic\ restore 2389bc9c --target C:\Users\Arnaud\
enter password for repository:
restoring <Snapshot 2389bc9c of [C:\Users\Arnaud\Desktop] at 2017-08-24 14:45:12.863773 +0200 CEST by CLAVAIN\Arnaud@CLAVAIN> to C:\Users\Arnaud\
ignoring error for C:\Users\Arnaud\Desktop\desktop.ini: OpenFile: open \\?\C:\Users\Arnaud\Desktop\desktop.ini: Access is denied.
There were 1 errors

There was an error because desktop.ini was locked by Windows; other than that, the files were restored. Note the --target argument. It’s mandatory. I wish restic would restore to the location saved in the snapshot (“C:\Users\Arnaud\Desktop”, as we saw when listing the snapshots) when the target location is omitted.
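On the subject of restores, you don’t have to restore a whole snapshot either; restic can filter what it brings back. Something like this (the path is just an example), using the special `latest` snapshot ID:

```
# Restore a single file from the most recent snapshot
restic -r E:\restic\ restore latest --target C:\restore\ --include notes.txt
```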


Now comes the little downside of restic compared to other solutions: there’s no scheduling included, since restic is just a CLI program. Under Linux it’s easy, since users can just crontab a script. Windows can do the same, but it’s a bit more convoluted. We must do the following:

  • Write a simple PowerShell script
  • Put the repository’s password into a text file (*)
  • Create a Task in the Windows’ Scheduler to execute the script

(*) Yes, we’re putting a password in clear text in a file. restic’s assumption is that its host system is safe; it’s therefore fine to store passwords there, in files or environment variables. The encryption is there to ensure nobody on the remote repository side can decrypt the files (nothing prevents a shady hosting provider from snooping); the files already sit unencrypted on your own system anyway, so the password being there too changes nothing.

My script is C:\Users\Arnaud\restic-usbdrive.ps1 and its content is just the backup command from before, plus an extra flag to point to the password file:

restic -r E:\restic\ -p C:\Users\Arnaud\.s1kr3t\restic-usbdrive backup C:\Users\Arnaud\Desktop\ C:\Users\Arnaud\bin\

So of course I need to create that “restic-usbdrive” file now and put the repo password in it. Then we just need to go to the Windows Task Scheduler and set up a basic task.

Choose the options and frequency relevant to you but make sure you pick “Start a program” as the task to perform. Then simply point to your script:

Windows allows a lot of conditions for running tasks: on a schedule, but also on events like logging on or off. It can prevent duplicates, so if the task is long-running another instance won’t start. It can also run the task if a scheduled occurrence was missed (for instance because the computer was asleep; it can even wake it up if you so choose).
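If you prefer the CLI over the GUI there, the same task can be registered with schtasks. An untested sketch (adjust the schedule, task name, and paths to taste):

```
schtasks /Create /TN "restic backup" /SC DAILY /ST 02:00 ^
    /TR "powershell.exe -ExecutionPolicy Bypass -File C:\Users\Arnaud\restic-usbdrive.ps1"
```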



Harhar is now open source!

Harhar uses several open source libraries (JSON.NET, NodaTime and HarNet) and was provided for free (as in “free beer”). I have uploaded the Harhar source code to GitHub, so it is now free (as in “free speech”) and open source software!

I don’t expect thousands of pull requests, but I thought it might help people integrate it into their own automation systems, or maybe port it to another programming language (Harhar is written in C# but is fully compatible with Linux and macOS through Mono).

I’ll also be using GitHub to track issues and feature requests, so if you find any bugs, please use that platform. I will try to convert the application note into proper documentation (also on GitHub) and use AppVeyor to automate the publication of binaries.


Setting up Avalanche Automation on Linux

The Avalanche product comes with a fully-featured TCL API for several operating systems: Windows, Linux and FreeBSD. Coupled with the “GUI to TCL” feature of the Windows thick client (Avalanche Commander) it’s a very powerful automation tool. The typical use case is “write a test case in the GUI, export it to TCL and execute it from a Linux Virtual Machine” (through a cron task for instance or a Continuous Integration process). This article provides a step-by-step guide on how to do just that on CentOS/RHEL 6.x.



Introducing Harhar

Sometimes you need to reproduce typical HTTP user behavior under load. A common example is testing a proxy with specific rules for specific pages: you need to ensure that applying a given rule won’t kill your performance, and the best way to check is to test. Another reason is to send realistic traffic, representative of what you typically see; if you’re a web hosting company, for instance, you can test your infrastructure with the correct object sizes and number of requests.

Avalanche, as you might know if you read this blog, is totally able to do that and is even, arguably, the best tool for it. But you need to build the test by hand, or by using the Fiddler plugin I provide for free (source code is available on GitHub). There’s however a limitation with the Fiddler plugin: it only generates a client Actions List. This means you can only test against real servers (which is fine), but you need to do some extra work if you want Avalanche to emulate the server side.

Enter Harhar

This is why I developed a tool called Harhar. This free (as in beer) tool parses .har (HTTP Archive) files generated by Chrome (support for Firebug-generated files is in the works) to create Actions Lists, but also server objects with the correct hierarchy. You simply upload the content on the Avalanche server side and run. The tool comes with a detailed tutorial document, so make sure to read it.

As a little background, HTTP Archives are files that store every request from an HTTP client (URIs, cookies, etc.) as well as the responses (including the content). The format is defined in a W3C draft and is actually JSON.
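For the curious, an abridged HAR file looks like this (heavily trimmed; real files carry many more fields per entry, including timings and full response bodies):

```
{
  "log": {
    "version": "1.2",
    "creator": { "name": "WebInspector", "version": "537.36" },
    "entries": [
      {
        "request": {
          "method": "GET",
          "url": "http://example.com/index.html",
          "headers": [ { "name": "Accept", "value": "text/html" } ]
        },
        "response": {
          "status": 200,
          "content": { "size": 1024, "mimeType": "text/html" }
        }
      }
    ]
  }
}
```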

So feel free to experiment with this tool, and let me know if you have any questions, feedback, or bugs to report!


Avalanche 4.44 released

A couple of weeks ago, version 4.44 of Avalanche was released. It’s not a major (4.x) release, but some new features made it in (on top of bug fixes). Here they are:

  • TCP Selective ACK (SACK) is now a fully supported feature (it was an alpha feature before 4.44). This helps bring Avalanche’s TCP behavior closer to modern operating systems (Windows Vista+, Linux kernel 2.6+).
  • DHCPv6 client – that one should be self-explanatory.
  • More Elliptic Curve ciphers. I’m really excited about these because, even though we supported a bunch of them in the past, they required DSA keys. The new ones support RSA keys, which are much more common. Some more ciphers were implemented too, including the popular AES128-GCM-SHA256. Here’s the full list:
    • AES128-GCM-SHA256
    • AES256-GCM-SHA384
    • DHE-RSA-AES128-GCM-SHA256
    • DHE-RSA-AES256-GCM-SHA384
    • ECDHE-RSA-AES128-SHA256
    • ECDHE-RSA-AES256-SHA384

There’s also an exciting new alpha feature: TCP Vegas! A lot of people were asking for this more modern congestion avoidance mechanism.


Testing Server Name Indication (SNI) In The Lab

People have been saying that the Internet will run out of IPv4 address space for a long time now, and somehow hosting companies have always managed to cope. One of the “coping mechanisms” consists of regrouping several websites behind one IP address, either through NAT or just by hosting several websites on the same web server. In the past this was not supported by browsers and servers if you wanted to encrypt the traffic: it wasn’t possible to have multiple certificates (and, therefore, domains) on the same IP address. This became a problem, so RFC 3546 (“TLS Extensions”) introduced Server Name Indication (SNI).

This article explains how to test SNI in the lab. Avalanche has supported this feature since release 4.39. I will not delve into methodologies specifically, but rather show how to set up an environment and configure Avalanche to use it. Customers will probably need to adjust this to mimic their own architecture, of course.
