Saturday, June 1, 2019

ESXi Veeam vs FreeNAS compression comparison

I have mentioned previously how I back up all my infrastructure configs to the FreeNAS. I use a dataset with gzip-9 compression since they are mostly text, and I let snapshots manage revision control for me. It works really well. For the VMs themselves, I use Veeam (Community Edition) to back up the guests on my ESXi host to the FreeNAS server. I had some time and was curious which combination of Veeam and FreeNAS settings makes for the best backups, so I performed some tests. Totally not scientific.

I took one of my smaller VMs, Pi-Hole, which is a 2-core, 2GB-memory VM with 10GB of storage, and backed it up sequentially with each compression method within Veeam. It is using about 5G of space currently. Here is what each one came out to:

 [email protected]:/mnt/Pool/sysadmin # ll  
 -rw-rw-rw-  1 root wheel 5202056192 May 21 20:58 1_None_Pi-HoleD2019-05-21T205500_02D1.vbk  
 -rw-r--r--  1 root wheel 3728351232 May 21 21:06 2_Dedup_Pi-HoleD2019-05-21T210008_3D6D.vbk  
 -rw-r--r--  1 root wheel 1963535872 May 21 21:12 3_Optimal_Pi-HoleD2019-05-21T210835_3CD8.vbk  
 -rw-r--r--  1 root wheel 1646802432 May 21 21:17 4_High_Pi-HoleD2019-05-21T211400_EC95.vbk  
 -rw-r--r--  1 root wheel 1531771392 May 21 21:21 5_Max_Pi-HoleD2019-05-21T211803_EC3B.vbk  

I then created several FreeNAS datasets with compression methods ranging from none up to gzip with the maximum -9 setting, named each one for its method, and copied each backup into them. In other words, I copied '1_None_Pi-HoleD2019-05-21T205500_02D1.vbk' into each of the folders under Test below and compared how FreeNAS compressed the same data using each method.
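From the command line, creating those test datasets looks roughly like this (a sketch; the pool and dataset names match my layout, and on FreeNAS you would normally do this through the GUI instead — note that plain `compression=gzip` defaults to level 6, and the GUI's "fastest" gzip is level 1):

```shell
# Sketch: one dataset per compression setting under the Test path.
# Pool/dataset names are from my setup; adjust for yours.
zfs create -o compression=off    Pool/sysadmin/Test/none
zfs create -o compression=lz4    Pool/sysadmin/Test/lz4
zfs create -o compression=gzip-1 Pool/sysadmin/Test/gzipfast
zfs create -o compression=gzip-9 Pool/sysadmin/Test/gzip-9
zfs create -o compression=gzip   Pool/sysadmin/Test/gzip-6
```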

None setting in Veeam

Looking in the GUI above you can see they all had fantastic compression ratios. FreeNAS determines usage using this formula:

Uncompressed space = compressed space * compression ratio

To put it another way, if your pool shows 1TB used and 3TB available with a compression ratio of 2.00x, you have 2TB of data in the pool that has compressed down to 1TB. If the compression ratio stayed at 2.00x, you could copy another 6TB into the pool.
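A quick sanity check of that math, using plain shell arithmetic with the numbers from the example:

```shell
# 1T physically used at a 2.00x compression ratio
used=1    # TB physically used in the pool
avail=3   # TB still available
ratio=2   # compression ratio, 2.00x

echo "logical data stored: $((used * ratio))T"          # prints "logical data stored: 2T"
echo "additional data that would fit: $((avail * ratio))T"  # prints "additional data that would fit: 6T"
```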

So how much did each method compress the file? Just look via the command line: 'du -h' will tell you how much space it is using in human-readable format.

 [email protected]:/mnt/Pool/sysadmin/Test # du -h  
 4.8G  ./none  
 1.9G  ./lz4  
 1.6G  ./gzipfast  
 1.5G  ./gzip-9  
 1.5G  ./gzip-6  

Since these folders contain the same file, just compressed differently by each dataset, you can add the -A switch to see the apparent size; as far as any user would know, the files are identical.

 [email protected]:/mnt/Pool/sysadmin/Test # du -Ah  
 4.8G  ./none  
 4.8G  ./lz4  
 4.8G  ./gzipfast  
 4.8G  ./gzip-9  
 4.8G  ./gzip-6  

Human readable is nice, but you can drop the -h switch to see the exact size in 1K blocks.

 [email protected]:/mnt/Pool/sysadmin/Test # du -A  
 5080134 ./none  
 5080134 ./lz4  
 5080134 ./gzipfast  
 5080134 ./gzip-9  
 5080134 ./gzip-6  

For this exercise we want to know the actual space used, so I am using neither the -A nor the -h switch.

 [email protected]:/mnt/Pool/sysadmin/Test # du  
 5083217 ./none  
 1974197 ./lz4  
 1659773 ./gzipfast  
 1546173 ./gzip-9  
 1553781 ./gzip-6  
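Dividing the apparent size by the on-disk size gives the effective compression ratio for each dataset. For example, with the 1K-block numbers from the du output above:

```shell
# Effective ratio = apparent size / on-disk size (1K blocks from du above)
awk 'BEGIN {
    printf "lz4:    %.2fx\n", 5080134 / 1974197
    printf "gzip-9: %.2fx\n", 5080134 / 1546173
}'
# prints:
# lz4:    2.57x
# gzip-9: 3.29x
```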

As expected, gzip-9 compressed it greatly, from 5G down to 1.5G, or about 30% of the backup file's original size. This is very close to the Max setting within Veeam. I took this further with each Veeam method to see how much more FreeNAS could compress it.

Dedup Setting in Veeam

 [email protected]:/mnt/Pool/sysadmin/Test # du  
 3643537 ./none  
 1953457 ./lz4  
 1637709 ./gzipfast  
 1526513 ./gzip-9  
 1532541 ./gzip-6  

This one was interesting and really unexpected to me. Combined with FreeNAS gzip-9 compression, Veeam's Dedup setting beat every Veeam compression setting except Max. Think of dedup as the cousin of compression.

Optimal Setting in Veeam

 [email protected]:/mnt/Pool/sysadmin/Test # du  
 1918857 ./none  
 1875889 ./lz4  
 1640389 ./gzipfast  
 1623297 ./gzip-9  
 1623685 ./gzip-6  

High compression in Veeam

 [email protected]:/mnt/Pool/sysadmin/Test # du  
 1609401 ./none  
 1585361 ./lz4  
 1583137 ./gzipfast  
 1582909 ./gzip-9  
 1582913 ./gzip-6  

Note in the GUI that the file size is the same; there is not much left to compress. But if you look at the actual size you see a slight variance between all of them. While I did not keep track of CPU usage, I am sure the compute cost at this point is pretty high. It sure felt like the copy command was taking longer than before, especially since the Veeam system is a VM as well.

Extreme compression in Veeam

 [email protected]:/mnt/Pool/sysadmin/Test # du  
 1496945 ./none  
 1473553 ./lz4  
 1472269 ./gzipfast  
 1472161 ./gzip-9  
 1472161 ./gzip-6  

Just like high compression within Veeam, FreeNAS could not do much with it, so I did not bother with a screenshot; the GUI shows it exactly the same as high.

Gzip -9 results

5080134 Nothing
1623297 Optimal with gzip-9
1582909 High with gzip-9
1546173 None with gzip-9
1526513 Dedup with gzip-9
1472161 Max with gzip-9

So stacking FreeNAS's max compression on top of Veeam's max compression netted only about 24M of extra savings (Max alone minus Max with gzip-9). Since I was not tracking CPU usage, I can't say exactly what that benefit costs, but it sure felt like the copy took longer the more compression was going on, which makes sense when you think about it. Veeam backs up VMs overnight when things are most idle, so I am not super concerned with CPU usage. Except if I am doing a backup during a scrub! I'll need to revisit the schedules so they do not overlap.
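The 24M figure falls straight out of the du numbers for the Max backup (on the uncompressed dataset versus the gzip-9 dataset, in 1K blocks):

```shell
# Veeam Max on ./none (1496945) vs. Veeam Max on ./gzip-9 (1472161)
echo "$(( (1496945 - 1472161) / 1024 ))M extra saved"   # prints "24M extra saved"
```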

To be more scientifically complete I would have to do it again and gather more data to get a better picture of it all. How long does each Veeam method take? How long does the cp take on the FreeNAS? How much CPU is used for all this? Is 80% more CPU usage (and power, and heat) worth the 24M savings for this one VM? Both my ESXi and FreeNAS hosts use L-series procs, which plays a part versus higher-end procs.

The dedup setting was the most interesting for sure. I might run a similar test again with all my VMs backed up via Veeam's dedup-friendly method into a dataset with dedup enabled, and compare that to a non-dedup dataset. I doubt I would do that in real life, though; I just don't see the memory cost of dedup on the FreeNAS being justified for backup data. Perhaps I will do a part II looking at that, plus CPU usage and backup time of the various methods.

This did make me think about creating some child datasets for my content, though. My FreeNAS stores lots of files such as Windows OS ISOs, and today I use gzip (and 7-Zip) via the command line to save space, a holdover from when it was hosted on Linux. I could instead change those files over to a child dataset and let FreeNAS handle the compression automatically, saving me from extracting the archive before I can open the ISO for whatever I need it for. Same for the WIMs I host there. These results are pretty similar to whacking a WIM over the head with 7-Zip.
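Moving that content onto its own compressed child dataset would look roughly like this (a sketch; the dataset and mount paths here are hypothetical examples, not my actual layout):

```shell
# Hypothetical child dataset just for ISOs; other properties inherit from the parent
zfs create -o compression=gzip-9 Pool/content/isos

# Move the already-extracted ISOs in and let ZFS compress them transparently
mv /mnt/Pool/content/*.iso /mnt/Pool/content/isos/

# See what the dataset-level compression achieved
zfs get compressratio Pool/content/isos
```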

So was this a waste of time?

