Sunday, September 22, 2019

Add storage via LVM

This post is mostly notes just for myself. While I use FreeNAS locally to benefit from all its great features such as ZFS and snapshots, I also perform a daily backup via rsync to an offsite host. That backup host was about to run out of space. Since it's my backup, it gets all the hand-me-downs, so I barely invest in it.

Up until recently it was an AMD AM2-based system; it is now AM3-based and will transition to a higher-end AM3+ when I replace another system with Ryzen. Its chokepoint is the network, so I don't need anything super fast; it's just there to address the fire-and-theft type scenarios at my main site. It just needs disk. For storage I have a 64GB SSD for the OS and three spinner drives I reallocated from my main FreeNAS a while ago when it ran out of space.

These drives consist of a 4TB, 4TB, and 3TB smooshed together via LVM on an Ubuntu host. This gives me about 10TB of usable space. I just replaced two 2TB drives in the FreeNAS with larger drives, so I had them lying around ready to go. The case is a 3RU ATX chassis, so I have plenty of space for platters.

Since it is commodity hardware (easy to fix!) there is no hot-swap, so after cabling up the drive and powering the system back on I started the add process in a few steps.

Add a Drive

Before beginning, I had to collect some items and let my heart skip a beat.

First was to locate where the new drive was assigned. It's on /dev/sdc.

 sudo fdisk -l  

Next was to get the LVM volume group name, which is vg0, the default.

 sudo vgdisplay  

Finally, get the logical volume path, which is /dev/vg0/mystuff.

 sudo lvdisplay  
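
If you just want the names without the full detail, the vgs and lvs summary commands show the same volume group and logical volume information in a compact table (the exact columns vary by version):

 sudo vgs  
 sudo lvs  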

So now I can convert the new drive to a physical volume. I use the entire disk as I have no reason to partition it for other purposes.

 sudo pvcreate /dev/sdc  

Next, add the physical volume to the volume group.

 sudo vgextend vg0 /dev/sdc  

Then allocate the new free space to the logical volume.

 sudo lvextend -l +100%FREE /dev/vg0/mystuff  

Then finally grow the ext4 file system on the logical volume so I can use all this new space.

 sudo resize2fs /dev/vg0/mystuff  

All done. Per df -h, that mount went from 10TB total to 12TB.
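
To double-check a specific mount from the shell, point df at the mount point (for me that is /home/mystuff, per the lsblk output below):

 df -h /home/mystuff  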

Instead of using fdisk -l you can use cat /proc/partitions to locate the drive device. Before or after, you can use lsblk to get a pretty layout of your drives:

 sda      8:0  0 2.7T 0 disk  
 └─vg0-stuff 253:0  0 11.8T 0 lvm /home/mystuff  
 sdb      8:16  0 59.6G 0 disk  
 ├─sdb1    8:17  0 51.7G 0 part /  
 └─sdb5    8:21  0  8G 0 part [SWAP]  
 sdc      8:32  0 1.8T 0 disk  
 └─vg0-stuff 253:0  0 11.8T 0 lvm /home/mystuff  
 sdd      8:48  0 3.7T 0 disk  
 └─vg0-stuff 253:0  0 11.8T 0 lvm /home/mystuff  
 sde      8:64  0 3.7T 0 disk  
 └─vg0-stuff 253:0  0 11.8T 0 lvm /home/mystuff  

Replace a drive

While I have space for more drives, I will eventually run out, so instead of getting a bigger case I can just replace drives with bigger ones. This process is easy as well.

You need to collect all the relevant information first, such as the drive locations and the physical and logical volume names. The start is the same as adding a drive (pvcreate on the new disk, then vgextend), but instead of going straight to lvextend you run these steps after vgextend.

Move the data from the old drive to the new drive. This can be done live, but there is a performance hit to both drives during the process. It can take a considerable amount of time to complete, so it's advisable to do it when the volume(s) are not too busy. Also consider the risk of a power outage during this process. The larger the source drive, the longer it will take.

 sudo pvmove /dev/sdc /dev/sde  

Finally, remove the old drive from the volume group.

 sudo vgreduce vg0 /dev/sdc  

Once done, power down and physically remove the old drive. If the replacement drive is larger, you would then run the lvextend and resize2fs commands from above to grow the usable space.
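
For reference, the whole replacement sequence ends up looking roughly like this, reusing the device and volume names from the examples above (yours will differ):

 sudo pvcreate /dev/sde                        # initialize the new disk as a physical volume  
 sudo vgextend vg0 /dev/sde                    # add it to the volume group  
 sudo pvmove /dev/sdc /dev/sde                 # migrate all extents off the old disk (can run live)  
 sudo vgreduce vg0 /dev/sdc                    # drop the old disk from the volume group  
 sudo lvextend -l +100%FREE /dev/vg0/mystuff   # claim any extra capacity from a larger drive  
 sudo resize2fs /dev/vg0/mystuff               # grow the ext4 filesystem to match  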


-Kevin

Saturday, June 1, 2019

ESXi Veeam vs FreeNAS compression comparison

I have mentioned previously how I back up all my infrastructure configs to the FreeNAS. I use a dataset with gzip-9 compression since they are mostly text, and I let snapshots manage revision control for me. It works really well. For the VMs themselves, I use Veeam (Community Edition) to back up the guests on my ESXi host to the FreeNAS server. I had some time and was curious which combination of Veeam and FreeNAS settings makes the best backup, so I performed some tests; totally not scientific.

I took one of my smaller VMs, Pi-Hole, which is a 2-core, 2GB-memory VM with 10GB of storage, and backed it up sequentially with each compression method within Veeam. It is using about 5G of space currently. Here is what each one came out to:

 root@freenas:/mnt/Pool/sysadmin # ll  
 -rw-rw-rw-  1 root wheel 5202056192 May 21 20:58 1_None_Pi-HoleD2019-05-21T205500_02D1.vbk  
 -rw-r--r--  1 root wheel 3728351232 May 21 21:06 2_Dedup_Pi-HoleD2019-05-21T210008_3D6D.vbk  
 -rw-r--r--  1 root wheel 1963535872 May 21 21:12 3_Optimal_Pi-HoleD2019-05-21T210835_3CD8.vbk  
 -rw-r--r--  1 root wheel 1646802432 May 21 21:17 4_High_Pi-HoleD2019-05-21T211400_EC95.vbk  
 -rw-r--r--  1 root wheel 1531771392 May 21 21:21 5_Max_Pi-HoleD2019-05-21T211803_EC3B.vbk  

I then created several FreeNAS datasets with various compression methods, from none up to gzip with the maximum -9 setting, named each one for its method, and copied each backup into them. In other words, I copied '1_None_Pi-HoleD2019-05-21T205500_02D1.vbk' to each of the folders under Test below and compared how FreeNAS compressed the same data using each method.
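
For reference, creating those test datasets from the shell would look something like this; the pool/dataset paths match my layout above, and I'm assuming the 'gzipfast' dataset maps to gzip-1, the fastest gzip level:

 # assumes the parent Test dataset already exists  
 zfs create -o compression=off Pool/sysadmin/Test/none  
 zfs create -o compression=lz4 Pool/sysadmin/Test/lz4  
 zfs create -o compression=gzip-1 Pool/sysadmin/Test/gzipfast  
 zfs create -o compression=gzip-6 Pool/sysadmin/Test/gzip-6  
 zfs create -o compression=gzip-9 Pool/sysadmin/Test/gzip-9  
 cp /mnt/Pool/sysadmin/1_None_Pi-HoleD2019-05-21T205500_02D1.vbk /mnt/Pool/sysadmin/Test/none/  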

None setting in Veeam



Looking in the GUI above you can see they all had fantastic compression ratios. FreeNAS determines usage using this formula:

Uncompressed data = used (compressed) space * compression ratio

To put it another way, if your pool shows 1TB used, 3TB available, and a compression ratio of 2.00x, you have 2TB of data in the pool that has compressed down to 1TB. If the compression ratio stayed at 2.00x you could copy another 6TB to the pool.
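
You can also pull those numbers from the command line instead of the GUI; for example, with one of the test datasets from my layout above:

 zfs get compressratio,used,logicalused Pool/sysadmin/Test/gzip-9  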

So how much did it compress each file with each method? Just look via the command line. 'du -h' will tell you how much it's using in human-readable format.

 root@freenas:/mnt/Pool/sysadmin/Test # du -h  
 4.8G  ./none  
 1.9G  ./lz4  
 1.6G  ./gzipfast  
 1.5G  ./gzip-9  
 1.5G  ./gzip-6  

Since these folders contain the same file, just compressed differently by each dataset, you can add the -A switch to see the apparent size and confirm they are the same as far as any user would know.

 root@freenas:/mnt/Pool/sysadmin/Test # du -Ah  
 4.8G  ./none  
 4.8G  ./lz4  
 4.8G  ./gzipfast  
 4.8G  ./gzip-9  
 4.8G  ./gzip-6  

Human readable is nice, but you can drop the -h switch to see the exact numbers instead of rounded values.

 root@freenas:/mnt/Pool/sysadmin/Test # du -A  
 5080134 ./none  
 5080134 ./lz4  
 5080134 ./gzipfast  
 5080134 ./gzip-9  
 5080134 ./gzip-6  

For this exercise we want to know actual used space, so I am not using the -A or -h switches.

 root@freenas:/mnt/Pool/sysadmin/Test # du  
 5083217 ./none  
 1974197 ./lz4  
 1659773 ./gzipfast  
 1546173 ./gzip-9  
 1553781 ./gzip-6  

As expected, gzip-9 compressed it greatly, from 5G down to 1.5G, or about 30% of its original size. This is very close to the Max setting within Veeam. I took this further with each Veeam method to see how much more FreeNAS could compress it.

Dedup Setting in Veeam



 root@freenas:/mnt/Pool/sysadmin/Test # du  
 3643537 ./none  
 1953457 ./lz4  
 1637709 ./gzipfast  
 1526513 ./gzip-9  
 1532541 ./gzip-6  

This one was interesting and really not what I expected. Combining Veeam's dedup setting with FreeNAS compression beat out nearly all of the Veeam-only compression methods. Think of dedup as the cousin of compression.

Optimal Setting in Veeam



 root@freenas:/mnt/Pool/sysadmin/Test # du  
 1918857 ./none  
 1875889 ./lz4  
 1640389 ./gzipfast  
 1623297 ./gzip-9  
 1623685 ./gzip-6  


High compression in Veeam




 root@freenas:/mnt/Pool/sysadmin/Test # du  
 1609401 ./none  
 1585361 ./lz4  
 1583137 ./gzipfast  
 1582909 ./gzip-9  
 1582913 ./gzip-6  

Note in the GUI the file size shows the same; there is not much left to compress. But if you look at the actual sizes you see a slight variance between all of them. While I did not keep track of CPU usage, I am sure the compute cost at this point is pretty high; it sure felt like the copy command was taking longer than before, especially since the Veeam system is a VM as well.

Extreme compression in Veeam


 root@freenas:/mnt/Pool/sysadmin/Test # du  
 1496945 ./none  
 1473553 ./lz4  
 1472269 ./gzipfast  
 1472161 ./gzip-9  
 1472161 ./gzip-6  

Just like high compression within Veeam, FreeNAS could not do very much with it, so I did not bother with a screenshot; the GUI shows it exactly the same as high.

Gzip -9 results


 5080134 Nothing  
 1623297 Optimal with gzip-9  
 1582909 High with gzip-9  
 1546173 None with gzip-9  
 1526513 Dedup with gzip-9  
 1472161 Max with gzip-9  

So using Veeam's max compression along with FreeNAS max compression netted about 24M of savings in the end (Veeam Max alone versus Max with gzip-9). Since I was not tracking CPU usage, I can't say what that saving cost in compute. It sure felt like the copy took longer the more compression was going on, which makes sense when you think about it. Veeam backs up VMs overnight when things are most idle, so I am not super concerned with CPU usage; except if a backup lands during a scrub! I'll need to revisit the schedules so they do not overlap.

To be more scientifically complete I would have to do it again with more data points to get a better picture of it all. How long does each Veeam method take? How long does the cp take on the FreeNAS? How much CPU is used for all this? Is 80% more CPU usage (and power, and heat) worth the 24M saving for this one VM? Both my ESXi and FreeNAS hosts use low-power L-series processors, which also plays a part compared to higher-end processors.

The dedup setting was the most interesting for sure. I might do a similar test again with all my VMs backed up via Veeam's dedup-friendly method, then copy them to a dataset with dedup enabled and compare against non-dedup. I doubt I would do that in real life, though; I just don't see the memory cost of dedup on the FreeNAS being worth it for backup data. Perhaps I will do a part II looking at that, along with CPU usage and backup time of the various methods.

This did make me think about creating some child datasets for my content, though. My FreeNAS stores lots of files such as Windows OS ISOs that I gzip (and 7-Zip) today via the command line to save space, a holdover from when it was hosted on Linux. It would make more sense to let FreeNAS handle the compression automatically via a child dataset; that would save me from extracting an archive before I can open the ISO for whatever I need it for. Same for the WIMs I host there. These results are pretty similar to whacking a WIM over the head with 7-Zip.
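
A minimal sketch of what that child dataset might look like (the dataset name here is just a placeholder):

 zfs create -o compression=gzip-9 Pool/ISOs  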

 So was this a waste of time?

-Kevin






Sunday, April 14, 2019

Upgrade ConfigMgr Client Outside Maintenance Windows


A friend reached out to me recently about his environment; he was wondering why it was taking so long to upgrade the ConfigMgr clients when he had set a 14-day window for the auto-upgrade function. In talking to him I learned he had many maintenance windows, even for the workstation use case due to his company's line of work, with some windows opening only once a month or even quarterly.

I pointed him to this UserVoice item: client upgrades honor maintenance windows, and the request is to allow the upgrade to run outside of them. As a workaround, I shared an application we originally created for 1E Nomad in task sequences, which works fine here as well since it allows the client to be treated like any other application/package and pushed accordingly.

Create an application pointing to the stock source path located at
 \\MYSCCMSERVER.mydomain.com\SMS_SITECODE\Client\  

For the install program, set it to your defaults:
 ccmsetup.exe /skipprereq:scepinstall.exe;silverlight.exe /forceinstall SMSCACHESIZE=10240 SMSMP=MYSCCMMP.mydomain.com SMSSITECODE=SITECODE  

Finally, for the detection method use a PowerShell script with the following.
 # Version this application installs; update it as you upgrade your site  
 $VersionCheck = [version]"5.00.8740.1024"  
 $CurrentVersion = [version](Get-WmiObject -Namespace root\ccm -Class SMS_Client).ClientVersion  
 # Writing anything to STDOUT tells ConfigMgr the application is detected/installed  
 if ( $CurrentVersion -ge $VersionCheck ) {  
   Write-Host $CurrentVersion  
 }  

Note that you need to update the version variable as you manage your environment. Super simple application; just advertise it to the relevant collection. Below is a query for what is not current. You will need to modify the version portion for your environment, as this one is a 'not like' match for 1810, including both rollups.

 select SMS_R_SYSTEM.ResourceID,SMS_R_SYSTEM.ResourceType,SMS_R_SYSTEM.Name,SMS_R_SYSTEM.SMSUniqueIdentifier,SMS_R_SYSTEM.ResourceDomainORWorkgroup,SMS_R_SYSTEM.Client from SMS_R_System where SMS_R_System.ClientVersion not like "5.00.8740.1%"  

-Kevin