Sunday, September 22, 2019

Add storage via LVM

This post is mostly notes just for myself. While I use FreeNAS locally to benefit from all its great features such as ZFS and snapshots, I also perform a daily backup via rsync to an offsite host. That backup host was about to run out of space. Since it's my backup, it gets all the hand-me-downs, so I barely invest in it.

Up until recently it was an AMD AM2-based system; it is now AM3-based and will transition to a higher-end AM3+ when I replace another system with Ryzen. Its chokepoint is the network, so I don't need anything super fast; it just addresses the fire-and-theft type scenarios at my main site. It just needs disk. For storage I have a 64GB SSD for the OS and three spinner drives I reallocated from my main FreeNAS a while ago when it ran out of space.

These drives consist of a 4TB, 4TB, and 3TB smooshed together via LVM on an Ubuntu host, which gives me about 10TB of usable space. I just replaced two 2TB drives in the FreeNAS with larger drives, so I had them lying around ready to go. The case is a 3RU ATX chassis, so I have plenty of space for platters.

Since it is commodity hardware-based (easy to fix!) there is no hot-swap, so after cabling up the drive and powering it back on, I started the add process in a few steps.

Add a Drive

Before beginning, I had to collect some items and let my heart skip a beat.

First was to locate where the new drive was assigned. It's on /dev/sdc.

 sudo fdisk -l  

Next was to get the LVM volume group name, which is vg0, the default.

 sudo vgdisplay  

Finally, get the logical volume path, which is /dev/vg0/mystuff.

 sudo lvdisplay  

So now I can convert it to a physical volume. I use the entire disk as I have no reason to partition it for other purposes.

 sudo pvcreate /dev/sdc  

Next, add the physical volume to the volume group

 sudo vgextend vg0 /dev/sdc  

Allocate the new space to the logical volume

 sudo lvextend -l +100%FREE /dev/vg0/mystuff  

Then finally grow the ext4 file system on the logical volume so I can use all this new space.

 sudo resize2fs /dev/vg0/mystuff  

All done. Per df -h, the mount went from 10TB total to 12TB.
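Strung together, the add steps above can be sketched as a small script. This uses the device and volume names from this post and defaults to a dry run that only prints the commands; set DRY_RUN=0 to actually execute them.

```shell
#!/bin/sh
# Sketch of the add-a-drive flow; device/VG/LV names are this post's examples.
DISK=/dev/sdc
VG=vg0
LV=/dev/vg0/mystuff
DRY_RUN=${DRY_RUN:-1}   # default: print commands instead of running them

run() {
  if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else sudo "$@"; fi
}

run pvcreate "$DISK"             # whole disk becomes an LVM physical volume
run vgextend "$VG" "$DISK"       # add the PV to the volume group
run lvextend -l +100%FREE "$LV"  # hand every free extent to the logical volume
run resize2fs "$LV"              # grow ext4 online into the new space
```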

Instead of using fdisk -l, you can use cat /proc/partitions to locate the drive device. Before or after, you can use lsblk to get a pretty layout of your drives:

 sda           8:0   0  2.7T 0 disk  
 └─vg0-stuff 253:0   0 11.8T 0 lvm  /home/mystuff  
 sdb           8:16  0 59.6G 0 disk  
 ├─sdb1        8:17  0 51.7G 0 part /  
 └─sdb5        8:21  0    8G 0 part [SWAP]  
 sdc           8:32  0  1.8T 0 disk  
 └─vg0-stuff 253:0   0 11.8T 0 lvm  /home/mystuff  
 sdd           8:48  0  3.7T 0 disk  
 └─vg0-stuff 253:0   0 11.8T 0 lvm  /home/mystuff  
 sde           8:64  0  3.7T 0 disk  
 └─vg0-stuff 253:0   0 11.8T 0 lvm  /home/mystuff  

Replace a drive

While I have space for more drives, I will eventually run out, so instead of getting a bigger case I can just replace drives with bigger ones. This process is easy as well.

As before, collect the relevant information first: the drive locations and the physical and logical volume names. Prepare the new drive with pvcreate and vgextend as above, but instead of lvextend, run these steps.

Move the data from the old drive to the new one. This can be done live, but both drives take a performance hit during the process. It can take a considerable amount of time, so it's advisable to run it when the volume(s) are not too busy; also consider the risk of a power outage during the move. The larger the source drive, the longer it will take.

 sudo pvmove /dev/sdc /dev/sde  

Finally, you remove the old drive from the volume.

 sudo vgreduce vg0 /dev/sdc  

Once done, power down and physically remove the old drive. If the new drive is larger, run the lvextend and resize2fs commands above to grow into the extra space.
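The replace flow can be sketched the same way, again using this post's /dev/sdc (old) and /dev/sde (new) as examples, dry run by default:

```shell
#!/bin/sh
# Sketch of the replace-a-drive flow; disk paths and VG name are this post's examples.
OLD=/dev/sdc
NEW=/dev/sde
VG=vg0
DRY_RUN=${DRY_RUN:-1}   # default: print commands instead of running them

run() {
  if [ "$DRY_RUN" = 1 ]; then echo "would run: $*"; else sudo "$@"; fi
}

run pvcreate "$NEW"        # prep the new disk as a physical volume
run vgextend "$VG" "$NEW"  # add it to the volume group
run pvmove "$OLD" "$NEW"   # migrate all extents off the old disk (can run live)
run vgreduce "$VG" "$OLD"  # drop the old disk from the volume group
```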


Saturday, June 1, 2019

ESXi Veeam vs FreeNAS compression comparison

I have mentioned previously how I back up all my infrastructure configs to the FreeNAS. I use a dataset with gzip-9 compression since they are mostly text, and let snapshots manage revision control for me. It works really well. For the VMs themselves, I use Veeam (Community Edition) to back up the guests on my ESXi host to the FreeNAS server. I had some time and was curious which combination of Veeam and FreeNAS settings compressed backups best, so I performed some tests. Totally not scientific.

I took one of my smaller VMs, Pi-Hole, which is a 2-core, 2GB-memory VM with 10GB of storage, and backed it up sequentially with each compression method within Veeam. It is using about 5G of space currently. Here is what each one came out to:

 root@freenas:/mnt/Pool/sysadmin # ll  
 -rw-rw-rw-  1 root wheel 5202056192 May 21 20:58 1_None_Pi-HoleD2019-05-21T205500_02D1.vbk  
 -rw-r--r--  1 root wheel 3728351232 May 21 21:06 2_Dedup_Pi-HoleD2019-05-21T210008_3D6D.vbk  
 -rw-r--r--  1 root wheel 1963535872 May 21 21:12 3_Optimal_Pi-HoleD2019-05-21T210835_3CD8.vbk  
 -rw-r--r--  1 root wheel 1646802432 May 21 21:17 4_High_Pi-HoleD2019-05-21T211400_EC95.vbk  
 -rw-r--r--  1 root wheel 1531771392 May 21 21:21 5_Max_Pi-HoleD2019-05-21T211803_EC3B.vbk  

I then created several FreeNAS datasets with various compression methods, from none up to gzip with the max -9 switch, named each for its method, and copied each backup into them. In other words, I copied '1_None_Pi-HoleD2019-05-21T205500_02D1.vbk' to each of the folders under Test below and compared how FreeNAS compressed the same data using each method.

None setting in Veeam

Looking in the GUI above you can see they all had fantastic compression ratios. FreeNAS determines usage using this formula:

Uncompressed space = compressed space * compression ratio

To put it another way: if your pool shows 1TB used and 3TB available with a compression ratio of 2.00x, you have 2TB of data in the pool that compressed down to 1TB. If the ratio stayed at 2.00x, you could copy another 6TB to the pool.
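A quick shell sanity check of that math, using the example numbers above:

```shell
# The example: 1TB used at a 2.00x compression ratio.
used_tb=1
ratio=2
data_tb=$((used_tb * ratio))   # actual (uncompressed) data stored
echo "${data_tb}TB of data stored in ${used_tb}TB of pool space"

# 3TB still free at the same ratio means room for this much more data:
free_tb=3
echo "room for $((free_tb * ratio))TB more"
```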

So how much did it compress each file with each method? Just look via the command line: 'du -h' will tell you how much it's using in human-readable format.

 root@freenas:/mnt/Pool/sysadmin/Test # du -h  
 4.8G  ./none  
 1.9G  ./lz4  
 1.6G  ./gzipfast  
 1.5G  ./gzip-9  
 1.5G  ./gzip-6  

Since these folders contain the same file, just compressed by the dataset, you can add the -A switch to see the apparent size; they are all the same as far as any user would know.

 root@freenas:/mnt/Pool/sysadmin/Test # du -Ah  
 4.8G  ./none  
 4.8G  ./lz4  
 4.8G  ./gzipfast  
 4.8G  ./gzip-9  
 4.8G  ./gzip-6  

Human readable is nice, but you can drop the -h switch to see exact numbers (kilobytes on this system).

 root@freenas:/mnt/Pool/sysadmin/Test # du -A  
 5080134 ./none  
 5080134 ./lz4  
 5080134 ./gzipfast  
 5080134 ./gzip-9  
 5080134 ./gzip-6  

For this exercise we want actual used space, so I am using neither the -A nor the -h switch.

 root@freenas:/mnt/Pool/sysadmin/Test # du  
 5083217 ./none  
 1974197 ./lz4  
 1659773 ./gzipfast  
 1546173 ./gzip-9  
 1553781 ./gzip-6  

As expected, gzip-9 compressed it greatly, from 5G down to 1.5G, or about 30% of the original backup's size. This is very close to the Max setting within Veeam. I took this further with each Veeam method to see how much more FreeNAS could compress it.
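You can check the "about 30%" figure straight from the raw du numbers (kilobytes) above:

```shell
# du figures from the table above, in kilobytes
orig_kb=5083217    # ./none: the uncompressed Veeam backup
gzip9_kb=1546173   # ./gzip-9: same file on the gzip-9 dataset
echo "$((100 * gzip9_kb / orig_kb))% of original"
```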

Dedup Setting in Veeam

 root@freenas:/mnt/Pool/sysadmin/Test # du  
 3643537 ./none  
 1953457 ./lz4  
 1637709 ./gzipfast  
 1526513 ./gzip-9  
 1532541 ./gzip-6  

This one was interesting and really unexpected to me. Veeam's dedup setting combined with FreeNAS compression beat every Veeam compression method except Max. Think of dedup as the cousin of compression.

Optimal Setting in Veeam

 root@freenas:/mnt/Pool/sysadmin/Test # du  
 1918857 ./none  
 1875889 ./lz4  
 1640389 ./gzipfast  
 1623297 ./gzip-9  
 1623685 ./gzip-6  

High compression in Veeam

 root@freenas:/mnt/Pool/sysadmin/Test # du  
 1609401 ./none  
 1585361 ./lz4  
 1583137 ./gzipfast  
 1582909 ./gzip-9  
 1582913 ./gzip-6  

Note in the GUI the file sizes look the same; there is not much left to compress, but looking at the actual sizes you see slight variance between them. While I did not keep track of CPU usage, I am sure the compute cost at this point is pretty high. It sure felt like the copy command was taking longer than before, especially since the Veeam system is a VM as well.

Extreme compression in Veeam

 root@freenas:/mnt/Pool/sysadmin/Test # du  
 1496945 ./none  
 1473553 ./lz4  
 1472269 ./gzipfast  
 1472161 ./gzip-9  
 1472161 ./gzip-6  

Just like high compression within Veeam, FreeNAS could not do much with it, so I did not bother with a screenshot; the GUI shows it exactly the same as High.

Gzip -9 results

5080134 Nothing
1623297 Optimize with gzip-9
1582909 High with gzip-9
1546173 None with gzip-9
1526513 Dedup with gzip-9
1472161 Max with gzip-9

So using Veeam's max compression along with FreeNAS's max compression netted only about 24M of extra savings over Veeam Max alone (the none row minus the gzip-9 row in the Max results). Since I was not tracking CPU usage, I can't say exactly what that benefit costs, but it sure felt like the copies took longer the more compression was going on, which makes sense when you think about it. Veeam backs up VMs overnight when things are most idle, so I am not super concerned with CPU usage, except if a backup runs during a scrub! I'll need to revisit the schedules so they do not overlap.

To be more scientifically complete I would have to do it again with more inputs to get a better picture: How long does each Veeam method take? How long does the cp take on the FreeNAS? How much CPU is used for all this? Is 80% more CPU usage (and power, and heat) worth the 24M saving for this one VM? Since both my ESXi and FreeNAS hosts use low-power L-series processors, that plays a part versus higher-end processors.

The dedup setting was the most interesting for sure. I might do a similar test again with all my VMs backed up in a dedup-friendly method via Veeam and create a dataset with dedup enabled to compare against non-dedup. I doubt I would do that in real life, though; I just can't justify the memory cost of dedup on the FreeNAS for backup data. Perhaps I will do a part II looking at that, plus CPU usage and backup time of the various methods.

This made me think about creating some child datasets for my content, though. My FreeNAS stores lots of files such as Windows OS ISOs, and today I use gzip (and 7-Zip) via the command line to save space, a holdover from when it was hosted on Linux. I could instead change the compression of those files and let FreeNAS handle it automatically via a child dataset, saving me from extracting an archive before opening the ISO for whatever I need. Same for the WIMs I host there; these results are pretty similar to whacking a WIM over the head with 7-Zip.

 So was this a waste of time?


Sunday, April 14, 2019

Upgrade ConfigMgr Client Outside Maintenance Windows

A friend reached out to me recently about his environment, wondering why it was taking so long to upgrade the ConfigMgr clients when he had set a 14-day window for the auto-upgrade function. In talking to him I learned he had many maintenance windows, even for the workstation use case due to his company's work, with some windows being once a month or even quarterly.

I pointed him to this UserVoice item: client upgrades honor maintenance windows, and it requests allowing the upgrade to run outside them. As a workaround, I shared an application we originally created for 1E Nomad in task sequences, which works fine here as well, since it lets the client be treated like any other application/package and pushed accordingly.

Create an application pointing to the stock source path located at

For the install program set to your defaults
 ccmsetup.exe /skipprereq:scepinstall.exe;silverlight.exe /forceinstall SMSCACHESIZE=10240 SMSSITECODE=SITECODE  

Finally, for the detection method use a PowerShell script with the following. Casting to [version] makes it a proper version comparison rather than a string compare.
 $VersionCheck = [version]"5.00.8740.1024"  
 $CurrentVersion = [version](Get-WmiObject -Namespace root\ccm -Class SMS_Client).ClientVersion  
 if ( $CurrentVersion -ge $VersionCheck ) {  
   Write-Host $CurrentVersion  
 }  

Note you need to update the version variable as you manage your environment. Super simple application; just advertise it to the relevant collection. Below is a query for clients that are not current. Modify the version portion for your environment; this one is a 'not like' match on 1810, covering both rollups.

 select SMS_R_SYSTEM.ResourceID,SMS_R_SYSTEM.ResourceType,SMS_R_SYSTEM.Name,SMS_R_SYSTEM.SMSUniqueIdentifier,SMS_R_SYSTEM.ResourceDomainORWorkgroup,SMS_R_SYSTEM.Client from SMS_R_System where SMS_R_System.ClientVersion not like "5.00.8740.1%"  


Friday, March 8, 2019

'VMWare Virtual Platform' changed to 'VMWare 7,1'

One of my most popular posts is around VMware Tools. With that in mind, I do lots and lots of image testing using VMs, mostly on the VMware platform; I even shared a retired "image mule" design from years ago. I was doing some testing and scratching my head over why the VMware drivers injection step was suddenly not working.

Doing imaging 101, the smsts.log showed the step was skipped because the condition was false. Checking the task sequence, the step's condition looks for OSDComputerModel=VMware Virtual Platform. This step has worked for years and years. So back on the VM, I opened a quick CMD, ran 'wmic csproduct get name', and it returned 'VMWare 7,1'. Turns out this was changed in VMware Workstation 15.

The step looks for 'VMware Virtual Platform' as the model via the OSDComputerModel variable. I don't use WMI queries on each step; instead, a script runs as part of the tech interview screen and spits all that stuff (make, model, memory, storage, etc.) into variables that I use as conditions on the steps. That runs faster than using WMI on every driver step, for example. Imagine my surprise when in Workstation 15 this came back as 'VMWare 7,1'.

I'm guessing this will also hit ESXi in the next version, based on this VMware KB article around Virtual Hardware 16.

If you're looking for model in ConfigMgr, MDT, etc, this will bite you pretty soon no matter how you look for it.

It appears to be Workstation with UEFI. I created a Workstation 14-compatible VM in 15 and it was also 'VMWare 7,1'; a 12-compatible one was 'VMWare 7,1' as well. Changing from UEFI to BIOS put it back at 'VMware Virtual Platform'. I was using UEFI in Workstation 14, so I think this is related to Workstation 15 only. If I get time I will put 14 back on and try it. I'll also check my 6.7 Update 1 ESXi host at home to see what it returns for VMs to compare.


Tuesday, January 29, 2019

Tiny Ubiquiti Networks UniFi Controller VM

Like many out there, I use Ubiquiti's enterprise devices at home. I don't do all-in-one routers; instead I use a firewall (pfSense currently) with a separate access point for WiFi so I can upgrade them separately. It's been this way for two decades now. Currently I only have a UniFi AP (UAP-AC-PRO-US); however, if they ever make a 24-port gigabit switch with two SFP+ and two PoE(+) ports, I'm all over that. It would be the ultimate soho/homelab switch IMO.

With that said, Ubiquiti has a controller for their devices to manage firmware, configurations, and metrics. Really nice feature set. They sell a device for this which is pretty slick, especially the gen 2 version, but I cannot justify the price to control only one AP today, so I use their controller software. It used to live on a Windows 7 VM on ESX (which did other stuff as well), so with Windows 7's demise looming next year, I decided to move it to a Linux server VM. Glenn R, a user on the ubnt forum, created a fantastic script that automates the install process once you get the OS ready, so all credit goes to him. I thought I would document how I created a tiny VM to support it. I even ended up creating a 'template' VM that friends are using with their clients who have Ubiquiti devices.

While this is around VMware ESX, any hypervisor of choice, such as Hyper-V, would work great as well, even on Windows 10. Everything here is a VM; even my Pi-hole ad blocker is a VM despite being designed for Raspberry Pi hardware.

Since I moved from Gentoo Linux to Ubuntu years ago, I elected to use that, so I downloaded the Ubuntu Server 18.04 LTS ISO. Set it to auto-update and you can mostly forget about it, as it will be supported into 2023.

First was to get on the current Windows 7 controller, back up its configuration, and save it out. I stopped the controller service in case I needed to fall back to it.

Created a VM with these settings and mounted the ISO
  • 2 CPU (1 socket, 2 core)
  • 2 GB memory
  • 10GB thin provisioned storage
  • vmxnet3 network
  • paravirtual storage

Start up the VM and boot from the ISO. Install with defaults plus automatic security updates. Nothing fancy like /var or /usr on its own partition; just one partition.

After first login I modified /etc/fstab to move /tmp to memory by adding this line:

 tmpfs      /tmp      tmpfs      defaults,nosuid     0 0  

It was unable to find some basic packages. I am guessing this is fixed in 18.04.1, which is available now; in my case I had to add the main repositories as they were not on the install ISO media.

 sudo add-apt-repository main universe multiverse restricted  

Then update from the ISO to the latest code

 sudo apt-get update  
 sudo apt-get upgrade  

I restarted it, then added some other packages. First was rsnapshot, which I have used for many years, ever since I was introduced to the snapshot concept via the NetApp Filer. It is a filesystem snapshot utility based on rsync that backs up folders on various cycles.

 sudo apt-get install rsnapshot   

I configured it to back up the /etc directory and /var/lib/unifi, where the controller configuration is kept. There are lots of guides covering rsnapshot setup in more detail. Edit /etc/rsnapshot.conf and set these lines (fields must be separated with tabs, not spaces):

  snapshot_root /home/rsnapshot/   
  backup /etc/ localhost/   
  backup /var/lib/unifi localhost/   

Then create an /etc/cron.d/rsnapshot file with the settings below (there should be a sample file at this location already), or put them in crontab.

 0 */4  * * *  root  /usr/bin/rsnapshot hourly  
 30 3   * * *  root  /usr/bin/rsnapshot daily  
 0 3    * * 1  root  /usr/bin/rsnapshot weekly  
 30 2   1 * *  root  /usr/bin/rsnapshot monthly  

This will run the snapshot every 4 hours, plus a daily (3:30AM), weekly (3:00AM Monday), and monthly (2:30AM on the 1st) run. Adjust as you see fit. Since the snapshots live on the same filesystem, they use little space: rsync hardlinks unchanged files, and rsnapshot rotates the folders from the current hourly out to the oldest monthly snapshot.
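The hardlink trick is easy to see for yourself. This throwaway demo (the paths are just for illustration, not part of the rsnapshot setup) builds a second "snapshot" with cp -al the way rsnapshot does, and the link count shows both trees share one copy of the file:

```shell
# Make a fake hourly.0 snapshot with one file in it
mkdir -p /tmp/snapdemo/hourly.0
echo "unchanged data" > /tmp/snapdemo/hourly.0/file

# "Rotate" a second snapshot the rsnapshot way: new directories, hardlinked files
cp -al /tmp/snapdemo/hourly.0 /tmp/snapdemo/hourly.1

# A link count of 2 proves both snapshots point at one copy on disk
stat -c %h /tmp/snapdemo/hourly.1/file   # prints 2
```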

Next is to install haveged. This helps a VM generate entropy for randomization such as SSH key generation; it keeps /dev/random and /dev/urandom full.

 sudo apt-get install haveged  

Edit /etc/default/haveged to contain

 DAEMON_ARGS="-w 1024"  

and set it to start at boot

 sudo update-rc.d haveged defaults  

Set the timezone using the wizard so the shell shows times correctly.

 sudo dpkg-reconfigure tzdata  

Finally clean up old packages

 sudo apt-get autoremove  

Now the OS is all ready to go. From here, follow Glenn's directions in his forum post, which are pretty straightforward; I will not cover them here. I used 5.9 to replace the 5.6 I had on the Windows 7 VM. I have not needed to upgrade the controller yet, but Glenn has a script for that as well at the same link.

Once the controller was installed, I modified my DNS entry to point to this system, as the AP will look it up from time to time; create one if you don't have one. Then I opened the web interface, imported my config, and after a few minutes it found the AP and adopted it.

As I use the free Veeam scripts and rsnapshot, I do not back up the controller config to another host, but you can adapt my FreeNAS and ESX posting with keygen. Be sure to set this VM's autostart setting in ESX.

After running for a few months, it is barely noticed on the host.


ConfigMgr upgrade error "Failed to apply update changes 0x87d20b15"

We attempted to upgrade one of our ConfigMgr environments from 1802 to 1810, and it failed in the wizard on the 'Upgrade ConfigMgr Database' step. Looking at cmupdate.log, we found this error.

 Failed to apply update changes 0x87d20b15  
 Error information persisted in the database.  

Going further back we then find this nice error.

 ERROR: Failed to execute SQL Server command: ~ DECLARE @viewname  nvarchar(255) ~ DECLARE newviews INSENSITIVE CURSOR FOR ~  SELECT name FROM sysobjects WHERE type='V' AND name like 'v[_]%' ~ OPEN newviews ~ FETCH NEXT FROM newviews INTO @viewname ~ WHILE (@@FETCH_STATUS = 0) ~ BEGIN ~  EXEC('GRANT SELECT ON ' + @viewname + ' to smsschm_users') ~  FETCH NEXT FROM newviews INTO @viewname ~ END ~ CLOSE newviews ~ DEALLOCATE newviews  

After much investigation, it turned out someone had created custom views in the ConfigMgr database. We backed up these views, deleted them, and retried the upgrade. That fixed this issue.

For the rest of the upgrade story: it was a loooong weekend and not a good time. The upgrade progressed further but failed again on the Upgrade ConfigMgr Database step. cmupdate.log showed we hit deadlocks on the database and the upgrade panicked out; Retry stayed greyed out even after other tasks were performed to reset it. We then chose to do a site recovery to get back to the pre-upgrade state (you do manually fire those off before upgrading, right?). We tried again, and it froze at another spot past the database step. We also had a "ghost" secondary: it was torn down years ago, yet even though we no longer have secondaries, a site link was still present. This prevented us from using the upgrade reset tool that recently came out. We ended up involving Microsoft, who got us going with 1810 and fixed these other quirky things as well.


Friday, January 11, 2019

PxeInstalled (PXE Provider is not installed) error in smsdpprov.log

As my firm was purchased by another, I am now starting to collapse the ConfigMgr environment. It was designed to service 50K endpoints without breaking a sweat, but now manages only a couple thousand systems that have not been migrated yet, so it is way overkill. As we also use 1E Nomad, all we have are the back-office roles to contend with. First was to downsize the primary site by removing the multiple MP/DP/SUP roles, then the SUPs on the secondaries. After that, I started collapsing the secondaries and converting those hosts to be DPs only. Eventually, this instance will drop to a couple of servers kept around for historical use.

So after uninstalling the secondary and cleaning up SQL, etc., I installed a DP role on it. Once done, I started injecting the prestaged content; however, I began seeing the following errors in the DP's smsdpprov.log.

 [D28][Thu 01/10/2019 04:06:09]:RegQueryValueExW failed for Software\Microsoft\SMS\DP, PxeInstalled  
 [D28][Thu 01/10/2019 04:06:09]:RegReadDWord failed; 0x80070002  
 [D28][Thu 01/10/2019 04:06:09]:PXE provider is not installed.  

PXE was not flagged during install, so after double-checking settings I ran off to Google to see what others had found on this. All I could find were people saying to live with it. That seemed strange, as this was not happening on other DPs I checked, so I thought I'd try quickly enabling then disabling PXE on the DP. By quick I mean enable it, come back later, and disable it once I validated it was installed via smsdpprov.log and distmgr.log. Sure enough, smsdpprov.log now only shows 'PXE provider is not installed' occasionally, so the problem is fixed.

Now if it will just finish hashing the prestaged content, I can update its boundary group so it can serve its purpose, and I can move on to the next one!