Wednesday, December 11, 2019

Windows 10 Install.WIM too big for USB

I've developed methods to image one-off devices with the corporate image; however, there are times when I need to use a vanilla install to test something or prove it's not a problem with the image. While I used to have a vanilla TS in ConfigMgr and MDT, I had several cases where I needed to go even more vanilla, without drivers being injected. Therefore, I have a USB stick with the latest Windows install from VLSC. Sometimes I also DISM in the Home edition for All-In-One (AIO) use. A friend wanted to know how I made it, since the install.wim has been larger than 4GB since about version 1709, I believe.

Why the limit? 

The 4 GB barrier is a hard limit of FAT32: the file system uses a 32-bit field to store the file size in bytes, and 2^32 bytes = 4 GB (actually, the real limit is 4 GB minus one byte, or 4,294,967,295 bytes, because you can have files of zero length). This means that you cannot copy a file that is larger than 4 GB to any plain-FAT volume.
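The arithmetic is easy to sanity-check (the install.wim size below is just an illustrative figure, not a real measurement):

```python
FAT32_MAX = 2**32 - 1       # largest size a 32-bit file-length field can record
print(FAT32_MAX)            # 4294967295 bytes, i.e. 4 GB minus one byte

install_wim = 4_600_000_000    # a post-1709 install.wim, illustrative size only
print(install_wim > FAT32_MAX) # True: it won't fit on a FAT32 volume
```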


You can use exFAT or NTFS; however, these are not always bootable across devices.

You can use Rufus to burn the ISO, however it creates its own bootloader that only works with UEFI and I sometimes have a need for MBR. The stock Windows ISO just works so I wanted that functionality.

The consumer Windows ISO gets around this by compressing the WIM to an ESD and by not including editions, such as Enterprise, that have additional files. I did this at first; however, it was barely under the 4GB limit, so it would not scale. So I went even simpler. Create two partitions; that's it – one FAT32, the other NTFS.


I use diskpart, however you can also do this via the GUI using Disk Management and the format dialogs, though that takes longer than diskpart. You can get through this in just a few minutes. It took me longer to write about it than to actually do it! First, identify the disk so you don't break something else. In this example I have an 8GB stick that is disk 6.

DISKPART> lis dis

  Disk ###  Status         Size     Free     Dyn  Gpt

  --------  -------------  -------  -------  ---  ---
  Disk 0    Online         1863 GB   350 MB        *  
  Disk 1    Online          476 GB  1024 KB        *
  Disk 2    No Media           0 B      0 B
  Disk 3    No Media           0 B      0 B
  Disk 4    No Media           0 B      0 B
  Disk 5    No Media           0 B      0 B
  Disk 6    Online         7810 MB      0 B

DISKPART> sel dis 6

Disk 6 is now the selected disk.

Then do a clean to remove any formatting on the stick.

DISKPART> clean

DiskPart succeeded in cleaning the disk.

Create the first partition and format it as FAT32. You only need about 600MB, but I make it 1GB for future use.

DISKPART> create partition primary size=1000

DiskPart succeeded in creating the specified partition.

DISKPART> format fs=fat32 quick

  100 percent completed

DiskPart successfully formatted the volume.

Set it active so it boots, and assign a drive letter.

DISKPART> active

DiskPart marked the current partition as active.

DISKPART> assign

DiskPart successfully assigned the drive letter or mount point.

Create a second volume using the remaining space. If you have a large stick and want to use it for other stuff, you can make this one about 5GB NTFS and create a third volume for file storage, but I just use a folder on this volume if I need, for example, NIC drivers to install later.

DISKPART> create partition primary

DiskPart succeeded in creating the specified partition.

Format it as NTFS

DISKPART> format fs=ntfs quick

  100 percent completed

DiskPart successfully formatted the volume.

Finally, assign it a drive letter. (Do not make it active.)

DISKPART> assign

DiskPart successfully assigned the drive letter or mount point.

Now that the two volumes are created, you copy all the files from the Windows ISO to the FAT32 volume minus the sources folder. Then create a sources folder on the FAT32 volume and copy boot.wim to it from the ISO. Finally, copy the sources folder to the NTFS volume.
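Those three copy steps can be sketched in a bit of Python; the helper function and the drive letters in the usage comment are mine, not part of the original how-to (a plain robocopy/xcopy works just as well):

```python
import shutil
from pathlib import Path

def split_copy(iso_root, fat32_root, ntfs_root):
    """Copy a mounted Windows ISO onto the two-volume USB stick:
    everything except 'sources' goes to FAT32, boot.wim goes into a
    stub 'sources' folder on FAT32, and the full 'sources' folder
    (install.wim and all) goes to the NTFS volume."""
    iso, fat32, ntfs = Path(iso_root), Path(fat32_root), Path(ntfs_root)
    for item in iso.iterdir():
        if item.name.lower() == "sources":
            continue  # skip the folder holding the >4GB install.wim
        if item.is_dir():
            shutil.copytree(item, fat32 / item.name, dirs_exist_ok=True)
        else:
            shutil.copy2(item, fat32 / item.name)
    # Stub sources folder on FAT32 with just boot.wim so Setup can boot
    (fat32 / "sources").mkdir(exist_ok=True)
    shutil.copy2(iso / "sources" / "boot.wim", fat32 / "sources" / "boot.wim")
    # Full sources folder on the NTFS volume
    shutil.copytree(iso / "sources", ntfs / "sources", dirs_exist_ok=True)

# e.g. split_copy("E:/", "F:/", "G:/") -- the drive letters are hypothetical
```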

All done. This USB stick will boot on any system the ISO will. As new Windows 10 releases come out, I just copy the ISO contents to these two volumes.

I didn't have the heart to tell my friend about my multiboot USB that I keep on my key-ring with all sorts of ISOs including the Windows installs, my AdminPE, and Disk Sanitizer ISOs. I'll share that setup sometime.


Mike Terrill talked about this as well a bit ago.


Sunday, November 24, 2019

Windows 7 on Ryzen 3000

My kids’ gaming PC died in early 2019. The motherboard or CPU fried and took the other with it.  It was an FX-8370 (AM3+) based MSI system with R9 270X Video Card. I gave them a choice: I build a new one around Ryzen 2, or they wait for Ryzen 3 and get more. I am happy they chose the latter, so we waited until Ryzen 3 came out. I decided to wait more for the MSI MAX boards which have a larger firmware (BIOS) chip. I finally got tired of waiting and was going to get an ASRock Steel Legend and found the MAX on NewEgg when I started ordering. I went with this config:
I already had the power supply, GPU, and storage from the old one to reuse. After assembly, I was pleased it came up on the first try. I put Windows 10 on it and let my youngest play Fortnite for a few hours. Who needs CPUBurn when you have a gaming kid...!

Before it went into "production" I wanted to get Windows 7 on it for ... Reasons. MSI, AMD, and other sites only say Windows 7 is supported on these Ryzen processors:
  • Bristol Ridge (APU)
  • Summit Ridge (Ryzen 1)
  • Pinnacle Ridge (Ryzen 2)
Since mine is a Ryzen 3000, its code name is Matisse, and it is therefore unsupported. Which is fine, really, as Windows 7 goes EOL in a short few months. This motherboard is B450-based, so it should have some support compared to the newer X570 chipset. As I've proven many times over with my deployment work, just because it's unsupported doesn't mean it won't work. So away I went!

I put a 250GB SSD I keep as a spare on port 1 of the mobo and boot off my Windows 7 AIO USB. It also includes the NVMe, TPM2, and post-SP1 rollup updates. Get to the welcome screen, and nothing. Keyboard and mouse are dead. Try a couple of other ports, and even the "slow" ones by the NIC that are there for keyboard and mouse compatibility. NOTHING. Change some USB compatibility settings in the BIOS. STILL NOTHING. Being unable to interact with the wizard makes it real hard to inject drivers. It might be a short trip. Instead, I swing by my dad's and grab an old HP PS/2 keyboard, since this is a gaming board. Works! Get through the wizard to where it asks what partition to install onto, and it wants drivers since it cannot see the storage.

Next problem. I pull a trick from ConfigMgr and MDT: inject drivers. I grab the MSI Windows 7 drivers as well as AMD's all-in-one and inject those drivers into the boot.wim on the AIO USB and reboot. No luck. I then mount the Windows PE BOOT.WIM and put the drivers on it in a folder so I can browse to them in the wizard. Still no luck. Note to self: the installer's main volume is index 2 of BOOT.WIM, and that is the one you can browse. I also try the Windows 10 BOOT.WIM, and it errors out in fantastic ways I need to revisit.

I then decide to use an older storage controller but don't have any available, so I order one: an ASM1061-based card which I know uses built-in drivers in at least Vista, and it says it is supported back to even XP. After installing it in the machine, I get to the same spot; however, I can now browse the SSD, as it had a single NTFS partition on it. But I still cannot proceed.

Next problem. I attach the SSD to a Linux VM and copy over the MSI and AMD drivers, along with others I think might work, since I did not have to mess with the BOOT.WIM after reverting back to the original sealed one. The error dialog changes slightly from before and hints it could not find the install media; PE will use higher-performance drivers. I burn the Windows 7 AIO to a DVD, hook up a BluRay player, and SUCCESS! It did not work off the motherboard, but it does work off the PCIe controller I got above. Windows 7 got installed!

After boot-up I still have chipset/USB issues, as only the PS/2 keyboard works, so I open a shell and install the AMD all-in-one and get the USB mouse and keyboard back. I also install the NIC and audio drivers. Out of curiosity, I move the SSD to the internal controller, and it boots up fine now that it has the right drivers. It was happy, and I could have used it after applying patches.

Now that I got what I wanted out of Windows 7, I move the (Windows 10) SSD and hard drive from the fried system to it and give it to the kids so they can run that for a while. I will keep an eye on sales over the holidays, though, as I do want to get this system over to M.2 NVMe. So I told the kids it will get rebuilt from scratch when I obtain those. This instance of Windows has been in about 5 or 6 different PCs. Additionally, my FX-8350 system is showing its age, so I'll move to Ryzen in a few months and hand this one down to my backup server. Maybe I'll have to do this again, or just move the kids to a 3950X and X570 system while I take the guts from this one. Their games push a system more than my work does.


Thursday, November 21, 2019

Enforce TLS 1.2 via ConfigMgr Compliance Setting

One of the products in our environment is deprecating support for the earlier TLS versions 1.0 and 1.1 (Yay!). This means we are required to support TLS 1.2 for this product on Windows 7, Windows Server 2008 R2, and Windows Server 2012. It is on by default in more recent OSes, such as Windows 8 and Windows Server 2012 R2 and greater. TLS 1.2 should be enabled via KB3140245. While the KB was installed on the majority of the fleet a while ago, we found TLS 1.2 was not actually enabled everywhere.

So Piers came to the rescue with a Compliance Setting to manage this. Luckily TLS is set at the OS level via registry keys so it is not that difficult to manage. Piers enjoys using PowerShell so he went that route to mitigate.


Piers created a compliance baseline in ConfigMgr which reports on the following:

Windows 7 requirements for TLS 1.2
  • KB3140245 must be installed
  • [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
    • "DisabledByDefault"=dword:00000000
Windows 2008 R2 and 2012 requirements for TLS 1.2
(Note this is slightly different.)
  • KB3140245 must be installed
  • [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
    • "DisabledByDefault"=dword:00000000
    • "Enabled"=dword:00000001
      • Or "Enabled"=dword:0xFFFFFFFF (any nonzero value counts as enabled – see info here)
  • [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]
    • "DisabledByDefault"=dword:00000000
    • "Enabled"=dword:00000001
      • Or "Enabled"=dword:0xFFFFFFFF
This is checking both server and workstations for TLS 1.2 enablement at the OS level. This overrides any TLS settings set explicitly for IE, as we understand it, so I don’t believe we need to check IE-specific settings (see the section "How the DefaultSecureProtocols registry entry works" in this article).
  • Firefox has had TLS 1.2 support enabled since version 27
  • For Chrome, TLS 1.2 is automatically enabled from version 29
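The rules above boil down to a couple of comparisons. Here is a platform-neutral sketch of that decision logic (the function and its shape are mine, not from the actual baseline):

```python
def tls12_enabled(disabled_by_default, enabled=None, require_enabled=False):
    """Mirror the registry rules above: DisabledByDefault must be 0,
    and on the server OSes, Enabled must also be nonzero (1 or 0xFFFFFFFF)."""
    if disabled_by_default != 0:
        return False
    if require_enabled:
        return enabled is not None and enabled != 0
    return True

# Windows 7 client check: only DisabledByDefault matters
print(tls12_enabled(0))                                            # True
# 2008 R2 / 2012: Enabled=0xFFFFFFFF counts the same as Enabled=1
print(tls12_enabled(0, enabled=0xFFFFFFFF, require_enabled=True))  # True
print(tls12_enabled(0, enabled=0, require_enabled=True))           # False
```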


We created several compliance items to detect TLS 1.2 and enable it if it is not. For servers, these were eventually split out for the Server and Client functions of TLS (for IIS, etc.) and to make use of nice features such as Supported Platforms, so we can target them specifically with advertisements.

We are using the following discovery script for the client keys:

 $RegPath = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client'  
 $RegName = 'DisabledByDefault'  
 $RegData = '0'  
 $Return = 'Not Found'  
 $RegLookUp = Get-ItemProperty -Name $RegName -Path $RegPath -ErrorAction SilentlyContinue  
 if ($RegLookUp.$RegName -eq $RegData) {  
   $Return = 'Found'  
 }  
 if ( $RegLookUp -and ($RegLookUp.$RegName -ne $RegData) ) {  
   $Return = 'Value='+$RegLookUp.$RegName  
 }  
 Write-Host $Return  

He is using a remediation script adapted from Roger Zander:

 # Reg2CI (c) 2019 by Roger Zander  
 if((Test-Path -LiteralPath "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client") -ne $true) { New-Item "HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client" -force -ea SilentlyContinue };  
 New-ItemProperty -LiteralPath 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client' -Name 'DisabledByDefault' -Value 0 -PropertyType DWord -Force -ea SilentlyContinue;  

The Compliance rule is pretty straightforward. It looks for "Found" from the Discovery Script.

The configuration baselines are pretty straightforward as well.

The Client baseline contains both workstation and Server OS versions.

We have two advertisements: one to monitor, and one to remediate. The monitor is run daily and remediate is run every couple hours.

The advertisement is going to a collection that calls out the affected Operating Systems only.

For the server settings, just change the relevant registry path, as the rest is the same.

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Client]
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2\Server]

This is a good read on it from a Dev perspective via a Microsoft Security post.

Tuesday, November 19, 2019

Using Global Entry API to Get Better Appointment


I bit the bullet and decided to apply for TSA PreCheck for my traveling. Since it was only $15 US more for Global Entry (GE), I actually went that route. It gives me everything PreCheck does along with quicker customs processing when I return to the States. Since it has more intrusive background checks, you have to do an interview for final approval. With all the political drama going on right now, staffing these locations has been a challenge for US Customs and Border Protection. If I knew then what I know now, I would have just gotten TSA PreCheck.

I applied around February 2019 and after about a month was moved to the 'Conditionally Approved' phase, so I was able to set an appointment for the interview. My brother applied a week after me and was just made 'Conditionally Approved' this week. In my case, I set an appointment at Denver (DIA) and the soonest was May 2019. Two days before my appointment it was canceled on me, and the soonest they had was July, so I signed up for that. That one was canceled on me too, and Jan 2020 was the next available. That put me at nearly a year from application. Ugh.


As both my brother and I are in IT, we started talking about getting an earlier appointment. People have to cancel their appointments, for example, so what happens to those slots? We did some research and found that the Global Entry website has an API available to talk to its database.

Sure enough, we found a blog post about it by Jeremy Stretch. 'stretch' covers a GitHub repository he found that looks for appointments via the GE API. As the GE site changed, he had to start over and documented what he did, but he never finished it out, as he got an appointment via the website. So my brother and I decided we could get something going based on what stretch started. My brother volunteered to write it in Python, and I offered to run it on my main Ubuntu server.

As stretch documented, you can go into your browser's developer mode while looking at appointments on the website to pull out the site code.
  • Denver          6940
  • Miami           5181
  • Fort Lauderdale 5443
We looked up a few others for testing and apparently, Guam has free slots almost daily. I should go there! After identifying the office codes, we went to work within our requirements:
  • Since there were no rules of engagement around the API, we decided to run it every 5 minutes via cron.
  • It will email us when it finds an opening. Nothing special, it just uses the MTA on the system, such as Postfix.
  • It will also stop once it finds one, and we have to reset it, also done around the API concerns. It drops /tmp/checkGE.disable, and the script will exit out if that dropper is found.
  • It should also track its history when it finds openings; this is kept in /tmp/checkGE.history.
  • Support multiple locations.
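The requirements above could fit together something like this. Note this is a sketch, not the real script: the endpoint URL, the office-code dict, and the JSON response shape are all placeholders/assumptions (the actual script is linked at the end of the post):

```python
import json
import re
import sys
import urllib.request
from pathlib import Path

DISABLE_FILE = Path("/tmp/checkGE.disable")   # dropper: stop once a hit is found
HISTORY_FILE = Path("/tmp/checkGE.history")   # log of openings we have seen
LOCATIONS = {"Denver": 6940}                  # office codes from dev tools
MATCH = re.compile(r'^(2019-(10|11|12)|2020-(01|02))-')
API_URL = "https://example.invalid/slots?locationId={}"  # placeholder endpoint

def find_openings(slots):
    """Keep only the ISO 8601 timestamps inside the window we care about."""
    return [s for s in slots if MATCH.match(s)]

def check(name, code):
    # Assumed response shape: a JSON list of ISO 8601 timestamp strings
    with urllib.request.urlopen(API_URL.format(code)) as resp:
        slots = json.load(resp)
    hits = find_openings(slots)
    if hits:
        with HISTORY_FILE.open("a") as hist:
            hist.write(f"{name}: {hits}\n")
        DISABLE_FILE.touch()   # stop future runs until manually reset
        # (the real script emails via the local MTA here)
    return hits

def run():
    if DISABLE_FILE.exists():  # a hit was already found; do nothing
        sys.exit(0)
    for name, code in LOCATIONS.items():
        check(name, code)
```

Scheduled from cron with something like `*/5 * * * * /usr/bin/python3 /path/to/checkGE.py` (path hypothetical).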
After writing it and doing some initial bug hunting it was returning results – too many, as it would show what was freshly available six to nine months out. Since I had an appointment a few months out, I was only interested in an earlier appointment so it supports regex to parse specific ranges. Initially, it was the remainder of 2019, but we then wanted the first couple months of 2020.

if re.match('^(2019-(10|11|12)|2020-(01|02))-',result):

This was done for readability more than optimized performance. Additionally, it follows ISO 8601, which is in the format of YYYY-MM-DDThh:mm:ss. In the above example, it is looking for the following months:

  • October 2019
  • November 2019
  • December 2019
  • January 2020
  • February 2020

So this regex could be streamlined as the following, but it would be much harder to read and modify if we had to go further into 2020 than above.

if re.match('^20(19-1[0-2]|20-(0[12]))-',result):
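A quick check that the compact pattern accepts exactly the same months as the readable one:

```python
import re

readable = re.compile(r'^(2019-(10|11|12)|2020-(01|02))-')
compact  = re.compile(r'^20(19-1[0-2]|20-(0[12]))-')

# Every month stamp from 2019-01 through 2020-12
months = [f"{y}-{m:02d}-01T08:00:00" for y in (2019, 2020) for m in range(1, 13)]
for stamp in months:
    assert bool(readable.match(stamp)) == bool(compact.match(stamp))

print([s[:7] for s in months if readable.match(s)])
# ['2019-10', '2019-11', '2019-12', '2020-01', '2020-02']
```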

Once it was set up, we just had to be near a PC when the email showed up so we could hop on the website and snag the earlier appointment. I cannot prove it, but I believe the website polls the database on an interval as well. We had a few matches that were not on the website; initially I thought I was too late, but a couple of appointments showed up on the website a few minutes after I got the email notification.

I really lucked out, as it found an appointment two days out for me, which I snagged. I am all set up with Global Entry now. Wahoo! I might have to dust this off in 5 years if I get selected to go through another interview during the renewal process.


This script is provided as-is; no warranty is provided or implied. The author is NOT responsible for any damages or data loss that may occur through the use of this script.  Always test, test, test before rolling anything into a production environment.

Since it was not originally written to share, you will have to work on it a little. Using what stretch covered, put your location codes on line 14. Adjust the regex on line 26. Enter your email address on line 11. You can find the script here.

Sunday, September 22, 2019

Add storage via LVM

This post is mostly notes just for myself. While I use FreeNAS locally to benefit from all its great functions, such as ZFS and snapshots, I also perform a daily backup via rsync to an offsite host. That backup host was about to run out of space. Since it's my backup, it gets all the hand-me-downs, so I barely invest in it.

Up until recently it was an AMD AM2-based system; it is now AM3-based and will transition to a higher-end AM3+ when I replace another system with Ryzen. Its chokepoint is the network, so I don't need anything super fast; it's just to address the fire-and-theft type scenarios at my main site. It just needs disk. For storage I have a 64GB SSD for the OS and three spinner drives I reallocated from my main FreeNAS a while ago when it ran out of space.

These drives consist of a 4TB, 4TB, and 3TB smooshed together via LVM on an Ubuntu host. This gives me about 10TB of usable space. I just replaced two 2TB drives in the FreeNAS with larger drives, so I had them lying around ready to go. The case is a 3RU ATX chassis, so I have plenty of space for platters.
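The round numbers line up with the usual decimal-vs-binary units gap (drive makers sell powers of ten; df and lsblk report powers of two):

```python
TIB = 2**40  # one binary terabyte, what df/lsblk show as "T"

drives = [4e12, 4e12, 3e12]          # the 4TB + 4TB + 3TB spinners
print(round(sum(drives) / TIB, 1))   # 10.0 -> the "about 10TB" usable

# After adding the freed-up 2TB drive later in this post:
print(round((sum(drives) + 2e12) / TIB, 1))  # 11.8 -> matches the lsblk output
```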

Since it is commodity hardware (easy to fix!) there is no hot-swap, so after cabling up the drive and starting the system back up, I ran through the add process in a few steps.

Add a Drive

Before beginning, I had to collect some items and let my heart skip a beat.

First was to locate where the drive is assigned. It's on /dev/sdc.

 sudo fdisk -l  

Next was to get the LVM volume group name, which is vg0, the default.

 sudo vgdisplay  

Finally get the logical volume path, which is /dev/vg0/mystuff

 sudo lvdisplay  

So now I can convert it to a physical volume. I use the entire disk as I have no reason to partition them to use for other purposes.

 sudo pvcreate /dev/sdc  

Next, add the physical volume to the volume group

 sudo vgextend vg0 /dev/sdc  

Allocate the physical volume to the logical volume

 sudo lvextend -l +100%FREE /dev/vg0/mystuff  

Then finally grow the ext4 file system on the logical volume so I can use all this new space.

 sudo resize2fs /dev/vg0/mystuff  

All done: per df -h, that mount went from 10TB total to 12TB.

Instead of fdisk -l, you can use cat /proc/partitions to locate the drive device. Before or after, you can use lsblk to get a pretty layout of your drives:

 sda      8:0  0 2.7T 0 disk  
 └─vg0-stuff 253:0  0 11.8T 0 lvm /home/mystuff  
 sdb      8:16  0 59.6G 0 disk  
 ├─sdb1    8:17  0 51.7G 0 part /  
 └─sdb5    8:21  0  8G 0 part [SWAP]  
 sdc      8:32  0 1.8T 0 disk  
 └─vg0-stuff 253:0  0 11.8T 0 lvm /home/mystuff  
 sdd      8:48  0 3.7T 0 disk  
 └─vg0-stuff 253:0  0 11.8T 0 lvm /home/mystuff  
 sde      8:64  0 3.7T 0 disk  
 └─vg0-stuff 253:0  0 11.8T 0 lvm /home/mystuff  

Replace a drive

While I have space for more drives I will eventually run out so instead of getting a bigger case I can just replace drives with bigger ones. This process is easy as well.

You need to collect all the relevant information first, such as the drive locations and the physical and logical volumes. After running pvcreate and vgextend for the new drive as above, you run these steps instead of lvextend.

Move the data from the old drive to the new drive. This can be done live, but you do take a performance hit on both drives during the process. It can take a considerable amount of time to complete, so it's advisable to do this when the volume(s) are not too busy. Also consider the risk of a power outage during the move. The larger the source drive, the longer it will take.

 sudo pvmove /dev/sdc /dev/sde  

Finally, you remove the old drive from the volume.

 sudo vgreduce vg0 /dev/sdc  

Once done, power down and physically remove the old drive. After this, you would run the lvextend and resize2fs commands above to grow into the new usable space.


Saturday, June 1, 2019

ESXi Veeam vs FreeNAS compression comparison

I have mentioned previously how I back up all my infrastructure configs to the FreeNAS. I use a dataset with gzip-9 compression, since they are mostly text, and let snapshots manage revision control for me. Works really well. For the VMs themselves, I use Veeam (Community Edition) to back up the guests on my ESXi host to the FreeNAS server. I had some time and was curious what the best backup method within Veeam and FreeNAS would be, so I performed some tests (totally not scientific).

I took one of my smaller VMs, Pi-Hole, which is 2 cores, 2GB memory, and 10GB of storage, and backed it up sequentially with each compression method within Veeam. It is using about 5G of space currently. Here is what each one came out to:

 root@freenas:/mnt/Pool/sysadmin # ll  
 -rw-rw-rw-  1 root wheel 5202056192 May 21 20:58 1_None_Pi-HoleD2019-05-21T205500_02D1.vbk  
 -rw-r--r--  1 root wheel 3728351232 May 21 21:06 2_Dedup_Pi-HoleD2019-05-21T210008_3D6D.vbk  
 -rw-r--r--  1 root wheel 1963535872 May 21 21:12 3_Optimal_Pi-HoleD2019-05-21T210835_3CD8.vbk  
 -rw-r--r--  1 root wheel 1646802432 May 21 21:17 4_High_Pi-HoleD2019-05-21T211400_EC95.vbk  
 -rw-r--r--  1 root wheel 1531771392 May 21 21:21 5_Max_Pi-HoleD2019-05-21T211803_EC3B.vbk  

I then created several FreeNAS datasets with various compression methods from none to gzip with the max -9 compression switch and named them for the method and copied each backup. In other words, I copied '1_None_Pi-HoleD2019-05-21T205500_02D1.vbk' to each of the folders under Test below and compared how FreeNAS compressed the same data using each method.

None setting in Veeam

Looking in the GUI above you can see they all had fantastic compression ratios. FreeNAS determines usage using this formula:

Uncompressed data = compressed (used) space * compression ratio

To put it another way, if your pool shows 1TB used, 3TB available, and a compression ratio of 2.00x, you have 2TB of data in the pool that has compressed to 1TB. If the compression ratio stayed at 2.00x, you could copy another 6TB to the pool.
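A quick worked version of that 1TB/3TB example:

```python
used_tb = 1.0     # compressed space the pool reports as used
avail_tb = 3.0    # reported free space
ratio = 2.00      # compression ratio shown in the GUI

print(used_tb * ratio)    # 2.0 TB of data actually stored
print(avail_tb * ratio)   # 6.0 TB more that would fit at the same ratio
```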

So how much did it compress each file with each method? Just look via the command line: 'du -h' will tell you how much it's using in human-readable format.

 root@freenas:/mnt/Pool/sysadmin/Test # du -h  
 4.8G  ./none  
 1.9G  ./lz4  
 1.6G  ./gzipfast  
 1.5G  ./gzip-9  
 1.5G  ./gzip-6  

Since these folders contain the same file, just compressed via the dataset, you can add the -A switch to see the apparent-size and see they are the same as far as any users would know.

 root@freenas:/mnt/Pool/sysadmin/Test # du -Ah  
 4.8G  ./none  
 4.8G  ./lz4  
 4.8G  ./gzipfast  
 4.8G  ./gzip-9  
 4.8G  ./gzip-6  

Human-readable is nice, but you can remove the -h switch to see the exact numbers.

 root@freenas:/mnt/Pool/sysadmin/Test # du -A  
 5080134 ./none  
 5080134 ./lz4  
 5080134 ./gzipfast  
 5080134 ./gzip-9  
 5080134 ./gzip-6  

For this exercise we want to know actual used space so I am not using the -A nor -h switch.

 root@freenas:/mnt/Pool/sysadmin/Test # du  
 5083217 ./none  
 1974197 ./lz4  
 1659773 ./gzipfast  
 1546173 ./gzip-9  
 1553781 ./gzip-6  

As expected, gzip-9 compressed it greatly, from 5G down to 1.5G, or about 30% of its original size. This is very close to the Max setting within Veeam. I took this further with each Veeam method to see how much more FreeNAS could compress it.

Dedup Setting in Veeam

 root@freenas:/mnt/Pool/sysadmin/Test # du  
 3643537 ./none  
 1953457 ./lz4  
 1637709 ./gzipfast  
 1526513 ./gzip-9  
 1532541 ./gzip-6  

This one was interesting and really not expected by me. Using dedup in Veeam plus dataset compression beat out every other Veeam method except Max once gzip-9 was applied. Think of dedup as the cousin of compression.

Optimal Setting in Veeam

 root@freenas:/mnt/Pool/sysadmin/Test # du  
 1918857 ./none  
 1875889 ./lz4  
 1640389 ./gzipfast  
 1623297 ./gzip-9  
 1623685 ./gzip-6  

High compression in Veeam

 root@freenas:/mnt/Pool/sysadmin/Test # du  
 1609401 ./none  
 1585361 ./lz4  
 1583137 ./gzipfast  
 1582909 ./gzip-9  
 1582913 ./gzip-6  

Note in the GUI the file size is the same (not much left to compress), but if you look at the actual size you see a slight variance between all of them. While I did not keep track of CPU usage, I am sure the compute cost at this point is pretty high; it sure felt like the copy command was taking longer than previously, especially since the Veeam system is a VM as well.

Extreme compression in Veeam

 root@freenas:/mnt/Pool/sysadmin/Test # du  
 1496945 ./none  
 1473553 ./lz4  
 1472269 ./gzipfast  
 1472161 ./gzip-9  
 1472161 ./gzip-6  

Just like high compression within Veeam, FreeNAS could not do very much with it so I did not bother with a screenshot as the GUI shows it exactly the same as high.

Gzip -9 results

5080134 Nothing
1623297 Optimize with gzip-9
1582909 High with gzip-9
1546173 None with gzip-9
1526513 Dedup with gzip-9
1472161 Max with gzip-9
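Differencing the du numbers (KB units) shows how little the dataset compression adds once Veeam has already squeezed the file at Max:

```python
veeam_max_plain = 1496945   # Max/Extreme backup on the uncompressed dataset
veeam_max_gzip9 = 1472161   # the same file on the gzip-9 dataset

extra_savings_kb = veeam_max_plain - veeam_max_gzip9
print(extra_savings_kb)                   # 24784 KB
print(round(extra_savings_kb / 1024, 1))  # about 24.2 MB
```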

So using Veeam's max compression along with FreeNAS's max compression netted only about 24M of extra savings over Veeam Max on an uncompressed dataset. Since I was not tracking CPU usage, this benefit comes at an unknown cost. It sure felt like the copy was taking longer the more compression was going on, which actually makes sense when you think about it. Veeam backs up VMs overnight when things are most idle, so I am not super concerned with CPU usage. Except if I am doing a backup during a scrub! I'll need to revisit the schedules so they do not overlap.

To be more scientifically complete, I would have to do it again with more inputs to get a better picture of it all. How long does each Veeam method take? How long does the cp take on the FreeNAS? How much CPU is used for all this? Is 80% more CPU usage (and power, and heat) worth the 24M saving for this one VM? Since both my ESXi and FreeNAS hosts use low-power L-series procs, that plays a part versus higher-end procs.

The dedup setting was the most interesting for sure. I might do a similar test again with all my VMs backed up in a dedup-friendly method via Veeam and create a dataset with dedup enabled to compare against non-dedup. I doubt I would do that in real life, though; I just don't see the memory cost of dedup on the FreeNAS being worth it for backup data. Perhaps I will do a part II looking at that, CPU usage, and the backup time of the various methods.

This made me think of creating some child datasets for my content, though. As my FreeNAS stores lots of files such as Windows OS ISOs, I use gzip (and 7-Zip) today via the command line to save space, a leftover from when it was hosted on Linux. This made me think to change how those files are compressed and let the FreeNAS handle it automatically via a child dataset. That saves me from having to extract the archive before I can open the ISO for whatever I need it for. Same for the WIMs I host there. These results are pretty similar to whacking a WIM over the head with 7-Zip.

So was this a waste of time?


Sunday, April 14, 2019

Upgrade ConfigMgr Client Outside Maintenance Windows

A friend reached out to me recently about his environment; he was wondering why it was taking so long to upgrade the ConfigMgr clients when he had set a 14-day window for the auto-upgrade function. In talking to him, I learned he had many maintenance windows, even for workstations, due to his company's work, with some windows being once a month or even quarterly.

I pointed him to this UserVoice item, as client upgrades honor maintenance windows, and it is a request to allow upgrades outside of the maintenance windows. As a workaround, I shared an app we created to address this for 1E Nomad in task sequences, which would work fine here, as it allows the client to be treated like any other application/package and pushed accordingly.

Create an application pointing to the stock source path located at

For the install program, set your defaults:
 ccmsetup.exe /skipprereq:scepinstall.exe;silverlight.exe /forceinstall SMSCACHESIZE=10240 SMSSITECODE=SITECODE  

Finally, for the detection method use a PowerShell script with the following.
 $VersionCheck = "5.00.8740.1024"  
 $CurrentVersion = $(Get-WMIObject -Namespace root\ccm -Class SMS_Client).ClientVersion  
 if ( [version]$CurrentVersion -ge [version]$VersionCheck ) {  
   Write-Host $CurrentVersion  
 }  

Note you need to update the version variable as you manage your environment. Super simple application; just advertise it to the relevant collection. Below is a query for what is not current. You'll need to modify the version portion for your environment, as this is a 'not like' on 1810, including both rollups.

 select SMS_R_SYSTEM.ResourceID,SMS_R_SYSTEM.ResourceType,SMS_R_SYSTEM.Name,SMS_R_SYSTEM.SMSUniqueIdentifier,SMS_R_SYSTEM.ResourceDomainORWorkgroup,SMS_R_SYSTEM.Client from SMS_R_System where SMS_R_System.ClientVersion not like "5.00.8740.1%"  
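One side note on the detection script's version check: comparing version strings as plain strings is lexicographic, which can mislead once a field gains a digit. Casting to [version] in PowerShell avoids that; the equivalent numeric-tuple comparison sketched in Python:

```python
def as_version(s):
    """Turn '5.00.8740.1024' into a tuple of ints for numeric comparison."""
    return tuple(int(part) for part in s.split("."))

print("5.00.10000.1000" >= "5.00.8740.1024")                          # False: lexicographic
print(as_version("5.00.10000.1000") >= as_version("5.00.8740.1024"))  # True: numeric
```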


Friday, March 8, 2019

'VMWare Virtual Platform' changed to 'VMWare 7,1'

One of my most popular posts is around VMware Tools. With that in mind, I do lots and lots of image testing using VMs, mostly on the VMware platform. I even shared a retired "image mule" design from years ago. I was doing some testing and scratching my head on why the VMware drivers injection step was not working all of a sudden.

Doing imaging 101, the smsts.log showed the step was skipped as the condition was false. I go check the task sequence, and it is looking for OSDComputerModel=VMware Virtual Platform. This step has worked for years and years. So back on the VM, I open a quick CMD and do a 'wmic csproduct get name', and it returns 'VMWare 7,1'. Turns out in VMware Workstation 15 this was changed.

The step looks for 'VMware Virtual Platform' as the model via the OSDComputerModel variable. I don't use WMI queries on each step; instead, a step on the tech interview screen gathers make, model, memory, storage, etc. into variables that I then use as conditions on later steps. It runs faster than using WMI on every driver step, for example. Imagine my surprise when, in Workstation 15, this came back as 'VMware 7,1'.

I'm guessing this will also hit ESXi in the next version, based on this VMware KB article around Virtual Hardware 16.

If you're looking for model in ConfigMgr, MDT, etc, this will bite you pretty soon no matter how you look for it.

It appears to be Workstation with UEFI. I created a Workstation 14 compatible VM in 15 and it was also 'VMware 7,1'. Did a 12-compatible VM and it was 'VMware 7,1' as well. Changed from UEFI to BIOS and it's back to 'VMware Virtual Platform'. I was using UEFI in Workstation 14, so I think this is related to Workstation 15 only. If I get time I will put 14 back on and try it. I'll also check my 6.7 Update 1 ESXi host at home to see what it returns for VMs, to compare.


Tuesday, January 29, 2019

Tiny Ubiquiti Networks UniFi Controller VM

Like many out there, I use Ubiquiti's enterprise devices at home. I don't do routers; instead I use a firewall (pfSense currently) with a separate access point for WiFi so I can upgrade them separately. It's been this way for two decades now. Currently I only have a UniFi AP (UAP-AC-PRO-US); however, if they ever make a 24-port gig switch with two SFP+ and two PoE(+) ports I'm all over that. That would be the ultimate soho/homelab switch IMO.

With that said, Ubiquiti has a controller for their devices to manage firmware, configurations, and track metrics. Really nice feature set. They have a device you can buy which is pretty slick, especially the gen 2 version, but I just cannot justify the price to control only one AP today, so I use their controller software. It used to be a Windows 7 VM on ESX (which did other stuff as well), so with Windows 7's demise looming next year, I decided to move it to a Linux server VM. Glenn R, a user on the ubnt forum, created a fantastic script to automate the install process once you get the OS ready, so all credit goes to him. I thought I would document how I created a tiny VM to support it. I even ended up creating a 'template' VM that friends are using with their clients who have Ubiquiti devices.

While this is around VMware ESX, any hypervisor of choice, such as Proxmox or Hyper-V, would work great as well, even on Windows 10. Everything here is a VM; even my Pi-hole ad blocker is a VM, despite being designed for Raspberry Pi hardware.

Since I moved from Gentoo Linux to Ubuntu years ago, I elected to use that, so I downloaded the Ubuntu Server 18.04 LTS ISO. Set it to auto-update and forget about it for the most part, as it will be supported into 2023.

First was to get on the current Windows 7 controller, back up its configuration, and save it out. I stopped the controller service in case I needed to fall back to it.

Created a VM with these settings and mounted the ISO
  • 2 CPU (1 socket, 2 core)
  • 2 GB memory
  • 10GB thin provisioned storage
  • vmxnet3 network
  • paravirtual storage

Start up the VM and boot from the ISO. Install with defaults and automatic security updates. Nothing fancy like /var or /usr on its own partition; just one partition.

After first login, I modified /etc/fstab to move /tmp to memory by adding this:

 tmpfs      /tmp      tmpfs      defaults,nosuid     0 0  

The install was unable to find some basic packages. I am guessing this is fixed in 18.04.1, which is available now; in the meantime, I had to add the main repositories, as they were not on the install ISO media.

 sudo add-apt-repository main universe multiverse restricted  

Then update from the ISO baseline to the latest packages:

 sudo apt-get update  
 sudo apt-get upgrade  

Restarted it, then added some other packages. First was rsnapshot, which I have used for many years since being introduced to the snapshot concept via the NetApp Filer. It is a filesystem snapshot utility based on rsync that backs up folders on various cycles.

 sudo apt-get install rsnapshot   

I configured it to back up the /etc directory and /var/lib/unifi, where the controller configuration is kept. There are lots of guides covering rsnapshot in more detail. Edit /etc/rsnapshot.conf and set these lines (note that rsnapshot requires tabs, not spaces, between fields):

  snapshot_root /home/rsnapshot/   
  backup /etc/ localhost/   
  backup /var/lib/unifi localhost/   

Then create an /etc/cron.d/rsnapshot file with the settings below. There should actually be a sample file at this location already; alternatively, put them in crontab.

 0 */4 * * *   root  /usr/bin/rsnapshot hourly  
 30 3  * * *   root  /usr/bin/rsnapshot daily  
 0 3   * * 1   root  /usr/bin/rsnapshot weekly  
 30 2  1 * *   root  /usr/bin/rsnapshot monthly  

This will run a snapshot every 4 hours, then also do a daily (3:30 AM), weekly (3:00 AM), and monthly (2:30 AM) run. Adjust as you see fit. Since the snapshots are on the same filesystem, they use little space: rsnapshot uses hardlinks and renames folders from the current hourly through to the oldest monthly snapshot.
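You can see the hardlink trick directly with cp -al, which is essentially what rsnapshot does between rotations (the paths below are just for the demo):

```shell
# Start clean, then create a fake snapshot with one file in it
rm -rf /tmp/snapdemo
mkdir -p /tmp/snapdemo/hourly.1
echo "controller config" > /tmp/snapdemo/hourly.1/backup.db
# cp -al copies the directory tree but hardlinks the files,
# so unchanged data is stored on disk only once
cp -al /tmp/snapdemo/hourly.1 /tmp/snapdemo/hourly.2
# Both directory entries share one inode; the link count shows 2
stat -c %h /tmp/snapdemo/hourly.1/backup.db   # -> 2
```

After editing /etc/rsnapshot.conf, `sudo rsnapshot configtest` will validate the file before the first cron run.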

Next is to install haveged. This helps a VM generate entropy for randomization, such as SSH key generation, by keeping /dev/random and /dev/urandom full.

 sudo apt-get install haveged  

Edit /etc/default/haveged to contain

 DAEMON_ARGS="-w 1024"  

and set it to start at boot

 update-rc.d haveged defaults  
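To see whether it is helping, you can check the kernel's entropy pool before and after installing it; an idle VM that sits low here is what makes key generation stall:

```shell
# Current size of the kernel entropy pool, in bits. On older kernels an
# idle VM without haveged often idles in the low hundreds; newer kernels
# (5.18+) report a fixed value instead.
cat /proc/sys/kernel/random/entropy_avail
```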

Set the timezone, using the wizard, so the shell shows times correctly:

 dpkg-reconfigure tzdata  

Finally clean up old packages

 sudo apt-get autoremove  

Now the OS is all ready to go. From here, follow Glenn's directions in his forum post, which are pretty straightforward; I will not cover that here. I used 5.9 to replace the 5.6 I had on the Windows 7 VM. I have not needed to upgrade the controller yet, but Glenn has a script for that also at the same link.

Once the controller was installed, I modified my DNS entry to point to this system, as the AP will look it up from time to time; create one if you don't have one. Then I opened the web interface, imported my config, and after a few minutes it found the AP and adopted it.
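Per Ubiquiti's documentation, UniFi devices try to resolve the hostname `unifi` (the default inform URL is http://unifi:8080/inform) to find their controller, so a quick sanity check after updating DNS looks like this (your record name may differ):

```shell
# Check whether the controller hostname resolves yet; getent consults
# /etc/hosts as well as DNS, the same way the device's resolver would
getent hosts unifi || echo "unifi does not resolve yet"
```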

As I use the free Veeam scripts and rsnapshot, I do not back up the controller config to another host, but you can adapt my FreeNAS and ESX posting with keygen. Be sure to set this VM's autostart setting in ESX.

After running for a few months, it is barely noticed on the host.

Something else to consider is a DHCP option on your router to tell your UniFi equipment where the controller is: Option 43, specifically. More info here, as it is not straightforward.
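For reference, UniFi expects the Option 43 value as vendor suboption 0x01, a length byte of 0x04, then the controller's four IP octets; worth verifying against Ubiquiti's docs for your gear. A small POSIX-shell sketch that builds the hex string (the IP is just an example):

```shell
# Encode a controller IP as a UniFi DHCP Option 43 value:
# suboption 0x01, length 0x04, then the IP octets in hex
IP=192.168.1.10   # example controller address
printf '01:04'
for octet in $(echo "$IP" | tr '.' ' '); do
  printf ':%02x' "$octet"
done
echo   # -> 01:04:c0:a8:01:0a
```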

ConfigMgr upgrade error "Failed to apply update changes 0x87d20b15"

We attempted to upgrade one of our ConfigMgr environments from 1802 to 1810 and it failed in the wizard on the 'Upgrade ConfigMgr Database' step. Looking at cmupdate.log, we found this error:

 Failed to apply update changes 0x87d20b15  
 Error information persisted in the database.  

Going further back, we then find this more telling error:

 ERROR: Failed to execute SQL Server command: ~ DECLARE @viewname  nvarchar(255) ~ DECLARE newviews INSENSITIVE CURSOR FOR ~  SELECT name FROM sysobjects WHERE type='V' AND name like 'v[_]%' ~ OPEN newviews ~ FETCH NEXT FROM newviews INTO @viewname ~ WHILE (@@FETCH_STATUS = 0) ~ BEGIN ~  EXEC('GRANT SELECT ON ' + @viewname + ' to smsschm_users') ~  FETCH NEXT FROM newviews INTO @viewname ~ END ~ CLOSE newviews ~ DEALLOCATE newviews  

After much investigation, it turned out someone had created custom views in the ConfigMgr database. We backed up these views, deleted them, and retried the upgrade. That fixed this issue.

As for the rest of the upgrade story, it was a loooong weekend and not a good time. The upgrade progressed further but still failed on the Upgrade ConfigMgr Database step. cmupdate.log showed we hit deadlocks on the database; the upgrade panicked out, and retry stayed greyed out even after other tasks were performed to reset it. We then chose to do a site recovery to get back to the pre-upgrade state (you do manually fire those off before an upgrade, right?). We tried again and it froze at another spot past the ConfigMgr database step. We also had a "ghost" secondary: it was torn down years ago, yet even though we are now without secondaries, a site link was still present. This prevented us from using the upgrade reset tool that recently came out. We ended up involving Microsoft, who got us going with 1810 and fixed these other quirky things as well.


Friday, January 11, 2019

PxeInstalled (PXE Provider is not installed) error in smsdpprov.log

As my firm was purchased by another, I am now starting to collapse the ConfigMgr environment. It was designed to service 50K endpoints without breaking a sweat, but now manages only a couple thousand systems that have not yet been migrated, so it is way overkill. As we also use 1E Nomad, all we have are the back-office roles to contend with. First was to downsize the primary site to remove the multiple MP/DP/SUP roles, then the SUPs on the secondaries. After that, I was to start collapsing the secondaries and converting those hosts to be DP-only. Eventually, this instance will drop to a couple of servers kept around for historical use.

So after uninstalling the secondary and cleaning up SQL, etc., I installed a DP role on it. Once done, I started injecting the prestaged content; however, I began seeing the following errors in the DP's smsdpprov.log:

 [D28][Thu 01/10/2019 04:06:09]:RegQueryValueExW failed for Software\Microsoft\SMS\DP, PxeInstalled  
 [D28][Thu 01/10/2019 04:06:09]:RegReadDWord failed; 0x80070002  
 [D28][Thu 01/10/2019 04:06:09]:PXE provider is not installed.  

PXE was not flagged during install, so after double-checking settings I ran off to Google to see what others had found. All I could find were people saying to live with it. That seemed strange, as this was not happening on other DPs I checked, so I thought I'd try a quick PXE enable-then-disable on the DP. By quick I mean enable it, come back later, validate it was installed via smsdpprov.log and distmgr.log, and then disable it. Sure enough, smsdpprov.log now only shows 'PXE provider is not installed' occasionally, so the problem is fixed.

Now if it will just finish hashing the prestaged content, I can update its boundary group so it can serve its purpose, and move on to the next one!