Monday, October 1, 2018

SUP Rebuild From Nothing Random Notes

I thought I shared this a while ago.

So I have new corporate overlords, as my firm was acquired by a larger fish. Their environment was still ConfigMgr 2012 R2 on Windows Server 2008 R2, so one of my projects was to determine a path forward for SCCM, and a prerequisite was getting them to Current Branch. This was around the 1710 version timeframe. We had to choose between upgrading ConfigMgr first and then the server OS, or the servers first and then ConfigMgr. While I had 16xx Current Branch media around for the former, we elected to move all the roles to Windows Server 2012 R2 and then, once that was done, upgrade Configuration Manager itself. I'm sure I'll have notes around that.

First up was the SUP/WSUS environment, for which there were several servers. Below are random notes, not a how-to, from our experience of completely removing SUP and rebuilding from nothing on new 2012 R2 servers, as well as preparing them for new features in Current Branch. I would expect some of these issues to be resolved in newer server OS and/or ConfigMgr versions.

Before uninstall:

  • When you make note of the current SUP settings such as products and classifications, be sure to clear out the products and classifications in the SUP role before removing it. Leaving them set will cause the SUP to not sync from WSUS until you clear them, sync, set them up again, and sync once more.
  • Go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Update Services\Server\Setup and copy the SQLServerName REG_EXPAND_SZ value so you have the current SQL server and instance. Useful if it's on the primary site SQL server.
  • Back up the SUSDB, just in case.
  • On the SUP server, delete SUPSetup.LOG and WSUSCTRL.LOG from the ConfigMgr log dir so you have clean logs to start with. This was mostly for the first bullet, as we had to remove everything again and start over with empty settings.
For uninstall:
  • When you uninstall the WSUS role, also uninstall WID role on the server if installed.
  • Delete the SUSDB from the SQL server, as the new server OS uses a new WSUS database version. We were going from 2008 R2 to 2012 R2 WSUS.
For install:
  • During install of the WSUS role it will enable WID even if you select SQL Server. On the Features dialog, uncheck it if you are using full SQL; otherwise, uninstall it after WSUS is installed. We put the database on the primary site SQL Server since that is supported, and we are using SQL Enterprise Edition.
  • If using a shared DB, enter the path to the root share, e.g. \\\WSUS. It will append the WSUSContent folder to that. We put it on the first installed WSUS/SUP server.
  • For the other servers, give their machine accounts full rights to the shared WSUSContent folder on both the filesystem and the share.
  • After install of the WSUS role, go to HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Update Services\Server\Setup and modify SQLServerName to your SQL server/instance, as it defaults to WID (MICROSOFT##WID) even if you selected full SQL during setup. Do this before you start the configuration task in Server Manager, otherwise it will fail. It's a good idea to use the FQDN here if you aren't already.
  • After the configuration task, go to SSMS if using full SQL and configure:
    • The account you installed with is the owner of SUSDB; change it to sa.
    • Set the database file to 100 MB growth, unlimited size, and the log file to 50 MB growth, 10 GB max. Plan on approximately 20 GB for WSUS, but that is pure WSUS.
  • After the initial setup in Roles, go into the WSUS console to Options | Update Files and Languages | Update Languages and match what was chosen in the SUP setup, such as EN only. It will be set to download all languages, and the SUP did not change this when it was configured later.
  • In IIS Manager | Application Pools | WsusPool | Advanced Settings
    • Private Memory Limit = 0
    • Queue Length = 25000
  • For database sharing you need to configure all servers to also share the ContentDir: IIS Manager | Server Name | WSUS Administration | Content | Manage Virtual Directory | Physical Path. Be sure to use the FQDN UNC path here; it forgets the slashes.
Post Install

We did not have this problem, but expected it due to how long the existing SUP/WSUS had been in place: you might have issues with the catalog version.
Some other useful links we referenced.
In SSMS you can run this to validate the server is set right.
  • select * from [SUSDB].[dbo].[tbConfigurationB]
During a reinstall effort on a SUP that was also an MP, the MP started failing; the uninstall doesn't always remove the SUP completely. To fix it, just run this command:

%windir%\system32\inetsrv\appcmd.exe set config -section:system.webServer/httpCompression /-[name='xpress']


Tuesday, September 4, 2018

Eliminate rogue KMS with ConfigMgr

A friend of mine came to me after noticing he had several KMS servers in his environment, yet he should only have the one, as he also has ADBA set up for the newer operating systems. In digging, it turned out these others were Windows 7 laptops. We are guessing the users did something to try to change editions, but it was definitely intentional. Instead of putting in tickets with his support team, we chose to fix it via ConfigMgr ourselves. Note this was a focused case, so we did not spend much time making it more automatic, and some assumptions were made. If this issue resurfaces I'll do something with compliance settings to handle it truly automatically.

First off, you can find out who is advertising as a KMS server by doing an SRV record lookup in DNS for _vlmcs._tcp. nslookup is easiest, though you can use the DNS tool in RSAT and dig down via Forward Lookup Zones | | _tcp, where you'll see all the _vlmcs records.

 Default Server: UnKnown  
 > set type=srv  
 Address:  
 SRV service location:  
      priority     = 0  
      weight       = 0  
      port         = 1688  
      svr hostname =  
 SRV service location:  
      priority     = 0  
      weight       = 0  
      port         = 1688  
      svr hostname =  

The first thing we did was remove the bad records from DNS, leaving only the one good one. Since it was only a few systems acting up, we created a collection and added these hosts as direct members, then created a package with the following script to run on them and rerun weekly. Eventually, they went away. We kept an eye on the DNS records and removed them as needed, which turned out to be only once.

The script consists of several steps, all run via slmgr.vbs. The first step uninstalls whatever key the system has present:

 ECHO Uninstalling KMS Key  
 cscript %windir%\system32\slmgr.vbs /upk  

Next it installs the Windows 7 Pro KMS key that Microsoft provided:

 ECHO Installing KMS Setup Key for Windows 7 Pro  
 cscript %windir%\system32\slmgr.vbs /IPK FJ82H-XT6CR-J8D7P-XQJJ2-GPDD4  

We were lucky in that only some Windows 7 systems were doing it; however, if other versions were involved we would have to detect the OS so we use the correct KMS key. Since we were just using a shell script, I was originally thinking of using ver to pull the version, then an if-then statement for the /IPK step. Something similar to this:
 for /f "tokens=4-5 delims=. " %%i in ('ver') do set VERSION=%%i.%%j  

Since the system is now unlicensed we have it activate against the valid KMS.

 ECHO Activating the computer to KMS Server  
 cscript %windir%\system32\slmgr.vbs /ato  

Finally, we set the system to not report itself to DNS in case the user decides to do something like this again.

 ECHO Disable this machine from publishing itself as a KMS server in DNS  
 cscript %windir%\system32\slmgr.vbs /cdns  
 reg add "HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\SoftwareProtectionPlatform" /v DisableDnsPublishing /t REG_DWORD /d 1 /f  

Should this happen again I'll do something more intelligent to handle it. He also had a policy drawn up to invoke against the users via HR.


Monday, July 9, 2018

FreeNAS smartd service refuses to start

Logged into my FreeNAS-11.1-U5 console after updating it and had a system alert that smartd was unable to start.

The GUI log was complaining as well:

I don't recall which disks these were, so I went over to Storage | View Disks in the GUI: they were the USB sticks used for the mirrored boot. I knew this as I had labeled each drive's slot in the server.

The event log mentioning 'removable' was another clue. S.M.A.R.T. is really not designed for USB flash drives; it's more for hard drives and solid-state drives. Click Edit on each of these and turn off S.M.A.R.T.:

I did not try it, but you could pass '-d removable' as the S.M.A.R.T. extra option, based on the syslog complaining about that switch. This all generates a fresh smartd.conf located in /usr/local/etc. Once completed, just head back over to Services and smartd is running happily. If not, just start it and you should be good.


Tuesday, July 3, 2018

Flash IEClickToPlay ConfigMgr Compliance Setting (manipulate mms.cfg)

Recently we updated Flash on Windows 7 systems and discovered that they are not able to view Skype Broadcast events in Internet Explorer, as discussed in greater detail here on the Adobe Forum. We use them quite heavily at my firm, and basically the video never starts; you see the spinning wheel at startup of the Skype Broadcast. It only impacted IE, whereas Chrome and Firefox worked fine. We obviously did not want to revert to an older version, so we chose to correct the issue.

Compliance settings to the rescue; however, I won't cover how to create one from scratch. We are using one to set the line below in mms.cfg so that Skype Broadcast will work in IE on Windows 7.


The mms.cfg file is located in %WINDIR%\System32\Macromed\Flash or %WINDIR%\SysWOW64\Macromed\Flash depending on the architecture. We have a PowerShell discovery script that looks for this line in mms.cfg and reports back, and then a remediation script that sets it if needed.

For the Configuration Item, we set the Supported Platforms to workstation OSes of Windows 7 and higher, as it may also impact Windows 10.

The compliance rules are pretty straightforward. We have two rules, one for the System32 location and the other for SysWOW64. This screenshot is for System32; it looks for the script to return 'OK' and, if not, runs the remediation script.

The remediation script keeps any existing lines and just modifies the one in question, as well as encoding the file in ANSI so Flash processes it correctly.

The compliance baseline is deployed to our 'All Workstations' collection to evaluate every 14 days. This will eventually get incorporated into our main Adobe Flash compliance setting, as that one manipulates autoupdate etc. We actually just took our Flash autoupdate PS1 code and changed the top lines, so it's easy to manage multiple settings in mms.cfg via one compliance setting.

  $SettingsToRemove = @(   
  $SettingsToAdd = @(  
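For readers who want the shape of that remove/add logic without digging through the PS1, here is a minimal sketch in Python; the setting names in the example are hypothetical, not the actual line we deploy:

```python
# Sketch of the remediation flow (the production version is PowerShell):
# keep every existing mms.cfg line except the settings we manage, then
# append our desired values. Setting names below are hypothetical.
def remediate_mms_cfg(lines, settings_to_remove, settings_to_add):
    # Keys we manage, compared case-insensitively by setting name.
    drop = {k.lower() for k in settings_to_remove}
    drop |= {s.split("=", 1)[0].strip().lower() for s in settings_to_add}
    kept = [ln for ln in lines
            if ln.split("=", 1)[0].strip().lower() not in drop]
    # When writing back, remember to save as ANSI (e.g. cp1252), not UTF-8,
    # or Flash will not process the file correctly.
    return kept + list(settings_to_add)

existing = ["SilentAutoUpdateEnable=0", "SomeOtherSetting=1"]  # hypothetical
print(remediate_mms_cfg(existing,
                        ["SilentAutoUpdateEnable"],
                        ["SilentAutoUpdateEnable=1"]))
```

The PS1 does the same thing: one array of settings to strip, one array of settings to (re)append.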

On the client side, the baseline's compliance report is pretty straightforward.


These scripts are provided as-is, no warranty is provided or implied. The author is NOT responsible for any damages or data loss that may occur through the use of this Script.  Always test, test, test before rolling anything into a production environment.

You can find the report here


Saturday, April 14, 2018

ConfigMgr WSUS Server Assignments Report

With all the cool changes in Current Branch 1702 and later around Software Update Points and boundary groups, it made me think about our current topology and which endpoints are using which SUP. Numbers were changed to protect the innocent.

Looking at basic machine metrics such as memory and CPU, I knew one of our primary site SUPs was busier than the others. Sure enough, this report shows it carries about 70% of the load (top two above). We have three WSUS servers in the primary site, two internal and one Internet facing; the rest are on secondary sites. It also showed a couple of WSUS servers that have been gone for years and one I have no idea about, so some service tickets were placed to address these anomalies.

We did not spend much time making the report fancy, so it shows the counts at the top and breaks down each machine below, which you can export to CSV, XLSX, or whatever to manipulate. In the case of the strange ones above, that lets us identify those systems so we can put in tickets to fix them.

We could not find anything already collected, so we created a MOF to collect the data from the endpoints' registry.

 // WSUS Machine Location  
 #pragma namespace ("\\\\.\\root\\cimv2")  
 #pragma deleteclass("WSUSLocation", NOFAIL)  
 [DYNPROPS]  
 Class WSUSLocation  
 {  
 [key] string KeyName;  
 String WUServer;  
 String WUStatusServer;  
 };  
  
 [DYNPROPS]  
 Instance of WSUSLocation  
 {  
 KeyName="WSUSLocation";  
 [PropertyContext("Local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Policies\\Microsoft\\Windows\\WindowsUpdate|WUServer"),Dynamic,Provider("RegPropProv")] WUServer;  
 [PropertyContext("Local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Policies\\Microsoft\\Windows\\WindowsUpdate|WUStatusServer"),Dynamic,Provider("RegPropProv")] WUStatusServer;  
 };  
  
 #pragma namespace ("\\\\.\\root\\cimv2")  
 #pragma deleteclass("WSUSLocation_64", NOFAIL)  
 [DYNPROPS]  
 Class WSUSLocation_64  
 {  
 [key] string KeyName;  
 String WUServer;  
 String WUStatusServer;  
 };  
  
 [DYNPROPS]  
 Instance of WSUSLocation_64  
 {  
 KeyName="WSUSLocation_64";  
 [PropertyContext("Local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Policies\\Microsoft\\Windows\\WindowsUpdate|WUServer"),Dynamic,Provider("RegPropProv")] WUServer;  
 [PropertyContext("Local|HKEY_LOCAL_MACHINE\\SOFTWARE\\Policies\\Microsoft\\Windows\\WindowsUpdate|WUStatusServer"),Dynamic,Provider("RegPropProv")] WUStatusServer;  
 };  
 // WSUS Machine Location END  

The report needed some tweaking, as our Internet-facing WSUS would be returned as several DNS names based on how the endpoint reported it. You'll see that in the report; you can modify it for your IBCM WSUS or comment it out, but it should work fine unless you have a system called 'IBCM_WSUS'.

 when wuserver00 like '%IBCM_WSUS%:80' then 'IBCM_WSUS:80'  
       when wuserver00 like '%IBCM_WSUS%:8530' then 'IBCM_WSUS:8530'  
       when wuserver00 like 'http://%:80' then SUBSTRING(wuserver00, 8, CHARINDEX(':80',wuserver00) -8 )  
       when wuserver00 like 'http://%:8530' then SUBSTRING(wuserver00, 8, CHARINDEX(':8530',wuserver00) -8 )  
       when wuserver00 like 'https://%:8531' then SUBSTRING(wuserver00, 9, CHARINDEX(':8531',wuserver00) -9 )  
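What those SUBSTRING/CHARINDEX calls are doing is just peeling the scheme and port off the reported URL. A small Python sketch of the same normalization (server names are placeholders):

```python
# Sketch: normalize the reported WUServer URL down to a bare host name,
# mirroring the SUBSTRING/CHARINDEX logic in the report. Your Internet-facing
# WSUS would get its own special-case mapping first, as in the SQL.
def normalize_wuserver(url):
    u = url.strip().lower()
    for scheme in ("http://", "https://"):
        if u.startswith(scheme):
            u = u[len(scheme):]   # SUBSTRING starting at position 8 or 9
            break
    return u.split(":", 1)[0]     # drop :80 / :8530 / :8531

print(normalize_wuserver("http://wsus01.example.local:8530"))  # wsus01.example.local
```

The SQL keeps each port variant as its own CASE branch because CHARINDEX needs the literal ':80', ':8530', or ':8531' to find where the host name ends.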


This Report is provided as-is; no warranty is provided or implied. The author is NOT responsible for any damages or data loss that may occur through the use of this Script. Always test, test, test before rolling anything into a production environment.

You can find the report here.

Thursday, April 12, 2018

ConfigMgr Count Windows 10 Versions Report

I wish there were a built-in report, and am glad to hear one is coming soon. Until then, I have this report we made to show the breakdown of Windows 10 feature releases.

As we did this a while ago, it is a manual process to update when each release comes out (such as the pending 1803 feature release). Towards the bottom, just add a couple of lines for each new version, mostly to add the friendly name.

           Select Case sBuild  
                Case "14393"  
                     sBuild = sB & " (1607)"  
                Case "10586"  
                     sBuild = sB & " (1511)"  
                Case "10240"  
                     sBuild = sB & " (1507)"  
                Case "15063"  
                     sBuild = sB & " (1703)"  
                Case "16299"  
                     sBuild = sB & " (1709)"  
           End Select  
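The same mapping, sketched as a Python lookup table; the 17134 entry is the pending 1803 release mentioned above, and the rest mirror the Select Case:

```python
# Build-number-to-feature-release mapping, mirroring the Select Case above.
# 17134 (1803) is the kind of new entry you add when a release ships.
WIN10_VERSIONS = {
    "10240": "1507",
    "10586": "1511",
    "14393": "1607",
    "15063": "1703",
    "16299": "1709",
    "17134": "1803",
}

def friendly_build(build):
    """Return 'build (version)' when known, otherwise the raw build number."""
    ver = WIN10_VERSIONS.get(build)
    return f"{build} ({ver})" if ver else build

print(friendly_build("16299"))  # 16299 (1709)
```

Unknown builds fall through unchanged, which is also what the VBScript does when no Case matches.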


This Script is provided as-is; no warranty is provided or implied. The author is NOT responsible for any damages or data loss that may occur through the use of this Script. Always test, test, test before rolling anything into a production environment.

You can find the report here.


Sunday, March 25, 2018

2008R2/7 March 2018 Cumulative and vmxnet3 NIC

Hmm. I thought I posted this last week. Whoops!

VMware support notified us that there were issues with two Microsoft patches released this month. I'm sure others were notified as well; we would have found this out following our Patch Testing Group process, which I loosely cover here. These updates can cause Server 2008 R2 and Windows 7 virtual machines to lose their IP configuration. The two KBs in question are:
with this item specifically causing the problem
A new Ethernet virtual Network Interface Card (vNIC) may be created with default settings in place of the previously existing vNIC, causing network issues after applying this update. Any custom settings on the previous vNIC are still persisted in the registry but unused.
Both Twitter and Reddit are lighting up over this. My understanding is this issue requires three conditions to apply:
  • OS is Server 2008 R2 or Windows 7
  • NIC is vmxnet3 (therefore on VMWare)
  • IP is statically assigned.
Microsoft has a workaround, which is basically to set up the new vNIC with the IP info of the old one. While most of our systems are on newer versions of Windows, we have enough impacted systems that touching each one manually is not all that appealing. VMware suggests you not apply either of these updates, and we chose that route to see if Microsoft addresses it more directly. As we have taken a "Virtualize Only" strategy, we have a lot of systems that would be affected, mostly on the server side. Per policy, we have chosen to exclude these systems from the cumulatives for now until we decide on a more elegant resolution. Since we use ConfigMgr to patch, I put together some collections to capture the impacted systems.

First up, we have an 'All Virtual Systems' collection that sources from 'All Systems', which we will pivot off of.

 select SMS_R_SYSTEM.ResourceID,SMS_R_SYSTEM.ResourceType,SMS_R_SYSTEM.Name,SMS_R_SYSTEM.SMSUniqueIdentifier,SMS_R_SYSTEM.ResourceDomainORWorkgroup,SMS_R_SYSTEM.Client from SMS_R_System where SMS_R_System.IsVirtualMachine = "True"  

as well as an 'All Physical Systems' collection to break these out. As I am always thinking of the future rather than focusing on this one issue, I then collected all the virtual systems with a VMware vmxnet3 NIC and a static IP into a collection called 'All Virtual Systems with VMXNet3 NIC and Static IP':

 select SMS_R_SYSTEM.ResourceID,SMS_R_SYSTEM.ResourceType,SMS_R_SYSTEM.Name,SMS_R_SYSTEM.SMSUniqueIdentifier,SMS_R_SYSTEM.ResourceDomainORWorkgroup,SMS_R_SYSTEM.Client from SMS_R_System inner join SMS_G_System_NETWORK_ADAPTER on SMS_G_System_NETWORK_ADAPTER.ResourceID = SMS_R_System.ResourceId inner join SMS_G_System_NETWORK_ADAPTER_CONFIGURATION on SMS_G_System_NETWORK_ADAPTER_CONFIGURATION.ResourceID = SMS_R_System.ResourceId where SMS_G_System_NETWORK_ADAPTER.DeviceID = SMS_G_System_NETWORK_ADAPTER_CONFIGURATION.Index and SMS_G_System_NETWORK_ADAPTER.Manufacturer like "VMWare%" and SMS_G_System_NETWORK_ADAPTER_CONFIGURATION.IPEnabled = 1 and SMS_G_System_NETWORK_ADAPTER.Name like "vmxnet3 Ethernet Adapter%" and SMS_G_System_NETWORK_ADAPTER_CONFIGURATION.DHCPEnabled = 0 and SMS_G_System_NETWORK_ADAPTER_CONFIGURATION.IPAddress is not NULL  

Additionally those with DHCP.

 select SMS_R_SYSTEM.ResourceID,SMS_R_SYSTEM.ResourceType,SMS_R_SYSTEM.Name,SMS_R_SYSTEM.SMSUniqueIdentifier,SMS_R_SYSTEM.ResourceDomainORWorkgroup,SMS_R_SYSTEM.Client from SMS_R_System inner join SMS_G_System_NETWORK_ADAPTER on SMS_G_System_NETWORK_ADAPTER.ResourceID = SMS_R_System.ResourceId inner join SMS_G_System_NETWORK_ADAPTER_CONFIGURATION on SMS_G_System_NETWORK_ADAPTER_CONFIGURATION.ResourceID = SMS_R_System.ResourceId where SMS_G_System_NETWORK_ADAPTER.DeviceID = SMS_G_System_NETWORK_ADAPTER_CONFIGURATION.Index and SMS_G_System_NETWORK_ADAPTER.Manufacturer like "VMWare%" and SMS_G_System_NETWORK_ADAPTER_CONFIGURATION.IPEnabled = 1 and SMS_G_System_NETWORK_ADAPTER.Name like "vmxnet3 Ethernet Adapter%"  

Counts didn't seem quite right, so we looked at some systems and learned that instead of 'equal vmxnet3 Ethernet Adapter' the query needed 'like vmxnet3 Ethernet Adapter%', as some systems had multiple NICs, or in one case "vmxnet3 Ethernet Adapter #8" (!!). I am assuming several did not get legacy devices cleaned up after a P2V operation. This put counts more in line with what vCenter showed and identified the vmxnet3 vs e1000 and other NICs in use.
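A quick sketch of why the exact match undercounted, with made-up adapter names:

```python
# Why 'like vmxnet3 Ethernet Adapter%' beats an exact match: Windows numbers
# duplicate devices ("#2", "#8", ...), so an equality test misses them.
# Adapter names below are illustrative examples, not real inventory data.
adapters = [
    "vmxnet3 Ethernet Adapter",
    "vmxnet3 Ethernet Adapter #8",   # e.g. leftover after a P2V, still vmxnet3
    "Intel(R) PRO/1000 MT Network Connection",
]

exact = [a for a in adapters if a == "vmxnet3 Ethernet Adapter"]
like  = [a for a in adapters if a.startswith("vmxnet3 Ethernet Adapter")]
print(len(exact), len(like))  # 1 2
```

The WQL 'like ...%' behaves the same as the startswith filter here.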

We then took it further by narrowing on the impacted OS versions, creating a collection that limits to the above 'All Virtual Systems with VMXNet3 NIC and Static IP' collection. We create collections for each OS version (and feature release for Windows 10), so we just had to include the Server 2008 R2 and Windows 7 collections, and now we have the impacted systems to exclude. We called it 'All 2008R2 and 7 Virtual Systems with VMXNet3 NIC and Static IP'.

For further detail, we created two more collections, one per OS, each limited to the above collection and including just the 2008 R2 or Windows 7 OS collection.

For the SUGs (Software Update Groups), we follow the monthly/yearly method, so we put these two KBs into their own SUG and targeted the 2008 R2 and 7 systems while excluding the impacted systems above. With how we structured software update collections and maintenance window collections, we then had to create a collection that included all systems but excluded the impacted ones.

It took me longer to write this post than to do the actual work! I would expect this to be resolved next month, so the special SUG goes away; however, the initial collections seem pretty useful for the future.


Saturday, March 3, 2018

ConfigMgr Agent Self-Installer

We have a large percentage of remote workers, and they work fine via ConfigMgr ibcm (not to be confused with icbm!). However, as with any system, sometimes the agent needs to be re-installed, or installed to begin with, so to make life easier on both the end user and the techs working the ticket I created a self-installer for the agent. Users get the perception it's installing "something", and the tech can give them a download URL and instruct them to download and run it, knowing it will take care of the issue.

For the user, it's a familiar experience, as this style of installer is used for many products, both free and commercial.

Behind the scenes this extracts the agent files to the users %TEMP% and then runs the agent installer after extraction is completed.

The first step is to put the agent install into a working directory. In my case I pulled it from GPO, as a VBS does all sorts of checks and balances and manages other components such as 1E Nomad, WUA versions, or fixing WMI, similar to Jason's startup script. If you don't have it wrapped, you can just use CCMSETUP.EXE with your ibcm info as you would run it manually.

To create this I am using HM NSI Edit and the NullSoft installer (NSIS). HM NSI Edit has a nice wizard interface to generate the install script file, and from habit I use the NullSoft compiler to turn it into something usable. HM NSI Edit has a compiler, but it doesn't support everything NullSoft does directly. There are other tools that work with NSI files, but I've used these for years with great success.

Start HM Edit and choose File | New Script from Wizard to start its Wizard. On the app info dialog I leave the website blank but you can point to say a KB article URL or support desk.

On the Setup Options dialog you can point to an icon file for a company logo and name the setup file you are going to generate. Additionally, if you want to add support for multiple languages you can do that; I just use English. The Modern GUI is the best looking one, and I would suggest LZMA for compression as it packs the tightest. We'll make a minor tweak to this later.

For the Application directory and license dialog I just use '$TEMP\SCCMAgentInstall' for the app default directory. NSIS is an installer that can put files into Program Files, the registry, Add/Remove Programs, etc.; however, in this case we are only extracting files and having it run the agent install silently. Since it's an internal app, the license file is blank.

Now you get to tell it what to do. On the Application Files dialog, remove the two entries on the right.

Click the Directory icon

and point to the directory where you cached the agent install previously. Validate the destination directory is set to $INSTDIR so it gets put into the temp dir you set a few steps above. If you use this to actually install software you can use different sections and actions; for this use case the defaults are fine.

It then populates all the files from your cache dir. Otherwise, leave this dialog alone.

On the Application Icons dialog you want to remove the checkboxes and shortcuts so this is all blank.

Now you get to set the switches for your install. If you have a wrapper like me, you just enter the exe and move on; if you do not and are calling CCMSETUP.EXE directly, you can put your switches here as shown below. Additionally, there is an option for a readme if you have some sort of support KB. Note it is text and needs to be in the files you imported.

For the uninstaller dialog, just uncheck the box, as there is no uninstaller in this case.

Finally, on the finish dialog, select to save the script and convert relative paths. Do not compile, as we will make some changes within the script file first.

Open the saved script file in your favorite text editor. The first change is to not put this "install" into the registry: just comment out (or delete) the PRODUCT_DIR_REGKEY line. Note NSI files use a semicolon for comments.

 ;!define PRODUCT_DIR_REGKEY "Software\Microsoft\Windows\CurrentVersion\App Paths\CMSEtup.EXE"  

Modify the SetCompressor setting to add the /SOLID switch. This will cause it to treat the files as one big blob instead of individual files, so you get greater compression.

 SetCompressor /SOLID lzma  

Comment out (or delete) the MUI_FINISHPAGE_RUN line. This puts a checkbox on the final dialog of the installer to run the program after closing, in this case CMScript.exe. You run CMScript.exe (or CCMSETUP) silently later in the file, so there is no need to give the user an option to uncheck it since we are forcefully running it.

 ; By uncommenting the next line you can run CMScript after NSIS exits via the Run checkbox. If so then you need to  
 ; comment out the CMScript section below  
 ;!define MUI_FINISHPAGE_RUN "$INSTDIR\CMScript.exe"  

That comment is referencing this section, which runs CMScript.EXE silently without any user choice. Since we are installing the ConfigMgr agent, this is the behavior we want.

 Section "CMScript"  
  File "SCCMAgentInstall\CMScript.exe"  
  Exec "$INSTDIR\CMScript.exe"  
 SectionEnd  

That should be all the changes you need to make, though you can tweak as you test before making it available to end users. To compile I use NSIS, but HM will do it as well. Just open NSIS, click 'Compile NSI scripts', then drag the script file into the new window it creates and away it goes.

Once complete, just close it out and go test your installer to make sure the behavior is what you want and that the agent actually installs. Note this compressed the install 77%, so it's pretty small and self-contained in an EXE.

So what do you do when you update the agent install because the backend was upgraded? You can go through the entire process above, or just edit your NSI file to update the source files. I have not found an easy way to modify the text file directly, so I actually go through the wizard, ignore all the steps, and stop on the Application Files dialog to import the updated files.

Then just copy the MainSection from the new file and replace it in the original script file.

 Section "MainSection" SEC01  
  SetOutPath "$INSTDIR"  
  SetOverwrite try  

Be sure to update other sections of the file such as the version. Then compile again and test.

If you want to get really creative, there is a huge community and documentation around NullSoft Installer for suggestions. In my case I just needed a simple way for remote users to install or re-install the latest agent under instruction from support staff. For example, I wanted to polish it a little and hide the details in the user's install dialog, so I changed this setting from show to hide:

 ShowInstDetails hide  


Friday, February 23, 2018

FreeNAS and ESXi config backup

My FreeNAS was running off a single USB stick, so I grabbed another one to set up a mirror, but it was a few blocks smaller than the current one so it would not mirror. Instead of spending time making it work, I just took a backup of the config, re-installed to the mirror, and restored the config. Super simple. This made me think I should get some automation in place to back up its config instead of doing it manually whenever I make changes in the GUI.

Doing some research on the FreeNAS forums, you can just copy the config DB from /data/freenas-v1.db to somewhere else. Since I'm using v11, it's a simple 'cp -a' script. Being FreeNAS, I thought snapshots would be useful for change control, so I created a new dataset (sysadmin) with max compression and copied the config to that location.

Then I thought someone else must have done something more elegant, so I went looking around and found a nice script from Spearfoot on GitHub that handles this, with the bonus of backing up the config of an ESXi host, which I also needed.

I created the directory /root/bin, as it's in the path so it's easy to keep track of, and put the script there.

I ended up changing the script, as it puts a timestamp in the file name; due to my snapshots I didn't need it. Here is old vs new.


I'll still have to manually clean up files when the version changes; however, I wanted to retain the version in the name for compatibility reasons. This gives me a nice single daily file, and the snapshots handle revision control.

 -rw-r----- 1 root wheel 933888 Feb 22 03:00 freenas-FreeNAS-11.1-U1-f7e246b8f.db  

For the ESXi host, this script will SSH over and pull the config. To automate it I first had to set up key-based authentication between the two systems, since interactive logins are disabled on all my hosts and I didn't expect these two to talk to each other this way. You can go here or here for more detail on the process if you are unfamiliar. These are the commands:

 #cd /root/.ssh  
 #ssh-keygen -N "" -f id_esxi  
 #cat | ssh root@ESXI_HOSTNAME_OR_IP_ADDRESS 'cat >>/etc/ssh/keys-root/authorized_keys'  

The first line just puts you into root's ssh dir. The second generates a passphrase-less key pair; think hard here about whether that works in your environment. Being my homelab, with interactive logins disabled everywhere, I was OK with the risk. The third line copies the public key to the ESXi host.

Once the keys are in place on each system, you just ssh with the -i switch:

 ssh -i id_esxi root@ESXI_HOSTNAME_OR_IP_ADDRESS  

For the GitHub script, I had to add the -i switch to these lines so they use the key:

 esxihostname=$(ssh -i /root/.ssh/id_esxi root@"${esxihost}" hostname)  
 esxiversion=$(ssh -i /root/.ssh/id_esxi root@"${esxihost}" uname -a | sed -e "s|VMkernel ||;s|$esxihostname ||")  
 esxiconfig_url=$(ssh -i /root/.ssh/id_esxi root@"${esxihost}" vim-cmd hostsvc/firmware/backup_config | awk '{print $7}' | sed -e "s|\*|$esxihostname|")  

This all works via shell and I now have backups of my two main systems.

 -rw-r--r-- 1 root wheel  20623 Feb 22 03:00 esxi-configBundle.tgz  
 -rw-r----- 1 root wheel 933888 Feb 22 03:00 freenas-FreeNAS-11.1-U1-f7e246b8f.db  

Next up was the cron job, so head over to the FreeNAS GUI under Tasks | Cron Jobs and create it. I set it for 3 AM. For the command I am using:

 sh /root/bin/  

Finally create the snapshot for 4AM.

The script will also send an email which is nice.

 Configuration file saved successfully on Thu Feb 22 03:00:00 MST 2018  
 Version: FreeNAS-11.1-U1 (f7e246b8f)  
 File: /mnt/Pool/sysadmin/freenas-FreeNAS-11.1-U1-f7e246b8f.db  
 Version: 6.5.0 #1 SMP Release build-5969303 Jul 6 2017 21:22:25 x86_64 x86_64 x86_64 ESXi  
 File: /mnt/Pool/sysadmin/esxi-configBundle.tgz  

Almost done. RAID is not a backup, so I still need to do something for fire/theft etc. To address that, I created a weekly cron job to copy this from the sysadmin dataset into my main dataset, which gets rsynced offsite daily. I may change this to daily so I have the most current configs offsite.

With about a week's worth of backups so far, it's looking good.


For a Linux/UN*X host, you would put it in the account's ~/.ssh/authorized_keys file.

 cat id_hostname.pub | ssh username@hostname_or_IP_Address 'cat >>/home/username/.ssh/authorized_keys'  
 cat id_hostname.pub | ssh root@hostname_or_IP_Address 'cat >>/root/.ssh/authorized_keys'  

For other commands that ride on SSH, such as rsync, you can tell them to use the key:
 rsync -a -h --delete -e "ssh -i /root/.ssh/id_hostname" /path/to/source/ username@hostname_or_IP_Address:/path/to/remote  


Wednesday, February 14, 2018

Dell OneShot Driver Update

Sometimes when there is a problem with a machine, an older driver is at fault. For firms that have standardized on Dell, they provide a nice tool called Dell Command | Update, made to keep the driver sets of their business lines (OptiPlex, Precision, or Latitude, for example) up to date.

It installs an agent that runs on a schedule (monthly by default) to keep the system up to date on drivers. It queries drivers from Dell directly (or via an on-site repository), so you get the latest drivers easily and automatically. As my firm has very few egress points, I chose not to use it in this capacity; however, it does have a CLI executable! While we use ConfigMgr to do driver management as issues dictate (needing a newer nVidia non-ISV driver, for example), I put together a package in ConfigMgr for the techs to manually update all drivers on a device when it comes in for service, known as the "OneShot Driver Update".

You can find install instructions to get Command | Update installed on a PC. Once installed, just copy the install folder to a source folder in SCCM. I created two BAT files that perform the following:

  • Remove logs from a previous run
  • Pause Bitlocker
  • Update Drivers
  • Resume Bitlocker if no restart needed
  • Optionally copy the logs to a server location

As you are generally updating drivers that impact the boot chain (yes, the video driver at times), you need to pause BitLocker. Older versions of the tool did not handle BitLocker, which is what started me down the path of a simple BAT wrapper for the tool. The Update Drivers bullet above is handled via a specific BAT and custom XML file, though I have thought about doing a menu in the BAT at some point to combine them. For the XML, you can create it in the GUI version of the tool, save it out, and then use it to update only the categories you want.
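A stripped-down sketch of what such a wrapper might look like. The log location, XML file name, and dcu-cli path are placeholders, the /policy switch is from the older 2.x/3.x-era CLI, and the reboot-required exit code is an assumption, so verify everything against your installed version:

```bat
@ECHO OFF
:: Remove logs from a previous run (hypothetical log location)
IF EXIST "%SystemDrive%\Dell\UpdateLogs" DEL /Q "%SystemDrive%\Dell\UpdateLogs\*.*"

:: Pause BitLocker so boot-chain driver updates do not trigger recovery
%WINDIR%\System32\manage-bde.exe -protectors -disable C:

:: Update drivers using the XML policy exported from the GUI
"%ProgramFiles(x86)%\Dell\CommandUpdate\dcu-cli.exe" /policy "%~dp0DriversOnly.xml"

:: Resume BitLocker only if no restart was requested (exit code 1 assumed to mean reboot required)
IF NOT %ERRORLEVEL%==1 %WINDIR%\System32\manage-bde.exe -protectors -enable C:
```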

I have created two file sets: one that does all device drivers yet skips applications, and a second that only updates the system firmware, aka BIOS or UEFI. You just run the appropriate BAT as an admin; they are even called '_RunAsAdmin.BAT'.

For SCCM, we just have a package with both BATs called out as Programs and have them published via our internal Package Downloader Tool (PDT). Otherwise you can simply advertise it as available to run from Software Center, or just put it on a network location somewhere.


This Script is provided as-is; no warranty is provided or implied. The author is NOT responsible for any damages or data loss that may occur through the use of this Script. Always test, test, test before rolling anything into a production environment.

You can find these two scripts here.


Monday, January 29, 2018

Old School BitLocker Enable Script

A friend of mine has a small client with a few hundred systems. Recently they identified a business need to encrypt all their devices, so he asked me for some assistance. As they were on Windows 10, this would be an easy exercise, but one I would have to do differently due to their maturity and lack of something like licensed MBAM or third-party options, so we elected to use native BitLocker with AD DS integration. Instead of using PowerShell, we chose to do it old school so it was easier to follow.

We chose to do this in three steps:
  1. Enable TPM
  2. Configure Bitlocker
  3. Encrypt with Bitlocker
Luckily they were over 95% Dell OptiPlex systems, so it was pretty easy. For the TPM, we used Dell Command | Configure (CCTK) to create SCE files. These were pushed out via GPO as a DOS script. The script does these tasks:

  • Checks for a dropper file and exits if found; if not, creates the dropper file
  • Detects 32-Bit or 64-Bit so it runs the right SCE
  • Initiates a restart for the TPM to actually be set up

Per Dell requirements, you have to set a firmware (BIOS) password if there is none, then turn on and enable the TPM, and finally reset the password. You can follow the process in this White Paper by Dell instead of me rehashing it. This script is attached at the bottom.
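A minimal sketch of that GPO script; the dropper path, share, and SCE file names are all placeholders for whatever your CCTK export produced:

```bat
@ECHO OFF
:: Exit if the dropper file says we already ran; otherwise create it
IF EXIST "%ProgramData%\TPMEnable.tag" GOTO :EOF
ECHO ran>"%ProgramData%\TPMEnable.tag"

:: Detect 32-bit vs 64-bit so the right SCE runs
IF DEFINED ProgramFiles(x86) (
  "\\server\share\CCTK\EnableTPM_x64.sce"
) ELSE (
  "\\server\share\CCTK\EnableTPM_x86.sce"
)

:: The TPM is only actually set up after a restart
%WINDIR%\System32\shutdown.exe /r /t 300 /c "Restarting to finish TPM setup"
```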

Next we had to configure BitLocker, and this was done via GPO, choosing things such as 128-bit vs 256-bit and XTS vs CBC for Windows 10. We went with 128-bit XTS, and also configured it to escrow the key in AD.

Finally, we had to start encryption. Some people think you just set the GPO policy and the system starts encrypting. This is not true; GPO just sets all the settings and preferences. You still need to trigger encryption. We did this also via a GPO startup script, a week after using GPO to enable the TPM. It created a scheduled task to run the script.
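Creating the scheduled task can be a single schtasks line; the task name and script path here are placeholders:

```bat
:: Create a task that runs the encryption script as SYSTEM at startup
schtasks /Create /TN "Enable BitLocker" /TR "\\server\NETLOGON\EnableBitLocker.bat" /SC ONSTART /RU SYSTEM /F
```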

While their %SYSTEMDRIVE% is C:, some of these systems have additional volumes on secondary drives that they needed to encrypt as well. I started with a for loop like this one, but it was not that elegant.

 ::look for drives  
 for %%a in (a b c d e f g h i j k l m n o p q r s t u v w x y z) do if exist %%a:\nul (   
 Call :ENCRYPT %%a  
 )  
 Goto EXIT  

However, these being OptiPlex units, they had optical drives, which meant the OS was on C: and D: (or maybe E:) was the optical drive, so I went a different path. Additionally, they had network shares set up via GPO. While manage-bde would error out in these two situations, it was not that pretty, so I went with a different for loop that used diskpart. I modified one used previously for other tasks; I found it online and unfortunately I do not recall where, to give credit.

   for /f "delims=" %%i in ('^  
     echo list volume ^|^  
     diskpart ^|^  
     findstr Volume ^|^  
     findstr /v ^  
     /c:"Volume ### Ltr Label    Fs   Type    Size   Status   Info"^  
     ') do (  
     set "line=%%i"  
     set letter=!line:~15,1!  
     set fs=!line:~32,7!  
     if not "    "=="!fs!" (  
       if not " "=="!letter!" (  
         call :Encrypt !letter!  
       )  
     )  
   )  

This spits out any physical volume simply as 'C' or 'E', which then calls the :Encrypt function. It will put the key into both AD and the TPM, and then encrypt the volume. At the end, it will prompt the user to restart, as a restart is needed for the system drive to start encrypting.
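The :Encrypt function opens with a small prologue. A minimal sketch of it, reconstructed from how %DRIVELETTER% is used in the rest of the script:

```bat
:Encrypt
:: The loop passes a bare letter such as C, but manage-bde wants C:
SET DRIVELETTER=%1:
```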


This simply sets up shop. The loop passes C, for example, but manage-bde wants the volume as C:, so this addresses that and also changes to a friendlier variable used throughout the rest of the script. You could technically pass this via the loop by using:

      call :Encrypt !letter!:  

above. Since this is running via GPO we have a check to exit out if any volumes are already encrypted.

 ::Detecting if Bitlocker is already on  
 %WINDIR%\System32\manage-bde.exe -status %1 | FIND "Protection On" > nul  
 IF %ERRORLEVEL%==0 EXIT /B  

In addition, I put in some friendliness in case it is run outside of the GPO, so there are ECHO statements throughout as well as the initial header.

 ECHO Encrypting Volume %DRIVELETTER% your PC, be patient . . .  
 ECHO There is no Need to write down the numerical password below  
 TITLE Encrypting your PC, be patient . . .  

The actual meat of it is to create the protectors and encrypt the volume. First it creates the password protector, which then gets put into Active Directory per the GPO, then it enables the TPM protector, and finally it starts the encryption.

 ::Create Recovery Key  
 ECHO Create Recovery Key  
 %WINDIR%\System32\manage-bde.exe -protectors -add %DRIVELETTER% -recoverypassword  
 ::Create TPM Key  
 ECHO Create TPM Key  
 %WINDIR%\System32\manage-bde.exe -protectors -add %DRIVELETTER% -tpm  
 ::Enable Bitlocker on Windows Drive  
 ECHO Enable Bitlocker on Windows Drive  
 %WINDIR%\System32\manage-bde.exe -on %DRIVELETTER%  

Finally we need to exit out. If a volume was encrypted it will set a variable and exit the loop. Once all volumes are parsed it will initiate a restart which is when the Windows volume actually encrypts.

 Set BLEnabled=YES  
 EXIT /B  
 IF %BLENABLED%==YES %WINDIR%\System32\shutdown.exe /r /t 300 /c "IT Department made a change and your workstation will restart in 5 Mins. Questions? Please open a ticket with IT Support."  

That's it. They were able to encrypt several hundred systems quickly to meet their business need, I did not have to spend a great deal of time helping my friend out, and it was real easy for them to follow and understand how it worked. I would say it took you longer to read this than it took me to write it.

On the lab system you can see the key is escrowed in AD, and it matches if you manually print the key to PDF. AD also holds all the previous keys for that machine object; my friend ran it many times to validate it worked.


This Script is provided as-is; no warranty is provided or implied. The author is NOT responsible for any damages or data loss that may occur through the use of this Script. Always test, test, test before rolling anything into a production environment.

You can find these two scripts here.


Monday, January 8, 2018

Windows 10 In-Place Upgrade Assessment Error Handling

As I progress toward making Windows 10 available to end users, I needed to polish the error handling. While I am using the Upgrade Readiness tool that Microsoft provides to target successful systems, I am still performing one final check by having the Task Sequence do an assessment before continuing and capturing those results.

You just insert the Upgrade Operating System step and select the box 'Perform Windows Setup compatibility scan without starting upgrade' at minimum. I have also selected the other checkboxes below it, such as the 'Ignore any dismissible compatibility messages' checkbox. If unchecked, this will, for example, trigger a failure for an incompatible driver that gets removed anyway. Useful if you have some weird driver for a microscope-type device (I do!), but not useful if you accept the removal that the upgrade does. For those types of endpoints, the Readiness tool has identified them.

I also enabled the last two checkboxes. These will cause the eval step to pull the latest eval info from Microsoft instead of using the data shipped on the ISO. There may be some proxy concerns here.

Here is what that part of the Task Sequence looks like:

For the initial roll out to IT, I am using Niall C. Brady's dialog PowerShell script, so I won't cover that part here. I am working on a template script that pulls an HTML-edited file to be more polished for an end user when I get to that point, but Niall's script works great.

The 'Upgrade Assessment' step outputs to the read-only variable _SMSTSOSUpgradeActionReturnCode. SETUP.EXE actually outputs in hex, whereas the variable is in decimal. Looking at Microsoft's blog post, these are the major exit codes. I converted the hex to decimal with a converter so I work in the same format ConfigMgr does.

  • No issues found: 0xC1900210 (3247440400)
  • Compatibility issues found (hard block): 0xC1900208 (3247440392)
  • Migration choice (auto upgrade) not available (probably the wrong SKU or architecture): 0xC1900204 (3247440388)
  • Does not meet system requirements for Windows 10: 0xC1900200 (3247440384)
  • Insufficient free disk space: 0xC190020E (3247440398)
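The hex-to-decimal conversion doesn't actually need a separate converter; any POSIX shell can do it with printf:

```shell
# Convert SETUP.EXE's hex exit code to the decimal form ConfigMgr reports
printf '%d\n' 0xC1900210   # prints 3247440400
```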

The 'Assessment Errors Detected' group has a condition of _SMSTSOSUpgradeActionReturnCode ≠ 3247440400 whereas the 'Upgrade the Operating System' group has _SMSTSOSUpgradeActionReturnCode = 3247440400.

Under the 'Assessment Errors Detected' Group, each of the 4 sub groups matches a code in _SMSTSOSUpgradeActionReturnCode. For example, the 'Compatibility Issues found (hard block)' group has a condition for _SMSTSOSUpgradeActionReturnCode = 3247440392.

Then each error code displays a custom message and errors the Task Sequence out. For the 'Compatibility Issues found (hard block)' error, it shows:

The Upgrade Assessment detected an error which is preventing a successful upgrade and it must be mitigated first. There is an application or driver that must be removed first. This is generally due to an old version of Sophos Safeguard present. Please contact the IT Service Desk for assistance.

Again, these groups are just a final sanity check. While the Readiness tool identifies systems impacted by the second bullet, ConfigMgr collections are identifying the last two bullets, for example.


This Task Sequence is provided as-is; no warranty is provided or implied. The author is NOT responsible for any damages or data loss that may occur through the use of this Task Sequence. Always test, test, test before rolling anything into a production environment.

Per request I have made a sample Task Sequence available here so you can see it in action.

Friday, January 5, 2018

Move the System Reserved Partition

One of my techs came to me with a Windows 7 system where they had cloned the drive to a bigger one using something like this from StarTech. The problem was that the System Reserved Partition was at the back of the drive instead of the front, so after cloning it was now in the middle, which prevented the Windows partition from being expanded via Disk Manager.

This system in particular was an older one that was deployed originally without the System Reserved Partition, due to us using a 3rd-party FDE when it was deployed; it was later transitioned to BitLocker. That, and SCCM/MDT didn't initially create one back then.

These cloners work well versus a software refresh when new space is all that's needed. They also work well to transition an older system from HDD to SSD, assuming the SSD is bigger. Just make sure you use SSDTweak or TweakSSD or whatever else is out there on Windows 7; 7 only adjusts for SSDs at install, whereas Windows 10 will generally adjust itself after detecting an SSD is present. We will clone when there's still life left in the device but a hardware refresh is in its near future.

There are a few options to address this in the Windows world that take more labor, such as using BCD to move the Boot Manager to the Windows partition, decrypting, etc.; however, I like to do the simplest thing. I like and use gparted, so I thought I would just write up how I've done it in the past, in hopes of helping a few others that visit.

To prep, you need to download the gparted LiveCD ISO and convert it to USB with LiveLinuxUSB. It boots faster, and many systems no longer have optical drives.

Suspend BitLocker if present, then boot up gparted. During bootup you are prompted for keyboard, language, etc.; in most cases you can just accept the defaults.
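Suspending BitLocker is one line from an elevated prompt:

```bat
:: Disabling the protectors suspends BitLocker; the data stays encrypted
manage-bde -protectors -disable C:
```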

Once gparted has started up, it should select the first disk and show the layout in two formats. Note the System Reserved partition in red. The goal is to migrate the System Reserved partition to the back end of the drive. This is done using the following procedure.

In my case this is done by selecting /dev/sda2 and then selecting the Resize/Move button. 

In the popup, simply drag the partition to the right side of the bar. While you're here, you can first resize it to the 500MB that Windows 10 sets this partition to by default. Then select Resize/Move and OK the warning.

Just to note, we are not actually performing any moves or resizes at this point; we are simply creating a chain of commands that gparted will follow once applied. You can either apply at the end of each step, or wait till the bitter end and do it all at once; it's up to you. Either way, when you are done, you should see your drive laid out the way you want.

Once it's laid out the way you want and the actions at the bottom are correct, just click Apply and away it goes. Since the Reserved partition is so small, it only takes a moment or two to move it.

Once done, remove the gparted media and restart into Windows. It will run a CHKDSK. Then after startup you can go to Disk Manager and see your handiwork.

Then it's a simple matter of extending the volume for your end result.

After extending, BitLocker, if present, will start encrypting the larger volume, and this can take a while since Windows 7 encrypts the entire volume, whereas newer versions can be configured for used space only. Once this is done you can resume BitLocker and go about your day. Additionally, depending on how you are managing recovery keys, be sure that is updated if it's not something like MBAM or a 3rd-party manager.