Should the image contain hotfixes or not?

One more post in my WSUS/Hotfix series of blog posts. I’ve been asked a couple of times how we approve Hotfixes and whether we include them in our images.

I’ve made an Automatic Approval rule that approves all Hotfixes to the various Computer Groups with a Deadline, like this.

[Screenshot wsus16: the Automatic Approval rule]

And this is what the details look like:

[Screenshot wsus17: approval rule details]
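
If you prefer to script the approvals, the same thing can be done against the WSUS API. A minimal sketch, where the group name, the deadline and the “Hotfix” search term are all assumptions:

```powershell
# Hypothetical sketch - approving hotfixes with a deadline through the
# WSUS API. Group name, deadline and the "Hotfix" search term are assumptions.
$wsus     = Get-WsusServer
$group    = $wsus.GetComputerTargetGroups() |
    Where-Object { $_.Name -eq "Hyper-V Hosts" }          # assumed group name
$deadline = (Get-Date).AddDays(14)                        # assumed deadline

# Approve every matching update for that group, with a deadline
$wsus.SearchUpdates("Hotfix") | ForEach-Object {
    $_.Approve(
        [Microsoft.UpdateServices.Administration.UpdateApprovalAction]::Install,
        $group,
        $deadline)
}
```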

First of all, any server that could cause problems if it rebooted automatically doesn’t get a Deadline; that is, servers like Hyper-V Hosts and SOFS Nodes. Those servers are managed by SCVMM’s (System Center Virtual Machine Manager) Patch Management. VMM has a feature to put a cluster node in maintenance mode, automatically drain the node of VMs, patch it, and then bring the node back online again before it takes the next node. So we handle all patching of clustered servers from SCVMM, while we let the WSUS Client handle all other servers. We might add SCCM to the mix some day and let it handle all of the servers, but as most of our customers don’t want to run SCCM to manage their Fabric, this is the way we do it for now.
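
As a side note, that remediation can also be kicked off from PowerShell. A minimal sketch, assuming update baselines are already assigned in VMM (the cluster name is hypothetical):

```powershell
# Hypothetical sketch - driving VMM's cluster-aware patching from PowerShell.
# Assumes update baselines are already assigned; names are made up.
Import-Module VirtualMachineManager
$cluster = Get-SCVMHostCluster -Name "HVCluster01"

# Refresh compliance status for every node in the cluster
$cluster.Nodes | ForEach-Object {
    Get-SCVMMManagedComputer -ComputerName $_.FQDN | Start-SCComplianceScan
}

# Remediate one node at a time: maintenance mode, drain the VMs with
# live migration, patch, reboot, then continue with the next node
Start-SCUpdateRemediation -VMHostCluster $cluster -UseLiveMigration
```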

By putting a Deadline on the approval, we know the hotfix will be installed sooner or later. And if a Patch Tuesday falls before that date, the hotfixes will simply be installed together with the regular updates.

Notice that the hotfix is NOT approved for All Computers and NOT for Unassigned Computers. How come?

When we build a VM image for any OS, it’s done automatically through MDT. Those VMs end up in Unassigned Computers since they don’t have a role yet, and we don’t want any Hotfixes in the images. Of course, if there is a mandatory hotfix which is needed to build the image or to deploy it, that one will be included!

The reason we don’t want any hotfixes in an image is quite simple if you think about it. There are three main reasons really.
The first one is that if we build an image in August which contains hotfixes, and deploy that image 3 months later, there is a big chance that the hotfix we had in the image has been replaced by a proper update from Microsoft, so there was no use for the hotfix in the first place.
Second, when we create an image, we don’t add Clustering, Hyper-V and other roles and features to the image, right? So Windows will only install the hotfixes for the core OS. When the image is later deployed and someone adds the Hyper-V Role, the hotfixes for that role have to be installed at that point anyway. So the server wouldn’t be fully patched either way, and installing 5 or 15 hotfixes automatically after deployment doesn’t really make much of a difference.
Third, a minor reason is that we normally use the same images for Fabric, Workload and Tenants, and we like to keep them quite generic.

Here is a great blog post about making reference images from my colleague Mikael Nystrom.


Live (VSM) migration fails with “mirror operation failed” and “access is denied” error

When doing a Live Migration with VSM from SCVMM (System Center Virtual Machine Manager), moving a Virtual Machine from one Cluster to another Cluster and at the same time to a new Storage Location, you may get an error message similar to this:

The strange thing is that a destination folder is created in the new location; the migration just never copies any content to that folder and aborts with the Access Denied error. But if you shut down the VM first, so it’s just a migration over the network, it works!

The solution is to give the SOURCE Cluster Write Access on the DESTINATION Storage. When you do a VSM migration, the destination Hyper-V host creates the directory on the SOFS Node, but it’s the Hyper-V Host that owns the VM that copies the VHD files to the destination storage. And since the current owner by default does not have access to write there, the operation fails. One would think that VMM should grant permissions to a host when it knows the host needs to write in that location?

Maybe it’s fixed in the next version, but until then, there are two ways to do this.
Solution 1) In VMM, add the Destination SOFS Shares as Storage on the Source VM Hosts, like this. That will make VMM add the VM Hosts with Modify Permissions on the SOFS Shares so they can write there.

[Screenshot sofs2: adding the SOFS shares as storage on the source hosts in VMM]
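
If you prefer scripting it, the same registration can be done through the VMM cmdlets. A minimal sketch, where the share and cluster names are assumptions:

```powershell
# Hypothetical sketch - register a destination SOFS share on the source
# cluster via VMM PowerShell. Share and cluster names are made up.
Import-Module VirtualMachineManager
$share   = Get-SCStorageFileShare -Name "vDisk01"
$cluster = Get-SCVMHostCluster -Name "SourceCluster01"
Register-SCStorageFileShare -StorageFileShare $share -VMHostCluster $cluster
```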

This works quite fine if the Hyper-V Clusters and all Storage are located in roughly the same place. But if you have one compute cluster with storage in one location and another compute cluster with storage in another, there is a risk that you may end up running VMs across the WAN link.

Solution 2) This is the one we used. By not using VMM to grant permissions to the shares, but rather doing it manually, we achieve the same result as above, with the added benefit that a new VM will always be provisioned on the local storage, so there is no (or a lot less) risk of running a VM across the WAN link. Yes, it’s still technically possible to do it, but no one will provision a VM that uses storage in the other datacenter by accident.

You could add each node manually, but we have created a “Domain Servers Hyper-V Hosts” security group in AD that we add ALL Hyper-V hosts to during deployment, and then granted that group both Share and NTFS permissions. All Hyper-V hosts will then automatically have write access to all locations they may need.
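
A minimal sketch of that deployment step, using the group name from above (the ActiveDirectory module is assumed to be available on the machine running it):

```powershell
# Hypothetical sketch - add a freshly deployed host to the security group,
# e.g. as a step in the MDT task sequence. Group name is from the post.
Import-Module ActiveDirectory
Add-ADGroupMember -Identity "Domain Servers Hyper-V Hosts" `
    -Members (Get-ADComputer -Identity $env:COMPUTERNAME)
```

Note that the host has to be rebooted before the new group membership shows up in its access token.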

I wrote these two short scripts to query the VMM Database for the available SOFS Nodes and then use PowerShell to grant permissions on the share and on NTFS.

As all our SOFS Shares are called vDiskXX or CSVXX (where XX is a number), I simply used vDisk* and CSV* wildcards to apply the change to all those shares. You might have to modify it a little to suit your naming standard.
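
A minimal sketch of what such a script can look like, assuming the VMM console module is loaded, that Get-SCStorageFileServer exposes the cluster nodes through a StorageNodes property, and a made-up domain name:

```powershell
# Hypothetical sketch - grant the AD group share and NTFS permissions on all
# shares matching the naming standard. Domain name and the StorageNodes
# property are assumptions.
Import-Module VirtualMachineManager
$group = "CONTOSO\Domain Servers Hyper-V Hosts"

# Ask VMM for the SOFS nodes it knows about
$nodes = (Get-SCStorageFileServer).StorageNodes.Name

foreach ($node in $nodes) {
    $shares = Get-SmbShare -CimSession $node |
        Where-Object { $_.Name -like "vDisk*" -or $_.Name -like "CSV*" }

    foreach ($share in $shares) {
        # Share-level permission (Change = read/write)
        Grant-SmbShareAccess -CimSession $node -Name $share.Name `
            -AccountName $group -AccessRight Change -Force

        # NTFS Modify via the UNC path to the node
        $path = "\\$node\$($share.Name)"
        $acl  = Get-Acl -Path $path
        $rule = New-Object -TypeName System.Security.AccessControl.FileSystemAccessRule `
            -ArgumentList $group, "Modify", "ContainerInherit,ObjectInherit", "None", "Allow"
        $acl.AddAccessRule($rule)
        Set-Acl -Path $path -AclObject $acl
    }
}
```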

Updated Script (2016-02-04):
I got a report that the script was failing on some servers, which I managed to reproduce. Here is an alternative version that connects to each server and executes the ACL change locally via Invoke-Command. It also only changes permissions on Continuously Available (SOFS) shares.
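
A minimal sketch of that variant, under the same assumptions as above:

```powershell
# Hypothetical sketch of the Invoke-Command variant - the ACL change runs
# locally on each node and only touches Continuously Available shares.
$group = "CONTOSO\Domain Servers Hyper-V Hosts"   # assumed domain\group
$nodes = (Get-SCStorageFileServer).StorageNodes.Name

Invoke-Command -ComputerName $nodes -ArgumentList $group -ScriptBlock {
    param($group)

    Get-SmbShare -Special $false |
        Where-Object { $_.ContinuouslyAvailable } |
        ForEach-Object {
            # Share-level permission
            Grant-SmbShareAccess -Name $_.Name -AccountName $group `
                -AccessRight Change -Force

            # NTFS Modify on the local folder behind the share
            $acl  = Get-Acl -Path $_.Path
            $rule = New-Object -TypeName System.Security.AccessControl.FileSystemAccessRule `
                -ArgumentList $group, "Modify", "ContainerInherit,ObjectInherit", "None", "Allow"
            $acl.AddAccessRule($rule)
            Set-Acl -Path $_.Path -AclObject $acl
        }
}
```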


Set MPIO Policy via PowerShell for Storage Spaces

Here is a small script to set the MPIO Policy via PowerShell according to Microsoft’s Best Practices for Storage Spaces, as seen here: https://technet.microsoft.com/library/0923b851-eb0a-48ee-bfcb-d584363be668

It will set the Global MPIO policy to Least Blocks and then change the MPIO Policy for all SSDs to Round Robin. Note, though, that mpclaim.exe may use a different DiskID than PowerShell/Device Manager does.
So the script has a built-in way to adjust the DiskId if needed, but you have to verify and set the value manually before running the script!
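
A minimal sketch of the approach described above; the offset value is the part you must verify by hand (for example with mpclaim -s -d) before running:

```powershell
# Hypothetical sketch - global policy Least Blocks, Round Robin for SSDs.
# Assumes the MPIO feature is installed and that mpclaim's MPIO disk numbers
# equal Get-PhysicalDisk's DeviceId plus a fixed offset.

# Global default load balance policy: LB = Least Blocks
Set-MSDSMGlobalDefaultLoadBalancePolicy -Policy LB

# Manual offset between PowerShell's DeviceId and mpclaim's disk number.
# Verify with "mpclaim -s -d" and adjust before running!
$DiskIdOffset = 0

# mpclaim policy 2 = Round Robin, applied per SSD
Get-PhysicalDisk | Where-Object { $_.MediaType -eq "SSD" } | ForEach-Object {
    $mpioDisk = [int]$_.DeviceId + $DiskIdOffset
    mpclaim.exe -l -d $mpioDisk 2
}
```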