
Jason’s Windows 7 Optimisation & Customisation Guide

This is going to be a living guide, and I will be updating it constantly as I encounter anything worth sharing. Please do come back to this post regularly as a starting point.

This guide shares what I would normally consider and apply to Windows 7 virtual desktops to ensure they are suitably optimised for each environment. Where necessary, I will break out into sub-posts to dive into the details; this post will focus on the higher level.

  • Virtual Machine Hardware
    • Upgrade virtual machine hardware to the latest version supported on the hosting vSphere hosts.
    • Disable unnecessary virtual hardware in the VM BIOS, e.g. serial ports, parallel port, floppy controller.
  • Disable Virtual Machine hot plug [post]
  • Clean up missing devices in Windows Device Manager
  • Installation process of VMware Tools & VMware View Agent [coming soon…]
  • Run the customisation script provided with VMware’s Windows Desktop Optimization Guide found here.
    • How to tailor the script for each deployment [coming soon…]
  • Optimise Windows page file settings [coming soon…]
  • Local Group Policy changes [coming soon…]
    • Enable verbose Windows logon messages
    • Enable loopback processing
  • Set Shutdown script for VMware tools [coming soon…]
  • Removing excess Windows profiles & customising the default profile [coming soon…]
  • Configure Windows Visual Settings [post]
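
Several of the registry-based items above can be generated and reviewed before being applied. Below is a minimal Python sketch, illustration only, that renders a couple of the tweaks (verbose logon messages, visual effects) into .reg file text; the key paths shown are the standard Windows 7 locations, but do verify them against the optimisation guide before using them in your environment.

```python
# Sketch only: render a few Windows 7 VDI tweaks as a .reg file for review.
# Verify every key path against the optimisation guide before applying.

TWEAKS = {
    # Verbose Windows logon messages (Local Group Policy equivalent)
    r"HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\System": {
        "verbosestatus": ("dword", 1),
    },
    # Visual effects: 2 = "Adjust for best performance" (per-user setting)
    r"HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\VisualEffects": {
        "VisualFXSetting": ("dword", 2),
    },
}

def build_reg(tweaks):
    """Render {key: {value_name: (type, data)}} as .reg file text."""
    lines = ["Windows Registry Editor Version 5.00", ""]
    for key, values in tweaks.items():
        lines.append(f"[{key}]")
        for name, (vtype, data) in values.items():
            if vtype == "dword":
                lines.append(f'"{name}"=dword:{data:08x}')
        lines.append("")
    return "\n".join(lines)

print(build_reg(TWEAKS))
```

Importing the resulting file (regedit /s tweaks.reg) applies the settings; keeping the tweaks as data makes it easy to tailor them per deployment.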

A virtual desktop deployment can also encompass applications, and below are some areas to consider.

  • Disable automatic application update checks [coming soon…]
  • Application installation location [coming soon…]

Steps involved in a Greenfield VMware Horizon View Installation

With anything you do, it’s always good to have a design and a plan in mind before you roll up your sleeves and execute. This is particularly critical for any infrastructure deployment. The reason is simple: anything infrastructure related forms part of the foundation for everything that rides on top, and any architecture built on a soft foundation will run into all sorts of problems that are more complex and costly to rectify later on.

Plan and Design is the most critical phase of a VDI deployment, big or small. For anyone considering a VDI solution, it is a starting point that must be treated with care and priority.

This article will not dive into the details of the design phase, but it is important enough that I must stress it first.

What I am focusing on here are the high-level steps for deploying a greenfield virtual desktop environment. They apply to any environment, and I hope they serve as a guide for you.

A. Hardware Setup
Of course, we will first have to rack and stack all the servers, storage & network.
Have them all fired up and tested per the requirements of the environment.
Don’t forget to make sure the BIOS and firmware of all hardware are at least at the levels indicated in the VMware HCL.
I shall skip the details of the actual server, storage and network setup.

B. ESXi Installation
By this point you would have determined all the host names, which servers are to be clustered together, and so on. How and when are these determined? During the Plan & Design phase, of course.
At the end of this stage, all your hosts will be up and running VMware vSphere.

C. Deploy First Virtual Machines & Setup vSphere Clusters
At this juncture, there is no vCenter Server yet; that is the primary target we are trying to bring up. Do bear in mind that vCenter Server has many dependencies: at the very least a database server, Microsoft Active Directory and DNS. So in terms of sequence, here is what I would do:

  1. Create my first AD server with DNS and possibly DHCP
  2. Create the second AD for redundancy
  3. Create the first database server (let’s assume it to be MS SQL)
  4. Create a dedicated VM for vCenter Server SSO (an SSO cluster is a valid consideration here).
  5. Create and install vCenter Server (this will be for the management cluster); add all the relevant ESXi hosts to be managed by this vCenter Server.
  6. Finish setting up all that is needed for this vCenter Server & management cluster.
  7. Now that vCenter Server is operational, we can start to create template virtual machines, which can be used to deploy the remaining server VMs.
  8. Create additional vCenter Servers and hook them up to the same SSO deployed earlier. This is particularly useful for larger deployments with multiple Desktop Blocks & vCenter Servers, as you would probably prefer to enable linked mode.
  9. Enable vCenter Server linked mode; I would only do this for all the vCenter Servers for the desktop blocks. The management block vCenter Server will be kept on its own.
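
The sequence in steps 1–9 is really a dependency graph, and encoding it makes the ordering explicit. A minimal Python sketch (the component names are illustrative, not tied to any tool):

```python
# Sketch: the bring-up order above expressed as a dependency graph.
# Each entry maps a component to the components it depends on.
from graphlib import TopologicalSorter

deps = {
    "AD/DNS (primary)": [],
    "AD/DNS (secondary)": ["AD/DNS (primary)"],
    "MS SQL": ["AD/DNS (primary)"],
    "vCenter SSO": ["AD/DNS (primary)", "MS SQL"],
    "vCenter Server (mgmt)": ["vCenter SSO", "MS SQL"],
    "VM templates": ["vCenter Server (mgmt)"],
    "vCenter Servers (desktop blocks)": ["vCenter SSO", "VM templates"],
    "Linked mode": ["vCenter Servers (desktop blocks)"],
}

# static_order() yields every component only after its dependencies.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

Thinking of the build-out this way also makes it obvious why, say, linked mode cannot be configured until the desktop-block vCenter Servers exist.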

By the time you arrive here, all the vSphere configuration should be complete.

D. File Share for User Profiles and Data
In almost all VDI deployments, there is a need for a Windows file share. These are very useful for backing up user profiles and data, and for keeping things synchronised and available between virtual desktop sessions. The file share can be hosted either on a Microsoft Windows VM or on specialised enterprise Network Attached Storage (NAS).

E. Antivirus Service & Management
No environment should go without an antivirus/anti-malware solution. I strongly recommend a good solution that integrates with VMware vShield Endpoint. This is the stage where that solution is deployed and configured.

F. VMware Horizon View
This is where we start working on the View layer. We’ll deploy all the View manager servers into the Management Block. Some environments may choose to deploy the View Security Servers into DMZ clusters already present in the environment, which is perfectly acceptable as well.

  1. For simplicity, the very first thing to prepare is the SSL certificates for the View Connection Servers, View Security Servers and View Composer servers. Installing with self-signed certificates first, then replacing them with Certificate Authority issued certificates, is also possible.
  2. Create and install the first View Connection Server (VCS).
  3. Create and install the second VCS (a.k.a. the Replica Server).
  4. Any additional VCS will depend on the design; it is not solely dependent on how many concurrent sessions the environment needs to support.
  5. Install View Composer Service – this will either be co-installed with each Desktop Block vCenter Server, or on dedicated virtual machines. The choice depends on several factors, and I shall defer the details to another post.
  6. Create and install any View Security Servers; a minimum of two for production environments.
  7. Complete any View configurations such as global policies, configuration backup timings, etc.
  8. If the environment has multiple desktop admins responsible for different desktops, create folders in vCenter Server & View Admin and grant the relevant permissions.
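
One easy mistake with the certificates in step 1 is missing a server name. A trivial pre-check, sketched here with entirely hypothetical FQDNs, compares the planned certificate SAN list against every name that users and servers will connect to:

```python
# Sketch: sanity-check that the planned certificate SAN list covers every
# View server FQDN. All host names below are hypothetical examples.
required = {
    "view-cs01.corp.example.com",   # first View Connection Server
    "view-cs02.corp.example.com",   # replica server
    "view-sec01.dmz.example.com",   # security server in the DMZ
    "desktops.example.com",         # external / load-balanced name
}
cert_sans = {
    "view-cs01.corp.example.com",
    "view-cs02.corp.example.com",
    "desktops.example.com",
}

missing = sorted(required - cert_sans)
print("missing SANs:", missing)  # any output here means the cert must be reissued
```

Catching a missing Subject Alternative Name before installation is much cheaper than replacing certificates on a live Connection Server later.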

G. Create Virtual Desktops
Now, all the infrastructure components should be up, and you are ready to create the master/parent virtual desktops. I will have a sticky post that dives into the details of creating virtual desktops.

  1. Create new Windows Virtual Desktop VM; this will be the master desktop.
  2. Optimize, customize & harden the virtual desktop.
  3. Create desktop pool based on the master desktop.
  4. Test, test, test: functional tests & performance measurement.

H. Set Up End Points
In many VDI deployments, users may choose to replace existing desktops with lean zero clients or thin clients. There is typically a management tool available to simplify the management of the new fleet of devices, and I do recommend leveraging these tools where available.

vSphere Maintenance in a VDI Environment

I was asked recently what would be a good practice when it comes to performing maintenance on the vSphere layer in a VDI environment. This should be a useful post to share.

I look at this from two angles: the update strategy and the actual execution.

Strategically, I always start with a conservative approach, then decide from there what does and does not apply in the target environment.

Virtual desktops are mostly like any other virtual machines in a vSphere environment; you should be able to vMotion them among hosts. Exceptions are virtual desktops tied to physical hardware, such as those using vDGA.

Now let’s consider the most common situation, where the virtual desktops are free to move about. Where vMotion is concerned, virtual desktops are no different from any other virtual machines; all the rules of what can be moved apply. This allows us to plan according to vSphere maintenance best practices.

Strategic Considerations

  1. Have a separate Test/UAT VDI environment. This should be much smaller in scale, but with hardware as similar to production as possible. That way, for any change you are going to make to production, you can test both the update itself and the update procedure. It will not be pretty if a gung-ho update to production goes south.
  2. Avoid making multiple changes at the same time – for example, keep infrastructure changes separate from virtual desktop changes. The last thing you want is to make too many changes at once and end up with an issue whose actual cause you cannot identify. Sensible change management practice is a must.
  3. Major patches or application changes to virtual desktops should be well documented and tested. Usually, only functional tests are performed. In a virtual desktop environment, which typically has very high consolidation ratios (compared to virtual servers), any change that results in increased resource demand by the virtual desktops can quite quickly lead to performance issues. I recommend doing a resource utilisation measurement as well.

Tactical Considerations

  1. Definitely have at least one host of spare capacity (N+1) in the cluster so that a host can be taken out for maintenance. Clusters hosting mission-critical desktops should be at least N+2, so that even during single-host maintenance there is still spare capacity to handle an unplanned failure of one of the remaining hosts.
  2. Leverage multi-NIC vMotion; ESXi hosts for desktops typically have at least 128GB of RAM. Transferring that much data takes time, and the sooner it finishes the better. Also consider the time needed to rebalance the cluster once the host is back from maintenance. Larger hosts should consider 10Gbps NICs.
     Total time needed per host = pre-maintenance evacuation time + maintenance time + post-maintenance rebalancing
  3. Test to ensure that the applications running in the virtual desktops are not sensitive to vMotion. This is just as with virtual servers: some applications are very network sensitive and cannot tolerate the network switch-over when a VM moves across hosts during vMotion. Applications with high network transfer rates or latency sensitivity are candidates for such tests.
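
The evacuation term in the formula above can be turned into a rough number. A minimal sketch, assuming sustained line-rate transfer (which real vMotion rarely achieves) and an illustrative 70% efficiency factor, so treat the result as a lower bound:

```python
# Rough lower-bound estimate of host evacuation time for maintenance planning.
# The 0.7 efficiency factor is an assumption, not a measured value.

def evacuation_minutes(active_ram_gb, nics, nic_gbps, efficiency=0.7):
    """Time to move active guest RAM off a host over multi-NIC vMotion."""
    throughput_gbps = nics * nic_gbps * efficiency   # usable bandwidth
    seconds = (active_ram_gb * 8) / throughput_gbps  # GB -> gigabits
    return seconds / 60

# e.g. 128 GB of active RAM over two 10 Gbps vMotion NICs
print(round(evacuation_minutes(128, nics=2, nic_gbps=10), 1), "minutes")
```

Remember this is only the first term: total time per host = pre-maintenance evacuation + maintenance + post-maintenance rebalancing, and memory churn during the move stretches it further.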

Take Note

  1. These apply to all types of virtual desktops, whether persistent, floating, full clones or linked clones. The type of virtual desktop does not change its nature as a virtual machine.
  2. Again, these do not apply to virtual desktops which are tied to specific hosts, e.g. vDGA is in use.
  3. Storage vMotion is a whole different topic and not related to the vSphere maintenance case here. I should quickly mention, though, that Storage vMotion is not supported with linked clones.

vChips is now live!

Today, 9th April 2014, VMware is announcing the highly anticipated update to its End User Computing products. The launch event will be broadcast live over the Internet, and you can register to watch it.

Today is also the day I start my new blog, fully dedicated to my experiences with VMware and End User Computing technologies.

I am looking forward to sharing my experience here, and if you have any questions or requests, feel free to ping me via twitter @jasonyzs88.

For now, enjoy the VMware launch in a few more hours.