This is how to set up VMware vSAN on Cisco UCS rack-mount servers connected to 6248UP Fabric Interconnects. In this case it’s for VMware Horizon View, but this will work for a lot of use cases. The boot drive for ESX is going to be a pair of mirrored 32GB FlexFlash cards. These don’t work out of the box, so there are a few steps to get them working. There are also a few extra steps to get the SSDs to show up in ESX.
If your boss pops his head in your cube and asks if you can get a Horizon Workspace PoC stood up, you can confidently answer yes if you’ve read this book. I imagine that as more VMware customers opt for the Horizon Suite, the need to stand up Workspace quickly will only increase.
This book lays out the exact steps needed for a PoC, and the authors are very good at pointing out where scale-out will need to occur as the deployment moves towards production. If you are not familiar with using a vApp, they lay out the steps for creating IP Pools and other tasks that VMware Administrators may not have had a chance to do before.
One thing I did not like was the NFS-only storage for Horizon Data, formerly known as Octopus. That is not the fault of the authors, however: Horizon Data does not support CIFS, which drives me nuts.
The part that got me fired up was pulling SaaS applications and ThinApps into Workspace. The authors showed exactly how to set up Salesforce; if it works with Salesforce, it will work with anything.
To sum up, go get this book! It is an important part of a VDI admin’s toolkit. I’m really glad we have something more than the standard documentation.
vExpert Lior Kamrat over at http://imallvirtual.com has posted a script for deploying VMware View Linked Clones.
I did a vBrownbag podcast at professionalvmware.com on a very similar technique: using Excel to create batch files that automatically generate all the PowerShell and PowerCLI code for deploying fully automated, non-persistent linked-clone pools (with replica tiering) managed by a single AD Global Group.
Lior’s script is a great next step if you are ready to move past batch files; check it out.
Quite a mouthful, eh? If you have a chance to add some SSDs to your blades, though, I think you will be happy with the results. See the VMware vSphere 5.1 Documentation Center for details on how ESX uses host cache for virtual machine swap files.
First, get some SSDs, put them in your B2XX series blades, and configure a local disk policy. I was lucky enough to get two drives per blade, so I set the local disk policy to RAID1.
You could go with RAID0, but I plan on using the local disks for A/V offload with vShield Endpoint protection, so I wanted a bit more surety.
When the blade boots, you will be dismayed to see your new disks listed as “remote” during the ESX install. This is expected; see Scott Lowe’s post on it for an explanation. It isn’t a problem unless you are trying to use the disks for the ESX scratch partition. We are going to use the disks for VM swapping, not for the ESX host itself, so we have one less step to do- see here for a vreference.
Finish your install and either drag the ESX box into vCenter or connect directly to the host with the client. Create a new datastore from the local disks as you usually would. I recommend using a meaningful name, like _Local_SSD. If you use Host Profiles, you will want to uncheck the relevant checkboxes under Storage before pushing the Profile down to other hosts.
With your host selected, go to the “Configuration” tab and look under “Software”. You will see a new link called “Host Cache Configuration”. Click it, and you will notice that the disks you added do not show up; ESX does not recognize them as SSDs yet.
Oh joy, we get to play with PuTTY. Connect to your host with PuTTY (don’t forget to turn on SSH in your security settings) and get ready to paste some commands. Leave your vSphere Client on the “Storage” view; you will want to refer back to it for the super-long naa numbers.
At this point, I could point out the numerous ways you could use PowerCLI, scripts, or the vMA to do the same thing, but I think it is better to learn how to do it from the command line first. Let’s start by getting an understanding of the big list of values we are trying to manipulate. We need to add a new value to the list of possible “Storage Array Type Plugins (SATPs)”. Refer to this great post by Stephen Foskett for more on SATPs and the PSA.
Type “esxcli storage nmp satp rule list” into your PuTTY session and hit Enter to see all the SATPs your host knows about:
~ # esxcli storage nmp satp rule list
VMW_SATP_ALUA_CX     DGC                        CLARiiON array in ALUA mode
VMW_SATP_ALUA        NETAPP                     NetApp arrays with ALUA support
VMW_SATP_ALUA        IBM      2810XIV           IBM 2810XIV arrays with ALUA support
VMW_SATP_ALUA                                   Any array with ALUA support
VMW_SATP_MSA         MSA1000 VOLUME             MSA 1000/1500 [Legacy product, Not supported in this release]
VMW_SATP_DEFAULT_AP  HSVX700                    active/passive HP StorageWorks SVSP
VMW_SATP_DEFAULT_AP  HSV100                     active/passive EVA 3000 GL [Legacy product, Not supported in this release]
VMW_SATP_DEFAULT_AP  HSV110                     active/passive EVA 5000 GL [Legacy product, Not supported in this release]
VMW_SATP_EQL         EQLOGIC                    All EqualLogic Arrays
VMW_SATP_EVA         HSV200                     active/active EVA 4000/6000 XL
VMW_SATP_EVA         HSV210                     active/active EVA 8000/8100 XL
VMW_SATP_EVA         HSVX740                    active/active HP StorageWorks SVSP
VMW_SATP_EVA         HSV101                     active/active EVA 3000 GL [Legacy product, Not supported in this release]
VMW_SATP_EVA         HSV111                     active/active EVA 5000 GL [Legacy product, Not supported in this release]
VMW_SATP_EVA         HSV300                     active/active EVA 4400
VMW_SATP_EVA         HSV400                     active/active EVA 6400
VMW_SATP_EVA         HSV450                     active/active EVA 8400
VMW_SATP_CX          DGC                        All non-ALUA Clariion Arrays
VMW_SATP_LSI         SUN      STK6580_6780      Sun StorageTek 6580/6780
VMW_SATP_LSI         SGI      IS500             SGI InfiniteStorage 4000/4100
VMW_SATP_LSI         SGI      IS600             SGI InfiniteStorage 4600
VMW_SATP_LSI         SUN      SUN_6180          Sun Storage 6180
VMW_SATP_DEFAULT_AA  IBM      2810XIV           IBM 2810XIV arrays without ALUA support
VMW_SATP_DEFAULT_AA                             Fibre Channel Devices
VMW_SATP_DEFAULT_AA  IBM      SAS SES-2 DEVICE  IBM SAS SES-2
VMW_SATP_DEFAULT_AA  IBM      1820N00           IBM BCS RSSM
VMW_SATP_LOCAL                                  RAID Block Devices
VMW_SATP_LOCAL                                  Parallel SCSI Devices
VMW_SATP_LOCAL                                  Serial Attached SCSI Devices
VMW_SATP_LOCAL                                  Serial ATA Devices
We need to add a line to the end of this list so that your SSD (which ESX is seeing as a remote SAS disk) can use the “VMW_SATP_LOCAL” SATP. First, we need to get the naa identifier of your drive. In the vSphere Client, select your disk, click “Manage Paths”, and you will see the naa number. In this case, mine is “naa.600508e0000000006c793530aa10e80e”. You can get this in PuTTY too, but I like to check in the GUI because PuTTY can be hard to read. Don’t bother typing it out; enter:
~ # esxcli storage nmp device list
~ # esxcli storage core device list -d naa.600508e0000000006c793530aa10e80e
In that output, “Is SSD: false” is what we need to change. We want to add a new rule for the VMW_SATP_LOCAL SATP, one with the option enable_ssd:
~ # esxcli storage nmp satp rule add -s VMW_SATP_LOCAL --device naa.600508e0000000006c793530aa10e80e --option=enable_ssd
If you up-arrow a few times and enter “esxcli storage nmp satp rule list” again, you’ll see a new line at the bottom:
VMW_SATP_LOCAL naa.600508e0000000006c793530aa10e80e enable_ssd user
Now unclaim the device:
~ # esxcli storage core claiming unclaim --type device --device naa.600508e0000000006c793530aa10e80e
and finally reload and run the claim rules:
~ # esxcli storage core claimrule load
~ # esxcli storage core claimrule run
Now let’s see if it worked:
~ # esxcli storage core device list -d naa.600508e0000000006c793530aa10e80e
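The full device listing is long, so a quick grep of the same command (same naa assumed) shows the one line we care about:

```shell
# Filter the device listing down to the SSD flag.
# Before the unclaim/reclaim, this line read "Is SSD: false".
esxcli storage core device list -d naa.600508e0000000006c793530aa10e80e | grep "Is SSD"
#    Is SSD: true
```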
Head back to “Host Cache Configuration” under the Configuration tab; your SSD datastore should now appear, and you can allocate space on it for host cache. Then add some VMs, browse the datastore to see your new swap files, and make sure to show the SAN guys on your team how you’re saving precious SAN resources.
I’ve done a ton of VMware View deployments, so I went ahead and got the VMware Certified Professional 5 - Desktop. It wasn’t that difficult of an exam if you have experience with the product. The only part that was hard was the questions about sub-optimal setups, like doing full-clone desktops. I hate full-clone desktops and always go linked-clone if I can.
I plan on taking the VCAP5-DT as soon as it comes out. Since I do a lot of PowerShell and real-world View deployments, it makes sense. My secret plan is to get the VCDX and be one of the first VCDX Desktop guys.
Update: Make sure to enter some text into the spreadsheet to make the fields populate.
This is a spreadsheet that automates almost all aspects of a VMware View deployment using PowerShell and Excel’s CONCATENATE function. I built it mainly out of a desire to avoid the Flash GUI that comes with View. It is slow, and it makes me want to scream when I’m halfway through a 20-click wizard and realize I need to look something up.
Be sure to keep this after using it- it doubles as your documentation. It is also everything you need to rebuild in a DR situation after restoring your templates and View servers.
It assumes you have deployed the View infrastructure, built some templates, and taken snapshots. Work through each worksheet left to right. Fill out the variables for your environment in the yellow fields, and use the grey fields to make scripts. A further refinement would be a mail merge to spit all this into batch files for you; copying and pasting into PowerGUI suffices for me.
For each pool name entered, this will nest a Global AD Group into a Domain Local Group, add a space separated list of users to the Global Group, create an OU under a defined VDI OU, create a Resource Pool in vCenter, compose a Pool and then entitle the Global Group to the Pool. This is set to create Floating Linked-Clone Automated Pools. If you wish to change this, feel free to edit this:
=IF(ISBLANK('Pool Sizing'!B9),"",CONCATENATE("get-composerdomain | Add-AutomaticLinkedClonePool -pool_id """, 'Pool Sizing'!B9,""" -displayName """,'Pool Sizing'!C9, """ -namePrefix """,'Pool Sizing'!G9,""" -resourcePoolPath ""/",Datacenter,"/host/",Cluster,"/Resources/",'View Object Names'!C9,""" -parentVMPath ""/",Datacenter,"/vm/",BaseImages,"/",'View Object Names'!D9,""" -parentSnapshotPath ""/",'Pool Sizing'!F9,""" -datastorespecs ""[Moderate,replica]/",Datacenter,"/host/",Cluster,"/",B9,";[Moderate,OS,data]/",Datacenter,"/host/",Cluster,"/",C9,""" -persistence ""Nonpersistent"""," -organizationalUnit ""ou=",'View Object Names'!E9,",",DesktopBaseOu,""" -minimumcount """,'Pool Sizing'!H9,""" -maximumcount """,'Pool Sizing'!I9,""" -headroomCount """,'Pool Sizing'!J9,""" -refreshpolicytype ""Never"""," -deletepolicy ""RefreshOnUse"""," -powerpolicy ""AlwaysOn"""," -vmFolderPath ""/",Datacenter,"/vm"""))
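For reference, here is the kind of one-liner the formula spits out once the yellow fields are filled in. Every value below (datacenter, cluster, pool name, datastores, OU) is a made-up placeholder from a hypothetical lab, not a default; only the parameter names come from the formula itself:

```powershell
# Sketch of the generated PowerCLI, run from the View Connection Server's
# PowerCLI session. All object names are hypothetical examples.
Get-ComposerDomain | Add-AutomaticLinkedClonePool -pool_id "Finance01" `
  -displayName "Finance Desktops" -namePrefix "FIN-" `
  -resourcePoolPath "/DC01/host/VDI-Cluster/Resources/Finance01" `
  -parentVMPath "/DC01/vm/BaseImages/Win7-Base" `
  -parentSnapshotPath "/Gold-2012-06" `
  -datastorespecs "[Moderate,replica]/DC01/host/VDI-Cluster/SSD_Replica;[Moderate,OS,data]/DC01/host/VDI-Cluster/LinkedClones01" `
  -persistence "Nonpersistent" `
  -organizationalUnit "ou=Finance,ou=VDI,dc=corp,dc=local" `
  -minimumcount 20 -maximumcount 50 -headroomCount 5 `
  -refreshpolicytype "Never" -deletepolicy "RefreshOnUse" -powerpolicy "AlwaysOn" `
  -vmFolderPath "/DC01/vm"
```

Seeing it expanded like this makes it much easier to sanity-check the inventory paths before you paste a whole batch of them into PowerGUI.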
Just make the GPOs for each Pool and you are set. Don’t create more than a Pool or two at a time; composing too many at once will crater your environment.
Download the spreadsheet: vHipsterViewDeploymentTemplate
N.B. You have to start entering values to make code appear.
Be sure to consult the documentation if you need to modify anything.