This is how to set up VMware vSAN on Cisco UCS rack mount servers connected to 6248UP Fabric Interconnects. In this case it's for VMware Horizon View, but this will work for a lot of use cases. The boot drive for ESX is going to be a pair of mirrored 32GB FlexFlash cards. These don't work out of the box, so there are a few steps to get them working. There are also a few extra steps to get the SSD to show up for ESX.
Expected screen during the install of ESX - the HV Hypervisor partition is the target
The FlexFlash SD cards shipped with B-Series servers have a single HV partition. C-Series cards have four partitions: HV, HUU, SCU, and Drivers. When the FlexFlash controller is enabled, UCSM will only present the HV partition to the host.
Local Storage Policy- FlexFlash state Enabled
If a template has been created and the install guides from Cisco have been followed, a local storage policy should already exist with FlexFlash State set to Enable. The Mode can be left at Any.
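The same local disk policy can also be created from the UCSM CLI instead of the GUI. A minimal sketch, assuming a UCSM release that supports FlexFlash (2.2+); the policy name VSAN-Local is made up for this example:

```shell
# Hedged sketch: create a local disk policy with the FlexFlash
# controller enabled, from the UCS Manager CLI.
# "VSAN-Local" is an assumed name; substitute your own.
scope org
create local-disk-config-policy VSAN-Local
set mode any-configuration
set flexflash-state enable
commit-buffer
```

Nothing here differs functionally from the GUI steps; it's just handy if you are scripting the build.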
It seems a little counter-intuitive, but a Flex-Scrub policy has to be created to properly RAID the two SD cards together. Later, a No-Scrub policy will be put back in place.
If the scrub policy is not used to set up the RAID pair, ugly errors like "not enough resources overall", "RAID State: Enabled Not Paired", and "RAID Health: Degraded" will appear.
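For completeness, the scrub policy can also be defined from the UCSM CLI. A hedged sketch, assuming UCSM 2.2+ where the flexflash-scrub property exists; "Flex-Scrub" matches the policy name used in this article:

```shell
# Hedged sketch: create a scrub policy that scrubs only FlexFlash,
# not the local disks, from the UCS Manager CLI.
scope org
create scrub-policy Flex-Scrub
set flexflash-scrub yes
set disk-scrub no
commit-buffer
```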
Unbind from template
Assuming a template was already created for the server, unbind from the template so that the scrub policy can be changed.
Change the Policy to Flex-Scrub
Right-click the server, select Server Maintenance, and Re-acknowledge the server.
This will bounce the server and apply the scrub policy.
Watch the FSM to see the progress
Proper Settings for Controller and RAID State
Over on the Equipment tab, click Inventory and Storage, and the proper states should be reported.
Go back to No-Scrub by using Bind to Template
This will put the server back to its original state.
Choose a Template
Getting the SSD drive to appear correctly in ESX
Assuming all the steps went fine, ESX should now be installed and added into vCenter. Unfortunately, the SSD is not correctly detected. I'm pretty sure this one is on VMware. It's not too difficult to fix, however, and you'll learn a bit about claim rules.
Take note of the Identifier (in this case naa.600605b009dd39e0ff00004e04c6ca89). Save some frustration and paste all the naa numbers you need into Evernote or whatever you use.
Enable SSH and connect to the host
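If copying identifiers out of the client is fiddly, the naa identifiers can also be pulled from the same SSH session. A sketch; the grep pattern is just a convenience:

```shell
# List all storage devices, showing each naa identifier and
# whether ESX currently believes it is an SSD.
esxcli storage core device list | grep -E 'naa\.|Is SSD'
```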
esxcli storage nmp satp rule add -s VMW_SATP_LOCAL --device naa.600605b009dd39e0ff00004e04c6ca89 --option=enable_ssd
Unclaim the device, load the rules and re-run them
esxcli storage core claiming unclaim --type device --device naa.600605b009dd39e0ff00004e04c6ca89
esxcli storage core claimrule load
esxcli storage core claimrule run
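It's worth confirming the flag actually took before heading back to the client. A quick check, using the device identifier from this example:

```shell
# Verify the device is now reported as an SSD.
# Expect the output to show: Is SSD: true
esxcli storage core device list --device naa.600605b009dd39e0ff00004e04c6ca89 | grep "Is SSD"
```

If it still shows false, re-check the satp rule for typos in the naa identifier and re-run the unclaim/load/run sequence.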
Claim Disks for VSAN Use
The SSD should now appear when you set up VSAN, which is very easy.
The vsanDatastore should now be available.
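The result can also be double-checked from the shell. A sketch, assuming the esxcli vsan namespace available from ESXi 5.5 on:

```shell
# List the disks this host has contributed to VSAN.
esxcli vsan storage list

# Show cluster membership and this node's health state.
esxcli vsan cluster get
```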