I recently had the opportunity to deploy 12 Nutanix nodes for a customer across 2 sites (Primary and DR), 6 of which were 3055-G5 nodes with dual NVIDIA M60 GPU cards installed and dedicated to running the Horizon View desktop VMs for this customer. This was my first experience doing a Nutanix deployment with NVIDIA GPU cards on VMware, and thankfully there is plenty of documentation out there on the process.

The Nutanix deployment with GPU cards installed is no different from one without: you still go through the process of imaging the nodes with Foundation just as you would without GPU cards. In this case, each site was configured with two Nutanix clusters, one for server VMs and a second dedicated to VDI. The VDI cluster was a 3-node cluster built on the NX-3055-G5 nodes, running Horizon View 7.2.0.

I’ll touch on some details of the M60 card below, then get into a few places where I had issues with the deployment and how I fixed them, and finally cover some host/VM configuration and validation commands.
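As a preview of that validation piece, commands along these lines are commonly used on the ESXi hosts to confirm the M60 cards and the NVIDIA host driver are in place (a rough sketch, not the exact steps covered later in the post):

# Confirm the M60 GPUs are visible to the host
esxcli hardware pci list | grep -i nvidia
# Confirm the NVIDIA vGPU host driver VIB is installed
esxcli software vib list | grep -i nvidia
# Query the GPUs directly with the NVIDIA management tool
nvidia-smi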


Well, it’s that time again… 2017 has come and gone, and sometimes I just don’t know where all the time went and what I was able to accomplish.

I’m happy to say that for the second year in a row I’m part of a great group of people in the IT industry: those of us pushing the value of Nutanix and its simple, effective, and scalable hyperconverged solutions.

In the large world of IT, it’s pretty cool to be a part of this small group of folks in the #NutanixNTC family, and especially to be joined on this journey by another eGroup member, Dave Strum (http://vthistle.com).

Thank you, Nutanix, for giving us an amazing platform to help our customers along on their journey, and I cannot wait to see what’s in store for 2018!

To read the full post about the 2018 Nutanix Technology Champions, follow the link below.


This week I had the pleasure of deploying 2 more Nutanix blocks on behalf of one of our partners, who is now starting to highly recommend Nutanix for their customer deployments of critical systems.

The installation was pretty vanilla: 3 NX-1065-G5 nodes at the Primary site and a matching set at the DR site. For the VMware components, we went with the vCenter 6.5 appliance (I love the stability and speed of the 6.5 appliance, by the way), and for the ESXi hosts we went with 6.5 (build 4887370).

The install went great, super fast and easy as is always the case with Nutanix deployments, and we were off and rolling for the customer deployment.

After running the command ncc health_checks run_all post-install (using ncc version 3.0.4-b0379d15), I noticed that the results were calling out 3 hosts for having disabled services.

Detailed information for esx_check_services:
Node 10.xx.xx.xx: 
WARN: Services disabled on ESXi host:
 -- sfcbd-watchdog
Node 10.xx.xx.xx: 
WARN: Services disabled on ESXi host:
 -- sfcbd-watchdog
Node 10.xx.xx.xx: 
WARN: Services disabled on ESXi host:
 -- sfcbd-watchdog

After doing some research on why the sfcbd-watchdog service wasn’t starting (and trying to start it manually), I came across this KB article from VMware, which detailed that this is expected behavior starting in ESXi 6.5.

Wondering if the NCC code just hadn’t been updated for this specific change from VMware, I checked the Nutanix Knowledge Base and came across this article, which details that services identified by the ncc health_checks hypervisor_checks esx_check_services command should be enabled.

OK, so that makes sense… ESXi 6.5 has been out long enough to assume that the NCC scripts have been updated to accommodate the 6.5 changes, so the warning is intentional. Time to get the service re-enabled and check NCC again.

To enable the service on an ESXi 6.5 host, use the command esxcli system wbem set --enable true (be sure to use double hyphens!). Per VMware, if a third-party CIM provider is installed, sfcbd and openwsman should start automatically. Just to be safe, I also ran /etc/init.d/sfcbd-watchdog start followed by /etc/init.d/sfcbd-watchdog status to make sure my services started.
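Putting that together, the sequence I ran on each of the three hosts looks like this:

# Enable the WBEM service (note the double hyphens before "enable")
esxcli system wbem set --enable true
# Start the sfcbd-watchdog service and confirm it is running
/etc/init.d/sfcbd-watchdog start
/etc/init.d/sfcbd-watchdog status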

So let’s see what we get now after running the ncc checks again, using the command ncc health_checks hypervisor_checks esx_check_services to simplify my results.

Results look much better, no more warnings about disabled services on the ESXi hosts.

Running : health_checks hypervisor_checks esx_check_services
[==================================================] 100%
/health_checks/hypervisor_checks/esx_check_services [ PASS ] 
| State | Count |
| Pass | 1 |
| Total | 1 |
Plugin output written to /home/nutanix/data/logs/ncc-output-latest.log

Good to know that VMware has purposely disabled this service, and it’s easy to put that in a checklist for future deployments. I do wish, though, that since Foundation is taking care of the ESXi install and customization, Nutanix would add those two CLI commands to the routine so the services are started, if they truly are needed.
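In the meantime, it’s simple enough to script those two commands yourself after imaging. A rough sketch, assuming root SSH is enabled on the hosts and using placeholder management IPs:

# Placeholder host IPs; substitute your ESXi management addresses
for host in 10.xx.xx.1 10.xx.xx.2 10.xx.xx.3; do
  # Enable WBEM, then start and verify sfcbd-watchdog on each host
  ssh root@$host 'esxcli system wbem set --enable true; /etc/init.d/sfcbd-watchdog start; /etc/init.d/sfcbd-watchdog status'
done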

Hope this helps if you run into this same issue!

As I wrote about in the last post that started our journey with Nutanix and Mellanox, we will be testing AHV DR replication for one of our partners while evaluating the Mellanox SX switch platform as a lower-cost 10/40GbE switch.

The NX-1050 Block was pre-configured at another location, so all network subnets will be recreated in this lab. The NX-3050 block is net new, and that will be configured onsite.

This post will serve as the initial setup of the lab testing environment and topology.


I’ve been given the opportunity to do some testing with 2 Nutanix blocks and a Mellanox SX1012 switch for one of our customers, who is looking to make some disruptive changes to the platform they deploy their software onto.


Currently, we partner with this customer to do your typical 3-tier infrastructure deployments: EMC VNX/VNXe for storage, Cisco Catalyst and Nexus for switching, Cisco UCS or HP for compute, and VMware vSphere for the hypervisor. While this solution has worked very well over the years, when we approached this partner a few years ago with Nutanix, the interest was there but the justification was hard to come by.

Fast forward to this year: we were able to get our partner out to the Nutanix .Next conference for an executive briefing and generate some more internal interest.

To say we were successful is an understatement! By the time we had gotten over our jet lag coming home, the question was how fast we could get a POC box onsite to test with. We got a box into their hands, a nice 4-node NX-1050 model; I headed out to California, did the install, handed over the keys, and let them start testing, all on Nutanix’s Acropolis Hypervisor (AHV).