It’s been a while since I’ve posted any fresh content on this blog; hopefully some new ideas will keep a regular cadence of updates going.

While I was updating the cabling in the garage lab, I realized it had been a while since I had done anything on my CE lab from a version perspective; in fact, the last update I had done was in March of 2019.  So I figured now was as good a time as any to go ahead and upgrade the CE cluster.

Much like the luck I usually have, I went to upgrade CE thru Prism, and the upgrade seemed to fail due to a corrupted USB drive. Seems like my streak of really crappy USB drives continues.  So, I went over to Best Buy, bought a few $9.99 64GB PNY USB drives, came home, and started the process of getting the image file over to USB, since the CE .iso installer still hasn’t made its return.

All was going well until the hosts booted up.  Now, my hosts are a bit long in the tooth, but they are still decent enough with 24 cores and 48GB of RAM.  This PNY USB drive was HORRIBLY slow, so much so that I couldn’t stand it.  Never again will I buy PNY drives.

So, I thought about what other options I had.  The drive configuration on these CE nodes was as follows:

  • 1x 256GB Samsung EVO SSD
  • 1x 500GB Samsung EVO SSD
  • 1x 1TB Samsung EVO SSD
  • 1x 1TB Western Digital HDD

So, I figured, why not try to use the 256GB SSD as the boot drive instead of a USB drive? My Supermicro hosts are old enough that a SATADOM might be hard to come by for them, and I honestly had more than enough space on each node that giving up the 256GB drive wouldn’t hurt too badly.

So, I pulled the drives out of the drive caddies, pulled out my trusty Inateck USB drive dock, and proceeded to drop the CE .img file onto the 256GB SSD, using gdd (GNU dd), which I prefer over the stock dd command.
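For reference, the imaging step looks something like this. This is a sketch, not a prescription: the device node and image filename below are examples from my setup (verify yours with diskutil list on macOS or lsblk on Linux first, because dd will happily overwrite whatever disk you point it at), and it assumes gdd is installed via Homebrew coreutils.

```shell
#!/bin/bash
# Sketch of imaging the CE .img onto the SSD over a USB dock (macOS assumed).
# DEVICE and IMAGE are examples -- double-check both before running.
DEVICE=/dev/disk2
IMAGE=ce-2019.02.11-stable.img

write_ce_image() {
    # Release the disk so dd can open it exclusively (macOS-specific).
    diskutil unmountDisk "$DEVICE"

    # gdd is GNU dd from Homebrew coreutils (brew install coreutils).
    # Writing to the raw device (rdisk) with a large block size is much faster,
    # and status=progress shows how far along the image write is.
    gdd if="$IMAGE" of="${DEVICE/disk/rdisk}" bs=4M status=progress

    # Flush buffers before pulling the drive out of the dock.
    sync
}
```

Wrapped in a function here so nothing destructive runs by accident; call write_ce_image once you’ve confirmed the device node.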

Imaging done, and once I correctly set the BIOS on the Supermicro hosts to boot from the 256GB drive in Port 0, I booted up each of the hosts, and much to my happiness the install went thru: CVM deployed and cluster created.  And the install speed was, as you’d expect, much better!

So in hindsight, with my dislike for USB boot, I wish I had thought of using an internal drive with the .img file sooner.  I did this back when the CE .iso installer allowed you to select a boot drive, but for some reason I always tried to get the USB drive to work.

So, now I’m happy to say I’m not using the USB drives anymore, and I have a sturdy SSD as my CE boot drive without having to give up much space at all.


Updated 5.22.19

Coming back from the Nutanix .Next conference two weeks ago, the biggest announcement that really got me excited was the ability for Nutanix Frame to run in AHV environments.   AHV joins AWS, where Frame started, along with Azure and Google Cloud, and is currently in early release.

I’ll be going thru a multi-part series around Frame, configurations, and use cases. So stay tuned!


If you haven’t taken a close look at AHV, the hypervisor from Nutanix, well, you might be missing out on something very valuable that you already have access to as a Nutanix customer. AHV addresses the majority of the use cases people require from virtualization, and it does so very well with simple deployment, simple management, and POWERFUL features when Prism Central is added (and it’s still powerful when it’s not).


Freedom to Choose… Freedom to Play… Freedom to Cloud….

I just returned from a week in New Orleans at the Nutanix .Next conference, where I was fortunate to represent eGroup as a partner as well as being part of the Nutanix Technology Champions group.

In addition to being a conference attendee, my co-worker Dave Strum and I co-presented with one of our customers on the benefits of deploying Nutanix on Cisco UCS hardware, lessons learned, and future plans.  It was fun and definitely not like your typical presentation.

There are a lot of blog posts and content around the .Next conference news (plug for Dave here), and the Nutanix roadmap continues to dazzle and amaze people (ok, me especially) with simplicity, functionality, and yes, Freedom.  The keyword here is Freedom.

And this post isn’t about recapping the .next conference, I’ll let my peers and friends handle that.  This post is about Freedom…


I recently had the opportunity to deploy 12 Nutanix nodes for a customer across 2 sites (Primary and DR), 6 of which were 3055-G5 nodes with dual NVIDIA M60 GPU cards installed and dedicated to running the Horizon View desktop VMs for this customer. This was my first experience doing a Nutanix deployment using the NVIDIA GPU cards with VMware, and thankfully there is plenty of documentation out there on the process.

The Nutanix deployment with GPU cards installed is no different than without, you still go thru the process of imaging the nodes with Foundation just like you’d do without GPU cards. In this case, each site was configured with 2 Nutanix clusters, one for Server VMs and a second cluster specific to VDI. The VDI cluster was configured in a 3 node cluster, using the NX-3055-G5 nodes, running Horizon View 7.2.0 specifically.

I’ll touch on some details of the M60 card below, and then get into some of the places where I had a few issues with the deployment and how I fixed them, and finally some Host/VM configuration and validation commands.


Well, it’s that time again… 2017 has come and gone, and sometimes I just don’t know where all the time went and what I was able to accomplish.

I’m happy to say that for the 2nd year in a row I’m part of a great group of people in the IT industry, those of us pushing the value of Nutanix and their simple, effective and scalable HyperConverged solutions.

Pretty cool, in the large world of IT, to be a part of this small group of folks in the #NutanixNTC family, and especially to be joined by another eGroup member, Dave Strum, on this journey.

Thank you, Nutanix, for giving us an amazing platform to help our customers along on their journey, and I cannot wait to see what’s in store for 2018!

To read the full post about the 2018 Nutanix Technology Champions, follow the link below.

This week I had the pleasure of deploying 2 more Nutanix blocks on behalf of one of our partners, who is now starting to highly recommend Nutanix for their customers’ deployments of critical systems.

The installation was pretty vanilla, 3 NX-1065-G5 nodes at the Primary site and matching at the DR site.  For the VMware components, we went with the vCenter 6.5 appliance (I love the stability and speed of the 6.5 appliance by the way), and for the ESXi hosts we went with 6.5 (build 4887370).

The install went great, super fast and easy as always is the case with Nutanix deployments, and off we were rolling for customer deployment.

After running the command ncc health_checks run_all post-install (running ncc version 3.0.4-b0379d15 for this), I noticed that the results were calling out 3 hosts for having disabled services.

Detailed information for esx_check_services:
Node 10.xx.xx.xx: 
WARN: Services disabled on ESXi host:
 -- sfcbd-watchdog
Node 10.xx.xx.xx: 
WARN: Services disabled on ESXi host:
 -- sfcbd-watchdog
Node 10.xx.xx.xx: 
WARN: Services disabled on ESXi host:
 -- sfcbd-watchdog

After doing some research on why sfcbd-watchdog wasn’t starting, and trying to start it manually, I came across this KBase article from VMware, which detailed that this is expected behavior starting in ESXi 6.5.

Wondering if the NCC code just wasn’t updated for this specific change from VMware, I checked the Nutanix Knowledge Base, and came across this link which details that services identified by the ncc health_checks hypervisor_checks esx_check_services command should be enabled.

Ok, so that makes sense… ESXi 6.5 has been out long enough that you’d assume the ncc scripts have been updated to accommodate the 6.5 changes.  So, time to get the service re-enabled and check ncc again.

To enable the service on an ESXi 6.5 host, use the command esxcli system wbem set --enable true (be sure to use double hyphens!).  Per VMware, if a 3rd-party CIM provider is installed, sfcbd and openwsman should start automatically.  Just to be safe, I also ran /etc/init.d/sfcbd-watchdog start followed by /etc/init.d/sfcbd-watchdog status to make sure my services started.
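Putting those steps together, here’s a sketch of what I ran on each ESXi host (these are the same three commands from above; they only exist on an ESXi host, so SSH in as root to use them):

```shell
#!/bin/sh
# Re-enable the WBEM/sfcbd services on an ESXi 6.5 host.
# Wrapped in a function so nothing runs by accident when sourcing this file.
enable_wbem() {
    # Double hyphens on --enable, not an en-dash (a common copy/paste trap).
    esxcli system wbem set --enable true

    # Start the watchdog now rather than waiting for the next reboot,
    # then confirm sfcbd actually came up.
    /etc/init.d/sfcbd-watchdog start
    /etc/init.d/sfcbd-watchdog status
}
```

Run enable_wbem on each host that NCC flagged, then re-run the check.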

So let’s see what we get after running the ncc checks again, using the command ncc health_checks hypervisor_checks esx_check_services to simplify my results.

Results look much better, no more warnings about disabled services on the ESXi hosts.

Running : health_checks hypervisor_checks esx_check_services
[==================================================] 100%
/health_checks/hypervisor_checks/esx_check_services [ PASS ] 
| State | Count |
| Pass | 1 |
| Total | 1 |
Plugin output written to /home/nutanix/data/logs/ncc-output-latest.log

Good to know that VMware purposefully disabled this service, and it’s easy to put that in a checklist for future deployments.   I do wish, though, that since Foundation is taking care of the ESXi install and customization, Nutanix would add those 2 CLI commands to the routine to start those services, if they truly are needed.

Hope this helps if you run into this same issue!