It’s been a while since I’ve posted any fresh content on this blog; hopefully I’ll come up with enough ideas to keep a regular cadence of updates going.
While I was updating the cabling on the garage lab, I realized it had been a while since I had done anything on my CE lab from a version perspective; in fact, the last update I had done was in March of 2019. So I figured now was as good a time as any to upgrade the CE cluster.
With my usual luck, I went to upgrade CE through Prism, and the upgrade failed with a corrupted USB drive; it seems my streak of really crappy USB drives continues. So I went over to Best Buy, bought a few $9.99 64GB PNY USB drives, came home, and started getting the image file over to USB, since the CE .iso installer still hasn’t made its return.
All was going well until the hosts booted up. Now, my hosts are a bit long in the tooth, but they are still decent enough, with 24 cores and 48 GB of RAM. This PNY USB drive was HORRIBLY slow, so much so that I couldn’t stand it. Never again will I buy PNY drives.
So I thought about what other options I had. The drive configuration on these CE nodes was as follows:
- 1x 256GB Samsung EVO SSD
- 1x 500GB Samsung EVO SSD
- 1x 1TB Samsung EVO SSD
- 1x 1TB Western Digital HDD
So I figured, why not try using the 256GB SSD as the boot drive instead of a USB drive? My Supermicro hosts are old enough that a SATA DOM might be hard to come by, and I honestly had more than enough space on each node that giving up the 256GB drive wouldn’t hurt too badly.
So I pulled the drives out of the drive caddies, pulled out my trusty Inatek USB drive caddy, and proceeded to drop the CE .img file onto the 256GB SSD, using gdd (GNU dd), which I prefer over the stock dd command.
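For reference, the imaging step looks roughly like this. It’s a sketch, not my exact commands: the image filename and the target device path are placeholders you’d confirm on your own system before writing anything.

```shell
# Minimal sketch of writing the CE .img to the 256GB SSD with GNU dd
# (the Homebrew coreutils build on macOS installs it as "gdd").
# /dev/sdX is a PLACEHOLDER -- confirm the target first with `lsblk`
# (Linux) or `diskutil list` (macOS); dd overwrites whatever it's aimed at:
#
#   gdd if=ce-installer.img of=/dev/sdX bs=4M conv=fsync status=progress
#
# The demo below copies a scratch file instead of a real device,
# then verifies the copy byte-for-byte:
printf 'ce-image-bytes' > ce.img            # stand-in for the CE image
dd if=ce.img of=target.img bs=4M conv=fsync 2>/dev/null
cmp -s ce.img target.img && echo "image verified"
```

Verifying with `cmp` (or comparing checksums) after the write is a cheap way to catch exactly the kind of flaky media that burned me with the USB drives.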
With imaging done, and once I correctly set the BIOS on the Supermicro hosts to boot from the 256GB drive in Port 0, I booted up each of the hosts, and much to my happiness the install went through, the CVM deployed, and the cluster was created. And, as you’d expect, the install speed was much better!
So in hindsight, given my dislike for USB boot, I wish I had thought of using the internal drive with the .img file sooner. I did this back when the CE .iso installer allowed you to select a boot drive, but for some reason I kept trying to make the USB drive work.
So now I’m happy to say I’m not using the USB drives anymore, and I have a sturdy SSD as my CE boot drive without having to give up much space at all.
Coming back from the Nutanix .Next conference two weeks ago, the announcement that really got me excited was the ability for Nutanix Frame to run in AHV environments. AHV joins AWS (where Frame started), Azure, and Google Cloud as a supported environment, currently in early release.
I’ll be going through a multi-part series on Frame configurations and use cases, so stay tuned!
If you haven’t taken a close look at AHV, the hypervisor from Nutanix, you might be missing out on something very valuable that you already have access to as a Nutanix customer. AHV addresses the majority of virtualization use cases, and it does so very well, with simple deployment, simple management, and POWERFUL features when Prism Central is added (and it’s still powerful when it’s not).
I love the fact that Nutanix provides a Community Edition for those of us with home labs; I can bring the Enterprise Cloud I so enjoy deploying for customers close to home. Sure, would I love to have a small NX-1365 in the rack at home? Who wouldn’t? Maybe someday…
Nutanix CE Version 5.6 is out, and it’s hot!!!
With the release of Nutanix Community Edition version 5.6, Nutanix has also provided a new installation mechanism as an alternative to the previous dd imaging method, now allowing for a .iso installer.
Freedom to Choose… Freedom to Play… Freedom to Cloud….
I just returned from a week in New Orleans at the Nutanix .Next conference, where I was fortunate to represent eGroup as a partner and to be part of the Nutanix Technical Champions group.
In addition to being a conference attendee, my co-worker Dave Strum and I co-presented with one of our customers on the benefits of deploying Nutanix on Cisco UCS hardware, lessons learned, and future plans. It was fun and definitely not your typical presentation.
There are a lot of blog posts and content around the .Next conference news (plug for Dave here), and the Nutanix roadmap continues to dazzle and amaze people (OK, me especially) with simplicity, functionality, and yes, Freedom. The keyword here is Freedom.
And this post isn’t about recapping the .Next conference; I’ll let my peers and friends handle that. This post is about Freedom…
A Nutanix deployment with GPU cards installed is no different than one without: you still go through the process of imaging the nodes with Foundation just like you would without GPU cards. In this case, each site was configured with two Nutanix clusters, one for server VMs and a second cluster specific to VDI. The VDI cluster was a three-node cluster of NX-3055-G5 nodes, running Horizon View 7.2.0 specifically.
I’ll touch on some details of the M60 card below, then get into a few places where I hit issues with the deployment and how I fixed them, and finally cover some host/VM configuration and validation commands.
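As a taste of that validation, here’s a rough sketch (not the exact commands from this deployment) of a quick host-side sanity check for the M60 cards. It assumes `lspci` and `nvidia-smi` are available once the NVIDIA host driver is installed, and it’s guarded so it also runs cleanly on a machine without a GPU.

```shell
# Quick GPU host sanity check (a sketch; command availability assumed).
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi                 # driver loaded? both M60 GPUs visible?
    # with the vGPU manager driver installed, `nvidia-smi vgpu`
    # additionally lists active vGPU instances
else
    lspci 2>/dev/null | grep -i nvidia || echo "no NVIDIA GPU detected on this host"
fi
```

If `nvidia-smi` can’t see the cards at all, it’s usually a sign the host driver didn’t load, which is worth ruling out before digging into Horizon-side configuration.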
Well, it’s that time again… 2017 has come and gone, and sometimes I just don’t know where all the time went and what I was able to accomplish.
I’m happy to say that for the 2nd year in a row I’m part of a great group of people in the IT industry, those of us pushing the value of Nutanix and their simple, effective and scalable HyperConverged solutions.
Pretty cool, in the large world of IT, to be part of this small group of folks in the #NutanixNTC family, and especially to be joined on this journey by another eGroup member, Dave Strum (http://vthistle.com).
Thank you, Nutanix, for giving us an amazing platform to help our customers along on their journey, and I cannot wait to see what’s in store for 2018!
To read the full post about the 2018 Nutanix Technology Champions, follow the link below.