If you haven’t taken a close look at AHV, the hypervisor from Nutanix, you might be missing out on something very valuable – something you already have access to as a Nutanix customer. AHV addresses the majority of virtualization use cases, and it does so very well, with simple deployment, simple management, and POWERFUL features when Prism Central is added (and it’s still powerful when it’s not).
I love the fact that Nutanix provides a Community Edition for those of us with home labs; it lets me bring the Enterprise Cloud I so enjoy deploying for customers close to home. Sure, I’d love to have a small NX-1365 in the rack at home – who wouldn’t? Maybe someday…
Nutanix CE Version 5.6 is out, and it’s hot!!!
With the release of Nutanix Community Edition version 5.6, Nutanix has also provided a new installation mechanism as an alternative to the previous dd imaging method: a .iso installer.
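For context, the older method meant decompressing the downloaded CE image and writing it block-for-block to a USB drive with dd. The sketch below illustrates that flow against a scratch file rather than a real device; the filenames and the target are placeholders, not the actual CE artifact names.

```shell
# Minimal sketch of the classic dd imaging flow, using a scratch file in
# place of a real USB device (e.g. /dev/sdX). The image name is hypothetical;
# the real download is a compressed .img.gz from the Nutanix CE portal.
printf 'nutanix-ce-demo' > ce-image.img              # stand-in for the unzipped CE image
dd if=ce-image.img of=target.img bs=1M 2>/dev/null   # write the image to the "device"
cmp -s ce-image.img target.img && echo "image written verbatim"
```

With the 5.6 .iso installer, that whole step goes away: you boot the installer media directly instead of hand-imaging a drive.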
Continuing our journey with testing out Nutanix AHV functionality for one of our partners, one of the things we wanted to get deployed was Prism Central. Prism Central is very similar to VMware’s vCenter; Nutanix defines it as software that “provides centralized infrastructure management, one-click simplicity and intelligence for everyday operations.”
Welcome back to my series on our journey of testing Nutanix and Mellanox. Parts 1 & 2 of the series focused on Nutanix AHV networking and integrating with Mellanox, so we’re going to shift in Part 3 and look at the AHV configuration for getting Data Protection going and performing a failover test.
Welcome back to my short series on our journey of testing Nutanix and Mellanox. Following up on Part 1 of the Nutanix and Mellanox Series, I’m going to dive deeper into the Nutanix network configuration for use with the Mellanox SX1012.
As I wrote in the last post that started our journey with Nutanix and Mellanox, we will be testing AHV DR replication for one of our partners while evaluating the Mellanox SX switch platform as a lower-cost 10/40GbE switch.
The NX-1050 block was pre-configured at another location, so all network subnets will be recreated in this lab. The NX-3050 block is net new, and it will be configured onsite.
This post will serve as the initial setup of the lab testing environment and topology.
I’ve been given the opportunity to do some testing with two Nutanix blocks and a Mellanox SX1012 switch for one of our customers, who is looking to make some disruptive changes to the platform they deploy their software onto.
Currently, we partner with this customer on typical 3-tier infrastructure deployments: EMC VNX/VNXe for storage, Cisco Catalyst and Nexus for switching, Cisco UCS or HP for compute, and VMware vSphere for the hypervisor. While this solution has worked very well over the years, when we approached this partner a few years ago with Nutanix, the interest was there but the justification was hard to come by.
Fast forward to this year: we were able to get our partner out to the Nutanix .Next conference for an executive briefing and generate some more internal interest.
To say we were successful is an understatement! By the time we had gotten over our jetlag coming home, the question was how fast we could get a POC box onsite to test with. We got a box into their hands – a nice four-node NX-1050 – and I headed out to California, did the install, handed them the keys, and let them start testing, all on Nutanix’s Acropolis Hypervisor (AHV).