OpenSource For You

Hyper-Converged Infrastructure is Transforming Data Centres


Hyper-convergence is the current, and potentially the future, stage of a journey that began years ago with server and storage virtualisation, in a bid to optimise the traditional siloed approach to information technology.

Hyper-convergence is basically an emerging technology in which the server, storage and networking equipment are all combined into one software-driven appliance. Hyper-converged infrastructure (HCI) can be managed comfortably through an easy-to-use software interface. Typically, a hyper-converged platform uses standard off-the-shelf servers and storage, and a virtualisation hypervisor that helps abstract and manage these resources conveniently. A simple operational structure makes this infrastructure flexible, and easy to scale and manage.

HCI is gaining traction not just in enterprises but also with others who can benefit from heavy computing without complex management, such as professional racers! Training and racing are usually data-intensive. Racers need information about their speed, acceleration and torque in real time to understand how they are performing and how they need to proceed. One mistake could cost them not just a victory, but a life too! Racing speeds depend on the speed and reliability of the data processing and analytics that takes place behind the scenes. Last year, Formula One racing team Red Bull Racing replaced its legacy systems with HPE SimpliVity hyper-converged infrastructure, achieving 4.5 times faster performance, increased agility and a lower TCO.

At the outset, hyper-convergence might sound similar to all the virtualisation stuff you have read about in the past. So, let us first set out some facts about HCI before rounding up the latest updates:

1. Nutanix, a leader in HCI, states that the technology streamlines deployment, management and scaling of data centre resources by combining x86-based server and storage resources with intelligent software in a turnkey, software-defined solution.

2. HCI combines compute, storage and the network together as a single appliance. It can be scaled out or expanded using additional nodes. That is, instead of scaling up the traditional way by adding more drives, memory or CPUs to the base system, you scale out by adding more nodes. Groups of such nodes are known as clusters. Each node runs a hypervisor like Nutanix Acropolis, VMware ESXi, Microsoft Hyper-V or the Kernel-based Virtual Machine (KVM), and the HCI control features run as a separate virtual machine on every node, forming a fully distributed fabric that can scale resources with the addition of new nodes.

3. Since most modern HCI platforms are entirely software-defined, there is no dependence on proprietary hardware.

4. HCI does not need a separate team, as it can be managed by any IT professional. This makes it ideal for small and medium enterprises.

5. HCI is different from the traditional server and shared-storage architectures. It also differs from the previous generation of converged infrastructure.

6. HCI is not the same as cloud computing!

Scale-out and shared-core are two of the keywords you are bound to encounter when reading about HCI. Basically, most HCI implementations involve multiple servers or nodes in a cluster, with the storage resources distributed across the nodes. This provides resilience against any component or node failure. Plus, by placing storage at the nodes, the data is closer to compute than in a traditional storage area network. So you can actually get the most out of faster storage technologies like non-volatile memory express (NVMe) or the non-volatile dual in-line memory module (NVDIMM). The scale-out architecture also enables a company to add nodes as and when required rather than investing in everything upfront. Just add a node and connect it to the network, and you are ready to go. The resources are automatically rebalanced and ready to use. A lot of hyper-converged implementations also have a shared core, that is, the storage and virtual machines compete for the same processors and memory. This reduces wastage and optimises resource usage. However, some experts feel that in special cases, users might have to buy more equipment to run the same workloads.
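To make the scale-out idea concrete, here is a minimal Python sketch of how a cluster could replicate data blocks across its nodes and automatically re-place them when a node is added. The HciCluster class, its methods and the hashing-based placement are assumptions made for illustration, not any vendor’s actual software.

```python
# A minimal sketch of scale-out placement with replication.
# HciCluster, its methods and the rendezvous-hashing scheme are invented
# for illustration; they are not any HCI vendor's real software.

import hashlib


class HciCluster:
    def __init__(self, nodes, replicas=2):
        self.nodes = list(nodes)      # node names, e.g. "node-1"
        self.replicas = replicas      # copies kept of every block
        self.placement = {}           # block id -> nodes holding its copies

    def _preferred_nodes(self, block_id):
        # Rank nodes by a hash of (node, block) so placement is stable yet
        # spreads blocks across the cluster (rendezvous/HRW hashing).
        def score(node):
            return hashlib.sha256(f"{node}:{block_id}".encode()).hexdigest()
        return sorted(self.nodes, key=score)[:self.replicas]

    def write(self, block_id):
        # Keep 'replicas' copies on different nodes, so one node failing
        # does not make the block unavailable.
        self.placement[block_id] = self._preferred_nodes(block_id)

    def add_node(self, node):
        # "Just add a node": extend the pool, then rebalance existing
        # blocks onto their new preferred locations.
        self.nodes.append(node)
        for block_id in self.placement:
            self.placement[block_id] = self._preferred_nodes(block_id)


cluster = HciCluster(["node-1", "node-2", "node-3"])
for i in range(6):
    cluster.write(f"block-{i}")
cluster.add_node("node-4")            # scale out; no forklift upgrade
print(cluster.placement)              # node-4 typically picks up some replicas
```

The point to note is that capacity grows and data gets rebalanced simply by appending a node to the pool; nothing has to be re-architected.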

It’s true that HCI is great for small enterprises as it can be used without fuss or hassle, but it can be used by really large data centres too. Leading companies have on record hyper-convergence case studies where more than a thousand HCI nodes are installed in a single data centre.

In a recent survey, research leader Forrester found that the most common workloads being run on hyper-converged systems are: databases, such as Oracle or SQL Server (cited by 50 per cent); file and print services (40 per cent); collaboration tools, such as Exchange or SharePoint (38 per cent); virtual desktops (34 per cent); commercial packaged software such as SAP and Oracle (33 per cent); analytics (25 per cent); and Web-facing workloads such as the LAMP stack or Web servers (17 per cent).

During a recent talk, Dell EMC’s European chief technology officer for converged and hyper-converged infrastructure, Nigel Moulton, said that “…mission-critical applications live best in converged infrastructures.” He justified his view by saying that many mission-critical applications such as ERP are architected to rely on the underlying hardware (in the storage area network) for encryption, replication and high availability. With HCI, on the other hand, many of these capabilities come from the software stack. Instead of looking at the move to HCI as all-or-none, he suggested that organisations take a practical approach to it. If an application can work on HCI, companies should make the move as and when their existing hardware reaches the end of its life. The rest can remain on traditional hardware or converged systems. He also added that by the end of this year, the feature sets of traditional, converged and hyper-converged hardware will align, and it will be difficult to even see the differences.

How HCI differs from traditional virtualisation, converged infrastructure and cloud computing

In the beginning, there was just IT infrastructure and no virtualisation. Enterprises spent a lot on buying the best IT components for each function, and spent even more on people to implement and manage these silos of technology.

Then came virtualisation, which greatly optimised the way resources were shared and used without wastage by multiple departments or functions.

In the last decade, we have seen converged infrastructure (CI) emerge as a popular option. In CI, the server, storage and networking hardware are delivered as a single rack and sold as a single product, after being pre-tested and validated by the vendor. The system may include software to easily manage hardware components and virtual machines (VMs), or support third-party hypervisors like VMware vSphere, Microsoft Hyper-V and KVM. As far as the customer is concerned, CI is convenient because there is just a single point of contact and a single bill. Chris Evans, a UK-based consultant in virtualisation and cloud technologies, describes it humorously in one of his blogs, where he says that in CI there is just one throat to choke!

While this single product-stack approach is ideal for most enterprises, there are some who want to do it themselves. This could be because they need heavy customisation or have some healthy legacy infrastructure that they do not want to discard. Given the required time and resources, one can also build a CI out of reference architectures. Here, the vendor gives you the reference architecture, which is something like an instruction manual that tells you which supplier platforms can work together well and in what ways. So, the vendor tests, validates and certifies the interoperability of products and platforms, giving you more than one option to choose from. Thereafter, you can procure the resources and put them together to work as CI.

Enter HCI. What makes the latest technology ‘hyper-converged’ is that it brings together a variety of technologies in a vendor-agnostic way. Here, multiple off-the-shelf data centre components are brought together under a hypervisor. Along with the hardware, HCI also packages the server and storage virtualisation features.

The term ‘hyper’ also stems from the fact that the hypervisor is more tightly integrated into a hyper-converged infrastructure than a converged one.

In a hyper-converged infrastructure, all key data centre functions run as software on the hypervisor.

The physical form of an HCI appliance is the server or node, which includes a processor, memory and storage. Each node has its own storage resources, which improves the resilience of the system. A hypervisor or a virtual machine running on the node delivers storage services seamlessly, hiding all complexity from the administrator. Much of the traffic between internal VMs also flows over software-based networking in the hypervisor. HCI systems usually come with features like data de-duplication, compression, data protection, snapshots, wide-area network optimisation, backup and disaster recovery options.
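As a rough illustration of the de-duplication and compression features mentioned above, and not of any particular vendor’s implementation, the sketch below stores blocks by their content hash so that identical blocks occupy space only once. The DedupStore class and its methods are invented for this example.

```python
# A toy content-addressed block store illustrating de-duplication and
# compression. DedupStore and its methods are made up for this example;
# real HCI software typically works per fixed-size block, at far larger scale.

import hashlib
import zlib


class DedupStore:
    def __init__(self):
        self.blocks = {}       # sha256 digest -> compressed block data
        self.logical = 0       # bytes written by clients
        self.physical = 0      # bytes actually stored

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self.logical += len(data)
        if digest not in self.blocks:          # store only unseen content
            compressed = zlib.compress(data)
            self.blocks[digest] = compressed
            self.physical += len(compressed)
        return digest                          # reference used for reads

    def get(self, digest: str) -> bytes:
        return zlib.decompress(self.blocks[digest])


store = DedupStore()
ref = store.put(b"the same 4K block " * 256)
store.put(b"the same 4K block " * 256)         # duplicate costs no extra space
print(store.logical, store.physical)           # logical bytes >> physical bytes
assert store.get(ref) == b"the same 4K block " * 256
```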

Deploying an HCI appliance is a simple plug-and-play procedure. You just add a node and start using it. There is no complex configuration involved. Scaling is also easy because you can add nodes on the go, as and when required. This makes it a good option for SMEs.

Broadly speaking, CI can be considered a hardware-focused approach, while HCI is largely or entirely software-defined. CI works better in some scenarios, while HCI works for others. Some enterprises might have a mix of traditional, converged and hyper-converged infrastructure. The common rule of thumb today is that when organisations don’t have a big IT management team and want to invest only in a phased manner, HCI is an ideal option. They can add nodes gradually, depending on their needs, and manage the infrastructure with practically no knowledge of server, storage or network management. However, they lose the option of choosing specific storage or server products and have to settle for what their vendor offers. For organisations with more critical applications that require greater control over configuration, a converged system is better. CI offers more choice and control, but users might have to purchase the software themselves, which means more time, money and effort.

Finally, even though HCI is a software-defined architecture that offers the convenience of the cloud to your enterprise infrastructure, it is not a term that can be used synonymously with cloud computing. The two are quite different. The cloud is a service: it provides infrastructure, platform and software as services to those who need them. A cloud could be private, public or hybrid, and herein starts the confusion, as many people mistakenly think that hyper-convergence and the private cloud are one and the same. Hyper-convergence is merely a technology that enables easy deployment of Infrastructure as a Service in your private cloud by addressing many operational issues and improving cost-efficiency. It is an enabler, which prepares your organisation for the cloud era, and not the cloud itself!

The bring-your-own-hardware kind

According to a recent Gartner report, software-only ‘bring-your-own-hardware’ hyper-converged systems have become significant and are increasingly competing with hyper-converged hardware appliances.

Gartner research director Jon MacArthur explained in a Web report: “Most of the core technology is starting to shift to software. If you buy a vSAN ReadyNode from Lenovo, or Cisco, or HPE, or your pick of server platform, they are all pretty much the same. Customers are evaluating vSAN and ReadyNode, not so much the hardware. Dell EMC VxRail is a popular deployment model for vSAN, but so is putting VMware on your own choice of hyper-converged hardware.”

Special needs, like those of the military, and the sophistication level of users are some of the factors that influence the decision in favour of implementing HCI as a software stack rather than a pre-packaged solution.

Major players

Gartner predicts that the market for hyper-converged integrated systems (HCIS) will reach nearly US$ 5 billion (24 per cent of the overall market for integrated systems) by 2019 as the technology becomes mainstream. So, apart from specialists like Nutanix and Pivot3, we also see industry majors like Cisco, Dell and HPE entering the space with their own or acquired technologies.

Nutanix, a pioneer in this space, offers a range of hardware systems with different options for the number of nodes, memory/storage capacity, processors, etc. These can run VMware vSphere or Nutanix’s own hypervisor, Acropolis. The company excels at supporting large HCI deployments, scaling up to 100-node clusters that are easy to use and manage.

Nutanix tempts customers with the idea of full-stack infrastructure and platform services with the promise of ‘One OS, One Click’. Prism is a comprehensive management solution, while Acropolis is a turnkey platform that combines server, storage, virtualisation and networking resources. Calm is for application automation and life cycle management in Nutanix and public clouds, while Xpress is designed for smaller IT organisations. Xi Cloud Services help extend your data centre to an integrated public cloud environment. Nutanix also has a community edition that lets you evaluate the Nutanix Enterprise Cloud Platform at no cost!

Companies like Cisco and HPE are strengthening their foothold by acquiring startups in the space.

Cisco bought Springpath, whose technology powers HyperFlex, Cisco’s fully integrated hyper-converged infrastructure system. HPE acquired SimpliVity, one of the major competitors of Nutanix. HPE SimpliVity products come with a strong, custom-designed platform called OmniStack, which includes a host of features like multi-site data management, global deduplication, backup, snapshots, clones, multi-site data replication, disaster recovery and WAN optimisation.

Pivot3 also finds a place amongst the most popular HCI players. Last year, the company launched Acuity, which it claims to be the first priority-aware, performance-architected HCI solution with policy-based management. Acuity’s advanced quality of service (QoS) makes it possible to simply and predictably run multiple, mixed applications on a single platform.

Dell EMC offers the VxRail and XC series of HCI solutions based on its PowerEdge servers powered by Intel processors. Last year, it released hyper-converged appliances that include Intel’s 14-nanometre Xeon SP processors, along with VMware’s vSAN and EMC’s ScaleIO. Dell EMC offers NVIDIA GPU accelerators in both the VxRail and XC solutions.

Apart from the regular HCI infrastructure, some startups are also coming up with innovative solutions, which some trend-watchers describe as HCI 2.0! NetApp, for example, has an option for those who want to keep their servers and storage separate, either to share the storage with non-HCI systems or to offload certain tasks to dedicated servers. NetApp HCI uses SolidFire technology to deliver clusters with dedicated storage and compute nodes.

Companies like Pivot3, Atlantis Computing and Maxta also offer pure software HCI solutions.

AI and machine learning may create demand

It is interesting to note that Dell EMC is pushing NVIDIA GPU accelerators in its HCI solutions not just for video processing but also for running machine learning algorithms. Chad Dunn, who leads product management and marketing for Dell EMC’s HCI portfolio, explains in a media interview, “All the HCI solutions have a hypervisor and generally, in HPC, you’re going for bare-metal performance and you want as close to real-time operations as you possibly can. Where that could start to change is in machine learning and artificial intelligence (AI). You typically think of the Internet-of-Things intelligent edge use cases. There’s so much data being generated by the IoT that the data itself is not valuable. The insight that the data provides you is exceptionally valuable, so it makes a lot of sense not to bring that data all the way back to the core. You want to put the data analytics and decision-making of that data as close to the devices as you can, and then take the valuable insight and leave the particularly worthless data out where it is. What becomes interesting is that the form factors that HCI can get to are relatively small. Where the machine learning piece comes in is what we expect to see and what we’re starting to see: people looking to leverage the graphical processor units in these platforms.”

Incidentally, in November 2017, Nutanix’s president Sudheesh Nair also spoke about how AI and machine learning are becoming extremely important for customers, at the company’s .Next user conference. He explained: “If we have a customer who is prototyping an autonomous car, that car generates almost 16TB of data per day, per city. An oil rig generates around 100TB per rig in the middle of an ocean with no connectivity. There is no way you can bring this data to the cloud – you have to take the cloud to the data, to where the data is being created. But moving information from the data centre to the cloud creates manageability issues, and AI can be a better option. That’s where machine learning and artificial intelligence have a big part to play.”

Companies have to work out ways to store all that information so that it can be accessed easily, and to run analytical models that deliver predictive insights. This surge in demand for AI and machine learning provides a real opportunity for HCI.
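The ‘leave the worthless data at the edge’ argument made in both interviews can be illustrated with a small, hypothetical sketch: the edge node summarises raw readings locally and forwards only the insight to the core. The anomaly threshold and the send_to_core() stub are assumptions made for this example, not part of any product described above.

```python
# A hypothetical edge-side filter: summarise readings locally and send
# only the insight upstream. The threshold and send_to_core() stub are
# assumptions for this sketch, not any vendor's software.

from statistics import mean, pstdev


def summarise_at_edge(readings, threshold=2.0):
    """Return only the readings that look anomalous, not the raw stream."""
    mu, sigma = mean(readings), pstdev(readings)
    if sigma == 0:
        return []
    return [r for r in readings if abs(r - mu) > threshold * sigma]


def send_to_core(insight):
    # Stand-in for whatever uplink the remote site actually has.
    print(f"shipping {len(insight)} anomalies instead of the full stream")


raw = [20.1, 20.3, 19.9, 20.2, 48.7, 20.0, 20.1]   # one suspicious spike
send_to_core(summarise_at_edge(raw))
```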

Five good reasons why CIOs are moving to HCI and why IT guys should stay on top of the tech

Although hyper-convergence as a concept has been around for more than five years, it is obviously gaining a lot of momentum now, with everybody talking about it and the recent spate of acquisitions. And it comes with some real benefits, such as:

1. It is really quick and easy to deploy

2. It does not require a big team to manage

3. It reduces the cost of ownership

4. It makes it easy to launch and scale on the go

5. It improves the reliability and flexibility of the data centre

As mentioned earlier, HCI is not the panacea for all your infrastructure management problems. But if you need new infrastructure or have to replace existing systems, do ask yourself whether an HCI appliance can do the job. If it can, go for it, because it can ease your admin and cost headaches quite a bit!

By: Janani Gopalkrishnan Vikram

The author is a technically qualified freelance writer and editor based in Chennai. She can be contacted at gjanani@gmail.com.

Figure 1: The Dell EMC VxRail hyper-converged infrastructure appliance with Intel Xeon processors (Courtesy: Dell EMC)
Figure 2: Hyper-convergence combines computing, storage, networking and virtualisation into one easy-to-handle software-defined system (Courtesy: Nutanix)
Figure 3: The growth in software-based hyper-convergence has re-positioned many players in the Gartner Magic Quadrant this year (Source: Gartner)
