Tuesday, 28 March 2017

Virtualization: Go Big or Go Home


Stefan Bernbo, founder and CEO of Compuverde, says:

The trend toward virtual environments and the accompanying technologies shows no signs of cooling off. In 2012, Gartner predicted that enterprise adoption of virtualization would increase 14 percent by the end of the year. By becoming the new industry standard, virtualization has had a substantial impact on data center architecture and management.

The consumerization of IT is leading to stratospheric quantities of data being produced and has served as the impetus for the rise of the virtual machine (VM). In virtual environments, a software program reproduces the functions of physical hardware, which in turn creates new levels of hardware flexibility, utilization and cost savings. The growing popularity of virtualization enables organizations to run several applications simultaneously, creating a need for heightened levels of storage. This sudden spike in demand has warranted a novel approach to storage; specifically, a solution that offers effective management, efficiency and interoperability.

What’s to Gain?

Enterprises have gained numerous benefits from virtualizing their servers, chiefly cost savings and flexibility. Virtualization allows organizations to utilize data center hardware efficiently. In a typical data center setup, physical servers sit idle for a significant percentage of the time. By implementing virtual servers within the hardware, the organization can optimize the use of its central processing units (CPUs), capturing virtualization's benefits and cost efficiencies.

Virtualization also increases an enterprise’s network options. It reduces the need for physical machines within the infrastructure and makes hardware changes far easier. For example, if an organization decides to change hardware, the data center administrator can simply migrate the virtual server to the newer, more advanced hardware, achieving improved performance at a reduced cost. Before virtual servers, administrators had to install the new server and then reinstall and migrate all the data stored on the old server, a complex and time-consuming approach. It is remarkably simpler to migrate a virtual machine than a physical one, which translates into a big advantage at scale.

Scaling Up

The popularity of virtualization’s added benefits is widespread. Demand for virtualization is spiking among data centers that host a large number of servers – somewhere in the range of 20 to 50 or more. By embracing virtualization, organizations can achieve significant levels of the cost-savings and flexibility benefits described earlier. Moreover, servers are far easier to manage once virtualized. Physically managing a large number of servers can become arduous for data center staff; virtualization empowers administrators to run the same number of servers on fewer physical machines, simplifying data center management.

Despite the benefits of virtualization, the growing adoption of virtual servers is straining traditional data center infrastructure and storage devices.

The original VM models used local storage found within the physical server, making it impossible for administrators to migrate a virtual machine from one physical server to an upgraded one with a more powerful CPU. The introduction of shared storage – either network-attached storage (NAS) or a storage area network (SAN) – to the VM hosts solved this problem and made it possible to stack multiple virtual machines on the same storage. This eventually evolved into the current server virtualization scenario, where all physical servers and VMs are connected to a unified storage infrastructure.

The drawback to this approach? Data congestion.

A single point of entry can quickly lead to problems. Since all data moves through one access point, it gets bottlenecked during periods of excessive demand. Considering that the number of VMs and the amount of data are only expected to grow, it is obvious that storage architecture must improve. Infrastructure must keep up with the pace of data growth.

What to Expect from Virtualization

Organizations looking to virtualize their data centers will face these growing pains. Early adopters of virtualized servers are already experiencing the problems associated with single entry points and are working to mitigate their impact.

Fortunately, there is hope for organizations looking to maximize the benefits of virtualization. They can prevent the data congestion created by traditional scale-out environments by eliminating the single point of entry. Current NAS and SAN storage solutions unavoidably have a single access point that regulates the flow of data, leading to congestion during heightened demand. Instead, organizations should opt for a solution that has several entry points and distributes data uniformly across all servers. Even if many users are accessing the system at any given time, it will retain optimal performance while reducing lag time.
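The idea of many entry points with data spread uniformly across all servers can be sketched with a simple hash-based placement scheme. This is purely illustrative – the node names and the modulo-hash placement are assumptions for the example, and real distributed storage systems use more sophisticated schemes (such as consistent hashing with replication):

```python
import hashlib

class DistributedStore:
    """Toy sketch: every node can act as an entry point, and objects are
    spread uniformly across all nodes by hashing the object key.
    Illustrative only -- not any particular product's design."""

    def __init__(self, nodes):
        self.nodes = sorted(nodes)          # e.g. ["node-a", "node-b", ...]
        self.data = {n: {} for n in nodes}  # per-node storage

    def _owner(self, key):
        # A uniform hash of the key picks the owning node, spreading
        # objects (and therefore load) evenly across the cluster.
        h = int(hashlib.sha256(key.encode()).hexdigest(), 16)
        return self.nodes[h % len(self.nodes)]

    def put(self, entry_node, key, value):
        # Any node may accept the request (no single point of entry),
        # then route the object to its hash-determined owner.
        owner = self._owner(key)
        self.data[owner][key] = value
        return owner

    def get(self, entry_node, key):
        # Reads likewise can enter at any node.
        return self.data[self._owner(key)].get(key)

store = DistributedStore(["node-a", "node-b", "node-c", "node-d"])
owners = {store.put("node-a", f"vm-disk-{i}", b"...") for i in range(100)}
```

Because placement depends only on the key, every node computes the same owner independently, so requests never have to funnel through one coordinating access point.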

As of now, this is the most direct solution; however, the next generation of storage infrastructure presents novel alternatives.

The Confluence of Computing and Storage

The next generation of storage infrastructure has brought about a new strategy to combat the storage challenges scale-out virtual environments encounter. This new approach involves running VMs within the storage nodes themselves (or running the storage inside the VM hosts) – consequently turning each node into a compute node as well.

This method effectively flattens the entire infrastructure. In a traditional two-layer architecture, the VM hosts form a compute layer that sits above a shared storage layer – such as a SAN – with a single point of entry. To solve the data bottleneck issues associated with that approach, many organizations are moving away from the two-layer design toward an architecture in which both the virtual machines and the storage run on the same layer.
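The single-layer idea – each physical node carrying both the compute role (hosting VMs) and the storage role (holding a shard of the data) – can be sketched as follows. The class and names here are hypothetical, a minimal model of the concept rather than any vendor's implementation:

```python
class ConvergedNode:
    """Sketch of a single-layer node that is simultaneously a compute
    host (running VMs) and a storage node (holding a data shard).
    Hypothetical model for illustration only."""

    def __init__(self, name):
        self.name = name
        self.vms = []     # compute role: VMs running on this host
        self.shard = {}   # storage role: this node's slice of the data

    def start_vm(self, vm_name):
        self.vms.append(vm_name)

    def write(self, key, value):
        self.shard[key] = value

# Three identical nodes: no separate storage tier sits behind a
# single access point, because every node carries both roles.
cluster = [ConvergedNode(f"node-{i}") for i in range(3)]
cluster[0].start_vm("web-01")
cluster[0].write("web-01/disk", b"...")
```

Because compute and storage share the same hardware, a VM's data can live on the node that runs it, removing the hop through a dedicated storage layer.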

Looking Forward

Despite the challenges organizations encountered during the early developmental stages of virtualization, the technology has proven its worth. The efficacy, flexibility and cost savings that accompany infrastructure virtualization have made a lasting impression. If organizations continue to improve upon lessons learned from early iterations, they will be able to develop an effective scale-out virtual environment that decreases infrastructure expenditures and enhances performance.

About the Author:

Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, Stefan has designed and built numerous enterprise-scale data storage solutions that are cost-effective for storing huge data sets. From 2004 to 2010, Stefan worked in this field for Storegate, a wide-reaching Internet-based storage solution for the consumer and business markets, with the highest possible availability and scalability requirements. Previously, Stefan worked on system and software architecture on several projects with Swedish giant Ericsson, the world-leading provider of telecommunications equipment and services to mobile and fixed network operators.
