Wednesday, 28 June 2017


Meeting Storage Demands with Web-Scale Architecture

Storage Demands

Stefan Bernbo, founder and CEO of Compuverde, says:

Gartner has predicted that the Internet of Things (IoT) will include 26 billion installed units by 2020 and that this growth will transform the data center. The impact of the IoT on storage infrastructure will create increasing demand for storage capacity. The analyst firm also forecast that cloud computing would account for the bulk of new IT spending by 2016, and that nearly half of large enterprises will have hybrid cloud deployments by the end of 2017.

The massive, widespread adoption of cloud-based services and the emergence of Big Data and the IoT require the storage of exponentially greater volumes of data than ever before. It has become obvious that current storage methods will not suffice. Current architectures have bottlenecks that, while merely inconvenient for legacy data, are simply untenable for the scale of storage needed today. New approaches to storage must be considered.

Organizations are preparing themselves for extreme storage needs by deploying web-scale architectures that enable virtualization, compute and storage functionality on a tremendous scale.

A Single Point of Failure

Removing bottlenecks is a major focal point in improving storage architecture, and that is a key feature of web-scale storage design. A bottleneck that functions as a single point of entry can become a single point of failure, especially with the demands of cloud computing on Big Data storage. Adding redundant, expensive, high-performance components to alleviate the bottleneck, as most service providers presently do, adds cost and complexity to a system very quickly. On the other hand, a horizontally scalable web-scale system designed to distribute data among all nodes makes it possible to choose cheaper, lower-energy hardware.
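The article doesn't name a specific data-distribution technique, but consistent hashing is one common way a web-scale system can spread data across every node so that no single machine acts as the point of entry. The Python sketch below is a minimal illustration under that assumption; the node names and replica count are hypothetical.

import hashlib
from bisect import bisect_right

class HashRing:
    """Minimal consistent-hash ring: every node owns slices of the
    key space, so any node can serve as an entry point and no single
    machine becomes a bottleneck or single point of failure."""

    def __init__(self, nodes, vnodes=100):
        # vnodes: virtual nodes per physical node, which smooths the
        # distribution of keys across cheap commodity hardware.
        self._ring = sorted(
            (self._hash(f"{node}:{i}"), node)
            for node in nodes
            for i in range(vnodes)
        )
        self._keys = [h for h, _ in self._ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode()).hexdigest(), 16)

    def nodes_for(self, key, replicas=3):
        """Return `replicas` distinct nodes for a key; losing one node
        re-maps only its slices of the ring, not the whole data set."""
        idx = bisect_right(self._keys, self._hash(key)) % len(self._ring)
        found, i = [], idx
        while len(found) < replicas:
            node = self._ring[i % len(self._ring)][1]
            if node not in found:
                found.append(node)
            i += 1
        return found

ring = HashRing(["node-a", "node-b", "node-c", "node-d"])
print(ring.nodes_for("user42/photo.jpg"))  # e.g. ['node-c', 'node-a', 'node-d']

Because each node owns only small slices of the key space, adding or losing a node shifts a fraction of the data to its neighbors instead of funneling everything through one front-end machine.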

Cloud providers need to manage far more users and greater performance demands than enterprises do, so solving performance problems such as data bottlenecks is a major concern for them. While the average user of an enterprise system demands high performance, these systems typically have fewer users, and those users can access their files directly over the local network. Furthermore, enterprise users typically access, send and save relatively low-volume files such as documents and spreadsheets, which consumes less storage capacity and lightens the performance load.

It’s another matter entirely for someone using the cloud outside the enterprise. The system is being accessed simultaneously over the Internet by an order of magnitude more users, which itself becomes a performance bottleneck. The cloud provider’s storage system not only has to scale to each additional user, but must also maintain performance across the aggregate of all users. Significantly, the average cloud user is accessing and storing far larger files—music, photo and video files—than does the average enterprise user. Web-scale architectures are designed to prevent the bottlenecks that this volume of usage causes in traditional legacy storage setups.

Cost and Complexity

For web-scale architecture to work as designed, it must be built entirely in software, with no dependence on specialized hardware. Since hardware inevitably fails (at any number of points within the machine), traditional appliances (storage hardware with proprietary software built in) typically include multiple copies of expensive components to anticipate and prevent failure. These extra layers of identical hardware drive up energy costs and add complication to a single appliance. Because the actual cost per appliance is quite high compared with commodity servers, cost estimates often skyrocket when companies begin examining how to scale out their data centers. One way to avoid this is to use software-defined virtual NAS (vNAS), or vNAS in a hypervisor environment, both of which offer a way to build out servers at a web-scale rate.
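To make the cost argument concrete, here is a back-of-envelope comparison. Every price, capacity and power figure below is a hypothetical placeholder rather than vendor data; the point is only how appliance costs compound as a data center scales out.

# Hypothetical scale-out cost model; all figures are illustrative.
TARGET_PB = 10
appliance = {"capacity_tb": 200, "unit_cost": 250_000, "watts": 1_500}
commodity = {"capacity_tb": 100, "unit_cost": 15_000, "watts": 400}

def scale_out(spec, target_pb=TARGET_PB):
    # Ceiling division: how many units are needed to hold target_pb.
    units = -(-target_pb * 1000 // spec["capacity_tb"])
    return units, units * spec["unit_cost"], units * spec["watts"] / 1000

for name, spec in (("appliance", appliance), ("commodity", commodity)):
    units, capex, kw = scale_out(spec)
    print(f"{name:>9}: {units} units, ${capex:,} capex, {kw:.1f} kW draw")

With these placeholder numbers, reaching 10 PB costs $12.5 million and 75 kW using appliances versus $1.5 million and 40 kW using commodity servers; real figures will differ, but the gap widens with every petabyte added.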

Staying in Sync

A distributed storage model is the best foundation for building at web scale, because there are now ways to improve performance at the software level that neutralize the traditional performance advantage of a centralized data storage approach.

Because cloud services are accessed from locations all over the world, service providers need data centers distributed across the globe to minimize load times. Global availability, however, brings a number of challenges. Each user is served by the data center in his or her region, yet the data stored in every location must remain in sync. From an architectural point of view, it is important to solve this problem at the storage layer rather than up at the application layer, where it becomes more difficult and complicated to address.
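As one illustration of what solving this at the storage layer can mean, the toy Python model below has each region accept writes locally and ship them to peer regions asynchronously, so the applications above never handle synchronization themselves. It is only a sketch: a production system would also need conflict resolution (for example, versioning or last-writer-wins), which is omitted here.

import queue
import threading
import time

class RegionStore:
    """Toy storage-layer replication: each region takes writes locally
    and ships them to its peers in the background."""

    def __init__(self, name):
        self.name = name
        self.data = {}                    # local key-value store
        self.inbox = queue.Queue()        # replication events from peers
        self.peers = []

    def put(self, key, value):
        self.data[key] = value            # local write, low latency
        for peer in self.peers:           # ship the change to every peer
            peer.inbox.put((key, value))

    def apply_replication(self):
        while True:                       # background replication loop
            key, value = self.inbox.get()
            self.data[key] = value        # eventually consistent copy

eu, us = RegionStore("eu"), RegionStore("us")
eu.peers, us.peers = [us], [eu]
for store in (eu, us):
    threading.Thread(target=store.apply_replication, daemon=True).start()

eu.put("profile/42", "avatar-v2")         # written in the EU region...
time.sleep(0.1)
print(us.data)                            # ...and readable from the US replica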

A local server farm can be knocked offline by a localized event such as a power outage, and global data centers must be prepared for such events. If a local data center or server goes down, the remaining data centers must reroute requests quickly to available servers to minimize downtime. While there are certainly solutions today that address these problems, they do so at the application layer. Attempting to solve these issues that high up in the data center infrastructure, instead of at the storage level, carries significant cost and complexity disadvantages; solving them directly at the storage level through web-scale architecture delivers significant gains in efficiency, time and cost.
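A minimal sketch of what storage-level failover can look like, assuming a hypothetical routing table of data centers ordered by proximity: requests simply fall through to the next healthy site when the nearest one goes offline.

# Hypothetical routing table: region -> data centers, nearest first.
ROUTES = {
    "eu-west": ["dc-dublin", "dc-frankfurt", "dc-virginia"],
    "us-east": ["dc-virginia", "dc-dublin", "dc-frankfurt"],
}
# Health state as reported by a monitoring probe (simulated here).
healthy = {"dc-dublin": False, "dc-frankfurt": True, "dc-virginia": True}

def pick_datacenter(region):
    """Route to the nearest data center that passes its health check,
    falling through the list when a local outage takes one offline."""
    for dc in ROUTES[region]:
        if healthy.get(dc, False):
            return dc
    raise RuntimeError("no healthy data center reachable")

# Dublin is down, so EU traffic fails over to Frankfurt automatically.
print(pick_datacenter("eu-west"))   # -> dc-frankfurt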

Market Flexibility with Web-scale

The writing on the data center wall is clear: greater storage capacity is needed. If companies continue to rely on expensive, inflexible appliances in their data centers, they will be forced to lay out significant sums to build the storage capacity they need to meet customer demands.

The market dictates priorities in terms of network environments, corporate direction and budgets. However, an expansive, rigid network environment locked into configurations determined by an outside vendor severely curtails an organization's ability to react nimbly to market demands, much less anticipate them. Web-scale storage philosophies enable enterprises to “future proof” their data centers. Since the hardware and the software are separate investments, either can be switched out for a better, more appropriate option as the market dictates, at minimal cost.

Virtualize the Future

Web-scale architecture is the key to managing and storing the daily petabytes of data generated by the IoT, housed by cloud services and crunched by Big Data analysts. Hyper-converged infrastructures, software-defined storage and other storage approaches are empowering ISPs and enterprises to use integrated virtualization components to create massive compute environments. This is great news for any organization that sees the deluge of data headed its way and wants to prepare for current and future storage needs.

About the Author:

Stefan Bernbo is the founder and CEO of Compuverde. For 20 years, Stefan has designed and built numerous enterprise-scale data storage solutions engineered to store huge data sets cost-effectively. From 2004 to 2010 he worked in this field at Storegate, the wide-reaching Internet-based storage provider for consumer and business markets, which carried the highest possible availability and scalability requirements. Earlier, Stefan worked on system and software architecture for several projects at Ericsson, a world-leading provider of telecommunications equipment.
