Wednesday, 23 August 2017


How to Use the Cloud to Replace On-Premises Storage

By Barry Phillips, Chief Marketing Officer, Panzura

For organizations with a cloud-first initiative, meaning that IT must first try the cloud before falling back on legacy on-premises infrastructure, file systems are the natural infrastructure to consider moving to the cloud. However, to remove file systems completely from data centers and offices, businesses and governments will have to use a multi-cloud approach, one that reduces the effect of latency on performance.

Despite all the benefits of the cloud, most businesses find a cloud-first strategy difficult to realize because the distance between offices and cloud data centers creates isolated environments that cannot act as one. This distance, best measured by the time it takes data to travel it, is the reason cloud providers are adding data centers at an unprecedented rate: to completely replace on-premises infrastructure, each provider wants data centers near metropolitan areas so that latency becomes negligible. Most organizations, however, do not have to wait for any single provider to complete that build-out. If they can span multiple cloud providers together as a mesh, there is likely already a data center from some provider near each of their offices.

Cloud Competition Heats Up Again

The first race between cloud providers was over pricing. Google kicked it off in March 2014 when it dropped the price of cloud storage from more than 8 cents per GB to 2.6 cents per GB, and AWS and Azure quickly followed suit. Since then there have been some minor additional price drops and tactical cuts on certain compute instances, but there is little room left for more dramatic discounts.

More recently, Google started the race for the most data centers near metropolitan areas. Google added two cloud regions last year and plans to add eight more in 2017, for a total of 14. As with the race to the bottom on price, other providers have quickly followed by opening new data centers and announcing plans for more. AWS opened two additional regions in 2016, for a total of 14, and plans to open four more in 2017. IBM recently announced that it is tripling the number of data centers it has in Britain. Microsoft Azure has 30 regions around the world and has announced plans for eight more.

Although this latest race is increasing the total number of cloud data centers, it is really about locating as many data centers as possible close to metropolitan areas. With prices down and cloud infrastructure operating at enormous scale, the only thing standing in the way of eliminating private data centers is the distance to cloud data centers. Speed matters: the farther away a data center is in terms of latency, the slower the performance. Latency remains a killer despite all the increases in bandwidth to cloud data centers.
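A quick back-of-envelope illustration of that point, using made-up but plausible numbers: a file operation that requires many sequential round trips slows down in direct proportion to the round-trip time, no matter how much bandwidth is available.

# Illustrative arithmetic only, not measured data: chatty file workloads pay
# for every round trip, so latency dominates regardless of pipe size.
SEQUENTIAL_ROUND_TRIPS = 20  # e.g. metadata lookups and locks before a file opens
for rtt_ms in (2, 10, 50):   # nearby region vs. progressively more distant regions
    wait_ms = SEQUENTIAL_ROUND_TRIPS * rtt_ms
    print(f"RTT {rtt_ms:>2} ms -> {wait_ms} ms spent waiting on the network")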

“Can you hear me now?”

This latest race is strikingly similar to how wireless carriers have competed with each other: first price drops, then tactical discounts or bundles, and finally a battle over coverage. Almost every television commercial from a wireless carrier showed the obligatory coverage map and how the carrier was expanding it by adding more towers. Coverage is the war wireless carriers fight to gain market share, and the combination of inexpensive cellular service with expanded coverage has since enabled consumers to eliminate the landline phone at home and at the office.

Just as expanded wireless coverage and inexpensive service have enabled people to drop their home phones, expanding the number of cloud data centers, along with inexpensive cloud services, will enable some businesses to eliminate infrastructure in the office entirely by collapsing all IT infrastructure into the cloud.

Software-as-a-Service and Platform-as-a-Service have been quite successful at serving on-premises users entirely from the cloud, but the same has not really been possible with Infrastructure-as-a-Service (IaaS) because of latency. This is especially true for storage, when cloud storage is used to eliminate an on-premises NAS or file system. That is very different from replicating data from an on-premises NAS or file system to a file sync-and-share solution for sharing files externally or accessing them remotely. The cloud-first approach is to remove all unstructured data, NAS or file systems, from an office completely, and have users and applications access that file data directly from the cloud. For that to happen, the distance in time between an office and a cloud data center must be 10 milliseconds or less.
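To make the 10-millisecond test concrete, here is a minimal Python sketch that times a TCP handshake from an office to a few candidate regions and keeps only those inside the budget. The endpoint names are hypothetical placeholders, not any provider's real addresses, and a TCP connect is only a rough proxy for the round-trip latency that file operations would actually see.

# A minimal sketch of checking which candidate cloud regions sit within a
# 10 ms round-trip budget of this office. Hostnames below are placeholders.
import socket
import time

LATENCY_BUDGET_MS = 10.0

# Hypothetical region endpoints; substitute the storage endpoints you actually use.
CANDIDATE_REGIONS = {
    "provider-a-us-west": ("us-west.example-cloud-a.com", 443),
    "provider-b-us-central": ("us-central.example-cloud-b.com", 443),
    "provider-c-us-east": ("us-east.example-cloud-c.com", 443),
}

def tcp_rtt_ms(host, port, timeout=2.0):
    """Approximate round-trip time by timing a TCP handshake."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

def regions_within_budget():
    """Return the regions this office could use for direct cloud file access."""
    usable = {}
    for region, (host, port) in CANDIDATE_REGIONS.items():
        try:
            rtt = tcp_rtt_ms(host, port)
        except OSError:
            continue  # unreachable region: skip it
        if rtt <= LATENCY_BUDGET_MS:
            usable[region] = rtt
    return usable

if __name__ == "__main__":
    for region, rtt in regions_within_budget().items():
        print(f"{region}: {rtt:.1f} ms (within the {LATENCY_BUDGET_MS} ms budget)")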

Even with the new data centers all the biggest cloud providers are adding, not all metropolitan areas are covered by a single provider, not even close. As an example, a large entertainment company recently wanted to collapse 100 offices into the cloud using a single cloud provider. Because of latency, it was able to collapse only 83 of those offices into the cloud. The other 17 could still leverage cloud storage, but needed to cache the active data on-premises to deliver the performance users and applications in those offices required. If this customer could leverage data centers from all providers instead of just one, it might have been able to collapse the remaining 17 offices into the cloud as well, since data centers from other providers might have been within the 10-millisecond radius.

Today in the United States, if organizations are willing to cache active data to in-cloud NAS systems across multiple clouds, they can meet the 10-millisecond standard, even if a single cloud houses all the inactive data. This works because some cloud providers have a large presence on the East and West Coasts while others have a large presence in the central and southeastern United States. It is analogous to being able to use wireless coverage from every cell tower of every wireless provider.
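Building on the probe above, the placement decision itself can be sketched in a few lines: for each office, pick the lowest-latency region from any provider to hold the active-data cache, and fall back to an on-premises cache when nothing is inside the 10-millisecond budget. The office names, region names, and latency figures below are invented for illustration.

# A minimal sketch, assuming per-office latencies have already been measured
# (e.g. with the probe above). Offices with no region inside the budget keep
# an on-premises cache; everything else lands in the nearest cloud region.
LATENCY_BUDGET_MS = 10.0

# office -> {region: measured round-trip latency in ms}  (illustrative numbers)
measured = {
    "chicago":   {"provider-a-us-west": 48.0, "provider-b-us-central": 7.5},
    "seattle":   {"provider-a-us-west": 6.0,  "provider-b-us-central": 41.0},
    "anchorage": {"provider-a-us-west": 38.0, "provider-b-us-central": 72.0},
}

def place_active_cache(latencies):
    """Pick the lowest-latency region per office, regardless of provider."""
    placement = {}
    for office, regions in latencies.items():
        region, rtt = min(regions.items(), key=lambda item: item[1])
        placement[office] = region if rtt <= LATENCY_BUDGET_MS else "on-premises cache"
    return placement

print(place_active_cache(measured))
# {'chicago': 'provider-b-us-central', 'seattle': 'provider-a-us-west',
#  'anchorage': 'on-premises cache'}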

No Need to Wait with Multiple Clouds

The cloud data center race will, eventually, dramatically accelerate the amount of IT infrastructure subsumed by the cloud. As each provider adds more data centers, more metropolitan areas will fall within the 10-millisecond radius, where performance is essentially the same as if the infrastructure were located in the office.

However, most organizations can accomplish this today without waiting for each provider to complete its data center build-out. Across all the major cloud vendors there are already enough cloud data centers to cover most metropolitan areas. All that is required is data access from in-cloud NAS systems that span multiple cloud providers together as a mesh.

About the Author:

Barry Phillips is responsible for Marketing and Product Management at Panzura. Barry was most recently the CMO of Egnyte. Prior to Egnyte, Barry was the CMO of Wanova (acquired by VMware), where he led Marketing, Sales, and Business Development. Barry came to Wanova from Citrix Systems, where he was the Group Vice President and GM of the Delivery Center Product Group. Barry has held executive roles at Net6, Nortel Networks, Everypath, and Cranite Systems. He began his career in United States Naval Aviation, where he logged over 1,000 hours in the P-3C Orion. Barry holds a Bachelor of Science in Computer Science from the United States Naval Academy and a Master of Science in Computer Science from UCLA.
