In the late 1990s and early 2000s, it was often said that server virtualization would never be deployed in production. History proved this wrong: adoption became widespread, and VMware is now a standard in IT organizations.
Today, we are seeing the same trend in enterprise use of public cloud infrastructure, including multiple clouds. Just as VMware did for servers, a multi-cloud data management strategy abstracts the underlying infrastructure so applications are free to move and scale, enabling new capabilities, new agility and flexibility, and new economics. As with server virtualization, cost savings are generally the primary driver for hybrid and multi-cloud adoption, though the benefits are numerous.
Two clouds are better than one
A single namespace for accessing data is the key enabler of leveraging multiple clouds transparently. Multi-cloud use offers many of the same features as a single cloud, without some of the downsides, such as lock-in or temporary service interruptions.
Workloads and data are placed on the appropriate cloud by policies keyed to data attributes, putting each data set in the location best suited to serve it and the application's requirements. Multi-cloud is not just AWS plus Google; it can mean data distributed across private and public resources for redundancy, or to keep data closest to the application or its users. Because policies assign data to the right location automatically, applications operate without IT intervention, and business units get the agility and responsiveness they expect.
Policies can also assign resources to the lowest-cost cloud, with no appreciable difference to users. For example, when the price for Google Cloud Storage is lower than for Amazon S3, the system simply places data in Google, transparently to users and applications. If a specific cloud delivers unique value to a workload, such as genome analysis or media transcoding, a multi-cloud data management system can also choose to place data where it can best be leveraged. In this way, organizations can take advantage of the public cloud for "compute bursting".
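To make the idea concrete, here is a minimal sketch of a cost-based placement policy. Everything in it is an illustrative assumption: the provider names, the per-GB prices, and the `lowest_cost_target` helper are hypothetical, not any vendor's actual API or published pricing.

```python
from dataclasses import dataclass

@dataclass
class CloudTarget:
    name: str
    price_per_gb_month: float  # assumed example storage price, USD

def lowest_cost_target(targets, exclude=()):
    """Pick the cheapest eligible target for newly written data."""
    eligible = [t for t in targets if t.name not in exclude]
    return min(eligible, key=lambda t: t.price_per_gb_month)

# Hypothetical price list; real policies would refresh these from billing APIs.
targets = [
    CloudTarget("aws-s3", 0.023),
    CloudTarget("gcs", 0.020),
    CloudTarget("on-prem", 0.015),
]

# A sensitivity policy can pin data on-premises by excluding public clouds,
# or do the reverse: exclude on-prem and burst to the cheapest public cloud.
print(lowest_cost_target(targets).name)                        # on-prem
print(lowest_cost_target(targets, exclude=("on-prem",)).name)  # gcs
```

The point of the sketch is that placement is a pure function of data attributes and current prices, so it can run continuously without user or IT involvement.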
Replication across multiple locations is critically important to enterprises. Data is automatically replicated between geographically dispersed data centers, between public cloud buckets, or any combination of the two; if one location goes down, the data remains available. What's more, all data, regardless of location, is accessible from the single namespace. Aside from backup and disaster recovery, a multi-cloud replication policy can also serve collaborative workflows and teams that span regions.
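A replication policy of this kind can be sketched as a mapping from data attributes to target locations. The policy names, tags, and locations below are hypothetical, chosen only to illustrate the attribute-to-targets lookup, not any particular product's configuration.

```python
# Hypothetical replication policies: each maps a tag to the set of
# locations where matching objects should be kept.
POLICIES = {
    "eu-collab": ["dc-frankfurt", "gcs-eu"],   # cross-region collaboration
    "default":   ["dc-local", "aws-us-east"],  # baseline backup/DR pair
}

def replication_targets(tags):
    """Resolve an object's tags to the locations it should be replicated to."""
    for tag in tags:
        if tag in POLICIES:
            return POLICIES[tag]
    return POLICIES["default"]

print(replication_targets({"eu-collab"}))  # ['dc-frankfurt', 'gcs-eu']
print(replication_targets({"misc"}))       # falls back to the default pair
```

Because every object resolves to at least two locations, the loss of any single site leaves the data reachable through the same namespace.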
Data across multiple locations, including multiple clouds, should be consolidated in a single namespace for accessibility, as well as deeper insights, analysis and decision-making based on that data.
Unless it’s not
Not all enterprise data sets and workloads are a good fit for multi-cloud, or even for a single cloud. Some legacy applications expect their storage to be internal or local, or to appear so; these are typically "traditional" applications written before the cloud existed. The same may be true of specialized applications used in particular industries, or of custom-built solutions.
Security can be a drawback whenever data is stored off-premises, but a policy-driven multi-cloud strategy can turn that negative into a positive by ensuring sensitive data never leaves the data center. That said, public cloud providers like Amazon and Google boast more than a hundred times the security, development, and operations resources of most organizations, so it's fair to say they often do security better than internal IT departments.
Even in the ideal scenarios and use cases, some work must go into determining the right place for the right data and setting up policies. This is work an organization would likely do sooner or later anyway, to gain the benefits of cloud economics and to enable analytics and monetization of data. As much as there is to say for cloud and multi-cloud, most organizations will keep some amount of data on-premises; multi-cloud lets those enterprises bring the virtues of the public cloud into their own data centers, and that is where the real cost savings begin to accrue.
Clearly the main benefit of multi-cloud is the elastic, agile use of compute and storage resources. With the right solution, consumption-based pricing can also extend to on-premises infrastructure: pay only for what's used, and skip upfront investment in capacity that will be needed only intermittently, or not for months or years. Multi-cloud can start small and scale huge, both on-premises and in the public cloud.
In this sense, multi-cloud platforms are often "software-defined": on-premises, they enable use of the server and storage infrastructure already in place, along with standard hardware, to further contain costs. A vendor that requires specific appliances or hardware negates the value of multi-cloud and the flexibility afforded by truly open data management. Proprietary systems can still work, and can still claim "compatibility" with standard access protocols and cloud APIs, but they are not architecturally ideal. To make cloud bursting work, and otherwise make use of data in the public cloud, on-premises data must not be locked inside a proprietary or single-vendor platform.
This is also the case with the above example of multi-cloud/multi-location replication. The ability to replicate across geographies, without paying a premium, is a fundamental benefit of multi-cloud. Different data management products have different ways of handling replication. Enterprises that need geographic redundancy should not have to make decisions about whether data is valuable enough to justify its protection.
Multi-cloud use cases are evolving, but the underlying objective is likely to stay the same: let the business value determine where data and workloads live, when they move, and how they should be managed. As VMware eliminated the need to manage each physical component separately, multi-cloud is eliminating the need to manage data across on-premises, remote data centers and public clouds separately.
About the Author:
Erik Pounds is Vice President of Marketing at San Francisco-based SwiftStack. Erik is an avid technology geek, attacks opportunities by building things, and currently leads product marketing efforts at SwiftStack. Prior to SwiftStack, he was a vice president at BitTorrent, ran product management at Drobo, and held various product and marketing roles at Brocade and EMC. He graduated from the University of San Francisco, where he captained their Division 1 Golf Team.