By Jerome Wendt, President and Founder of DCIG

Recently, DCIG worked closely with Unitrends to compile research on cloud cost and complexity creep. The concerns companies associate with cloud adoption are well-founded, and the most successful companies control and minimize these costs by maintaining a solid grasp of how they back up and recover their data in the cloud.

Storage

Companies using cloud service providers such as Amazon Web Services (AWS) and Microsoft Azure can expect a recurring monthly storage cost. These monthly fees vary based upon the amount and type of storage used.

For example, AWS prices its multiple storage tiers per GB per month, and companies pay relatively little per GB regardless of which storage tier they use. Even so, those that take the time to allocate resources and manage the storage tiers on which their backup data lands within Amazon's S3 storage offering can see significant savings.

A company that stores 10TB of backup data with AWS will, by default, store this data on Amazon's Standard S3 storage offering. This will cost $250/month. However, if that same company tiers its data across Amazon's Infrequent Access and Glacier storage tiers, it could reduce its monthly storage costs by over 50 percent. This storage tiering feature is available at no charge. It behooves a company to utilize this feature if it can, especially if it will use the cloud for long-term backup data retention.
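The arithmetic behind that savings is straightforward. The following Python sketch illustrates it; the per-GB rates and tier names are illustrative assumptions only, as actual AWS pricing varies by region and changes over time:

    # Hypothetical per-GB monthly rates, for illustration only; actual
    # AWS pricing varies by region and changes over time.
    RATES = {"standard": 0.025, "infrequent_access": 0.0125, "glacier": 0.004}

    def monthly_storage_cost(gb_by_tier):
        """Sum the monthly cost of each tier's allocation."""
        return sum(gb * RATES[tier] for tier, gb in gb_by_tier.items())

    # All 10TB left on S3 Standard:
    print(monthly_storage_cost({"standard": 10_000}))  # 250.0

    # The same 10TB tiered by how often it is accessed:
    tiered = {"standard": 1_000, "infrequent_access": 3_000, "glacier": 6_000}
    print(monthly_storage_cost(tiered))  # 86.5, well over a 50 percent reduction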

Data Egress Fees

An unexpected cost that may catch companies using AWS to store and recover backup data off guard is the fee associated with data transfer out, commonly known as data egress. Providers such as AWS do not charge for data uploaded via the internet, so users may be unaware of any fees until they retrieve their data for restores.

But almost as soon as companies retrieve data from the AWS cloud to their site, those costs become clear and, over time, can add up. Users currently can retrieve the first gigabyte (GB) of data at no charge; however, the cost rises to eleven cents per GB after that. After retrieving the first 10TB of data, the per-GB cost declines slightly. Companies incur these data egress fees for every event, and the charges recur with repeating events, such as multiple recoveries or moving data between different regions within the provider's own cloud.
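A short sketch shows how these fees compound across repeated restores. The eleven-cents figure comes from above; the lower rate beyond 10TB is an illustrative assumption, since the exact discount varies:

    # Egress cost model. The $0.11/GB rate is cited above; the $0.09/GB
    # rate beyond 10TB is an assumption, as the exact discount varies.
    def egress_cost(gb):
        billable = max(gb - 1, 0)           # first GB is free
        first_tier = min(billable, 10_000)  # up to ~10TB at $0.11/GB
        remainder = billable - first_tier   # slightly cheaper beyond that
        return first_tier * 0.11 + remainder * 0.09

    print(egress_cost(500))      # one 500GB restore: ~$54.89
    print(4 * egress_cost(500))  # four such restores in a month: ~$219.56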

Additionally, companies often think first about the storage costs they incur when they store their backup data in the cloud. But the compute costs incurred when they recover VMs in the cloud, as well as the complexity associated with creating and managing virtual private clouds within the cloud service provider's environment, can get overlooked.

Compute

Companies planning to use AWS for disaster recovery need to understand the various types of virtual machines available for recovering applications, along with the costs associated with each VM type. While AWS does offer select VMs at no cost, these are usually appropriate only for testing purposes. For planned DR tests or real disaster recoveries, AWS provides different VM types, each with its own pricing.

On-Demand instances often come to mind first, and there are many of them: AWS currently makes over 90 different On-Demand VM configurations available. AWS also makes VMs available as either Reserved or Spot instances. While companies can acquire these at a lower price, they come with certain restrictions, and some are not always available. Users can, however, purchase dedicated physical machines from AWS to host their applications in the cloud and ensure guaranteed availability.

Due to the number of VM configurations and the variance in the hourly price AWS charges for each VM type, companies should match each application's requirements to the type of VM instance needed to recover it. They must then forecast the compute costs of these VMs to adequately calculate the total cost of recovering their applications with AWS.
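Such a forecast can start as a simple mapping from applications to instance types and run times. In the sketch below, the instance names, hourly rates, and durations are all illustrative assumptions:

    # Hypothetical hourly On-Demand rates; actual EC2 pricing depends on
    # instance type and region, and changes over time.
    HOURLY_RATE = {"m5.large": 0.096, "r5.xlarge": 0.252, "c5.2xlarge": 0.34}

    # Each application mapped to the instance type its recovery requires
    # and the hours it must run during a DR test or event (assumed values).
    apps = [
        ("web-frontend", "m5.large", 72),
        ("database", "r5.xlarge", 72),
        ("analytics", "c5.2xlarge", 24),
    ]

    total = sum(HOURLY_RATE[itype] * hours for _, itype, hours in apps)
    print(f"Forecast compute cost for one DR event: ${total:.2f}")  # $33.22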

Virtual Private Cloud (VPC) Management

As companies adopt the cloud, they must also determine, before they begin, how they intend to manage the virtual private clouds (VPCs) they will create with a cloud service provider such as AWS. At a minimum, companies should set up a corporate account and designate an in-house VPC administrator.

This administrator assumes responsibility for:

  • Creating groups;
  • Assigning different security settings to each group;
  • Creating user logins;
  • Putting user logins in each group; and
  • Overseeing management of the company's VPC in the cloud.

The administrator also manages and controls user access to AWS’ compute, network, and storage resources within the VPC. Additionally, this individual will need to monitor billing costs associated with corporate use of the AWS VPC resources.
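In practice, much of this setup can be scripted. Below is a minimal sketch using AWS's boto3 SDK; the group name, user name, and attached policy are illustrative assumptions, not recommendations:

    # Minimal IAM setup sketch (boto3). Names and the policy ARN are
    # illustrative; credentials and region come from the environment.
    import boto3

    iam = boto3.client("iam")

    # Create a group and assign it a security (permissions) policy.
    iam.create_group(GroupName="backup-operators")
    iam.attach_group_policy(
        GroupName="backup-operators",
        PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
    )

    # Create a user login and place it in the group.
    iam.create_user(UserName="jdoe")
    iam.add_user_to_group(GroupName="backup-operators", UserName="jdoe")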

Companies can permit the creation of individual AWS accounts, which are easier and faster to set up than corporate accounts. Yet with the creation of every AWS account (whether corporate or individual), AWS by default creates a VPC for each login, and that login gives each individual access to all the features and resources AWS has to offer. Further, there is no way to have billing and individual account management fall under a single corporate account. This makes it difficult to track costs and security when setting up multiple individual accounts.

When it comes to mastering the cloud, companies may think that accounting for compute, network, and storage costs, as well as the overhead of managing the virtual private clouds they create, is enough. Unfortunately, it is not: other budget busters exist that are more difficult to anticipate and forecast.

The Budget Buster Gotchas

Even when companies properly forecast the costs and complexities associated with their VPC management overhead, storage usage, and data egress fees, along with how many, how often, and what types of VMs they need to spin up in the cloud to recover applications, other costs can still blindside them.

For instance, VMs may need public IP addresses. The good news is that there is no cost associated with public IP addresses while the VMs are running. The gotcha occurs when companies shut down a VM and do not release its public IP addresses back to AWS for re-use. If they fail to release the public IP addresses, they continue to pay for this allocated but unused feature. While the cost each public IP address incurs is nominal (about two cents per hour per public IP address), incremental costs from oversights like this can be difficult to detect and correct.
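At two cents per hour, one forgotten address quietly adds roughly $14.40 per month. A short boto3 sketch can flag such addresses (region and credentials are assumed to come from the environment):

    # Sketch: list allocated public (Elastic) IP addresses that are not
    # associated with any instance and therefore still accrue charges.
    import boto3

    ec2 = boto3.client("ec2")
    for addr in ec2.describe_addresses()["Addresses"]:
        if "AssociationId" not in addr:  # allocated but unused
            print("Still paying for unattached address:", addr["PublicIp"])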

Another, even bigger, potential budget buster is the cost of a test or real DR event. In either scenario, companies will need to utilize cloud compute, database, storage, networking, and potentially many other resources for which they do not currently pay when they only store backup data in the cloud.

Adding insult to injury, they must recover and run their production data on elastic block storage (EBS), which costs more per GB than the object storage on which they store their backup data. The number of VMs, coupled with the length of time they run in the cloud and the amount of elastic block storage they use, can combine to send cloud costs for these DR events into the stratosphere.
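To see how quickly this compounds, consider a rough sketch of a week-long DR event. Every rate below is an illustrative assumption (EBS at $0.10/GB-month versus object storage at $0.025/GB-month, and a flat hourly VM rate):

    # Rough DR-event cost sketch; all rates are illustrative assumptions.
    data_gb = 10_000                         # 10TB of production data
    at_rest = data_gb * 0.025                # monthly cost as backup objects
    dr_days = 7
    ebs = data_gb * 0.10 * dr_days / 30      # pricier EBS while recovered
    vms = 20 * 0.10 * 24 * dr_days           # 20 VMs at an assumed $0.10/hr
    print(at_rest)    # ~$250/month just to store the backups
    print(ebs + vms)  # ~$569 more for a single week-long DR event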

These difficult-to-detect charges can mount over time, while a periodic test or real DR can produce unexpected, eye-popping numbers. Combined, they can result in companies spending much more money per month than they budgeted unless they actively monitor, audit, and understand their AWS billing statements.

Most companies will likely have neither the time nor the expertise for this type of ongoing, detailed billing analysis. Further, they may not take the time at the outset to forecast how much a test or real DR exercise in the cloud will ultimately cost, or even know how to accurately forecast for these two types of DR events.

 

About the Author

Jerome Wendt is the president and founder of DCIG, LLC, which he founded in 2007. An avid writer, he has written thousands of articles that have appeared in multiple magazines, online publications, and websites, and he is recognized as one of the foremost technology analysts in the enterprise data storage and data protection industries. Mr. Wendt covers topics related to enterprise and cloud infrastructures, including all-flash and hybrid arrays, cloud computing, cloud storage, data protection, hyperconverged infrastructures, and software-defined storage (SDS).