By Don Boxley, CEO and Co-Founder, DH2i (www.dh2i.com)

Downtime, whether planned or not, is becoming less and less acceptable at today’s pace of business. Even five minutes of inoperability can lead to considerable data and/or business loss. Regardless of compute environment, workload, or OS, availability is one of the most critical elements of operational success and, consequently, of business success.

Unfortunately, what many IT professionals are learning the hard way is that traditional options for high availability (HA) have limits. The continuous operational efficiency needed to capitalize on digital transformation should not monopolize an organization’s financial or personnel resources with interminable testing and retesting of availability.

What’s required is a new approach that dynamically transfers workloads in virtual environments based on the particular job at hand. Accomplishing this objective requires innate flexibility, vigilant avoidance of downtime, and a cost-effective methodology. In essence, what’s required is Smart Availability, which leverages and then improves upon the basic principles of HA to deliver the advantages mentioned above, and much more.

Smart Availability is the future of high availability and a critical component in the blueprint for creating business value through digital transformation.  

High Availability’s Inherent Drawbacks

By definition, high availability is the continuous operation of applications and system components. Traditionally, this goal has been achieved in a variety of ways, each with its own drawbacks. One of the more common is the failover, in which workloads are transferred from a primary system to a secondary one during scheduled downtime or after a failure. Clustering techniques are often used with this approach to make resources across systems (including databases, servers, and processors) available to one another. Clustering applies to both VMs and physical servers and helps failovers provide resilience against OS, host, and guest failures. Failovers also involve a degree of redundancy: high availability is maintained by keeping backup copies of system components. With VMs, redundant networking and storage options may be leveraged to cover system components or data copies.
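To make the failover idea concrete, here is a minimal sketch of a health-check loop that promotes a standby node when the primary stops responding. The node addresses, check interval, and promote_standby routine are hypothetical placeholders for illustration, not part of any particular HA product.

```python
import socket
import time

PRIMARY = ("primary.example.internal", 5432)   # hypothetical primary node
STANDBY = ("standby.example.internal", 5432)   # hypothetical standby node
CHECK_INTERVAL_SECONDS = 5
FAILURE_THRESHOLD = 3  # consecutive failed checks before failing over


def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to the node succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


def promote_standby() -> None:
    """Placeholder for the real promotion step (e.g., repointing a
    virtual IP or starting the service on the standby host)."""
    print("Promoting standby node to primary role")


def monitor() -> None:
    failures = 0
    while True:
        if is_reachable(*PRIMARY):
            failures = 0
        else:
            failures += 1
            if failures >= FAILURE_THRESHOLD:
                promote_standby()
                break
        time.sleep(CHECK_INTERVAL_SECONDS)


if __name__ == "__main__":
    monitor()
```

Real clustering stacks add quorum, fencing, and replication on top of this basic detect-and-promote pattern, which is where much of the cost and testing burden discussed below comes from.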

The most serious problem with many of these approaches is cost, particularly because there are instances in which high availability is simply unnecessary. Whether it is needed depends on how a server is actually used, how important it is, and which virtualization techniques are in play. Low-priority servers that don’t affect end users, such as those used for testing, do not need high availability; nor do servers whose recovery time objectives (RTOs) are significantly greater than their restore times. Certain high availability solutions, such as some of the more comprehensive hypervisor-based platforms, are indiscriminate in this regard, so users can end up paying for high availability on components that don’t require it. Traditional high availability approaches also demand continuous testing that drains human and financial resources, and neglecting that obligation can result in unplanned downtime. Finally, arbitrarily implementing redundancy for system components broadens an organization’s data landscape, creating more copies and more potential weaknesses for security and data governance.
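As a rough illustration of the RTO comparison above, the snippet below flags servers whose measured restore time already fits comfortably inside their recovery time objective, in which case paying for always-on clustering may not be justified. The server inventory and its numbers are invented purely for illustration.

```python
from dataclasses import dataclass


@dataclass
class Server:
    name: str
    rto_minutes: float       # recovery time objective agreed with the business
    restore_minutes: float   # measured time to rebuild/restore from backup


# Hypothetical inventory; values are illustrative only.
inventory = [
    Server("sql-prod-01", rto_minutes=5, restore_minutes=45),
    Server("qa-test-03", rto_minutes=480, restore_minutes=60),
]

for server in inventory:
    # If the server can be restored well within its RTO, a simple
    # backup/restore strategy may be cheaper than clustering-based HA.
    needs_ha = server.restore_minutes >= server.rto_minutes
    verdict = "needs HA" if needs_ha else "backup/restore is sufficient"
    print(f"{server.name}: {verdict}")
```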

Realizing Digital Transformation

Many of these virtualization-based measures for high availability are losing relevance because of digital transformation. To truly transform the way an organization does business with digitization technologies, those technologies must be deployed strategically. Conventional high availability tactics simply do not allow the fine-grained flexibility needed to optimize business value from digitization. Digital transformation means accounting for the varied computing environments of Linux and Windows operating systems alongside containers. It means integrating an array of legacy systems with newer ones explicitly designed to handle the arrival of big data and modern transaction systems.

Most of all, it means aligning that infrastructure with business objectives in a way that adapts to evolving domain and customer needs. Such flexibility is critical to optimizing IT processes around the goals of end users. The reality is that most traditional methods of high availability simply add to the infrastructural complexity of digital transformation without addressing the primary need: adapting to changing business requirements. In the wake of digital transformation, organizations need to streamline their various IT systems around domain objectives rather than bending those objectives to fit the systems, which only decreases efficiency while increasing cost.

Smart Availability

Smart Availability is ideal for digital transformation because it enables workloads to always run on their best execution venue (BEV), i.e., the environment best suited to the job at hand. It couples this advantage with the continuous operation of high availability, but takes a profoundly different approach in doing so. Smart Availability takes the central idea of high availability, dedicating resources across systems to prevent downtime, and extends it to moving workloads in order to maximize competitive advantage. It enables organizations to seamlessly move workloads between operating systems, servers, and physical and virtual environments with minimal downtime, if any. The foundation of this approach is the capacity of Smart Availability technologies to move workloads independently of one another, something traditional physical and virtualized approaches to workload management cannot do. By decoupling an array of system components (application workloads, containers, services, and file shares) without forcing standardization on a single OS or database, these technologies transfer each workload to the environment that works best for it.

It’s critical to remember that this judgment call is predicated on how best to achieve a defined business objective. Furthermore, these technologies provide this flexibility at the level of individual instances, ensuring negligible downtime and a smooth transition from one environment to another. The use cases for this instantaneous portability are plentiful: organizations can use these techniques for uninterrupted availability, integration with new or legacy systems, or the incorporation of additional data sources. Most of all, they can do so with the assurance that the underlying technologies’ intelligent routing is selecting the optimal setting in which to execute each workload. Once properly designed, the process takes no longer than a simple stop and start of a container or an application.
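The logic described above can be pictured as a simple scoring loop: evaluate each candidate venue against the business objective, then relocate the workload with a stop on the old host and a start on the new one. Everything in this sketch, including the venues, the scoring weights, and the stop/start stand-ins, is a hypothetical illustration of the idea rather than any vendor’s actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Venue:
    name: str
    cost_per_hour: float
    latency_ms: float
    meets_compliance: bool


def score(venue: Venue) -> float:
    """Lower is better; the weights are illustrative, not prescriptive."""
    if not venue.meets_compliance:
        return float("inf")  # never place the workload here
    return venue.cost_per_hour * 10 + venue.latency_ms


def relocate(workload: str, source: Venue, target: Venue) -> None:
    # In practice these would be container or service stop/start calls;
    # here they are stand-ins showing the shape of the operation.
    print(f"stop  {workload} on {source.name}")
    print(f"start {workload} on {target.name}")


venues = [
    Venue("on-prem-windows", cost_per_hour=1.2, latency_ms=4, meets_compliance=True),
    Venue("cloud-linux", cost_per_hour=0.7, latency_ms=9, meets_compliance=True),
    Venue("edge-node", cost_per_hour=0.5, latency_ms=2, meets_compliance=False),
]

current = venues[0]
best = min(venues, key=score)
if best is not current:
    relocate("orders-db", current, best)
```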

The Evidence is Compelling

For organizations looking to reap all the IT and business benefits of high availability with greater speed, efficiency, and flexibility, and at a lower cost, the evidence is impossible to ignore: Smart Availability is the answer.

About the Author: 

Don Boxley Jr. is the CEO and a co-founder of DH2i. Prior to DH2i, Don held senior marketing roles at Hewlett-Packard where he was instrumental in sales and marketing strategies that resulted in significant revenue growth in the scale-out NAS business. Boxley spent more than 20 years in management positions for leading technology companies, including Hewlett-Packard, CoCreate Software, Iomega, TapeWorks Data Storage Systems and Colorado Memory Systems.  Boxley earned his MBA from the Johnson School of Management, Cornell University.