Colo Data Centers
Peter Panfil, Vice President, Global Power at Emerson Network Power, says:
Compared to traditional power system architectures such as 2N or 2N+1, a reserve power system architecture offers comparable availability, greater efficiency and scalability, and higher resource utilization at lower cost. Reserve architectures can also be designed to enable rapid deployment and modular growth. These benefits have made reserve power systems very attractive to colo data centers and cloud hosting facilities because they support a business model that depends on providing exceptional customer service at the best possible cost.
While enterprise data centers may operate on a different business model and a smaller scale than many cloud and colo facilities, they can still benefit from implementing reserve architectures. The benefits of a reserve architecture aren’t limited to large data centers, and it’s hard to imagine a data center manager who wouldn’t want to achieve high availability and efficiency, reduced capital costs, and greater scalability.
How Reserve Architectures Work
A reserve architecture typically creates an N+1 or N+2 architecture within the AC power system and maintains availability by using static transfer switches (STSs). The STS allows a redundant (or “reserve”) system to be brought online to pick up the load from a primary system if there is a failure or the power system needs maintenance.
Any data center that employs a single bus can deploy a reserve architecture. For example, a Tier 2 data center delivering power via a single bus to a single distribution path is a good candidate for switching to a reserve architecture to improve availability. The key is the use of a downstream STS. The STS uses the primary bus as its main power source and the reserve system as its alternate source. If the primary bus goes offline, the STS transfers the load to the reserve UPS system, and the IT systems stay on protected power, giving the Tier 2 data center a level of availability it did not previously enjoy.
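The source-selection behavior described above can be sketched in a few lines of Python. This is a minimal illustrative model only; the class and attribute names are hypothetical and do not correspond to any real STS controller interface:

```python
class StaticTransferSwitch:
    """Minimal model of an STS choosing between a primary and a reserve bus.

    Illustrative sketch only -- not a real controller API.
    """

    def __init__(self):
        self.primary_ok = True   # primary bus healthy and online
        self.reserve_ok = True   # reserve UPS system healthy and online

    def active_source(self):
        # Prefer the primary bus; fall back to the reserve system only
        # when the primary is offline (fault or scheduled maintenance).
        if self.primary_ok:
            return "primary"
        if self.reserve_ok:
            return "reserve"
        return "unprotected"  # both sources down: load loses protected power


sts = StaticTransferSwitch()
print(sts.active_source())   # primary carries the load
sts.primary_ok = False       # primary taken offline for maintenance
print(sts.active_source())   # STS transfers the load to the reserve
```

The point of the model is the ordering of the checks: the load never sees unprotected power unless both the primary bus and the reserve system are unavailable at the same time.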
Now, what about higher-tier data centers? Managers of these data centers often select a dual-bus architecture because it provides the required level of fault tolerance and concurrent maintainability. It traditionally features multiple utility feeds, generators, UPS systems and power distribution systems supporting dual-powered IT equipment. When properly designed, the dual-bus system will eliminate every single point of failure. Maintenance can be performed on any component while continuing to power the load.
Providing the highest levels of availability has downsides, however, and one of them is low utilization rates for power system components. Utilization will be discussed below, but suffice it to say that using a reserve architecture can significantly improve UPS efficiency.
Why Colocation Service Providers Favor Reserve Architectures and You Might, Too
First, they save capital and operating costs. Increasing power system utilization translates into greater efficiency and lower energy costs. Utilization rates are inherently low on dual-bus power systems because of the way redundancy is achieved. At best, the dual-bus system operates at utilization rates of 50 percent, and utilization may be well below that depending upon the load on the data center.
In short, redundancy reduces system efficiency, which increases energy costs, and adds to system cost. Most UPS systems operate most efficiently when utilization is above 30 percent, and efficiency starts to drop off at 20 percent utilization. Switching to a reserve architecture can raise UPS system utilization to approximately 75 percent, with some reserve architectures achieving better than 85 percent utilization.
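The utilization gap can be made concrete with simple arithmetic. The load and module sizes below are illustrative assumptions, not figures from any particular site:

```python
def fleet_utilization(load_kw, module_kw, n_modules):
    # Average utilization across all installed UPS modules.
    return load_kw / (n_modules * module_kw)


LOAD = 2400    # kW of IT load (illustrative assumption)
MODULE = 800   # kW per UPS module (illustrative assumption)

# 2N dual-bus: the load needs 3 modules, and the second bus duplicates
# all 3, so 6 modules are installed in total.
dual_bus = fleet_utilization(LOAD, MODULE, 6)

# Reserve (N+1): 3 primary modules carry the load, and a single shared
# reserve module stands by behind the STS, so only 4 are installed.
reserve = fleet_utilization(LOAD, MODULE, 4)

print(f"dual-bus: {dual_bus:.0%}, reserve: {reserve:.0%}")
# -> dual-bus: 50%, reserve: 75%
```

The 50 percent and 75 percent results match the utilization figures cited above: the reserve design reaches higher utilization because one reserve module backs up all the primaries, instead of every module being duplicated.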
Just a one percent efficiency improvement at $0.10/kWh for a single 800 kVA UPS translates into approximately $10,000 in annual savings. In this case, an improvement of 25 or more percentage points makes a substantial difference in operating costs: $250,000 or more in annual energy cost savings. Multiply the savings by the number of UPS systems in use.
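The per-UPS arithmetic can be sketched as follows. The power factor and cooling-overhead figures are assumptions chosen for illustration (the article does not state them), and actual savings depend on site conditions and the utility rate:

```python
def annual_savings(ups_kva, power_factor, eff_gain, rate_per_kwh, cooling_overhead):
    """Rough annual energy savings from a UPS efficiency improvement.

    eff_gain is the improvement in fractional terms (0.01 = one point);
    cooling_overhead approximates the cooling energy no longer needed to
    reject the heat the UPS no longer wastes (assumed value).
    """
    load_kw = ups_kva * power_factor
    kwh_saved = load_kw * eff_gain * 8760  # hours in a year
    return kwh_saved * rate_per_kwh * (1 + cooling_overhead)


# 800 kVA UPS, assumed 0.9 power factor, one-point efficiency gain,
# $0.10/kWh, and an assumed 50% cooling overhead:
print(round(annual_savings(800, 0.9, 0.01, 0.10, 0.5)))  # -> 9461
```

Under these assumed inputs the result lands near $9,500 per year, in the neighborhood of the roughly $10,000 figure cited above.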
Second, reserve architectures help colocation and hosting facilities meet their service level agreements, which may promise 100 percent uptime. Never having to notify a customer that their servers will be on unprotected power during maintenance is a major advantage. Availability, availability, availability.
Third, with reserve power system architectures in place, colocation facilities can scale up smoothly as new customers are added. Because the architecture is built from single buses, the simplest path is to standardize on a bus design that can be deployed rapidly; growth then becomes a straightforward matter of repeating that deployment.
With a scalable power system, you can purchase and deploy assets based on predicted growth, conserving capital until real business growth spurs expansion.
Given their many benefits and ease of implementation, reserve architectures can be considered a power system best practice for enterprise data centers.
The Emerson Network Power white paper “Using a Reserve Power Architecture to Increase Data Center Infrastructure Utilization and Efficiency” covers reserve architectures in greater detail, including a comparison of dual-bus and reserve power system architectures.