
How to Calculate Cooling Requirements for a Data Center

April 16, 2019

Heat generation is a normal side-effect of running any electrical equipment, including data center equipment. In a data center, however, excessive heat buildup can damage your servers. Servers may shut down if temperatures climb too high, and regularly operating under higher-than-acceptable temperatures can shorten the life of your equipment.

A related problem is improper humidity. If the humidity level is too low, it can lead to electrostatic discharge, a sudden flow of electricity between two objects that can damage equipment. If the humidity level rises too high, it can cause condensation and corrosion of your equipment. Contaminants such as dust are also more likely to gather on equipment in high humidity, reducing heat transfer. To help prevent these issues, keep your data center at the correct temperature and humidity, which you can accomplish through the use of a cooling system.


Data Center Cooling Standards

The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) publishes guidelines for the temperatures at which you can reliably operate a data center. The most recent recommendation for most classes of information technology (IT) equipment is a temperature between 18 and 27 degrees Celsius (°C) or 64 and 81 degrees Fahrenheit (°F), a dew point (DP) of -9°C DP to 15°C DP and a maximum relative humidity (RH) of 60 percent. Those recommendations apply to equipment in the ASHRAE categories of A1 to A4.

ASHRAE also provides specific recommendations for its various classes of equipment. These recommendations apply when the equipment is powered on, and they cover IT equipment rather than power equipment. The class envelopes are listed below, followed by a short code sketch.

  • For class A1, the recommended temperature is 15 to 32°C. The recommended dew point and relative humidity ranges are -12°C DP and 8 percent RH to 17°C DP and 80 percent RH.
  • For class A2, the recommended temperature is 10 to 35°C. The recommended dew point and relative humidity ranges are -12°C DP and 8 percent RH to 21°C DP and 80 percent RH.
  • For class A3, the recommended temperature is 5 to 40°C. The recommended dew point and relative humidity ranges are -12°C DP and 8 percent RH to 24°C DP and 85 percent RH.
  • For class A4, the recommended temperature is 5 to 45°C. The recommended dew point and relative humidity ranges are -12°C DP and 8 percent RH to 24°C DP and 90 percent RH.
  • For class B, the recommended temperature is 5 to 35°C. The recommended dew point and relative humidity ranges are 8 percent RH to 28°C DP and 80 percent RH.
  • For class C, the recommended temperature is 5 to 40°C. The recommended dew point and relative humidity ranges are 8 percent RH to 28°C DP and 80 percent RH.
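
If you monitor facility conditions programmatically, the class envelopes above translate naturally into a lookup table. The following Python sketch uses the A1 to A4 values as listed; the table and function names are illustrative, and it checks the temperature, dew point and humidity bounds independently, which is a simplification of ASHRAE's actual combined psychrometric envelope.

```python
# Illustrative lookup table for the class envelopes listed above (A1-A4).
# Bounds are checked independently here; the real ASHRAE envelope is a joint
# psychrometric region, so treat this as a rough first-pass check only.
ASHRAE_ENVELOPES = {
    "A1": {"temp_c": (15, 32), "dew_point_c": (-12, 17), "rh_pct": (8, 80)},
    "A2": {"temp_c": (10, 35), "dew_point_c": (-12, 21), "rh_pct": (8, 80)},
    "A3": {"temp_c": (5, 40),  "dew_point_c": (-12, 24), "rh_pct": (8, 85)},
    "A4": {"temp_c": (5, 45),  "dew_point_c": (-12, 24), "rh_pct": (8, 90)},
}

def within_envelope(equipment_class, temp_c, dew_point_c, rh_pct):
    """Check whether measured conditions fall inside a class envelope."""
    env = ASHRAE_ENVELOPES[equipment_class]
    return (env["temp_c"][0] <= temp_c <= env["temp_c"][1]
            and env["dew_point_c"][0] <= dew_point_c <= env["dew_point_c"][1]
            and env["rh_pct"][0] <= rh_pct <= env["rh_pct"][1])

print(within_envelope("A2", 24, 10, 45))  # True: well inside the A2 envelope
print(within_envelope("A1", 35, 10, 45))  # False: 35°C exceeds the A1 maximum
```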

In previous versions of its guidelines, ASHRAE had recommended a narrower temperature range. The recommendations primarily considered reliability and uptime rather than energy costs. As data centers started to focus more on energy-saving techniques, ASHRAE published classes that allowed a wider temperature range.

Some older equipment may be designed to earlier versions of the ASHRAE standard. When a data center has a mix of older and newer equipment, it can be more challenging to determine which recommendations to use. In that case, you need to find a temperature and humidity range that will work for all the equipment in your facility.


Calculating Total Cooling Requirements for Data Centers

Once you decide on an ideal temperature range, you need to determine the heat output of your system so you can figure out how much cooling capacity you need. To do this, estimate the heat output of all the IT equipment and other heat sources in your data center.

Determining this will help you choose a cooling system that can reliably meet your needs without overspending on capacity you don't need. Using the method described below, anyone can calculate a data center's cooling needs and help protect its equipment and data. Here's how to calculate cooling requirements for your data center.


Measuring Heat Output

Heat is energy and can be expressed in units including British thermal units (BTU), calories and joules. Heat output is a rate of heat transfer and can be measured in BTU per hour, tons of refrigeration or joules per second, which is equal to watts.

Having so many different measures to express heat and heat output can cause some confusion, especially if multiple measurement units are used together. Currently, there’s a movement toward making the watt the standard way to measure heat output. BTU and tons are starting to be phased out.

You may still have some data that uses other measurements. If you have data that uses multiple units, you’ll need to convert them into a common format. You can convert them to the standard of the watt, or you may want to convert them into whichever measurement is most common in your data. Here’s how to make some conversions you may need:

  • To convert BTU per hour into watts, multiply by 0.293.
  • To convert tons into watts, multiply by 3,517.
  • To convert watts to BTU per hour, multiply by 3.41.
  • To convert watts to tons, multiply by 0.000284.
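
For convenience, these conversions are easy to wrap in a few helper functions. The Python sketch below simply applies the rounded factors from the list above; the function names are illustrative.

```python
# Conversion helpers using the factors above (1 ton of refrigeration ~ 3,517 W).
WATTS_PER_TON = 3517.0

def btu_per_hr_to_watts(btu_hr):
    """Convert BTU per hour to watts."""
    return btu_hr * 0.293

def watts_to_btu_per_hr(watts):
    """Convert watts to BTU per hour."""
    return watts * 3.41

def tons_to_watts(tons):
    """Convert tons of refrigeration to watts."""
    return tons * WATTS_PER_TON

def watts_to_tons(watts):
    """Convert watts to tons of refrigeration."""
    return watts / WATTS_PER_TON

# Example: express a 5 kW heat load in the other units.
print(watts_to_btu_per_hr(5000))  # ~17,050 BTU per hour
print(watts_to_tons(5000))        # ~1.42 tons of refrigeration
```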

Almost all the power IT equipment consumes from its AC power mains is converted into heat, while the power sent through data lines is negligible. Because of this, the thermal output of a piece of equipment in watts is essentially equal to the unit's power consumption. Data sheets sometimes also provide heat output in BTU per hour, but you only need to use one of these numbers in your calculations, and it's typically easiest to use watts.

There is one exception to this rule — voice over internet protocol (VoIP) routers. As much as one-third of the power these routers consume may be sent to remote terminals, causing their heat output to be lower than the power they consume. The difference between the heat output and power of VoIP routers is typically not enough to make a significant difference in your calculation, but you can include it if you want a more precise outcome.

Determining the Heat Output of a Complete System

To calculate the total heat output of a system, such as a data center, you simply need to add up the heat output of all of the components in the system. In a data center, these components include the IT equipment and other devices such as uninterruptible power supply (UPS) systems, power distribution systems and air conditioning units. It also includes lighting and people. There are some simple rules you can use to determine the heat output of these components.


The heat output of UPS and power distribution systems consists of a fixed loss and a loss that is proportional to operating power. These losses are relatively consistent across all brands and models of this type of equipment. You can also use standard values for the heat output of lighting and people. These values are estimates, but they're consistent enough that they won't cause significant error in your cooling requirement calculations.

The fans and compressors in air conditioning units create a substantial amount of heat, but that heat is released into the outdoors rather than into the data center. Because of this, air conditioners don’t add to the thermal load of the data center. The heat they produce does, however, impact their efficiency. You should account for this loss in efficiency when sizing your air conditioner.

The other pieces of data you’ll need to calculate the cooling load are the floor area of the center in square feet and the rated electrical system power.

You can conduct an in-depth thermal analysis to determine the exact thermal output of every component in your data center, but a quick estimate using the standards listed above is all you need to calculate your data center or server room temperature requirements. The result of the estimate will fall within the typical margin of error of a more detailed analysis. Another advantage is that anyone can perform the calculation using these estimates, without special training.

To calculate the total heat output for your data center or server room, start by making the following calculations.

  • Add up the load power of all of your IT equipment. This number is equal to the heat output.
  • Use the following formula for a UPS system with a battery: (0.04 x Power system rating) + (0.05 x Total IT load power). If a redundant system is used, do not include the capacity of the redundant UPS.
  • Use the following formula for a power distribution system: (0.01 x Power system rating) + (0.02 x Total IT load power).
  • To calculate the heat output of your lighting, use the floor area in square feet or square meters. Then use one of the following formulas: 2.0 x floor area in square feet, or 21.53 x floor area in square meters.
  • To calculate the heat produced by people in your data center, multiply the maximum number of people who would be in the facility at one time by 100.

Next, add up the subtotals from the calculations listed above. This will give you the total heat source output of your room or facility.
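
Put together, the rules above reduce to a short calculation. Here is a minimal Python sketch of the estimate; the function name and the sample inputs are illustrative only.

```python
def total_heat_output_watts(it_load_w, ups_rating_w, dist_rating_w,
                            floor_area_sqft, max_people):
    """Estimate total heat output (watts) using the rules of thumb above."""
    it_heat = it_load_w                                   # IT heat output equals IT load power
    ups_heat = 0.04 * ups_rating_w + 0.05 * it_load_w     # UPS system with battery
    dist_heat = 0.01 * dist_rating_w + 0.02 * it_load_w   # power distribution system
    lighting_heat = 2.0 * floor_area_sqft                 # or 21.53 x area in square meters
    people_heat = 100.0 * max_people                      # roughly 100 W per person
    return it_heat + ups_heat + dist_heat + lighting_heat + people_heat

# Example: 20 kW IT load, 30 kW UPS and distribution ratings,
# 500 sq ft of floor area, at most 2 people present at once.
print(total_heat_output_watts(20_000, 30_000, 30_000, 500, 2))  # 24100.0 W
```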



Other Heat Sources

Until now, we haven't considered the possibility of heat from sources outside the data center, such as sunlight through windows and heat conducted in through outside walls. For many small data centers and server rooms, this isn't an issue, since many of them have no windows or outside walls. Some small data centers, however, do have them. Larger data centers typically have windows, walls and a roof exposed to the outside that let in additional heat.

If a significant portion of the walls or ceiling of your data center or room is exposed to the outdoors, or the room has a substantial number of windows, consult an HVAC professional, who can assess the maximum thermal load of the room. Add the load determined by the HVAC professional to the total heat output you calculated earlier.


Humidification

You’ll also need to consider supplemental humidification when calculating your cooling requirements. Air conditioning systems are designed to control humidity in addition to removing heat. In an ideal situation, the system would keep the amount of water in the air constant, eliminating the need for additional humidification. However, the air-cooling function of most air conditioning systems creates substantial condensation and, therefore, a decrease in humidity. You need to use supplemental humidification equipment to make up for this loss. This humidification equipment adds more heat load, which you’ll need to compensate for by increasing the capacity of your cooling equipment.

Under some conditions, though, the air conditioning system might not cause any condensation. In many wiring closets and small data rooms, the air conditioning system uses ducting to separate the bulk return air from the bulk supply air. In this setup, no condensation forms, and you don’t need additional humidification. Instead, the air conditioning unit can operate at 100 percent capacity.

In a large data center with lots of air mixing, however, the air conditioning system needs to supply air at lower temperatures to compensate for the recirculation of higher-temperature exhaust air. This causes significant dehumidification and a need for supplemental humidification, which decreases the performance of the air conditioning system. To account for this, you may need to oversize your air conditioning system by up to 30 percent.

So, if you have a small system with ducted air return, you likely do not need to account for humidity. If, however, you have a larger system that mixes the air, you may need to oversize your air conditioning system by up to 30 percent.


Other Oversizing Requirements

You will also need to add extra capacity to your cooling system to account for potential equipment failures and load growth.

No piece of equipment is infallible, and at some point, some of your cooling equipment will fail. You can’t afford to let the temperature in your data center increase due to a technical issue with an air conditioning unit, and you will also need to take each cooling unit offline for maintenance periodically. You can plan for these needs by adding redundant capacity to your cooling system. The rule of thumb is to add as much redundant capacity as your budget allows, and you should have at least N+1 redundancy, meaning one more unit than you need.

You should also add extra capacity to accommodate potential future load growth. The amount of data that companies are generating is expanding rapidly, and demand for data storage is growing with it. Oversizing your cooling capacity ahead of time will enable you to meet increasing demand more readily and expand more quickly in the future. The amount of oversizing you should add for potential growth depends on the forecasts for your data center.


Determining Air Cooling Equipment Sizes

Once you determine your cooling requirements by considering all of the factors listed above, you can accurately size an air conditioning system. These factors are:

  • The cooling load, or heat output, of your equipment
  • The heat output of your lighting
  • The heat output of personnel
  • The cooling load of your building, if necessary
  • Any oversizing required due to humidification effects
  • Oversizing for redundancy
  • Oversizing for potential future growth

Once you have all of the above numbers that are relevant to your data center, simply add them up. The result is the cooling capacity you need for your data center.

Often, the cooling capacity required is about 1.3 times the expected IT load plus any redundant capacity, especially for smaller server rooms. The cooling load you calculate may differ from this, though, especially if you operate a larger data center.
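
As a rough illustration, here is how the oversizing factors might be applied to the heat output calculated earlier. This Python sketch is illustrative only: it combines the humidification and growth allowances additively, which is one reasonable convention, and the 20 percent growth allowance is an assumed example, since the right figure depends on your own forecasts.

```python
import math

def required_cooling_capacity_w(heat_output_w, building_load_w=0.0,
                                humidification_oversize=0.30,  # up to 30% for mixed-air rooms
                                growth_oversize=0.20):         # assumed growth allowance
    """Apply the oversizing factors discussed above to the calculated heat output."""
    base_load = heat_output_w + building_load_w
    return base_load * (1.0 + humidification_oversize + growth_oversize)

def cooling_units_needed(sized_load_w, unit_capacity_w):
    """N+1 redundancy: one more unit than is needed to cover the sized load."""
    return math.ceil(sized_load_w / unit_capacity_w) + 1

sized = required_cooling_capacity_w(24_100)  # heat output from the earlier example
print(sized)                                 # 36150.0 W
print(cooling_units_needed(sized, 10_000))   # 5 units: 4 to cover the load, plus 1 spare
```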


Cooling Methods

There are many different products and techniques you can use to maintain a proper temperature in your data center. The best products for you will depend on your cooling load, the setup of your facility and other factors. Some of the cooling technologies you can choose from are:

  • Chillers: Chillers keep servers cool by removing heat from one medium and transferring it to another.
  • Cold aisle containment systems: Some airflow management solutions focus on containing cold air. Cold aisle containment solutions separate the supply airflow from cooling units, enabling more precise temperature control and increasing efficiency.
  • Hot aisle containment systems: Hot aisle containment solutions contain the hot exhaust air and return it directly to the air conditioning unit, preventing it from mixing with the supply air. Returning warmer air to the air conditioning units improves their performance.
  • Blanking panels: Blanking panels block off unused rack space, preventing air from recirculating through the gaps and helping you maintain a consistent temperature.
  • Directional or high-flow floor tiles: Directional and high-flow floor tiles help direct air toward your equipment, increasing the efficiency of your system and helping you make the most of your cooling capacity.
  • Downflow cooling: Downflow cooling systems direct cool air downwards from the bottom of the unit. Hot exhaust air enters through the top of the unit and then passes over the internal cooling mechanisms before entering the data center.
  • In-row cooling: In-row cooling units are installed in close proximity to the equipment they’re cooling. You can install in-row units on the floor or ceiling. In-row cooling solutions allow for a highly scalable cooling system and enable you to remove high heat loads quickly because of their proximity to your equipment.
  • Portable cooling: Portable cooling units allow you to add flexibility to your cooling system. You can add cooling capacity exactly where you need it at any time. You can use portable cooling units for both spot cooling and area cooling.
  • Rack door heat exchangers: Rack door heat exchangers are affixed directly to server racks. They remove heat from the rack’s exhaust air before discharging that air into the data center. They can feature active, passive or microchannel heat exchangers.
  • Rack-mounted cooling: You can install air conditioning units directly onto a rack, which enables exceptionally precise cooling.

The design of your cooling system can also have a significant impact on its efficiency and effectiveness, as can the layout of your data center and the placement of your cooling equipment. You can also design your system to take advantage of natural cooling if the climate where your data center is located allows.


Working With DataSpan

Properly calculating your cooling requirements is crucial to the reliable, cost-effective operation of your data center equipment. Excessive heat and humidity can cause equipment to shut down and reduce its overall lifespan. Luckily, you can calculate heat load in a room or data center using a fairly simple process that doesn’t require any specialized training or knowledge. You simply need to account for your equipment’s cooling load, building-related heat sources, lighting, people, humidification effects, redundancy and potential future growth.

Working with data center experts like those at DataSpan is another smart strategy for ensuring optimal cooling of your data center. We offer the latest cooling technologies and can also help you design, install and maintain your cooling system. We’ll work around your schedule to complete installations and maintenance in a way that minimizes disruptions to your operations. Working with the experts at DataSpan is a sure way to end up with an optimized, efficient and reliable system.

At DataSpan, we deliver customized cooling solutions to meet your company’s unique goals. Want to learn more about how we can help you to optimize your data center cooling? Contact us today at 800-660-3586 or click here to find your local DataSpan representative.
