Many companies rely on data center storage to keep their information secure in an on-site location, off-premises building or a combination of the two. With the amount of running equipment housed within these buildings, temperature regulation is essential. Between the heat from your IT loads, the external environment and weather patterns, many factors can influence the efficiency of your equipment.
What Is a Data Center Temperature Sensor?
Temperature sensors make it easy to monitor the environment in your facility and respond appropriately to any changes. With the proper placement, you’ll receive notifications whenever there’s a significant increase or decrease that might affect equipment efficiency. These sensors monitor temperature, humidity and airflow. The goal is to keep the temperature in your facility balanced, maintain humidity at a moderate level to prevent electrostatic discharge or condensation, and maintain free airflow throughout the server racks. It’s key to place these sensors in hot zones at the top, bottom and middle of racks to gain a complete picture of temperature status. Be sure to place sensors near the air conditioning equipment as well.
There are several types of temperature sensors that vary based on what you need to measure and the location. The main types include:
- Thermocouples: Thermocouples are a common temperature sensor solution utilized in industries from automotive to manufacturing. These tools function across a wide range of temperatures, generate their own voltage via the Seebeck effect, so they require no external excitation or power supply, and offer quick response times.
- Resistance temperature detectors: The resistance temperature detector (RTD) uses the predictable change in a metal’s electrical resistance to measure temperature in precision applications. RTDs come in two-, three- and four-wire options, with the four-wire option serving as the most accurate solution.
- Thermistors: The thermistor is typically a ceramic or polymer device whose resistance changes sharply with temperature. In the common negative temperature coefficient (NTC) type, resistance drops as the temperature rises.
- Semiconductor-based ICs: Semiconductor-based temperature sensors fall into two subcategories: local temperature sensors and remote digital sensors. Local sensors have analog or digital outputs, and digital outputs come with multiple interface options, including I2C, SMBus, 1-Wire and Serial Peripheral Interface (SPI).
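To make the RTD option above concrete, here is a minimal sketch of how a measured resistance is converted to temperature. It assumes a standard PT100 element and the IEC 60751 Callendar-Van Dusen coefficients, which apply to readings at or above 0 degrees Celsius; real monitoring hardware performs this conversion internally.

```python
import math

# IEC 60751 Callendar-Van Dusen coefficients for a PT100 RTD
# (valid for temperatures at or above 0 degrees C).
R0 = 100.0          # resistance in ohms at 0 degrees C
A = 3.9083e-3
B = -5.775e-7

def pt100_resistance(temp_c: float) -> float:
    """Resistance of a PT100 element at temp_c (for temp_c >= 0)."""
    return R0 * (1 + A * temp_c + B * temp_c ** 2)

def pt100_temperature(resistance: float) -> float:
    """Invert the quadratic to recover temperature from a measured resistance."""
    return (-A + math.sqrt(A ** 2 - 4 * B * (1 - resistance / R0))) / (2 * B)

# A four-wire measurement of about 138.51 ohms corresponds to roughly 100 C.
print(round(pt100_temperature(138.5055), 2))
```

The four-wire configuration mentioned above is the most accurate because it excludes the lead-wire resistance from the measurement, so the value passed to the conversion is the element resistance alone.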
This guide will take you through the importance of temperature and humidity sensors and monitoring, the best placements for data center temperature sensors and how to ensure you have a reliable cooling system.
Why Is Temperature Monitoring Important for Data Centers?
When it comes to running a reliable data center, environmental control is an essential part of maintenance and monitoring. Your centers contain a large amount of valuable technology and equipment, all of which are sensitive to outside elements. They depend on having a controlled environment to run at full efficiency.
Countering Weather and Climate
Since data centers are physical facilities, they are subject to a multitude of environmental factors. One of the most significant influences on how well your center runs is the weather. The location of the center can make a difference as to what elements you need to pay the most attention to.
For example, you’ll need to focus more on keeping the center cooled if it’s located in a hot region or adjusting your control systems in areas where the temperature fluctuates frequently. But heat isn’t the only concern when it comes to weather. You also need to consider the prevalence of storms, periods of heavy rainfall, humidity and the potential for any natural disasters. Regardless of the general climate, attentiveness is key to maintaining your data center’s internal environment.
Temperature and humidity levels can have severe effects on your equipment, which is why it’s crucial to keep them balanced and well within the unit thresholds. As far as temperature goes, computers have a minimum and maximum within which they operate most efficiently. If the center gets too hot or you can’t properly control airflow, it can result in outages and inefficiency.
While most center operators understand the importance of temperature regulation, some may overlook the necessity of controlling the amount of moisture in the air. A humid data center can result in condensation forming in your units — in hard drives, on motherboards and in sockets. If condensation does occur, it can damage your computers and potentially cause costly outages.
Balancing Rack Density
In addition to external factors like weather and general climate, the density of your racks also plays a significant role. Both high- and low-density racks can cause issues with equipment efficiency and security.
Some center managers may choose to leave areas of racks open for the sake of potential future growth. However, low-density racks mean gaps in airflow. These spaces can create pockets of cold air that mix with the exhaust from hot racks, resulting in inefficiency. They also require a higher level of monitoring and more energy to keep the air regulated — which means higher expenses.
According to Uptime Institute’s 2018 survey, more data centers are functioning with high rack densities. Of the respondents, 19% reported their highest server density as over 30 kilowatts (kW) per rack, with 5% accounting for over 50 kW. While properly configured high-density racks can create a more self-regulated airflow within your data center, they also produce higher amounts of heat. In turn, you’ll have to pay more attention to cooling management.
For high-density zones, the best option is to provide cooling solutions for each row or rack rather than relying on ambient temperature regulation.
Outage Prevention and Environment Monitoring
Climate control and temperature sensors help you monitor and adjust the conditions within your data center. They make it relatively straightforward to observe temperature and moisture content shifts for your data center as a whole and individual racks.
By keeping track of patterns and consistently monitoring the shifts, you can modify the internal conditions to support optimal efficiency and prevent downtime. But before you can place sensors or keep accurate tabs on your units, you have to understand the current airflow and threshold of your center.
Some units may have different tolerances than others when it comes to temperature, and layout can affect the threshold levels and airflow pattern. It’s critical to know the general temperature limits of your center as well as the individual needs of each computer or rack so you can provide effective cooling solutions.
As a starting point, it’s essential to meet all national and local codes that require compliance, such as the regulations of the National Fire Protection Association (NFPA) — particularly NFPA 75 and 76, which cover fire protection for information technology equipment and telecommunications facilities. Data center managers can also follow design, performance and green standards for optimal environment control, as well as international infrastructure standards and certifications, such as those of the American National Standards Institute (ANSI).
Wireless sensor networks (WSNs) can also help you retrofit your center to create a more energy-efficient cooling environment. Developed as early as 2001, energy efficiency retrofitting involves installing a meshed system of individual sensors linked by a common wireless network. WSNs actively measure temperatures within your data center and allow your operator to gain a more in-depth view of environmental conditions and trends.
In-Rack vs. In-Room Placement
Having an understanding of your layout and how it affects the airflow and the average temperature is essential to installing the best solutions possible. To have optimal control over your data center’s internal temperature, you need to place sensors in areas that will provide the most accurate and thorough readings.
Depending on your rack density, room size and preferences, there are two major options for sensor placement:
The first option is to place sensors within your racks. Air temperatures can vary from rack to rack or even in different areas of the same group. By putting them closer to the units, you can get accurate readings of the general, inlet and outlet temperatures.
If you want to measure the highest temperatures, you can place sensors at the tops of racks. The heat will naturally rise to the top of the room, and, in theory, the units closer to the floor will always have a lower reading.
However, this doesn’t mean you should neglect the other areas. With server rack temperature sensors, you can place them at the top, bottom and center for more accurate readings and airflow mapping. The sensors at the top should serve as worst-case readings, while those placed around the center will garner a reading closer to the room average.
When you’re using in-rack sensors, it’s also essential to understand how air flows throughout your racks. The pattern should determine the optimal setup for any cooling solutions.
There are many possible computer thermal probe placements, allowing you to obtain readings of specific target areas. You can take accurate readings of the air intake and exhaust, or apply them to the internal processors of units to monitor the temperature of independent parts, such as CPUs and GPUs.
But it’s crucial to place your sensors away from the direct airflow entering and leaving your computers. These areas experience the most significant amount of fluctuation in terms of conditions. The proximity can affect readings, providing inaccurate temperatures and humidity data, and potentially setting off alarms unnecessarily.
For the most thorough level of protection and risk mitigation, it’s best to place multiple rack temperature sensors, with a focus on those located on the ends and in the center. Each detector should have a different threshold setting based on its placement and the temperature variation of the area surrounding it. With proper installation, you’ll be able to make a map of the conditions across the room, ensuring thorough monitoring and notification of any potential risks.
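The placement-specific thresholds described above can be sketched as a simple registry that maps each sensor to its own alarm limit. The sensor names, threshold values and fallback default here are all hypothetical, chosen only to illustrate the idea that top-of-rack sensors (the worst-case readings) tolerate higher limits than mid- or bottom-rack sensors.

```python
# Hypothetical per-sensor alarm thresholds in degrees Fahrenheit: sensors near
# the top of a rack naturally read hotter, so they get higher limits.
THRESHOLDS_F = {
    "rack1-top": 85.0,
    "rack1-middle": 80.0,
    "rack1-bottom": 75.0,
}

DEFAULT_THRESHOLD_F = 80.0  # assumed fallback for unregistered sensors

def sensors_in_alarm(readings_f: dict) -> list:
    """Return the names of sensors whose readings exceed their own threshold."""
    return [name for name, temp in readings_f.items()
            if temp > THRESHOLDS_F.get(name, DEFAULT_THRESHOLD_F)]

readings = {"rack1-top": 86.2, "rack1-middle": 78.9, "rack1-bottom": 74.1}
print(sensors_in_alarm(readings))   # -> ['rack1-top']
```

A real monitoring system would attach actions (notifications, cooling adjustments) to each alarm, but the core idea is the same: evaluate every reading against a threshold tuned to that sensor’s placement, not a single room-wide number.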
While in-rack sensors are excellent for providing readings of small areas, in-room sensors are great for monitoring ambient temperature. They may not give you a specific mapping of temperature from rack to rack, but they will help you track overall conditions and provide information on the environment as a whole.
Just as with the in-rack variety, it’s important to place the sensors away from direct airflow, as it can affect readings negatively and result in false alarms. However, it can be more challenging to find the best temperature sensor location, as a variety of factors can affect ambient readings.
Depending on the geographic location of your center and the internal building layout, you may also have to avoid areas where the sensor will be exposed to direct sunlight and place it away from any frequently used doors.
If your center does have windows, doors or any other elements that might cause fluctuations in temperature or humidity readings, you may have to perform some trial and error. To figure out the best placement, you can move the sensors to different locations for set periods. The positioning that provides the most reliable readings is the best option for your center.
It’s also important to remember that every data center is different. From geographic location to the internal layout, the conditions of one center vary from those of the next. Some may require closer monitoring, while others may naturally have better temperature control. Depending on the many factors that contribute to the environment in your center, you may find that one type of sensor placement works better than the other, or that you need a combination of the two.
Temperature and Humidity Thresholds
Even more essential than having well-placed sensors is understanding how to set alarms properly. For this, you must first know the temperature and humidity thresholds of your center.
Since data centers contain a large number of electronic devices, all of which are sensitive to temperature and moisture, it’s essential to observe and remain within the recommended thresholds. If the levels surpass the limit, it can result in anything from inefficiencies and outages to permanent hardware damage. By paying attention to the thresholds and maintaining the environment, you can ensure more uptime and efficiency.
The American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) is one of the industry leaders in determining temperature thresholds. As stated in the 2016 guidelines, ASHRAE advises keeping data center temperatures over 64 degrees and below 81 degrees Fahrenheit – a range of only 17 degrees. Any updates to these thresholds appear in ASHRAE’s 2019 revision of energy standard 90.4.
However, thresholds can vary with computer brand and age. Typically, older hardware has a smaller margin to work within, while some modern equipment allows for higher limits. Some Dell servers, for example, operate better at higher temperatures, with the company’s most efficient equipment running at about 80 degrees Fahrenheit. It’s essential to check your hardware and formulate your threshold based on a combination of industry standards and what suits your specific center.
For the best possible results in your data center, you should set sensors to alert you at multiple levels of temperature increase. For example, you can program them to warn you of rising temperatures at several early stages, with a final critical alert when the temperature approaches the maximum threshold. Preparing multiple notification levels will allow you to catch signs of overheating early, monitor how fast the temperature is rising and, if it cools back down on its own, record the maximum air temperature reached.
As far as humidity goes, you need to maintain a proper balance of moisture. While too much moisture can cause problems, so can too little. Low humidity can result in electrostatic discharge, which can do significant damage to critical components of your servers. Too much water in the air can cause condensation to form, which can damage or corrode your hardware and end in equipment failures.
But, with consistent monitoring and control, you can extend the lifespans of your equipment and increase uptime. ASHRAE recommends a minimum humidity of 20% and a maximum of 80%, with the sweet spot sitting in the center at 50%. The goal is to stay as close to 50% as possible. It will ensure optimal performance and reduce the risk of damage or downtime.
Just as with temperature, you should create early and emergency alerts to ensure you can respond in time. For early warnings, it’s best to set alarms to trigger around 40% and 60% humidity, whereas critical notifications should trigger around 30% and 70%.
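The staged alerts described above can be sketched as two small classifier functions. The humidity bands come straight from the figures in this section (warn near 40% and 60%, critical near 30% and 70%), and the temperature window reflects the ASHRAE 64-81 degrees Fahrenheit range; the early-warning margin for temperature is an assumption for illustration.

```python
def temperature_alert(temp_f: float) -> str:
    """Stage temperature alerts against the ASHRAE-recommended 64-81 F window."""
    if temp_f < 64 or temp_f > 81:
        return "critical"
    if temp_f < 66 or temp_f > 78:   # assumed early-warning margin of a few degrees
        return "warning"
    return "ok"

def humidity_alert(rh_percent: float) -> str:
    """Stage humidity alerts: warn around 40%/60%, go critical around 30%/70%."""
    if rh_percent <= 30 or rh_percent >= 70:
        return "critical"
    if rh_percent <= 40 or rh_percent >= 60:
        return "warning"
    return "ok"

print(temperature_alert(83))   # -> critical
print(humidity_alert(45))      # -> ok
```

Routing each stage to a different notification channel (a dashboard note for warnings, a page for critical alerts) gives operators time to intervene before a hard limit is crossed.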
Hot Air Recirculation
The basic process of cooling your data center involves manipulating the airflow. Data centers are full of cool and warm air. Ideally, the equipment takes in the chilled air from your cooling system, which moves through the IT load and performs a heat transfer to cool the units, then exits as hot air on the other side. However, rack placement alone can’t provide the proper conditions for optimal recirculation.
While positioning your racks to form hot-aisles and cold-aisles may assist in better general airflow, it can also contribute to hot air recirculation issues. If you only rely on the layout of your racks and equipment, there’s no guarantee that the hot air will reach the cooling unit and recirculate through the IT loads. Without a system that directly influences this circuit, you could end up with hot spots throughout your data center.
To effectively cool your equipment, your air treatment units have to encourage the air to complete the full circuit. First, it has to help the hot air rise. Then, it must be able to push cooled air back to an area where it will circulate back through the IT loads. For the best results, you can directly transfer hot air from the returns to your cooling unit using containment chambers.
Containment chambers can help you directly regulate hot air recirculation. These units sit on top of each rack and respond to changes in air pressure and flow with small fans. These fans increase or decrease their RPMs based on whether or not enough air is circulating. For example, if they sense a shortage of cooled air moving through the IT loads, they’ll increase RPM to meet the needs of the unit and provide the proper outtake pressure.
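The fan behavior described above amounts to simple closed-loop control: compare measured airflow to the target and adjust RPM in proportion to the shortfall. This is a minimal sketch, with hypothetical gain, flow units and RPM limits; real containment units use tuned, vendor-specific control loops.

```python
def adjust_fan_rpm(current_rpm: float, target_flow: float, measured_flow: float,
                   gain: float = 50.0, min_rpm: float = 500.0,
                   max_rpm: float = 3000.0) -> float:
    """Proportional control: spin up when measured airflow falls short of target,
    spin down when there is more flow than needed, clamped to the fan's limits."""
    error = target_flow - measured_flow
    new_rpm = current_rpm + gain * error
    return max(min_rpm, min(max_rpm, new_rpm))

# Airflow 2 units below target: the fan speeds up by gain * 2 = 100 RPM.
print(adjust_fan_rpm(1200.0, 10.0, 8.0))   # -> 1300.0
```

Run repeatedly against fresh sensor readings, this loop settles the fan at whatever speed keeps airflow at the target, which is exactly the self-adjusting behavior the containment chambers provide.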
You can apply these containment chambers to either cold or hot air, creating a cold aisle containment system (CACS) or hot aisle containment system (HACS). It’s essential to keep the two from intermixing, as the hot air would raise the temperature of the cold, resulting in a less effective cooling system. If you use the hot-aisle and cold-aisle layout in conjunction with a containment system, you can reduce your fan energy use by approximately 20% to 25%.
Overhead Air Delivery
One of the main methods of effective cooling involves overhead air delivery. There are multiple varieties, but the major two are from raised floor setups and above-rack ceiling ducts. Overhead distribution helps move cooled air to the intakes of equipment, allowing it to flow through the IT load and cool the interior hardware. Installing a reliable HVAC system is an excellent way to maintain a stable environment temperature.
As the hot air from the exhausts of your units rises, it’ll enter the plenum, the open space that channels air to your HVAC system. Plenums are most often located between a structural ceiling and a specially constructed drop-down ceiling, or beneath a raised floor with tiling. The cooling system will take in the hot air, cool it, and disperse it back down to your units, completing the recirculation cycle.
Locating the air delivery system above your racks will also allow the cooled air to sink down to your equipment intakes. Since cold air naturally sinks, you’ll spend less energy moving it than you would with an in-floor delivery system, where your cooling unit pushes the air up from the floor.
However, overhead systems also tend to compete with the rising hot air, which can sometimes render them ineffective or inefficient.
Raised floor systems locate the HVAC between the solid — usually concrete — floor and an elevated tile-and-grid floor, customizable to your space. These layouts are beneficial for cable management, flexibility for upgrades and design changes, building ground access and overall cooling efficiency. They also allow you to create cold and hot aisle arrangements, delivering cold air from underneath your racks.
The Benefits of Efficient Data Center Cooling Systems
Keeping the environment temperate and maintaining proper airflow is essential to data center efficiency. But it’s not the only benefit of installing the best sensor configuration and cooling system for your center. It’ll also help you:
Maintain an Optimal Environment
Data centers ask a lot from the computers and equipment they contain. The hardware is continually running, storing and processing large amounts of information. Providing it with the best possible environment will encourage the devices to run smoothly and efficiently.
Follow Industry Best Practices
In the data center industry, many of the best practices revolve around cooling technology and energy consumption. Since the environment surrounding your equipment affects efficiency, an efficient cooling unit helps bring down overall energy usage while keeping your devices’ internal temperatures in check.
Prevent Downtime and Hardware Damage
Humidity and temperature can both contribute to downtime and hardware damage. By consistently monitoring and adjusting your airflow and cooling system, you can minimize the possibility of outages, data loss or necessary repairs due to overheating or condensation.
Scale With Ease
Data center managers know it’s important to plan for potential future growth. But more equipment means more hot air production and an increased need for temperature regulation. Efficient cooling systems will allow you to scale with ease, providing a reliable data storage solution for more companies while maintaining a facility environment that’s optimal for your hardware.
Technology Life Span
Every computer and data server has an expected end of life (EOL). Proper maintenance protects this valuable equipment from damage over time and extends that service life. That’s why investing in the correct data center temperature and humidity sensors and cooling systems delivers a significant return — a longer equipment life span. The goal is to protect your equipment in every way possible and minimize potential threats to your technology assets.
Your investment stakeholders and clients expect the highest-quality security and safety in the data center environment. You can provide them peace of mind when you create and manage an efficient cooling system. Some insurance companies may even lower risk-related premiums when you have cutting-edge cooling technologies to extend the useful life of your machinery.
In the event of an emergency, you also have the systems in place to cool your data center and technology as maintenance experts perform repairs. If you must make an insurance claim, you can provide documentation that you took every reasonable precaution against overheating and equipment malfunction.
Reduced Energy Use and Costs
Cooling a data center is typically an expensive investment, but it’s also a necessity. You can bring down your cooling expenses significantly by consistently monitoring environmental conditions and setting up a reliable, efficient cooling system.
Choose DataSpan Data Center Services
When it comes to your data center, you shouldn’t have to compromise. At DataSpan, we know the importance of temperature regulation and monitoring. Since 1974, we’ve been providing companies with the resources they need to run a thriving data center. Our custom containment solutions make it easy to meet national standards and regulations while providing a flexible foundation for expansion and upward scaling.