At a certain point, a growing business finds that its internal information systems no longer fit into a single server cabinet. The head of the IT department then has to weigh the pros and cons and decide whether to build a server room. There are several options, from getting rid of in-house capacity entirely and moving to the cloud or colocating in a large data center, to building your own mini (or not so mini) data center, with blackjack.
Calculating, planning and constructing a server room is a critical and rather costly process. Investment is required as early as the design stage, and some money can be saved by handing every step, from design to construction, to a single contractor. A director's natural wish in this situation is to squeeze into the minimum available budget, and any increase in the project's price is met with hostility. Directors often forget that the facility has to be maintained after it is built, and if it is poorly planned, maintenance over two or three years can cost as much as building another server room from scratch.
In a server room, the cooling system is the second biggest consumer of resources (chiefly electricity and consumables). It is no secret that the capacity of the cooling system must at least match, and preferably exceed, the peak power of all equipment installed in the room by a minimum of 20-30 percent. This article, therefore, is about the kinds of cooling systems that exist and how to save money when using them.
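As a rough illustration of the sizing rule above, here is a minimal sketch; the function name and the default 25% margin are my own illustrative choices, not an industry standard:

```python
# Rough sizing sketch: cooling capacity must cover the peak IT load plus a margin.
def required_cooling_kw(peak_it_load_kw: float, margin: float = 0.25) -> float:
    """Cooling capacity sized with a safety margin (25% by default)."""
    return peak_it_load_kw * (1 + margin)

# 8 kW of peak server load calls for at least 10 kW of cooling.
print(required_cooling_kw(8.0))
```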
Classification of Room Cooling Systems
The most common and most familiar coolers are compressor-based air conditioners. Inside them, a refrigerant (usually Freon) transports heat from the radiator of the internal unit to the external one, where it is released into the environment. You can read more about how an air conditioner works here. Somewhat less common are liquid and combined systems, which use water or ethylene glycol as the main coolant. The most effective solution, in the right circumstances, is a free-cooling system; these are highly precise installations designed from scratch in each case.
Form factor is another classification worth attention. All systems can be roughly divided into two types: household and precision. Ordinary household systems, the ones we are used to seeing hung on walls and ceilings in offices and flats, can serve as cooling for specialized rooms. Precision systems are purpose-built conditioning systems, a category that includes all free-cooling and fluid systems.
Within precision systems there is a further classification by the principle and method of delivering cold to its consumers. While the fundamental differences are more or less clear, the number of ways to cool devices is vast. One common classic case is a dedicated cold room with installed racks, where household air conditioners will do. Classic precision solutions are installations with inline air ducts, as well as cold and hot aisles, where racks are installed in rows so as to catch cold air coming, for example, from a raised floor. They exhaust warmed air into the hot aisles, from which it is forcibly removed. There are also variants with an air duct to each rack, where air is fed to every single rack from the top or bottom and then extracted just as actively.
There are plenty of non-classic solutions, and needless to say, all of them are precision ones. Most are combinations of the systems named above, used to increase efficiency and cut costs. The range is vast: from individual air conditioners for each server cabinet to liquid cooling of each separate server or even processor. Worth noting are systems with direct contact between the consumer and the fluid: servers are completely immersed in a special oil that has no smell and conducts no electricity at all. The fluid circulates constantly inside the basins holding the equipment and passes through heat exchangers.

It is also worth thinking hard about whether you need a server room at all. There is an opinion that a separate server room is unnecessary for loads under 5 kW. Usually all the equipment fits into a 42-47-unit rack cabinet, or at most a detached single-frame rack with a crossbar. All of this can be separated from the administrator's room, or any other room (importantly, not the accounting department), by a glass or plasterboard wall with an airtight door. After that you can install a pair of household air conditioners and go for a drink.
However, what we are building is a server room. First of all, you need to decide which cooling system to use, and the choice is not only about price. It depends on many factors: the capacity of the equipment, the location of the server room within the building, the geographic location of the building itself, and even bias toward certain types of cooling devices or the shortsightedness of directors. It is widely believed that a household air conditioner is enough for systems up to 10 kilowatts. That is understandable: household split systems of higher capacity are not sold in every shop, and their price approaches or even exceeds that of precision air conditioners of the same capacity.
Whether a particular cooling system can be installed depends on the location of the server room within the building, and on whether it is possible to route utilities and air ducts for specialized systems, build a raised floor, or install fans. When the ceiling is not high enough, you cannot build a raised floor deep enough to hold air ducts and intake air for a precision system. A location deep inside the building creates problems for routing air ducts, which one variant of free cooling requires. A location near the accounting department can be fatal to the whole project because, they say, there is a lot of noise. Geographic location plays a paramount role and often rules out free-cooling systems altogether, say, if you are in a tropical zone. That is why data center builders are so fond of the northern regions of our planet: there it is possible to avoid air conditioners entirely.
In addition, some technical specialists hold a firm belief that one system is entirely right while all other cooling options are completely unacceptable. They will calmly and confidently argue their case, finding ever more advantages in their own option and ever more disadvantages, real and imaginary, in everyone else's.
So, having chosen a strategy, we proceed to design the structure of the server room.
A Cooling Strategy Using Household Air Conditioners
Suppose you have a small fleet of servers, 2-3 racks standing in a separate room. You have no prospect of growing your capacity, or do not want the hassle, or simply lack the funds for more efficient and ecological solutions.
First of all, you need to decide how to place the racks relative to the air conditioners. In this case, the best option is to mount the internal units of the split system opposite the row of racks, pointed at the front of an open rack or at a cabinet with a mesh door. Equipment inside a rack should be installed with its air-intake side facing the cold airflow, since that is the side through which it draws air to cool its internal components. Some rack-mountable devices can be rebuilt, or even ordered from the factory, to take in or exhaust air from the front or from one of the sides; this is worth considering at purchase time.
Even if you do not foresee any growth in total load, it is still advisable to buy air conditioners with a power reserve, for example by sizing for the peak heat dissipation of the hottest rack multiplied by the number of racks.
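The rule of thumb above can be sketched in a few lines (the function name and the rack figures are illustrative assumptions):

```python
def conditioner_capacity_kw(rack_peaks_kw: list) -> float:
    """Size cooling for the hottest rack's peak dissipation times the rack count."""
    return max(rack_peaks_kw) * len(rack_peaks_kw)

# Three racks peaking at 2.5, 3.0 and 1.8 kW -> size for 3 x 3.0 = 9.0 kW.
print(conditioner_capacity_kw([2.5, 3.0, 1.8]))
```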
The minimal fault tolerance for this strategy is N+1. In practice, this means two air conditioners of similar capacity, each able to maintain the operating temperature of the server room on its own. A rotation device should be used to extend the life of both units: it switches from one air conditioner to the other at set intervals, tracks their starts, and monitors their performance. When one air conditioner fails, the device must automatically switch on the other and notify the person in charge. Note that not all models of household air conditioners support this function.
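The rotation-and-failover logic described above can be sketched as a small state machine. A real rotation device implements this in hardware; the class and unit names below are hypothetical:

```python
class RotationController:
    """Alternate two AC units on a schedule; fail over when the active unit faults."""

    def __init__(self, units=("AC-1", "AC-2")):
        self.units = list(units)
        self.active = 0  # index of the unit currently running

    def rotate(self) -> str:
        """Scheduled switch: hand the load to the standby unit."""
        self.active = (self.active + 1) % len(self.units)
        return self.units[self.active]

    def on_fault(self, notify) -> str:
        """Failover: start the standby unit and notify the person in charge."""
        failed = self.units[self.active]
        standby = self.rotate()
        notify(f"{failed} failed, switched to {standby}")
        return standby
```

In use, a timer would call `rotate()` every few hours, while a fault sensor would call `on_fault()` with an SMS or e-mail callback.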
All server-room split systems installed in the latitudes of our country should have a so-called winter kit: a control unit, a modification to the radiator of the external unit, and a heater for the drain pump, all operating automatically.
Picture 1. Cooling with household air conditioners.
Precision Cooling Facilities
A precision air conditioner (or other cooler) is built to operate with maximum efficiency within an infrastructure of specified target parameters. In other words, when we speak of a precision air conditioner, we mean that the room, the equipment and the cooling system are designed as a single project: a set of technologies that ensures the efficiency, safety and longevity of expensive equipment. Needless to say, individually designed devices are expensive. Holy wars rage between the camps. Some assert that an industrial version of a paired household air conditioner, say from Daikin (FT and FAQ series) or Mitsubishi (Heavy series), is enough for an ordinary server room. If you choose this option, take into account such drawbacks as local stagnation of hot air in corners or in rack units not occupied by active equipment. Another danger is low humidity, because air conditioning dries the air out. Dry air encourages the build-up of static electricity, and a static potential on delicate electronics degrades the chips and raises the risk of destruction by a discharge. Most of these factors can, of course, be mitigated, but extra fans and humidifiers are proliferating points of failure that cost energy and maintenance. Maintaining a humidifier, for example, is not so much costly as time-consuming: it needs cleaning and topping up with water on a daily basis.
Precision systems have drawbacks of their own. First of all, they are large: Freon units occupy the footprint of two or three full-sized racks. Since humidity control is one of the main functions of a specialized air conditioner, water has to be piped to the internal units, which some IT specialists find totally unacceptable. Cold air from these units is delivered to the racks by air ducts laid under a raised floor (the most common and most expensive option) or under the ceiling, which presumes a high ceiling and puts extra restrictions on cable runs. The condensers of such air conditioners are large as well, so the questions of where to place them and how to route the pipework from the internal unit arise immediately.
That is about it for the drawbacks, so on to the advantages: high performance, the ability to add redundancy only for the active parts of the air conditioner (there is no need, I think, to duplicate air ducts), precise control of temperature and humidity, and detailed monitoring. On top of that we get relative economy, guaranteed delivery of cold air to its consumer, and support for a high density of consumers per rack (more precisely, this is a requirement: a half-empty rack will not operate efficiently and will affect the whole ecosystem). Overall the trade-off is obvious: we pay more and get a more efficient system.
As mentioned above, the most widespread precision layout is the aisle system: racks stand in rows, arranged to take in air from cold aisles (fed by the air conditioner) and exhaust it into hot aisles (from which it is removed by a ventilation system). The air duct for such a system is the raised floor. Its panels are mostly solid, and all cable runs are moved up under the ceiling. In front of the rows of racks, perforated floor panels supply cooled air to the fronts of the racks. In this arrangement, cabinet doors are mesh, or the cabinets have no doors at all. The hot air heated by the servers is blown into the hot aisle, from which it is extracted by forced ventilation. Ideally the exhaust should sit at the top of the hot aisle, as thermodynamics suggests, but it is often placed in the raised floor to save the space above the racks for cabling. Relatively recently, the idea appeared of sealing the cold and hot aisles off from the rest of the server room, which significantly reduces the loss of precious cold. Free unit spaces in cabinets should be closed with blanking panels, since hot air otherwise mixes easily with the cooled air; using blanking panels can raise cooling efficiency by a factor of 1.5 to 2.
Picture 2. A system with open aisles; the loss of precious cold air is obvious.
Picture 3. More efficient system with airtight aisles.
Intel, for example, went further in pursuit of simple and efficient equipment cooling and patented a rack with an exhaust. It is an ordinary 19" cabinet, but deeper than its analogues, with an air duct in the top cover opening into a dropped ceiling, from which the hot air is drawn off by air conditioners. The whole system is entirely passive apart from the air conditioners, yet Intel claims it can cool 32 kilowatts of equipment per rack.
Given the climate of our country, precision air conditioners have one more big advantage: their design can be extended relatively easily with a full or partial fluid circuit. With ethylene glycol as the coolant, a parallel liquid-cooling circuit is built, reducing power consumption and air conditioner maintenance while extending their service life. The glycol circuit becomes efficient when the outdoor temperature falls below +20 °C, which happens often in Russia even in summer.
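A sketch of how such a combined unit might choose its operating mode from the outdoor temperature. Apart from the +20 °C figure mentioned above, the thresholds and mode names are illustrative assumptions, not vendor setpoints:

```python
def cooling_mode(outdoor_c: float) -> str:
    """Pick the operating mode of a combined Freon/glycol unit."""
    if outdoor_c >= 20.0:
        return "freon"   # compressor carries the whole load
    if outdoor_c >= 5.0:
        return "mixed"   # glycol loop assists the compressor
    return "glycol"      # glycol loop cools on its own (free cooling)

print(cooling_mode(30.0))   # hot summer day
print(cooling_mode(12.0))   # mild weather
print(cooling_mode(-15.0))  # winter
```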
The additional liquid circuit duplicates the Freon one and can run around the clock, cooling the compressor and condenser of the air conditioner during the hottest hours of the day. As the outdoor temperature falls, it moves from partially to completely taking over the cooling through an internal heat exchanger.
The leading manufacturers of precision cooling systems are APC, STULZ, Liebert Hiross and RC Group, and their catalogs include ready-made combined systems.
Fluid Cooling Systems
The fundamental difference between fluid and Freon cooling is that the fluid does not change phase in the circuit, so at equal power, water and glycol systems are less efficient than Freon ones. Fluid systems, however, have unquestionable advantages in capacity and versatility. They can use a fan coil unit on the roof or in the yard of the building, or even the building's heating system, as the heat sink. The liquid can cool the air of an entire server room or serve as the coolant for a single processor. Another unquestionable advantage is the almost unlimited run length, thanks to the low price of the coolant. The biggest danger is a leak of the conductive agent, but that is unlikely to scare anyone off. IBM distinguished itself here with SuperMUC, which saved 40% of power consumption thanks to the absence of chillers in the cooling system. Google uses a system of its own design, built around cold and hot aisles, in most of its data centers.
One more fluid approach is to immerse servers in a special mineral oil. Oil is an insulator, so there is no short circuit. As for energy efficiency, specialists at Intel, for example, claim that the cooling system in this case consumes 90% less power, and the power consumption of the servers themselves drops as well. Racks for immersion cooling are manufactured by CarnotJet; they can accommodate any servers, but all the fans have to be pulled out first.
Picture 4. Fluid cooling.
Another aspect of versatility is the huge number of ways to cool the coolant itself. There is, for example, a technology called SeaWater Air Conditioning (SWAC), used in the Google data center in Finland. As the name suggests, a heat exchanger working on cold water from the deep sea cools the water that feeds the data center.
A classic fluid cooling system acts as an intermediary between the relatively high temperature inside the server room and a cooler outside, most often a dry cooling tower and a chiller.
A dry cooling tower is a sealed cooling circuit in which the fluid enters a radiator over which air is forcibly blown. There are also wet cooling towers, inside which water is sprayed and blown over at the same time. Usually the liquid coolant is only pre-cooled in the towers or fan coils, down to roughly ambient temperature; the main cooling takes place in the heat exchanger of a chiller.
A chiller is a refrigeration unit that runs on Freon and cools the liquid passing through it to the required temperature.
Picture N+1. Chillers installed on a roof (source http://www.quantum-v.ru/).
The same rules apply to classic fluid conditioning as to Freon systems: air cooled in an evaporator passes through the consumers and is drawn out of the server room by the cooling system. Although fluid systems are more versatile and cheaper to run than Freon ones, their efficiency is lower because of the longer chain of intermediaries: air → fluid → chiller → outside air. Clearly this scheme is not the best.
Let’s Get Rid of Intermediaries
Direct free cooling is the most energy-efficient way to cool servers. Its efficiency, of course, depends entirely on the ambient air temperature, but changes in standards and various green technologies keep pushing cooling systems in this direction.
Let us start with the fact that the largest standards body for engineering systems, in particular cooling and heating, ASHRAE (the American Society of Heating, Refrigerating and Air-Conditioning Engineers), has twice raised the recommended air temperature for server rooms since 2004, from +22 to +27 degrees Celsius. The 2011 revision of the standard introduced two new equipment classes, A3 and A4, with allowable temperature ranges extended up to +40 and +45 degrees respectively. Server manufacturers already produce such models; although they are not yet widespread, more and more data center builders are leaning toward green cooling technologies.
For server rooms in our latitudes, free cooling can be a serious aid during the cold season, if not a full replacement for the classical cooling model, and can also reduce the required air conditioner capacity.
The biggest problem with direct free cooling is urban air pollution. The cost of replacement filters and the power the fans spend pushing air through them can negate all the electricity savings. This can be solved by separating the circuits and introducing a heat exchanger between them based on a thermal wheel. Filters are still necessary in this case, but they are cheaper and have minimal air resistance.
Another significant issue is that free cooling, as a supplementary function, works poorly with household systems and best with precision ones. On the plus side, direct free cooling carries no risk of drying out the air in the server room, since air is constantly exchanged with the environment. On the other hand, the humidity outside may not conform at all to the accepted standards for server rooms. Here one of the main assets of free cooling, adiabatic cooling, becomes very helpful.
It has long been observed that moist air near a body of water is always cooler than air over plains far from it, a sea breeze being the familiar example. Adiabatic cooling systems need no backup systems or complicated technical tricks. They are built on the principle of a wet cooling tower: inside the collectors, nozzles spray water into the heated outside air, and as the dispersed water evaporates, it cools and humidifies the air. Such a system not only lowers the temperature of the outside air effectively, but also maintains the required humidity level. It does, however, introduce a new consumable: water. That is why, alongside PUE (power usage effectiveness), the industry introduced the metric WUE (water usage effectiveness); what these parameters measure should be clear from their names.
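Both metrics are simple ratios, sketched below; the figures are made up for illustration:

```python
def pue(total_facility_kwh: float, it_kwh: float) -> float:
    """Power usage effectiveness: total facility energy over IT equipment energy."""
    return total_facility_kwh / it_kwh

def wue(water_litres: float, it_kwh: float) -> float:
    """Water usage effectiveness: litres of water per kWh of IT energy."""
    return water_litres / it_kwh

# A facility drawing 1500 kWh to power 1000 kWh of IT load has PUE 1.5;
# 1800 litres of water for the same IT load gives WUE 1.8 L/kWh.
print(pue(1500.0, 1000.0), wue(1800.0, 1000.0))
```

The closer PUE is to 1.0, the less energy is spent on overhead such as cooling; a rising WUE shows what adiabatic cooling costs in water.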
The brightest examples of such systems are eBay's "Mercury" data center in Phoenix (USA) and Facebook's data center in Prineville (USA).
Picture. N+2. Adiabatic cooling in action (source http://www.es-engineering.net/index_en.html).
Just to Conclude
You may ask: how, then, should small server rooms of a couple of dozen kVA be cooled?
There is no single answer. For the majority of readers, a solution of two powerful household air conditioners will be a good fit. Those who manage to persuade their management to save money and introduce ecological innovations are in for a huge headache, followed by endless enjoyment of the final result.
As mentioned above, the specific solution depends strongly on the climate of the particular region. To understand the climate pattern better, look at historical records of instrumentally observed maximum and minimum temperatures and humidity in your region or city, and analyze similar data on the hottest periods of the last 10-20 years. That is more than enough to develop a clear strategy.
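Given hourly outdoor temperatures for your region, a quick estimate of how much of the year free cooling alone could carry the room might look like this; the 20 °C threshold and the toy data are my assumptions:

```python
def free_cooling_share(hourly_temps_c: list, threshold_c: float = 20.0) -> float:
    """Fraction of hours when outdoor air alone is cold enough for free cooling."""
    return sum(1 for t in hourly_temps_c if t < threshold_c) / len(hourly_temps_c)

# Toy data, not a real climate record: 5 of 8 hours are below the threshold.
sample = [-5, 3, 12, 18, 22, 27, 31, 16]
print(free_cooling_share(sample))  # 0.625
```

With a real year of hourly data from a weather archive, the same function tells you how many hours the compressor would actually have to run.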
Despite all the advantages of free cooling, in the climate of temperate latitudes you are unlikely, in 80 cases out of 100, to manage without a compressor or fluid conditioning unit. So the general idea of building a perfect server room is the following:
- It is a room with a precision cooling system, a raised floor to supply cold air, and cold and hot aisles sealed off from the rest of the room for more precise heat exchange. Heat is removed through a dropped ceiling;
- Most of the time the system runs on direct free cooling; when the outside temperature rises, adiabatic cooling kicks in to help. When permissible humidity levels are exceeded, adiabatic cooling is switched off and a compressor or fluid system, i.e. an air conditioner, takes over.
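The control logic of the two points above can be sketched as a simple decision function; all thresholds are illustrative assumptions:

```python
def pick_mode(outdoor_c: float, outdoor_rh_pct: float) -> str:
    """Control sketch for the 'perfect server room' layout described above."""
    if outdoor_c <= 20.0:
        return "free-cooling"              # outside air is cold enough on its own
    if outdoor_rh_pct <= 70.0:
        return "free-cooling + adiabatic"  # spray water to cool the intake air
    return "compressor"                    # too hot and too humid: fall back to AC

print(pick_mode(10.0, 50.0))   # cold day
print(pick_mode(28.0, 40.0))   # hot, dry day
print(pick_mode(30.0, 90.0))   # hot, humid day
```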
It turns out you need to invest during construction in order to save money later, once the server room is in operation.
Note that such a system cannot operate without proper, detailed monitoring of the internal environment: temperature in the cold and hot aisles, humidity inside and outside, the presence of water in the adiabatic system, and leak detection. For this there are monitoring devices that publish data from various sensors over Ethernet or WiFi; they come as boards, cabinets and units for standard 19" racks. [NetPing] devices, for example, already come with a built-in GSM modem and SMS module, able to notify not only the responsible staff but you personally about a significant change in parameters or a triggered sensor.
All of this data can also be fed into a global monitoring system such as Zabbix, where you can analyze the temperature map of the server room as a whole, using charts and samples, and correlate changes inside and outside it. This makes it possible to automate incident creation based on a set of indicators rather than a single one.
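An incident rule built on a set of indicators rather than a single one might look like this sketch; the sensor names and thresholds are hypothetical, not from any particular monitoring product:

```python
def incident(readings: dict) -> bool:
    """Open an incident only when several indicators agree, or a leak is detected."""
    too_hot = readings["cold_aisle_c"] > 27.0                 # recommended ceiling
    bad_humidity = not (30.0 <= readings["humidity_pct"] <= 70.0)
    return readings["leak_detected"] or (too_hot and bad_humidity)

# A slightly warm aisle alone does not page anyone; warm AND dry does.
print(incident({"cold_aisle_c": 29.0, "humidity_pct": 25.0, "leak_detected": False}))
```

In Zabbix the same idea is expressed as a trigger whose expression combines several items instead of one.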
All this lets you build a cooling system of maximum efficiency and prevent its failures.
Unfortunately, one small article is not enough to cover server room cooling thoroughly. Free cooling may look like the best choice for everyone, but in fact it is a rather risky venture: history knows a decent number of epic failures of entire data centers caused by design errors and inattention to detail. The best, though more expensive, solution is one that backs up conventional cooling systems with alternative ones.
We wish you big data centers filled with incessant noise.