
Data centers and the environment: The case of Facebook

September 24, 2015
A Facebook data center engineer

Internet use, and social media use in particular, has grown rapidly around the world. This has led to the rapid deployment of infrastructure to support the increased demand.

This infrastructure consumes power. It is estimated that the data centers powering the Internet consume about 1.3% of the world’s total electric power. That might sound small until you consider that Facebook alone consumed about 532 million kWh in 2011 (the figure must be close to double that by now). At current Kenyan electricity tariffs, that is roughly 10.6 billion shillings in power bills. Google consumed just over 2 billion kWh over the same period to power its servers worldwide. With most of this power coming from coal plants, data centers have attracted the attention of groups such as Greenpeace, which launched campaigns like ‘Unfriend Coal’ aimed at forcing Facebook to lower its dependence on coal to power its service.
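To see where that 10.6 billion shilling figure comes from, here is the back-of-the-envelope arithmetic as a small Python sketch. The blended tariff of roughly 20 KES per kWh is an assumption chosen to match the figures above, not an official rate.

```python
# Rough cost estimate for Facebook's 2011 consumption at an assumed
# Kenyan blended tariff of ~20 KES/kWh (assumption, not an official rate).
consumption_kwh = 532_000_000        # Facebook, 2011 (from the post)
tariff_kes_per_kwh = 20              # assumed blended tariff
cost_kes = consumption_kwh * tariff_kes_per_kwh
print(f"{cost_kes / 1e9:.1f} billion KES")   # ~10.6 billion shillings
```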

With pressure piling on data centers to lower their carbon footprints, innovation and new ways of thinking are needed. One of the low-hanging fruits is to build new data centers in regions that use green energy. One of the prime locations for new data centers is Iceland. The country generates all of its power from geothermal steam and hydro. The cool climate also means that natural cold air, at about 5.5 degrees C on average, can simply be circulated through the data center to cool the equipment instead of using air-conditioning systems for forced cooling. A server operating out of Iceland is therefore cheaper to run and has near-zero carbon emissions attached to it. According to Verne Global’s findings in 2013, the 10-year energy cost (the length of a standard data center hosting contract) for 1 megawatt of IT load is about $3.5 million in Keflavik, Iceland, compared to nearly $23 million in London, $20 million in Frankfurt, $12.5 million in Chicago and around $6 million in Oslo, Norway. The other bonus is that Iceland’s geographical location makes latency from a server there to Europe and to the US roughly equal, at about 40 ms.
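It is worth seeing what those 10-year totals imply per kilowatt-hour. The sketch below derives rough rates from the quoted figures for a constant 1 MW IT load; it ignores cooling overhead (PUE) and price changes over the decade, so treat the implied rates as derived approximations only.

```python
# Implied electricity rates from the quoted 10-year totals for 1 MW of IT load.
hours_10y = 24 * 365 * 10            # ~87,600 hours in ten years
kwh_10y = 1_000 * hours_10y          # 1 MW -> ~87.6 million kWh over the contract
for city, total_usd in [("Keflavik", 3.5e6), ("Oslo", 6e6),
                        ("Chicago", 12.5e6), ("Frankfurt", 20e6),
                        ("London", 23e6)]:
    print(f"{city:9s} ~${total_usd / kwh_10y:.3f} per kWh")
```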

However, companies like Facebook that have already invested heavily in data centers in the US cannot simply cart them off to Iceland. They have therefore come up with innovative ways to lower their data center energy costs. It is estimated that about 25% of the power in a data center goes to cooling and about 10% is wasted converting AC to DC and back to AC, while the IT load takes 46% (25% servers, 8% network and 13% storage). There is therefore a huge opportunity to lower both the IT load portion and the cooling portion.
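For reference, here is that breakdown laid out in a few lines of Python; the “other” slice is simply the remainder (lighting, distribution losses and so on), which is an inference rather than a figure from the estimates above.

```python
# Where the power goes in a typical data center, per the figures quoted above.
breakdown = {
    "cooling": 25,
    "AC-DC-AC conversion losses": 10,
    "IT load - servers": 25,
    "IT load - network": 8,
    "IT load - storage": 13,
}
breakdown["other (lighting, distribution, etc.)"] = 100 - sum(breakdown.values())
for slice_name, pct in breakdown.items():
    print(f"{pct:3d}%  {slice_name}")
```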

IT load efficiency

Facebook did some research and found that servers running low-level loads use power less efficiently than idle servers or servers running at moderate or greater loads. In short, a server should either be kept idle or at moderate/high load, not at low load. The traditional method of distributing load across a group of servers is round robin. This method is efficient with computing resources but inefficient in its use of power. Facebook therefore developed a new way of doing things known as Autoscale.

Autoscale is designed to distribute incoming requests to the servers so that they are either idling or running at medium/high capacity, and not in between. It avoids assigning workloads in a way that leaves servers running at low capacity. This was informed by a test done by Facebook engineers. They found that a server in idle mode consumes about 60 watts of power. If some light, low-level load is applied, the power consumption jumps from 60 to about 130 watts. However, if the same server is run at medium or higher loads, the power consumption is about 150 watts: only a 20-watt difference between low load and high load. This means it is more energy-efficient to give an already moderately busy server some more load (about 20 extra watts) than to give that load to an idle server (about 70 extra watts). Autoscale also reduces the number of servers sharing the load so that as many servers as possible sit in idle mode. In low-traffic periods, such as around midnight in the US, Autoscale dynamically adjusts the size of the active server pool so that each active server gets at least a medium-level CPU load. Servers not in the active pool don’t receive traffic.
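The idea can be captured in a few lines of Python. This is only an illustrative sketch of the pool-sizing logic described above, not Facebook’s actual Autoscale implementation; the per-server request capacity and the 75% utilisation target are assumptions, while the wattage figures come from the test above.

```python
# Illustrative sketch of Autoscale-style pool sizing vs round robin.
# Power figures from the post: ~60 W idle, ~130 W low load, ~150 W medium/high.
import math

IDLE_W, LOW_W, BUSY_W = 60, 130, 150   # per-server power draw (from the post)
TARGET_UTIL = 0.75                      # assumed "medium/high" utilisation target
CAPACITY = 1000                         # assumed requests/sec one server can handle

def size_active_pool(total_rps: float, fleet_size: int) -> int:
    """How many servers should receive traffic for the current load."""
    needed = math.ceil(total_rps / (CAPACITY * TARGET_UTIL))
    return min(max(needed, 1), fleet_size)

def estimated_power(total_rps: float, fleet_size: int, round_robin: bool) -> float:
    """Compare spreading load thinly (round robin) with concentrating it."""
    if round_robin:
        per_server = total_rps / fleet_size
        if per_server == 0:
            return fleet_size * IDLE_W
        watts = LOW_W if per_server < CAPACITY * TARGET_UTIL else BUSY_W
        return fleet_size * watts
    active = size_active_pool(total_rps, fleet_size)
    return active * BUSY_W + (fleet_size - active) * IDLE_W

if __name__ == "__main__":
    rps, fleet = 20_000, 100
    print("round robin:", estimated_power(rps, fleet, round_robin=True), "W")
    print("autoscale  :", estimated_power(rps, fleet, round_robin=False), "W")
```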

The other method deployed to reduce power consumption is reducing the number of power transformations. There is roughly a 10-15% loss in the transformers and rectifiers found in UPSes. In most data center setups, mains AC power is fed to a centralized UPS, which converts the AC to DC and back to AC before supplying the servers. This AC-DC-AC conversion results in about 6-12% loss. One way to lower this loss is to supply the servers directly with mains AC power and install localized UPSes on each rack that can provide up to 45 seconds of backup power while the diesel generator starts up in case of a power outage (a very rare occurrence in the developed world). Eliminating centralized UPSes means data centers can save about 10% of their power. Feeding AC power from the grid directly to servers can be tricky, because reactive components on the grid, such as the motors that power everything from escalators to coffee grinders, lower the power factor and increase reactive power. Deploying synchronous condensers in data centers lowers this reactive power, which is responsible for some losses depending on the power factor of the received power. Facebook has deployed in-house, custom-made reactor power panels that bring the power factor as close as possible to unity. Besides improving the quality of power, the Facebook reactors also reduce harmonic distortion in the power system, which can delay generators from kicking in when a loss of mains power is detected.
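To make the reactive power point concrete, here is a small Python sketch of how apparent and reactive power scale with power factor for a fixed real load. The 1 MW load and the sample power factors are illustrative assumptions; the post only says the panels push the power factor close to unity.

```python
# Back-of-the-envelope view of why power factor matters for a fixed real load.
import math

def apparent_power_kva(real_kw: float, power_factor: float) -> float:
    """Apparent power the upstream supply must deliver for a given real load."""
    return real_kw / power_factor

load_kw = 1000                     # assumed 1 MW of IT load
for pf in (0.85, 0.95, 0.99):
    kva = apparent_power_kva(load_kw, pf)
    kvar = load_kw * math.tan(math.acos(pf))   # reactive power
    print(f"pf={pf:.2f}: {kva:7.1f} kVA apparent, {kvar:6.1f} kVAR reactive")
```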

Use of 277 Volts instead of 120 or 240 Volts

Facebook hardware is also designed to operate at 277 volts AC as opposed to the standard 120 volts of US mains supply systems. The reason behind this is simple: with US three-phase power being supplied to the data center at 480 volts, the phase-to-neutral voltage comes out not at 120 volts but at about 277 volts (divide the line-to-line voltage by the square root of 3 to derive this). Stepping 277 volts down to 120 volts with a transformer introduces about 3% transformation loss, so operating the servers at 277 volts instead of 120 volts saves that 3%. The diagram below shows how a server’s efficiency improves with the use of a higher voltage.
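For anyone who wants to check the arithmetic, the line-to-neutral figure follows directly from the 480-volt line-to-line supply:

```python
# Line-to-neutral voltage of a 480 V three-phase supply: V_LL / sqrt(3).
import math

line_to_line = 480.0
line_to_neutral = line_to_line / math.sqrt(3)
print(f"{line_to_neutral:.1f} V")   # ~277.1 V
```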

Hewlett-Packard server power supply efficiency as a function of load (c) Syska Hennessy Group

A server operating at 240 volts (which is what we use in Kenya) is 91% efficient at 50% load, compared to 89% for a similar server operating at 120 volts. Jacking this up to 277 volts improves the efficiency to about 92%. The reason America uses 120 volts is historical: in the early days of electricity, bulbs used carbon filaments that lasted longer when operated at 120 volts than at 230 volts, and since most electricity went to lighting, it made sense to run the grid at 120 volts. Later, when electricity reached Europe and Asia, technology had improved and tungsten filaments could handle the higher, more efficient 240 volts.
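Those percentages translate into real watts at the wall. A quick sketch, using an assumed 10 kW of DC load to make the comparison concrete:

```python
# Wall power needed to deliver a fixed DC load at the quoted PSU efficiencies.
# The 10 kW load figure is an assumption for illustration.
dc_load_kw = 10.0
for volts, efficiency in [(120, 0.89), (240, 0.91), (277, 0.92)]:
    wall_kw = dc_load_kw / efficiency
    print(f"{volts:3d} V: {wall_kw:.2f} kW drawn from the wall")
```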

Simpler cooling and Humidity control

About 12% of the cooling energy goes to delivering cold air to the point of heat rejection. By using a ductless cooling system that delivers cold air to the center of the data center, combined with smaller additional cooling units at the racks where the heat is generated, substantial power savings can be achieved.

The use of a vapor seal can also play a critical role in controlling relative humidity, reducing unnecessary humidification and dehumidification. If humidity is too high in the data center, conductive anodic failures (CAF), hygroscopic dust failures (HDF), tape media errors and excessive wear and corrosion can occur. These risks increase exponentially as relative humidity rises above 55 percent. If humidity is too low, the magnitude of and propensity for electrostatic discharge (ESD) increase, which can damage equipment or adversely affect operation. Tape products and media may also suffer excessive errors when exposed to low relative humidity.
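A minimal sketch of the control logic this implies, assuming a target band of roughly 45-55% relative humidity (the lower bound is an assumption; only the 55% danger threshold on the high side comes from the figures above):

```python
# Simple dead-band humidity control: humidify below the band, dehumidify
# above it, and do nothing inside it so energy is not wasted fighting itself.
RH_LOW, RH_HIGH = 45.0, 55.0   # assumed band; only 55% is cited as risky above

def humidity_action(relative_humidity_pct: float) -> str:
    if relative_humidity_pct < RH_LOW:
        return "humidify"        # low RH -> electrostatic discharge risk
    if relative_humidity_pct > RH_HIGH:
        return "dehumidify"      # high RH -> CAF/HDF, corrosion risk
    return "hold"                # inside the band: spend no energy

for rh in (38, 50, 61):
    print(rh, "->", humidity_action(rh))
```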

Most equipment manufactured today is designed to draw in air through the front and exhaust it out the rear. This allows equipment racks to be arranged to create hot aisles and cold aisles: rows of racks face each other, with the front of each opposing row drawing cold air from the same aisle (the “cold” aisle). This makes it easier to extract hot air from the hot aisles before it mixes with the cold air and lowers the cooling efficiency.

The other method of lowering cooling costs is the use of multi-step compressors in the cooling systems. Most traditional cooling systems simply switch the compressors on at full load when the thermostat input dictates that cooling should happen. Tests of a 4-step compressor showed that it operates at different efficiencies at the various steps; the compressor efficiency diagram on the side shows that the compressor in question is most efficient at step 2. The cooling system is therefore designed so that the compressor operates at step 2 most of the time. Off-the-shelf cooling systems work well but are grossly power-inefficient for use in data centers.
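A hedged sketch of the step-selection idea: pick the lowest step that meets the current cooling demand, and bias the design so that normal demand lands on the most efficient step. The per-step capacities and efficiencies below are invented for illustration; only the fact that step 2 was the most efficient comes from the test described above.

```python
# Illustrative 4-step compressor control. Capacities and efficiencies per step
# are made-up example values; the post only says step 2 was most efficient.
STEPS = [
    # (step, fraction of full cooling capacity, relative efficiency)
    (1, 0.25, 0.80),
    (2, 0.50, 0.95),   # the sweet spot in the post's example
    (3, 0.75, 0.88),
    (4, 1.00, 0.82),
]

def choose_step(demand_fraction: float) -> int:
    """Return the lowest step whose capacity covers the cooling demand."""
    for step, capacity, _efficiency in STEPS:
        if capacity >= demand_fraction:
            return step
    return STEPS[-1][0]   # demand exceeds capacity: run flat out

for demand in (0.2, 0.45, 0.9):
    print(f"demand {demand:.0%} -> step {choose_step(demand)}")
```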

The Internet is currently moving towards cloud computing. This essentially means that data centers will continue to grow, and soon the power they consume will put even more pressure on power grids and the environment. The use of green energy sources and continued innovation will go a long way in reducing the Internet’s contribution to global warming.