Michael is a Senior Consultant specialising in the multi-disciplinary design of educational, health, mixed-use and commercial buildings. He has developed his experience on a range of projects in the UK, the Middle East and Europe. Since graduating, Michael has amassed broad experience as a technical advisor, project manager and professional engineering consultant. As an experienced mechanical engineer, he has led the design of a number of new-build and retrofit data centres in the UK. As a project manager, he has delivered multi-disciplinary as well as data centre projects, which has enabled him to develop a broad understanding of building design issues. He received a bachelor's degree and a master's degree in Mechanical Engineering from Kingston University London. Michael Grigoratos, Senior Consultant, Halcrow Group Ltd
Data Center Cooling and Power Conversion Performance [Chart: share of energy going to server load/computing operations versus cooling and power conversions, comparing typical practice with best practice]
Typical Energy Flow/Use [Diagram: fuel burned at power plant → electricity generation and transmission losses → delivered power → power conversion and distribution → cooling equipment → server load/computing operations]
Typical Energy Flow/Use [Diagram: reducing server power requirements will reduce cooling needs and lower power conversion losses, reducing power demand and losses … ultimately reducing fuel burned at the power plant]
Opportunity Potential [Chart: Comparison of Projected Electricity Use, All Scenarios, 2007 to 2011, in annual energy use (billion kWh/year) against a 2008 baseline of 58.7; scenarios: business as usual, current trends, improved operational management, best practice, state of the art]
1. Determine Energy Baseline
The first step is to determine the current data centre energy baseline. Once the energy baseline is established, it is possible to measure the result of any improvement while gaining the ability to pinpoint existing problem areas quickly. The Data Centre Manager will need to know the data centre's building infrastructure and IT equipment power requirements (including forecasts) and actual energy usage, including average and peak loads.
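The baseline step above can be sketched in a few lines of code. This is a minimal illustration with made-up readings: the function name and the sample values are assumptions, not from the slides; the only idea taken from the text is summarising metered usage into average load, peak load and an annualised figure.

```python
# Minimal sketch: derive a data centre energy baseline from interval
# utility-meter readings (kW). Readings below are illustrative only.

def energy_baseline(meter_kw):
    """Summarise interval power readings into baseline figures."""
    average_kw = sum(meter_kw) / len(meter_kw)
    peak_kw = max(meter_kw)
    annual_kwh = average_kw * 8760  # extrapolate average load over a year
    return {"average_kw": average_kw, "peak_kw": peak_kw, "annual_kwh": annual_kwh}

# Example: eight hourly meter readings (hypothetical values)
readings = [420, 415, 430, 460, 480, 475, 455, 440]
baseline = energy_baseline(readings)
print(baseline["peak_kw"])  # 480
```

With the baseline stored, later measurements can be compared against `average_kw` and `peak_kw` to quantify the effect of any efficiency measure.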
2. Forecast IT Growth
The second step is to determine forecasted IT equipment growth, new projects and any other significant changes that impact data centre operations. Clearly, good forecasts lead to better planning and more effective energy solutions. IT growth forecasts and aging data centre facilities may point towards data centre consolidation, outsourcing or even a new data centre build. The goal here is to avoid a surprise, negative impact on the corporation as a whole from larger-than-expected increases to the IT capital and operations budget.
Data Analysis
After going through the first two steps, it is now possible to examine infrastructure adjustments as well as data centre best practices to maximise cooling and energy efficiency. Data centre best practices can significantly reduce energy consumption, by 10 to 50%.
Energy Management
As stated before, benchmarks should be set for energy usage and costs, with a flexible, modular plan developed for data centre equipment growth. To accomplish this, we need to determine energy usage at the utility meter coming into the data centre as well as usage at the IT equipment. Next, we apply metrics such as the Uptime Institute's "Four Metrics to Define Data Centre Greenness". The use of metrics, in turn, will provide opportunities to improve your data centre's energy efficiency. In addition, new IT equipment with energy-saving features enabled should be part of your refresh, growth or consolidation game plan to improve your data centre.
Green Metrics
According to the Green Grid, a non-profit trade organisation of IT professionals, two related metrics can improve the energy efficiency of existing data centres and inform the decision to build new ones. Ideally, these metrics and processes will help determine whether the existing data centre can be optimised before a new data centre is needed. These metrics are Power Usage Effectiveness (PUE) and Data Centre Efficiency (DCE). Total Facility Power is defined as the power measured at the utility meter that is dedicated solely to the data centre; IT Equipment Power is defined as the power drawn by the equipment used to manage, process, store or route data within the data centre. Implementing these metrics allows a firm to identify areas to improve operational efficiency, compare its data centre with competitive data centres, ensure that data centre operators improve designs and processes over time, and discover opportunities to repurpose energy for additional IT equipment. While the two metrics are related, they illustrate the energy allocation in the data centre differently.
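The two Green Grid metrics named above reduce to simple ratios of the two measured quantities: PUE is Total Facility Power over IT Equipment Power, and DCE is its reciprocal. A minimal sketch, with illustrative power figures (the 1,000 kW and 500 kW values are assumptions, not from the slides):

```python
# The two Green Grid metrics from the text, as simple ratios.

def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

def dce(total_facility_kw, it_equipment_kw):
    """Data Centre Efficiency: IT equipment power / total facility power (1/PUE)."""
    return it_equipment_kw / total_facility_kw

# Example: 1,000 kW at the utility meter, 500 kW reaching IT equipment
print(pue(1000, 500))  # 2.0
print(dce(1000, 500))  # 0.5
```

Read this way, a PUE of 2.0 says every watt delivered to IT equipment costs a second watt of cooling and conversion overhead, while a DCE of 0.5 says half of the facility power reaches the IT load, which is why the two figures tell the same story from different angles.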
Rack Layout
A good layout begins with the tried-and-true cold aisle/hot aisle cooling strategy within the data centre. This begins with sealing any cable cut-outs to eliminate air bypass and making sure that equipment rows are perpendicular to cooling units. Then minimise hot air/cold air recirculation by keeping an unobstructed clearance from the top of the rack to the return air path. Separate high-density racks: when high-density racks are clustered together, most cooling systems become ineffective; distributing these racks across the entire floor area alleviates this problem. Finally, construct all cabinets and racks to a uniform height to help limit aisle-to-aisle hot air/cold air recirculation, which must be kept to a minimum at equipment air intake levels.
Airflow Management
Properly conditioned air intake with unrestricted airflow can significantly improve airflow within a data centre. Using blanking panels to limit hot air/cold air recirculation, ventilated cabinet doors, unrestricted airflow at the back of cabinets, and no shelves blocking airflow are all best practices in airflow management. Airflow-assisting devices for direct cold air delivery, hot aisle containment systems, rack air containment systems and speciality hot exhaust air return ducts can be applied as alternative airflow solutions in support of high-density enclosures and blade server farms in the data centre.
Cooling Management
The cooling capacity of the data centre should match the IT equipment located inside it, with appropriate settings for CRAC unit temperatures and humidity. Any hot spots should be eliminated and proper air velocity provided, while ensuring that all air vents are properly located.
High Density Server Cooling
The front-to-back cooling principle is used in most high-density server designs. The Modular Cooling System (MDS) evenly distributes cold supply air at the front of the rack of equipment, so each server receives adequate supply air regardless of its position within the rack or the density of the rack. The servers expel warm exhaust air out of the rear of the rack. Fan modules redirect the warm air from the rear of the rack into the heat exchanger modules, where the air is re-cooled and then recirculated to the front of the rack. Any condensation that forms is collected in each heat exchanger module and flows through a discharge tube to a condensation tray integrated in the base assembly.
CO2 Cooling
Carbon dioxide (CO2) is an ideal refrigerant, particularly when considered against ecological and safety criteria. It is natural, non-flammable, oil-free, chemically inactive and has zero ozone depletion potential. It is electrically benign and does not present a danger to PCs or to power and data cabling, and CO2 pipe diameters are much smaller than comparable chilled water pipes. Performance (CO2RAC blade server cooling): CO2 is able to absorb over seven times more heat as it evaporates than an equivalent quantity of water, vaporising during heat absorption. Compared with conventional cooling systems, CO2 cooling can save up to 30% energy.
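The "over seven times" figure can be sanity-checked with a rough back-of-envelope calculation. The property values below are assumed round textbook figures, not taken from the slides: evaporating CO2 absorbs latent heat of roughly 200 kJ/kg at typical cooling temperatures, while chilled water absorbs heat sensibly at about 4.2 kJ/(kg·K) over an assumed 6 K supply/return temperature rise.

```python
# Rough sanity check of the heat-absorption comparison above.
# All property values are assumed round figures, not from the slides.

CO2_LATENT_HEAT = 200.0      # kJ/kg, approx. latent heat of CO2 vaporisation
WATER_SPECIFIC_HEAT = 4.2    # kJ/(kg*K), specific heat of water
CHILLED_WATER_DELTA_T = 6.0  # K, assumed supply/return temperature rise

heat_per_kg_co2 = CO2_LATENT_HEAT                                # latent (evaporation)
heat_per_kg_water = WATER_SPECIFIC_HEAT * CHILLED_WATER_DELTA_T  # sensible heating

ratio = heat_per_kg_co2 / heat_per_kg_water
print(round(ratio, 1))
```

Under these assumptions the ratio comes out near eight, consistent with the slide's "over seven times" claim; the exact figure depends on the operating pressure and the chilled-water temperature rise.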
Editor's notes
Why are energy costs going up so dramatically? Through basic supply and demand economics, we understand that as demand increases and supply decreases, prices tend to rise. And this is exactly what is happening: at the same time as data centres demand more electricity, the supply of electricity is constrained. In the Americas, for example, data centre demand for electricity doubled from 2000 to 2005, to the point where data centres now consume 1.5% of all electricity in the country. Compounding the demand for energy, the cost of producing energy has gone up. Four of the main natural resources for producing electricity have seen sharp cost increases. Even countries like France, which gets more than half of its electricity from nuclear power, have seen price increases. Coal prices have doubled, natural gas prices have quadrupled, and even oil and uranium have gone up substantially. And the situation doesn't appear to be getting any better: with no end in sight to increasing energy costs, we have no choice left but to conserve and reduce our insatiable demand.
Power Conversion: potential for 10 to 30% improvement
Server Load/Computing: potential for 30 to 50% improvement (check EPA report)
- Load management
- Virtualization
- Sleep modes
- Load shifting
- Server innovation
- Chip design
- Efficient power supplies
- Semiconductor materials
Cooling: potential for 30 to 50% improvement (check EPA report)
Alternative Power Generation:
- On-site generation (eliminates transmission losses)
- CHP applications
- Use of waste heat for cooling
- Use of renewable energy (could couple with DC power distribution systems): PV, fuel cells
Dale's revision:
- Improve "in-the-box" power supply efficiency
- Improve efficiency of software applications
- Improve resource utilization (e.g. virtualization)
- Reduce idle power (power management)
- Hardware innovation (e.g. more efficient computations per watt)