Eliminating Data Center Hot Spots: An Approach for Identifying and Correcting Lost Air
Data center cooling is a hot topic. But when you consider the challenges of cooling the latest-generation servers, the growing cost of infrastructure equipment, and the risks of premature hardware failure from data center hot spots brought on by high-density clusters, it's easy to understand the focus.
To view the recorded webinar event, please visit http://www.42u.com/data-center-hot-spots-webinar.htm
1. Eliminating Data Center Hot Spots: An Approach for Identifying and Correcting Lost Air December 5, 2007 Presented By:
4. A Snapshot: Energy Trends
√ Server density has increased significantly over the past decade
√ The average server’s power consumption has quadrupled
√ Higher density and the resultant higher operating temperatures spawn increased administration costs
√ Executives are starting to look more closely at the energy budgets associated with IT infrastructure
√ Customers are running out of power and cooling capacity well before they reach the spatial limits of their facilities
10. What Leads to an Inefficient CoE: Sources of Mechanical Inefficiencies
Thermal Incapacity and Excessive Bypass Airflow
Mismatched Expectations
Mismatched Architectures
No Master Plan
Failure to Measure and Monitor
Failure to Use Best Practices
31. Q&A
To Arrange a Complimentary 15-Minute Cooling Evaluation: [email_address]
To Receive a Copy of the Presentation: [email_address]
Editor's Notes
Ladies and gentlemen: Thanks for standing by, and welcome to today’s session in the DirectNET Web Seminar Series. Today’s presentation is entitled “Eliminating Data Center Hot Spots: An Approach for Identifying and Correcting Lost Air.” During the presentation, all participants will be in listen-only mode. However, we encourage your questions or comments at any time through the “chat” feature located at the lower left of your screen. These questions will be addressed as time allows. As a reminder, this web seminar is being recorded today, December 5th, 2007, and a recording will be sent to all attendees within 48 hours.
<Jen> Before we get started today, I’d like to introduce our speaker. Joining us from Upsite Technologies is Lars Strong, P.E. Lars has been with Upsite Technologies, Inc., since its inception in 2001. Prior to this, Strong was the lead engineer and/or project manager for several private consulting firm endeavors and consulted with numerous international corporate data centers on design and construction management. Strong’s recent focus has been the identification and remediation of problems associated with the fluid dynamics and thermodynamics of data center cooling infrastructure. Moderating today’s conference is Patrick Cameron, Director of Business Development for DirectNET. Patrick is responsible for managing DirectNET’s suite of data center infrastructure solutions. Prior to DirectNET, Patrick spent eight years designing and implementing software and hardware solutions at Accenture and Akamai Technologies. <Rebecca> Patrick, I’ll turn the conference over to you.
<Patrick>: Before we get started today, I’d like to quickly go over our agenda. The Uptime Institute’s “Reducing Bypass Airflow” whitepaper is the technical backdrop; it illustrates the science of controlling your data center environment. Our goal today is to have a higher-level discussion of this science, as well as the mitigation factors in play and their impacts on your business. The agenda:
Discussion of our customer trends
Changing spaces -> changing metrics -> need for new baselines
Whitepaper highlights with respect to inefficiency
Best practices for mitigation
A look at those best practices in action -> case studies
Q&A (collecting questions throughout)
Server density is increasing with the adoption of blades and virtualization, and these servers require more power. Higher density takes less space but results in higher operating costs in power, cooling, and management. Increasing energy costs lead to top-level energy scrutiny. Power and cooling, not physical space, limit growth.
Patrick <paraphrase>: To accommodate additional computing requirements, it is not as easy as just adding servers. There now needs to be a broader conversation with facilities and a need to plan for growth together.
Lars <paraphrase>: Make mention of the ‘higher’ level that this applies to: C-levels. Mention the incident in which servers deemed necessary by the IT team could not be accommodated by the Facilities team within the existing data center.
Patrick: Power is in some ways easier to understand and manage, but cooling is a bigger challenge and makes up the brunt of the costs. http://h20219.www2.hp.com/services/library/GetPage.aspx?pageid=540289&statusid=0&audienceid=0&ccid=225&langid=121
Patrick: The HP study shows where the power is going. Only about one-third goes to IT equipment, and another small percentage is lost in conversion, leaving nearly two-thirds of most data centers’ energy going to cooling. When thinking about improvement, this is a big target, but it requires a different understanding and skill set to address.
<TRANSITION> Patrick: So, for some trends (increasing density, increasing power) our customers have a very high comfort level; for others, like cooling, gaining this expertise is often more of a stretch and can be a bit overwhelming. Lars, can you walk our audience through what they need to be looking at to understand and manage the cooling of their environments? http://h20219.www2.hp.com/services/library/GetPage.aspx?pageid=540289&statusid=0&audienceid=0&ccid=225&langid=121
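The split Patrick describes reduces to simple arithmetic. A minimal sketch, with hypothetical numbers: the one-third IT figure paraphrases the cited HP study, while the 5% conversion loss is an assumed placeholder (the talk only says "a small percentage").

```python
# Rough sketch of the energy split described above. The one-third IT share
# paraphrases the HP study as cited in the webinar; the 5% conversion loss
# is a hypothetical placeholder, not a figure from the talk.
total_kw = 1000.0                    # hypothetical facility power draw
it_kw = total_kw / 3                 # roughly one-third reaches IT equipment
conversion_loss_kw = total_kw * 0.05 # assumed small conversion loss
cooling_kw = total_kw - it_kw - conversion_loss_kw

print(round(cooling_kw))  # -> 617, i.e. nearly two-thirds goes to cooling
```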
TRANSITION <Patrick>: That’s a very important point, Lars. The area with the greatest opportunity for improving power use is the mechanical equipment, and as the HP study showed, it is also the highest potential cost. But how do we really know how to measure where we are? How can we measure our energy efficiency?
TRANSITION <Patrick>: In the last slide, you showed a sample data center with a CoE of 2.4, which falls in the typical range. Is that normal?
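The 2.4 figure can be illustrated as a simple ratio. This is an editor's sketch under an assumed definition (the slide defining CoE is not reproduced here): CoE taken as running cooling capacity divided by the heat load it serves, with hypothetical kW values chosen to reproduce 2.4.

```python
def coe(running_cooling_kw, heat_load_kw):
    """Assumed definition: ratio of running cooling capacity to heat load.
    A CoE of 1.0 would mean cooling exactly matches the load."""
    return running_cooling_kw / heat_load_kw

# Hypothetical numbers reproducing the 2.4 figure from the slide:
print(coe(480.0, 200.0))  # -> 2.4
```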
<TRANSITION> Patrick: It’s a moving target. CoE ratings will vary depending on availability needs, but there are general targets for each tier, and we are continually finding opportunities for improvement across all tiers.
TRANSITION <Patrick>: What did you find to be the significant factors?
TRANSITION <Patrick>: Lars, because I think thermal incapacity and bypass airflow may be new terms for our audience, can you spend a little time defining them?
Patrick: So, what we’re really talking about here is the difference between the rated capacity of the air handler and the cooling being delivered to IT equipment?
<Patrick>: So, the less bypass airflow the better, right?
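To make the bypass idea concrete, a minimal sketch with hypothetical numbers: per the discussion above, bypass airflow is conditioned air that returns to the cooling units without ever passing through IT equipment, so the bypass fraction is the supplied airflow minus the delivered airflow, over the supplied airflow.

```python
def bypass_fraction(supplied_cfm, delivered_cfm):
    """Fraction of conditioned airflow that never reaches IT intakes."""
    return (supplied_cfm - delivered_cfm) / supplied_cfm

# Hypothetical room: 100,000 CFM supplied, only 40,000 CFM reaches intakes
print(bypass_fraction(100_000, 40_000))  # -> 0.6 (60% bypass)
```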
Both of these topics are covered in detail in the whitepaper you will receive for attending this webinar. This thorough analysis of 19 computer rooms provides additional detail on the science behind optimizing your data center to reduce TI and BPAF. Today we will be referencing the experience and data collected during the creation of this whitepaper as we talk through how these issues were affecting individual customers. <Patrick> Lars, tell me a bit more about the lessons learned as you distilled the broader findings of this research.
Patrick: What you’re telling us is that in this study you found examples where people had over twice the needed capacity and STILL had hot spots? Lars: Yes, but it gets worse.
<Patrick>: These all seem like symptoms. What are the problems?
Patrick Transition: How is it that a data center can end up with 14 times more cooling infrastructure than it needs?
Hammer -> nails. Patrick: I can see how this makes sense, but if the answer isn’t more cooling, let’s talk a bit more about what the answer is. http://www.hpac.com/GlobalSearch/Article/24486/#capitalize
Patrick <paraphrase>: This leads to the manifestation of hot spots?
In either case, intake air exceeding 77°F, or relative humidity of less than 40%, is a serious threat to maximum information availability and hardware reliability. TRANSITION <Patrick>: So how would this apply to my physical floor plan?
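The thresholds quoted above translate into a trivial check. A hedged sketch (function name hypothetical; a real assessment would use ASHRAE's full recommended envelope rather than a single pair of limits):

```python
def intake_within_limits(temp_f, rh_percent, max_temp_f=77.0, min_rh_percent=40.0):
    """Check one intake reading against the 77 F / 40% RH thresholds above."""
    return temp_f <= max_temp_f and rh_percent >= min_rh_percent

print(intake_within_limits(72.0, 45.0))  # -> True  (within limits)
print(intake_within_limits(81.0, 45.0))  # -> False (too hot at the intake)
```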
Patrick: So, Lars, does this assume that if we just move to a hot/cold aisle arrangement that we will solve our bypass airflow issues?
Patrick: So, this is easy to see in your color-coded pictures, but how do I fit this into my data center?
TRANSITION<Patrick>: What are some of the common ways to fix this problem?
Patrick: This seems like an exhaustive list of solutions. Let’s talk through a case study so we can understand how these attributes affect the data center.
<Patrick Transition> This is a nice review of the science, but how did these changes affect the bottom line?
Patrick: I’m sure folks are wondering “why can’t I just do this myself?”
<Patrick Transition> This example provides a great reminder that we need to be deliberate in our approach here. Your team at Upsite does this type of work all the time. How do you recommend moving forward?
We have also put together an initial troubleshooting kit to help you start gathering the information you need to make informed decisions about your space. The kit includes the Upsite Temperature Strip, a liquid-crystal thermometer with an acrylic self-adhesive backing that quickly and accurately measures the air-intake temperature of IT equipment. The Upsite Temperature Strip indicates whether the air temperature is within acceptable limits based on standards established by the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) and equipment manufacturers. The troubleshooter card helps you quantify the pressure in your raised floor.
<Patrick> This concludes the presentation portion of this webinar. Before we move to questions, I wanted to remind you that copies of this presentation are available upon request. To receive a copy of today’s presentation, refer to the address datacentertrends@directnet.us on the screen. If the issues discussed here seem relevant to your data center, we are also offering a complimentary 15-minute cooling evaluation to further explore whether this approach could be beneficial to you. If this is of interest, please email the address above and you will be contacted to schedule the evaluation.
Audience questions:
When talking about the positive attributes of potential solutions for bypass airflow, you mentioned dressing raw edges. What are the ramifications here?
How do I know if my computer room will benefit from this type of analysis?
How much does an analysis cost?
How does your product differ from the foam units I have seen that do the same thing?