With your data center design in place, shouldn’t your data center run smoothly? As some of you may already have discovered, anything can happen to your data center, no matter how good (or bad) its design. The following article discusses why a strong data center operations program is more important to the smooth functioning of a company than the design of its data center.
1. A solid data center operations program will maximize your data center design
No matter how old your data center design, a strong data center operations program will help optimize its performance. A strong operations program can take an old data center and increase its efficiency and capacity, saving your company serious money. Even if your data center design is unreliable, consistent operations and maintenance can keep the data center running. Without a strong operations program, however, your dated data center is at risk of an outage.
Even if you have recently invested in a new data center, a strong data center operations program is vital. A newly designed data center without one is like a Ferrari without a driver: it will not go anywhere. A strong operations program will allow you to take full advantage of your new data center’s potential. As many people know, the number one cause of problems in data centers is human error. Your data center design is only as good as the people who operate it.
2. Strong data center operations will decrease the likelihood of an outage
No matter how new or efficient your data center design, it is still at risk of experiencing an outage. And when one occurs, what then? Significant downtime will jeopardize your company’s investments and reputation. A strong data center operations program will decrease the likelihood of an outage. If an outage does occur, your operations staff will have their carefully constructed emergency operating procedures (EOPs) to help fix the problem and have the data center running efficiently again in no time. Without an operations program, your data center could be down for quite some time. For many companies, downtime can cost millions of dollars per minute, and there’s no such thing as a one-minute outage.
3. Strong data center operations will help protect the capital investment in your data center
You may have spent millions on your data center design. Without a strong data center operations program, however, this investment could quickly waste away to nothing. As noted above, without such a program you will not be able to take advantage of everything your data center has to offer. Nor will your data center be maintained properly, allowing its value to slip away. A strong operations program will help protect the capital investment your data center represents.
Although you may think you are taking a shortcut by not investing in a data center operations program, skipping it will cost you money, time, and your reputation. Take care of your data center, and make the most of it, by developing a strong operations program.
For more information on a good data center operations program, please check out our white paper:
By Bob Woolley
It’s interesting to me just how many data center facility engineering teams operate without a formal change management process. Usually, you can’t even perform a change on a single server in the data center without going through a rigorous change control process, but FE teams routinely operate and maintain the electrical and HVAC equipment that keeps every server in the building running without a similar method of control.
I’m not talking about having a change management system that provides notification to end users about a maintenance activity taking place. That’s an important function, of course, but what’s typically missing is a change control process that governs the actual performance of an installation or maintenance activity. Since this is where human errors occur and the potential for service disruption is high, it makes sense to employ a methodology at least as rigorous as the one the IT teams use.
The problem, of course, is that most FE teams are not wired for this type of activity. In normal buildings the stakes are much lower, and the extra time and effort it takes to implement change management processes is harder to justify. Facility management organizations, and the individuals they employ, are typically used to working in non-critical environments, and they bring the same tools and techniques to the table when working on data center facility infrastructure. So it’s not always second nature to ask why you need a process designed for the worst-case scenario when 99.9 times out of 100 you can get by just fine with a simpler approach. Unfortunately, three ‘nines’ of availability just doesn’t make the grade for most data center managers.
What does a real and effective facility change management program look like? The use of a Method of Procedure (MOP) or a similar work control document for all work on or around the critical infrastructure equipment is one indicator. Having a documented process for MOP usage and work performance is even better. Rules for vendor management and supervision, a comprehensive training program, formal Quality Control processes and a Computerized Maintenance Management System (CMMS) are other positive signs.
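To make the idea concrete, here is a minimal sketch of what a MOP record might capture, written in Python. The class and field names are hypothetical illustrations, not any industry-standard schema; the point is that each step carries its own verification and back-out plan, and no work starts without sign-off.

from dataclasses import dataclass, field
from typing import List

@dataclass
class MOPStep:
    """One step in a Method of Procedure, paired with its verification and back-out."""
    action: str            # what the technician does
    expected_result: str   # observation that confirms the step succeeded
    backout_action: str    # how to reverse the step if it fails

@dataclass
class MethodOfProcedure:
    """Hypothetical work-control record for critical infrastructure work."""
    title: str
    equipment: str         # e.g., "UPS-2A" (hypothetical equipment tag)
    risk_level: str        # e.g., "high" when the critical load is exposed
    approvers: List[str]   # sign-offs required before work starts
    steps: List[MOPStep] = field(default_factory=list)

    def is_approved(self, signatures: List[str]) -> bool:
        # No work begins until every required approver has signed off.
        return all(name in signatures for name in self.approvers)

mop = MethodOfProcedure(
    title="Quarterly UPS preventive maintenance",
    equipment="UPS-2A",
    risk_level="high",
    approvers=["facility manager", "customer"],
)
print(mop.is_approved(["facility manager"]))  # False: work cannot start yet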
The days are gone when the IT and FE groups could operate on different planes when it comes to change management. The data center is a single entity that can only perform as expected when everyone in the environment operates to the same standard of quality and reliability. It’s time for all Facility Engineering groups to drink the Kool-Aid and adopt effective change control practices. However, they may have to be dragged to the table by the folks in the organization that already understand the concepts. For more information on change management best practices, download my whitepaper:
By Martin Brennan, Critical Facilities Manager
Let’s picture it for a moment: you’ve built your top-of-the-line data center with all the bells and whistles. This room should service your IT needs for the next 10 years of server growth. The dust has just settled from the heat rejection test, where you consumed enough power to launch a satellite into space, and you’re ready to sit back and relax.
The reality is that it’s going to take IT the next 10 years to return the raised floor to those tested power levels, maybe even longer. What do you do with these huge cooling units in the meantime? There’s one unit cooling, one unit re-heating, one unit humidifying and one unit de-humidifying, all to satisfy one server in the room. This drives your PUE sky high. It’s the equivalent of trying to drive a race car at 1 mile per hour: your 10-cylinder engine is pumping, but your foot is on the brake.
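To put rough numbers on it, PUE is total facility power divided by IT power, so a nearly empty room gets punished. The loads below are hypothetical, purely for illustration:

# PUE = total facility power / IT equipment power.
it_load_kw = 5.0          # one lonely 5 kW server (hypothetical)
infrastructure_kw = 55.0  # CRACs, humidifiers, lighting (hypothetical)

pue = (it_load_kw + infrastructure_kw) / it_load_kw
print(f"PUE = {pue:.1f}")  # PUE = 12.0, versus ~1.5 for a well-loaded room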
Design processes typically focus on the end of a data center’s life at full build-out, and not enough consideration is given to the start and middle. Building only for the end state can create operational issues that drive up yearly operating costs. That’s why it’s important to protect your utility bills while operating a mostly empty data center. Here are five ideas you should start using to further your understanding of what is going on in your data center and help you minimize costs:
1. Generate a heat load study. Know what level of power is being consumed in your room. Based on PDU power output to servers, estimate how much cooling is required to dissipate that heat. Add in any outside factors: lighting, PDU transformer heat, and roof or exterior wall loads. Convert your CRAC unit output from tons to kW, then round up to the number of CRAC units you need to dissipate the heat load (see the sketch just after this list).
2. Designate master CRACs based upon the heat load data and CRAC unit proximity to the loads, and designate all remaining CRACs as standby units. Set temperature and humidity setpoints on the master units in line with your IT server standards, and open the tolerances on the standby CRACs.
3. Cycle your master/standby CRACs and revisit your heat load study on a monthly basis. This helps distribute run-time hours across compressors and condenser fans so that the CRACs closest to the heat loads are not the first to fail.
4. Seal all floor tile power penetrations and install blanking plates in all unoccupied cabinets. There's no way around this one.
5. When performing temperature checks in your data center, standardize your readings. Choose a consistent location to take them, for example: cold aisles, at server cabinet doors 4' above finished floor (AFF). This helps in troubleshooting airflow problems and in identifying air dams or the need for additional perforated tiles.
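Here is a rough sketch of the step-1 arithmetic in Python. All of the load figures and the CRAC capacity are hypothetical; the tons-to-kW conversion uses the standard factor of roughly 3.517 kW per ton of refrigeration.

import math

# Measured and estimated heat sources (all values hypothetical).
pdu_output_kw = 120.0          # power delivered to servers becomes heat
lighting_kw = 4.0              # outside factor: lighting
pdu_transformer_loss_kw = 6.0  # outside factor: PDU transformer heat
envelope_load_kw = 5.0         # outside factor: roof / exterior wall loads

total_heat_kw = (pdu_output_kw + lighting_kw
                 + pdu_transformer_loss_kw + envelope_load_kw)

# Convert CRAC capacity from tons to kW (1 ton ≈ 3.517 kW).
crac_capacity_tons = 20.0
crac_capacity_kw = crac_capacity_tons * 3.517

# Round up: you can't run a fraction of a CRAC.
masters_needed = math.ceil(total_heat_kw / crac_capacity_kw)
print(f"Heat load: {total_heat_kw:.0f} kW -> {masters_needed} master CRAC(s)")
# Remaining units become standby CRACs with opened tolerances (step 2).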
In the event of a master unit failure, the standby units will still cool the space as the temperature starts to rise in the room. You can then adjust temperature set points until the unit has been repaired.
Here's an operational mistake to avoid: powering off excess CRAC units. You may get away with one or two, but watch out! To control costs when purchasing a raised floor, rooms are often fitted on day one with all the perforated tiles required to operate at capacity. That decision removes the ability to power off individual CRAC units: for each CRAC powered off, airflow decreases across every perforated tile. The few cabinets with servers can then suffer from low CFM through their tiles, triggering high-temperature alarms. This is why we open the tolerances on standby CRACs rather than powering them off: static pressure is maintained, and the perforated tiles supplying a server are never starved of airflow.
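A quick back-of-the-envelope illustration of why this happens (the CFM and tile counts are hypothetical):

# Underfloor airflow is shared by every perforated tile, so powering off
# a CRAC starves all tiles at once, including the few that feed servers.
cfm_per_crac = 12000      # hypothetical airflow per running CRAC
perforated_tiles = 120    # day-one tile count for full capacity

for running_cracs in (4, 3, 2):
    cfm_per_tile = running_cracs * cfm_per_crac / perforated_tiles
    print(f"{running_cracs} CRACs running -> {cfm_per_tile:.0f} CFM per tile")
# 4 CRACs -> 400 CFM/tile, but 2 CRACs -> 200 CFM/tile: the loaded
# cabinets can hit high-temperature alarms even in a mostly empty room.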
Last but certainly not least, be sure to document all of your actions. Take snapshots of your PUE and operations before you incorporate your changes, and then document your energy savings. Now that you have completed these five items, you can bring down cooling costs for further savings and sleep easy at night knowing your decisions are backed by data.
There are many unique data center designs out there that are revolutionizing the way that the world thinks of data centers. No longer do we mandate that data centers be in a large box. As we’ve seen in the past few years, data centers can come in all shapes and sizes. Let’s take a look at some of the most innovative data centers out there right now.
Google – Hamina, Finland
This new data center from Google is in an old paper plant, located on the Gulf of Finland in the Baltic Sea. The original paper plant was equipped with underground tunnels designed to draw in sea water. By reusing that original design, Google was able to reconfigure the structure so that sea water, rather than electrically powered chillers, drives the cooling system. Google also designed a tempering building to mix the heated water with cool ocean water before it is released back into the sea, an important feature for ensuring little to no impact on the local ocean ecosystem.
Yahoo – Lockport, New York
Yahoo’s famous “chicken coop” data center has been open for about a year now. The facility is designed to maximize the use of free air cooling, which is made easier by its location in upstate New York. In fact, there are only 9 days a year when free cooling cannot be used, and the facility enjoys a remarkable PUE of 1.08. The design was inspired by farmers’ chicken coops and uses a complicated louver system to control the airflow throughout the buildings.
Iron Mountain – Boyers, PA
Talk about secure! This data center is built in an old limestone mine in western Pennsylvania. It’s located around 200 feet below ground, which allows it to take advantage of the naturally cool air to maintain the facility. The natural limestone walls also play a part in this by absorbing heat produced by the data center equipment.
Microsoft – Chicago
Much has been made of Microsoft’s Chicago data center, where servers are housed in 40-foot shipping containers. The data center is one of the largest in the world, with capacity for 112 containers full of servers on top of traditional data center server rooms. One of the interesting aspects of this facility is that the containers themselves vary widely, with many coming from different vendors. Some use a center-aisle layout, while others use a side aisle; some hold strictly servers, while others also house power equipment. It will be interesting to see the useful life of these containers and how they fare in the years to come.
Lakeside – Chicago
Chicago seems to have more than its fair share of innovative data centers. The Lakeside Technology Center could certainly win an award for the prettiest. Housed in the old R.R. Donnelley building, it was turned from a printing factory into a state-of-the-art carrier hotel, and now houses several big-name clients, including Equinix and Global Center. The original building is nearing 100 years old and showcases the craftsmanship of its day in the architectural details. Because it was built to support the heavy printing presses of the era, the transition to a data center was relatively seamless.
Hewlett Packard – Wynyard, England
We’ve already talked about cooling with sea water, but this data center is cooled by sea air from the North Sea. It uses a lower floor that cycles outside air through the facility, allowing free cooling for all but about 20 hours a year and letting the facility maintain a PUE of roughly 1.2. Another unique design feature: the servers are kept in white cabinets, which reflect more light and help save on the lighting bill.
Citi – Frankfurt, Germany
This data center was the first ever to earn LEED Platinum certification, thanks to the building’s many energy-saving design features. The most noticeable is the “green” roof, which helps regulate the building’s internal temperature and absorbs rain water. The facility can operate on outside-air cooling 63% of the year. When the temperature won’t allow outside air to be used, the facility relies on cooling water treated by reverse osmosis, which helps save 13 million gallons of water per year. That’s a lot of water!
Key takeaways from the article and webcast included:
- Most modern data centers utilize some level of SCADA control in their critical switchgear and mechanical plants.
- A direct connection to the internet is not required for a SCADA system to become infected.
- Unlike most other malware and hacks, cyber attacks on SCADA systems can cause catastrophic damage to “real world” electrical and mechanical infrastructure.
- Data center infrastructure is a tasty target for cybercriminals and cyber terrorists.
At the time, there were very few examples of cyber attacks against SCADA-controlled systems. However, the Stuxnet worm that damaged uranium enrichment centrifuges in Iran provided concrete evidence of what a well-executed SCADA exploit could achieve.
At the conclusion of the article and the webcast, I predicted that Stuxnet would be the first of many attacks on SCADA systems and that this vulnerability posed a real threat to national security. Furthermore, I predicted that attacks by for-profit cybercriminals would become common and would represent an increasing threat to unprotected commercial mission-critical facilities. In the few short months since the article and webcast, there is already evidence that these predictions were accurate.
Here are a few of the news items since the article and webcast:
Feb 2011: The online “hacktivist” collective known as “Anonymous” claims to have access to the Stuxnet worm. Criminal organizations and international or corporate espionage are obvious sources of cyber attacks on critical infrastructure. Hacktivist groups such as Anonymous are less well known to the general public, but are emerging as powerful players on the cyberwar landscape.
March 2011: Technology and application security firm Idappcom identifies 52 new SCADA exploits. According to UK-based digital publisher v3.co.uk: “Cyber criminals appear to be ramping up their interest in industrial control systems after research from application security management firm Idappcom found 52 new threats in March targeted at supervisory control and data acquisition (Scada) systems of the sort hit by the infamous Stuxnet worm. Tony Haywood, chief technology officer at Idappcom, told V3.co.uk that hackers could be going for the systems as they are typically less well defended than more mainstream public facing IT systems…”
May 2011: ICS-CERT (Industrial Control Systems Cyber Emergency Response Team), a branch of the US Department of Homeland Security (DHS), issued a number of advisories in 2011 regarding vulnerabilities in SCADA systems. These included ICSA-11-131-01, which describes vulnerabilities in Iconics’ Genesis32 and BizViz human-machine interface (HMI) products whose exploitation “results in remote arbitrary code execution with privileges of the current user.”
May 13, 2011: The Obama Administration offers a “Cybersecurity Legislative Proposal” to assist Congress in the formation of new cyber laws. The proposal concludes that, “Our Nation is at risk. The cybersecurity vulnerabilities in our government and critical infrastructure are a risk to national security, public safety, and economic prosperity.”
Without a doubt, the cybersecurity challenges confronting our nation have more facets than the vulnerability of SCADA systems. However, the federal government is taking a proactive and comprehensive stance on the issue of critical infrastructure security, and that stance will necessarily address the SCADA component of critical infrastructure facilities such as power generation stations and data centers.
These news items are snapshots that indicate a clear and growing threat to the security of SCADA systems. It is essential that private and public data centers recognize this vulnerability and take steps to secure their systems from cyberattack.
Your data center operations are one of the most vital parts of your company; you need to make sure they are in competent hands.
You want to reduce risk, expense, and wasted time so that you can focus on the core objectives that help you achieve your goals and grow your organization. When looking to streamline your data center operations, here are five steps you should take to ensure that your data is always available, secure, and reliable:
Make Sure the Process is Conveyed in an Operations Plan
By laying out clearly what you plan to achieve and precisely how to do it, your data center operations team members will be better able to reach pre-determined goals. This is an extremely important step that should be accomplished early on, much like writing a business plan when first launching a company. Without it, you can easily lose track of your end goals, making continuous improvement difficult.
Set Up Performance Indicators
You’ve paved the road, but you need to know how far you’ve gone. Discuss how the quality and success of the service delivery will be evaluated, initially and then throughout the duration of the contract. Once the process is defined, set up key performance indicators (KPIs): the metrics by which your operations program will be judged. By establishing a method for measuring your progress, you will be able to determine whether your strategy is working. If certain goals are not being met, then adjustments to the data center operations process need to be made.
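To make “measurable” concrete, here is a small sketch of two KPI calculations an operations team might track. The figures are made up for illustration:

# Availability over a year: uptime as a fraction of total hours.
uptime_hours = 8759.0
downtime_hours = 1.0
availability = uptime_hours / (uptime_hours + downtime_hours)
print(f"Availability: {availability:.4%}")  # 99.9886%, short of 'four nines'

# Mean time to repair (MTTR) across a quarter's incident log, in hours.
repair_times = [0.5, 2.0, 1.25]
mttr = sum(repair_times) / len(repair_times)
print(f"MTTR: {mttr:.2f} hours")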
Be Prepared for a Change in Scope of Work
Once you’ve established that a successful data center operations strategy is in place and you know how to measure it efficiently, it’s important to plan for long-term operational success. In critical environments, the unexpected can happen at any time. Having a long-term plan in place can help you transition seamlessly during these times of change.
What happens if there are facility expansions or additional sites? You need a prepared strategy for sudden growth so that you are not overwhelmed by rapidly rising needs, and no time is wasted scrambling to accommodate expansions. This also means recognizing factors that could change the proposed scope of services.
Define Your Customer Service Needs
When you need to get a hold of your facility operations or vendors, are they easily accessible?
Something could come up at any time of day that requires you to get in contact with them. Even if they do not offer around-the-clock assistance, do their hours mesh well with yours? Are they in the same time zone? Do they work quickly and efficiently to resolve the issues they are responsible for? These questions and more must be answered for you to be fully prepared.
Establish Who Your Go-To Experts Are
It’s well known that a majority of data center outages are caused by human error. This is why your personnel may be the most important component of your data center operations plan. You should have access to a team of highly qualified and experienced personnel that will be able to help you with any dilemma you face. Continuing education and training are important to guarantee that your experts are as qualified as possible. Make sure their certifications are up to date and that they have methods in place to continually hone their skills. This will help ensure your success.
For more information, download our Hitchhiker's Guide to Data Center Operations:
Last week, The Uptime Institute announced their 2011 Green Enterprise IT Award winners, and Lee Technologies was one of them! Jointly with Harris Corporation, Lee Technologies has been awarded in the “Audacious Idea” category for its unique multi-zone water containment system. This system allows irrigation for the entire landscaped property (4.5 acres) without using potable water at a savings of 2.6M gallons of water per annum. Significant cost savings are also achieved by this innovation.
Lee Tech and Harris Corporation will be attending the Uptime Institute Symposium to accept this award, and to present the case study. Their session will be Wednesday, May 11 from 11:30am to 12:00pm. If you’re attending the show, please stop by! We look forward to seeing you there.
By Steven Manos
Now let’s not start singing along like we just left an Austrian convent to teach a bunch of lederhosen-clad rich kids how to sing. Instead, take a trip with me all the way back to January 14th, 1972, when the world was forever changed by the angelic, almost other-worldly sound of The Brady Bunch kids. “Dough Re Mi” was the title of episode 65, in which Peter Brady and the rest of the gang belted out the group’s timeless second single, “Time to Change.”
Aside from being a public service announcement on how awkward it can be to be a pre-pubescent teen, this harmonic vocal symphony could easily become the anthem for the modular data center revolution.
Let’s face it, when the Brady kids were telling us that “when it’s time to change, you’ve got to re-arrange,” it was like they had Nostradamus-like seer ability as it relates to our industry. With the almost daily refinements in container/modular data center design and the pace at which new products/services are popping up to support this new way to deploy, the industry is doing its best to “re-arrange.”
For those who anticipate having to provide the maintenance and support of these new fandangled gadgets (that one’s for you, Grandma), I’m sure many of you feel as though you are stuck on the Starship Enterprise, being buried by a pile of propagating Tribbles. Because of this, I have grown acutely interested in how managing and maintaining these solutions differs from managing a traditional raised floor.
Sure, I know a 225kVA UPS is a 225kVA UPS whether it’s sitting in an electrical room or in a metal box, but what ARE the idiosyncrasies of maintaining and supporting containerized or modular data center solutions? Are there differences? Any subtle nuances? Are there enough of either to warrant an article that summons the likes of the Brady Bunch or one of the greatest episodes of Star Trek? My initial assumption would be that, with modularity promoting simplicity, we should if anything see less system complexity, fostering easier maintenance and support.
Well, while it is impossible for me to know all of the adjustments in maintenance and support for every solution out there, here are a few of the simple/blatant ones that come to mind:
- Micro Climate: Like my mother used to always say in the summertime, “Shut the door! Were you born in a barn?!” The container/module is an extremely controlled environment, simple issues such as leaving a door open could significantly compromise hot/cold aisle dynamics.
- Spatially Confined: Much like my swim trunks from last summer, things can be a little tight. Staff must operate under OSHA confined space requirements. With containers being treated as equipment, regulations would be similar to working on an air handler, etc.
- Harsh working conditions: I was just in a colo where the temp was in the high 70’s and it felt hot. Like pour more water on the pile of sauna stones and don’t sit too close to that guy over there hot.
For those that work in container hot aisles though, high 70’s would feel as if they were working in shorts… in Duluth…in February. Here too there are a number of OSHA considerations with hot aisles reaching 105 degrees.
Obviously these examples are primarily environmental due to the dynamics inherent in working within a smaller scale. I would love to get feedback from those out there to determine what other differences you have seen or assume one would see in maintaining container or modular based solutions.
By the way, if anyone finds a shirt like the one Peter is sporting in the video, I wear an XXL.
Sha na na na, na na na na na…
Sha na na na na!
In case you haven’t heard, we’ve got some big news to share. Schneider Electric has acquired Lee Technologies to expand its IT business in data center management. To read more about the deal, here’s the full press release.
There’s a lot going on in the data center world this month! We’ve been busy with tradeshows, events, and planning upcoming webinars. Last week, several Lee Tech employees attended the DataCenterDynamics show in New York City. Any excuse to go to the Big Apple will do, right? At the show, we got to reconnect with lots of old industry friends, as well as meet some new ones.
Also last week, our own Steve Manos sat on a panel at a Lake Michigan 7x24 Exchange chapter meeting. The panel was on modular data centers, and had over 200 people turn out.
This week, we’ve been busy preparing for our next webinar, “Is Your Data Center Ready for STUXNET?” It’s going to be live on Friday, March 25th at 2:00pm EDT, with Eric Gallant presenting. We’re always interested in new ideas for content. Are there any topics you would like to see in a webinar or a white paper?
The last week in March, we’ll be off to Las Vegas for AFCOM’s Data Center World. We’re looking forward to being in sunny and warm Las Vegas for a change! Our own Mike Hagan will be speaking on Wednesday, March 30th at 9:15 about Modular Data Centers. So, if you’re at AFCOM, please come check out the session!
What data center events are you attending this year?