
A white paper released today by Dell shows that the Dell M1000e blade chassis infrastructure offers significant power savings compared to equivalent HP and IBM blade environments. In fact, the results were audited by an outside firm, Enterprise Management Associates (http://www.enterprisemanagement.com). After the controversy over the Tolly Group report comparing HP and Cisco, I decided to take the time to investigate these findings a bit deeper.

The Dell Technical White Paper titled “Power Efficiency Comparison of Enterprise-Class Blade Servers and Enclosures” was written by the Dell Server Performance Analysis Team. This team normally runs competitive comparisons for internal use; however, Dell decided to publish these findings externally because the results were unexpected. The team used the industry-standard SPECpower_ssj2008 benchmark to compare the power draw and performance per watt of blade solutions from Dell, HP, and IBM. SPECpower_ssj2008 is the first industry-standard benchmark created by the Standard Performance Evaluation Corporation (SPEC) that evaluates the power and performance characteristics of volume server class and multi-node class computers. According to the white paper, the purpose of using this benchmark was to establish a level playing field for examining the true power efficiency of the Tier 1 blade server providers using identical configurations.
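As I understand the benchmark, SPECpower_ssj2008 runs a Java workload at ten target load levels (100% down to 10%) plus active idle, recording throughput (ssj_ops) and average watts at each level; the headline “overall ssj_ops/watt” score is the sum of the measured ssj_ops divided by the sum of the average power across all intervals. A rough sketch of that calculation, using invented numbers (not figures from the white paper):

```python
# Rough sketch of how SPECpower_ssj2008's headline score is derived.
# All numbers below are invented for illustration -- none come from
# Dell's white paper or an actual benchmark run.

# (target load %, measured ssj_ops, average watts) for the ten load
# levels, generated here with a simple made-up linear model.
intervals = [(load, load * 2_850, 120 + load * 1.4) for load in range(100, 0, -10)]
intervals.append((0, 0, 120.0))  # active idle: zero ops, but power still counts

total_ops = sum(ops for _, ops, _ in intervals)
total_watts = sum(watts for _, _, watts in intervals)

# Headline metric: total throughput divided by total average power,
# with the active-idle interval included in the denominator.
print(f"overall ssj_ops/watt: {total_ops / total_watts:.1f}")
```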

What Was Tested

Each blade chassis was fully populated with blade servers running a pair of Intel Xeon X5670 CPUs. The Dell configuration used 16 x M610 blade servers, the HP configuration used 16 x BL460c G6 blade servers, and the IBM configuration used 14 x HS22 blade servers, since the IBM BladeCenter H holds a maximum of 14 servers. Each server was configured with 6 x 4GB DIMMs (24GB total) and 2 x 73GB 15k SAS drives, running Microsoft Windows Server 2008 R2 Enterprise. Each chassis used the maximum number of power supplies (Dell: 6, HP: 6, IBM: 4) and was populated with a pair of Ethernet pass-thru modules in the first two I/O bays.

Summary of the Findings

I don’t want to re-write the 48-page technical white paper, so I’ll summarize the results (a quick sketch after the list shows how these percentage figures are derived):

  • While running the CPUs at 40–60% utilization, Dell’s chassis used 13–17% less power than the HP c7000 with 16 x BL460c G6 servers
  • While running the CPUs at 40–60% utilization, Dell’s chassis used 19–20% less power than the IBM BladeCenter H with 14 x HS22s
  • At idle, Dell’s chassis used 24% less power than the HP c7000 with 16 x BL460c G6 servers
  • At idle, Dell’s chassis used 63.6% less power than the IBM BladeCenter H with 14 x HS22s
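For clarity on how those percentages work, each “X% less power” figure is relative savings: competitor watts minus Dell watts, divided by competitor watts. A quick sketch with hypothetical wattages (the white paper has the real measurements):

```python
# How a "percent less power" figure is derived. These wattages are
# hypothetical placeholders -- the real measurements are in the white paper.
def percent_less_power(dell_watts: float, competitor_watts: float) -> float:
    return (competitor_watts - dell_watts) / competitor_watts * 100

print(f"{percent_less_power(dell_watts=4000, competitor_watts=4700):.1f}% less")  # 14.9% less
```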

[Chart: Dell blade solution power comparison]

Following a review of the findings, I had the opportunity to interview Dell’s Senior Product Manager for Blade Marketing, Robert Bradfield, and ask him some questions about the study.

Question – “Why wasn’t Cisco’s UCS included in this test?”

Answer – The Dell testing team didn’t have the right servers. They do have a Cisco UCS, but they don’t have the UCS blade servers that would be equal to the BL460c G6 or the HS22.

Question – “Why did you use pass-thru modules for the design, and why only two?”

Answer – Dell wanted to create a level playing field. Each vendor has similar network switches, but there are differences, and Dell did not want those differences to impact the testing at all, so they chose to go with pass-thru modules. The same reasoning explains why they used only two: with Dell having 6 I/O bays, HP having 8 I/O bays, and IBM having 8 I/O bays, it would have been challenging to create an equal environment to measure the power accurately.

Question – “How long did it take to run these tests?”

Answer – It took a few weeks. Dell placed all 3 blade chassis side by side, but they only ran the tests on one chassis at a time, to give the test in progress absolute focus. In fact, the two chassis not being tested were not running at all (no power), because the testing team wanted to ensure there were no thermal variations.

Question – “Were the systems on a bench, or did you have them racked?”

Answer – All 3 chassis were racked, each in its own rack. They were properly cooled, with perforated doors and vented floor panels beneath them. In fact, the temperatures never varied by even 1 degree across the enclosures.

Question – “Why do you think the Dell design offered the lowest power in these tests?”

Answer – There are three contributing factors to the Dell M1000e chassis drawing less power than HP and IBM.

The first is the 2700W Platinum-certified power supply. It offers greater energy efficiency than previous power supplies, and it now ships as the standard power supply in the M1000e chassis. However, truth be told, the difference between “Platinum” certified and “Gold” certified is only 2–3%, so this adds very little to the power savings seen in the white paper.

Second is the technology of the Dell M1000e fans. Dell has patent-pending fan control algorithms that help provide better fan efficiency. From what I understand, this approach helps ensure that at no point does a fan rev up to “high”. (If you are interested in reading about the patent-pending fan control technology, pour yourself a cup of coffee and read all about it at the U.S. Patent Office website – application number 20100087965.) Another interesting fact is that the fans used in the Dell M1000e are balanced by the manufacturer to ensure proper rotation. It is a similar process to the way your car tires are balanced – there are one or two small weights on each fan. (This is something you can validate if you own a Dell M1000e.)

Third, it comes down to the overall architecture of the Dell M1000e chassis being designed for efficient laminar airflow. In fact (per Robert Bradfield), when you compare the Dell M1000e as tested in this technical white paper against the IBM BladeCenter H, the power saved over a one-year period would be enough to power a single U.S. home for a year.
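Dell’s patent-pending algorithm obviously isn’t public in code form, but the general idea of temperature-proportional fan control – ramping speed gradually instead of jumping straight to “high” – can be sketched roughly like this (all thresholds and step sizes are invented, purely to illustrate the concept, not Dell’s actual implementation):

```python
# Illustrative sketch of temperature-proportional fan control. This is
# NOT Dell's patented algorithm -- every constant here is invented.
# The idea: ramp fan duty gradually with inlet temperature instead of
# jumping to "high", since fan power rises steeply with fan speed.

MIN_DUTY, MAX_DUTY = 30, 100       # PWM duty-cycle bounds, percent (invented)
LOW_TEMP, HIGH_TEMP = 25.0, 55.0   # degrees C mapped onto that range (invented)
MAX_STEP = 5                       # max change per interval, to avoid rev-ups

def next_duty(current_duty: int, inlet_temp_c: float) -> int:
    """Return the next fan duty cycle, stepping smoothly toward a target."""
    # Linearly map the inlet temperature onto the duty-cycle range.
    span = (inlet_temp_c - LOW_TEMP) / (HIGH_TEMP - LOW_TEMP)
    target = round(MIN_DUTY + span * (MAX_DUTY - MIN_DUTY))
    target = max(MIN_DUTY, min(MAX_DUTY, target))
    # Move toward the target, but never by more than MAX_STEP at a time.
    delta = max(-MAX_STEP, min(MAX_STEP, target - current_duty))
    return current_duty + delta

duty = MIN_DUTY
for temp in (28, 34, 41, 41, 38):  # a made-up series of inlet readings
    duty = next_duty(duty, temp)
    print(f"inlet {temp}C -> fan duty {duty}%")
```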

I encourage you, the reader, to review the technical white paper (“Power Efficiency Comparison of Enterprise-Class Blade Servers and Enclosures”) for yourself and see what your thoughts are. I’ve looked for tricks like the use of solid-state drives or power-efficient memory DIMMs, but this seems to be legit. However, I know there will be critics, so voice your thoughts in the comments below. I promise you Dell is watching to see what you think…

(updated 1/13/2010 – see bottom of blog for updates)

Eric Gray at www.vcritical.com blogged today about the benefits of using a flash-based device, like an SD card, for loading VMware ESXi, so I thought I would take a few minutes to touch on the topic.

As Eric mentions, probably the biggest benefit of running VMware ESXi from an embedded device is that you don’t need local drives, which lowers the power and cooling requirements of your blade server. While he mentions HP in his blog, both HP and Dell offer SD slots in their blade servers – so let’s take a look:

HP
HP currently offers these SD slots in its BL460c G6 and BL490c G6 blade servers. As you can see from the picture on the left (thanks again to Eric at vCritical.com), HP lets you access the SD slot from the top of the blade server. This makes it fairly convenient to reach, although once the image is installed on the SD card, it’s probably not ever coming out. HP’s QuickSpecs for the BL460c G6 offer an “HP 4GB SD Flash Media” option with a current list price of $70; however, I have been unable to find any documentation that says you MUST use this SD card, so if you want to try it with your own personal SD card first, good luck. It is important to note that HP does not currently offer VMware ESXi, or any other virtualization vendor’s software, pre-installed on an SD card, unlike Dell.

Dell
Dell has been offering SD slots on select servers for quite a while. In fact, I can remember seeing it at VMworld 2008: everyone else was showing “embedded hypervisors” on USB keys while Dell was using an SD card. I don’t know that I have a personal preference between USB and SD, but the point is that Dell was ahead of the game on this one.

Dell currently offers the SD slot only on its M805 and M905 blade servers. These are full-height servers, which could be considered good candidates for virtualization hosts due to their redundant connectivity, large memory capacity, and high I/O (but that’s for another blog post).

Dell chose to place the SD slots on the bottom rear of its blade servers. I’m not sure I agree with the placement, because if you need to access the card for whatever reason, you have to pull the server completely out of the chassis to service it. It’s a small thing, but it adds time and complexity to the serviceability of the server.

An advantage Dell has over HP is that it offers VMware ESXi 4 PRE-LOADED on the SD card upon delivery. Per the Dell website, an SD card with ESXi 4 (the basic edition, not Standard or Enterprise) is available for $99, listed as “VMware ESXi v4.0 with VI4, 4CPU, Embedded, Trial, No Subsc, SD,NoMedia”. Yes, it’s considered a “trial” and it’s the basic version with no bells or whistles, but it is pre-loaded, which translates to time savings. There are options to upgrade the ESXi to either Standard or Enterprise as well (for additional cost, of course).

It is important to note that this discussion was only about SD slots. All of the blade server vendors, including IBM, have incorporated internal USB slots into their blade servers, so even where a specific server lacks an SD slot, you can still load the hypervisor onto a USB key (where supported).
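As a side note, getting a hypervisor image onto one of these flash devices yourself is essentially a raw block copy. Here is a minimal, generic sketch – not a vendor-supported procedure – where both the image filename and the device path are hypothetical placeholders (and writing to the wrong device will destroy whatever is on it):

```python
# Generic raw image copy onto an SD/USB device -- a sketch, not a
# vendor-supported procedure. IMAGE and DEVICE are hypothetical paths;
# writing to the wrong device will destroy its contents.
import shutil

IMAGE = "esxi-embedded.img"  # hypothetical: a raw hypervisor flash image
DEVICE = "/dev/sdX"          # hypothetical: replace with the real SD/USB device

with open(IMAGE, "rb") as src, open(DEVICE, "wb") as dst:
    shutil.copyfileobj(src, dst, length=1024 * 1024)  # copy in 1 MiB chunks
```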

1/13/2010 UPDATE – SD slots are also available on the BL280c G6 and BL685c G6.

There is also an HP Advisory discouraging use of an internal USB key for embedded virtualization. Check it out at:

http://h20000.www2.hp.com/bizsupport/TechSupport/Document.jsp?objectID=c01957637&lang=en&cc=us&taskId=101&prodSeriesId=3948609&prodTypeId=3709945