
A white paper released today by Dell shows that the Dell M1000e blade chassis infrastructure offers significant power savings compared to equivalent HP and IBM blade environments. In fact, the results were audited by an outside source, Enterprise Management Associates (http://www.enterprisemanagement.com). After the controversy over the Tolly Group report comparing HP and Cisco, I decided to take the time to investigate these findings a bit deeper.

The Dell technical white paper titled “Power Efficiency Comparison of Enterprise-Class Blade Servers and Enclosures” was written by the Dell Server Performance Analysis Team. This team normally runs competitive comparisons for internal use; however, Dell decided to publish these findings externally because the results were unexpected. The team used the industry-standard SPECpower_ssj2008 benchmark to compare the power draw and performance per watt of blade solutions from Dell, HP and IBM. SPECpower_ssj2008 is the first industry-standard benchmark created by the Standard Performance Evaluation Corporation (SPEC) that evaluates the power and performance characteristics of volume server class and multi-node class computers. According to the white paper, the purpose of using this benchmark was to establish a level playing field to examine the true power efficiency of the Tier 1 blade server vendors using identical configurations.
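For readers unfamiliar with the benchmark, here is a minimal sketch of how its summary metric is commonly described: the overall ssj_ops/watt score is the total throughput measured across the target load levels divided by the total average power measured at those levels plus active idle. The load levels and readings below are hypothetical placeholders, not figures from the white paper; this is only to illustrate the arithmetic behind a “performance per watt” score.

```python
# Simplified sketch of the SPECpower_ssj2008 summary metric.
# The real benchmark steps through ten load levels plus active idle;
# the readings below are hypothetical, not from the white paper.

measurements = [
    # (target load, ssj_ops, average watts) -- hypothetical values
    (1.0, 900_000, 4200),
    (0.8, 720_000, 3800),
    (0.6, 540_000, 3400),
    (0.4, 360_000, 3000),
    (0.2, 180_000, 2600),
    (0.0, 0,       2200),  # active idle: draws power but produces no ssj_ops
]

total_ops = sum(ops for _, ops, _ in measurements)
total_watts = sum(watts for _, _, watts in measurements)

print(f"overall ssj_ops/watt ≈ {total_ops / total_watts:.1f}")
```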

What Was Tested

Each blade chassis was fully populated with blade servers running a pair of Intel Xeon X5670 CPUs. In the Dell configuration, 16 x M610 blade servers were used; in the HP configuration, 16 x BL460c G6 blade servers were used; and in the IBM configuration, 14 x HS22 blade servers were used, since the IBM BladeCenter H holds a maximum of 14 servers. Each server was configured with 6 x 4GB DIMMs (24GB total) and 2 x 73GB 15K SAS drives, running Microsoft Windows Server 2008 R2 Enterprise. Each chassis used the maximum number of power supplies (Dell: 6, HP: 6, IBM: 4) and was populated with a pair of Ethernet pass-thru modules in the first two I/O bays.

Summary of the Findings

I don’t want to re-write the 48-page technical white paper, so I’ll summarize the results below; a quick sketch after the list shows how a “percent less power” figure like these is computed.

  • While running the CPUs at 40 – 60% utilization, Dell’s chassis used 13 – 17% less power than the HP C7000 with 16 x BL460c G6 servers
  • While running the CPUs at 40 – 60% utilization, Dell’s chassis used 19 – 20% less power than the IBM BladeCenter H with 14 x HS22s
  • At idle power, Dell’s chassis used 24% less power than the HP C7000 with 16 x BL460c G6 servers
  • At idle power, Dell’s chassis used 63.6% less power than the IBM BladeCenter H with 14 x HS22s
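For clarity, a “percent less power” figure is simply the difference in measured wattage divided by the competitor’s wattage. The numbers in this sketch are hypothetical, not taken from the white paper:

```python
# Hypothetical chassis wattages (not from the white paper) showing how a
# "percent less power" figure like those above is computed.
dell_watts, hp_watts = 3500.0, 4100.0

savings_pct = (hp_watts - dell_watts) / hp_watts * 100
print(f"Dell uses {savings_pct:.1f}% less power than HP at this load level")
```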

Dell - Blade Solution Chart

Following a review of the findings, I had the opportunity to interview Dell’s Senior Product Manager for Blade Marketing, Robert Bradfield, and ask him some questions about the study.

Question – “Why wasn’t Cisco’s UCS included in this test?”

Answer – The Dell testing team didn’t have the right servers. They do have a Cisco UCS chassis, but they don’t have the UCS blade servers that would be equivalent to the BL460c G6 or the HS22.

Question – “Why did you use pass-thru modules for the design, and why only two?”

Answer – Dell wanted to create a level playing field. Each vendor has similar network switches, but there are differences, and Dell did not want those differences to impact the testing at all, so they chose to go with pass-thru modules. The same reasoning explains why they used only two: with Dell having 6 I/O bays, HP having 8 I/O bays and IBM having 8 I/O bays, it would have been challenging to create an equal environment in which to measure the power accurately.

Question – “How long did it take to run these tests?”

Answer – It took a few weeks. Dell placed all 3 blade chassis side-by-side but they only ran the tests on one chassis at a time. They wanted to give the test in progress absolute focus. In fact, the two chassis that were not being tested were not running at all (no power) because the testing team wanted to ensure there were no thermal variations.

Question – “Were the systems on a bench, or did you have them racked?”

Answer – All 3 chassis were racked, each in its own rack. They were properly cooled with perforated doors and vented floor panels. In fact, the temperatures never varied by even one degree between the enclosures.

Question – “Why do you think the Dell design offered the lowest power in these tests?”

Answer – There are three contributing factors to the success of Dell’s M1000e chassis offering a lower power draw than HP and IBM.

The first is the 2700W Platinum-certified power supply. It offers greater energy efficiency over previous power supplies, and it now ships as the standard power supply in the M1000e chassis. However, truth be told, the difference between “Platinum” and “Gold” certification is only 2 – 3%, so this adds very little to the power savings seen in the white paper.

Second is the technology of the Dell M1000e fans. Dell has patent-pending fan control algorithms that help provide better fan efficiency. From what I understand, this patent helps ensure that at no point does a fan rev up to “high”. (If you are interested in reading about the patent-pending fan control technology, pour yourself a cup of coffee and read all about it at the U.S. Patent Office website – application number 20100087965.) Another interesting fact is that the fans used in the Dell M1000e are balanced by the manufacturer to ensure proper rotation. It is a similar process to the way your car tires are balanced – there are one or two small weights on each fan. (This is something you can verify if you own a Dell M1000e.)

Third, and most importantly, it comes down to the overall architecture of the Dell M1000e chassis being designed for efficient laminar airflow. In fact (per Robert Bradfield), when you compare the Dell M1000e as tested in this technical white paper with the IBM BladeCenter H, the power saved over a one-year period would be enough to power a single U.S. home for one year.
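That last claim is easy to sanity-check with back-of-the-envelope math. The chassis-level power delta below is an assumption of mine (the white paper has the measured values), and the roughly 11,000 kWh/year figure for an average U.S. household is my approximation of published EIA estimates from around this time:

```python
# Back-of-the-envelope check of the "power a U.S. home for a year" claim.
# Both numbers below are assumptions, not figures from the white paper.

power_delta_watts = 1300           # assumed average chassis-level savings vs. BladeCenter H
hours_per_year = 24 * 365

kwh_saved_per_year = power_delta_watts * hours_per_year / 1000
avg_us_home_kwh_per_year = 11_000  # approximate EIA average household consumption

print(f"annual savings: {kwh_saved_per_year:,.0f} kWh "
      f"({kwh_saved_per_year / avg_us_home_kwh_per_year:.1f}x an average U.S. home)")
```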

I encourage you, the reader, to review this technical white paper (“Power Efficiency Comparison of Enterprise-Class Blade Servers and Enclosures”) for yourself and see what your thoughts are. I looked for tricks like the use of solid-state drives or power-efficient memory DIMMs, but the study appears to be legitimate. However, I know there will be critics, so voice your thoughts in the comments below. I promise you Dell is watching to see what you think…


Thanks to fellow blogger M. Sean McGee (http://www.mseanmcgee.com/), I was alerted to the fact that Cisco announced today, Sept. 14, their 13th blade server in the UCS family – the Cisco UCS B230 M1.

This newest addition performs a few tricks that no other vendor has been able to match. Read the rest of this entry »

Last week at VMworld 2010 I had the opportunity to get some great pictures of HP’s and Dell’s newest blade servers: the HP ProLiant BL620c G7, the HP ProLiant BL680c G7, and the Dell PowerEdge M610x and M710HD. These newest blade servers are exciting offerings from HP and Dell, so I encourage you to take a few minutes to look. Read the rest of this entry »

One of the questions I get the most is, “which blade server option is best for me?” My honest answer is always, “it depends.” The reality is that the best blade infrastructure for YOU is going to depend on what is important to you. With that in mind, I figured it would be a good exercise to do a high-level comparison of the blade chassis offerings from Cisco, Dell, HP and IBM. If you read through my past blog posts, you’ll see that my goal is to be as unbiased as possible when it comes to talking about blade servers. I’m going to attempt to be “vendor neutral” with this post as well, but I welcome your comments, thoughts and criticisms. Read the rest of this entry »

The VMware VMmark web site was recently updated to show Dell’s PowerEdge M910 blade server in the #1 slot (for blades) in the two-socket space. I think the PowerEdge M910 is very intriguing, so I thought I’d spend some time highlighting its features. Read the rest of this entry »

The Venetian Hotel and Casino Data Center
They make it look so complicated in the movies: detailed covert operations to hack into a casino’s mainframe, preceded by weeks of staged and planned rehearsals. But I’m here to tell you it’s much easier than that.

This is my story of how I had 20 seconds of complete access to The Venetian Casino’s data center, and lived to tell about it.

Read the rest of this entry »

Dell announced today two new additions to their blade server family – the PowerEdge 11G M710HD and the M610x. The two new servers are just one part of Dell’s “Blade 3.0 Launch” – a campaign highlighting Dell’s ongoing effort to become the leader in blade server technology. Over the next several months, Dell will be making changes to their chassis infrastructure, introducing more efficient power supplies and fans that will require up to 10% less power than the existing chassis. Don’t worry, though – there will not be a new chassis. They’ll simply be upgrading the fans and power supplies that ship standard, at no charge to the customer.

Dell has also announced a significant upgrade to their Chassis Management Controller (CMC) software. This is great news, as Dell’s chassis management software interface had not had an update since the early part of the decade. The CMC 3.0 release offers a better user interface and improved ease of use. One of the key features CMC 3.0 offers is the ability to upgrade the iDRAC, BIOS, RAID, NIC and diagnostics firmware on all of the blades at one time, offering huge time savings. Expect the CMC 3.0 software to be available in early July 2010. For demos of the new interface, jump over to Dell TechCenter.

PowerEdge 11G M710HD
Ideal for virtualization or applications requiring large amounts of memory, the M710HD is a half-height blade server that offers up:

* Up to 2 Intel Xeon 5500 or 5600 series processors
* 18 memory DIMMs
* 2 hot-swap drives (SAS and solid state drive options)
* 2 mezzanine card slots
* Dual SD slots for a redundant hypervisor
* 2 or 4 x 1Gb NICs

On paper, the Dell M710HD looks like a direct competitor to the HP ProLiant BL490c G6 – and it is – however Dell has added something that could change the blade server market: a flexible embedded network controller. The “Network Daughter Card” (NDC) is the blade server’s LAN on Motherboard (LOM), but on a removable daughter card, very similar to the mezzanine cards. This is really cool stuff because this design allows a user to change their blade server’s on-board I/O as their network grows. For example, today many IT environments are standardized on 1Gb networks for server connectivity; however, 10Gb connectivity is becoming more and more prevalent. When users move from 1Gb to 10Gb in their blade environments, the NDC design gives them the ability to upgrade the onboard network controller from 1Gb to 10Gb, thereby protecting their investment. Any time a manufacturer offers investment protection, I get excited. An important note – the M710HD will come with an NDC that provides up to 4 x 1Gb NICs when the Dell PowerConnect M6348 Ethernet switch is used.

PowerEdge 11G M610x
As the industry continues to hype up GPGPU (general-purpose computing on graphics processing units), it’s no surprise to see that Dell has announced the availability of a blade server with dedicated PCIe x16 Gen2 slots. Here are some quick details about this blade server:

* Full-height blade server
* Up to 2 Intel Xeon 5500 or 5600 series processors
* 12 memory DIMMs
* 2 hot-swap drives
* 2 mezzanine card slots
* 2 x PCIe x16 (Gen2) slots

I know the skeptical reader will think, “so what – HP and IBM have PCIe expansion blades,” which is true – however, the M610x blade server differentiates itself by offering 2 x PCIe x16 Gen2 slots that can hold cards drawing up to 250W, allowing this blade server to handle many of the graphics cards designed for GPGPU, or even the latest I/O adapters from Fusion-io. Although this blade server can handle these niche PCIe cards, don’t overlook the opportunity to take advantage of the PCIe slots for situations like fax modems, dedicated SCSI controller needs, or even dedicated USB requirements.

I’m curious to know what your thoughts are about these new servers. Leave me a comment and let me know.

For your viewing pleasure, here are some more views of the M610x.

NOTE: IDC revised their report on May 28, 2010. This post now includes those changes.

IDC reported on May 28, 2010 that worldwide server factory revenue increased 4.7% (revised from 4.6%) year over year to $10.4 billion in the first quarter of 2010 (1Q10). They also reported that the blade server market accelerated and continued its sharp growth in the quarter, with factory revenue increasing 37.2% (revised from 37.1%) year over year and shipments growing 20.8% compared to 1Q09. According to IDC, nearly 90% of all blade revenue is driven by x86 systems, a segment in which blades now represent 18.8% of all x86 server revenue.

While the press release did not provide details of the market share for all of the top 5 blade vendors, they did provide data for the following:

#1 market share: HP increased their market share from 52.4% in Q4 2009 to 56.2% in Q1 2010

#2 market share: IBM decreased their market share from 35.1% in Q4 2009 to 23.6% in Q1 2010.

The remaining 20.2% of market share was not broken out, but I imagine it is split between Dell and Cisco. In fact, given that Cisco was not even mentioned in the IDC report, I’m willing to bet a majority of that share belongs to Dell. I’m working on getting some clarification on that (if you’re with Dell or Cisco and can help, please shoot me an email.)

According to Jed Scaramella, senior research analyst in IDC’s Datacenter and Enterprise Server group, “In the first quarter of 2009, we observed a lot of business in the mid-market as well as refresh activity of a more transactional nature; these factors have driven x86 rack-based revenue to just below 1Q08 value. Blade servers, which are more strategic in customer deployments, continue to accelerate in annual growth rates. The blade segment fared relatively well during the 2009 downturn and have increased revenue value by 13% from the first quarter of 2008.”

For the full IDC report covering the Q1 2010 Worldwide Server Market, please visit http://www.idc.com/getdoc.jsp?containerId=prUS22356410

New link (revised release): http://www.idc.com/getdoc.jsp?containerId=prUS22360110

Let’s face it. Virtualization is everywhere.

Odds are there is something virtualized in your data center; if not, there soon will be. As more workloads become virtualized, chances are you are going to run out of “capacity” on your virtualization host. When a host’s capacity is exhausted, 99% of the time it is because the host ran out of memory, not CPU. Typically you would have to add another ESX host server when you run out of capacity, and when you do, you are adding more hardware cost AND more virtualization licensing cost. But what if you could simply add memory when you need it instead of buying more hardware? Now you can, with Dell’s FlexMem Bridge.
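To make that concrete, here is a rough capacity sketch with hypothetical host and VM sizes (none of these numbers come from Dell); it simply illustrates why memory tends to be the resource that runs out first:

```python
# Rough capacity sketch with hypothetical host and VM sizes, illustrating
# why memory is usually the binding constraint on a virtualization host.

host_memory_gb = 96
host_cores = 12
vcpu_per_core_ratio = 4        # commonly tolerated vCPU:core oversubscription

vm_memory_gb = 4
vm_vcpus = 1

max_vms_by_memory = host_memory_gb // vm_memory_gb                 # 24 VMs
max_vms_by_cpu = (host_cores * vcpu_per_core_ratio) // vm_vcpus    # 48 VMs

print(f"memory-bound limit: {max_vms_by_memory} VMs")
print(f"CPU-bound limit:    {max_vms_by_cpu} VMs")
# Memory caps this host at 24 VMs while the CPUs could handle roughly twice
# that -- the gap FlexMem Bridge is meant to close by adding memory alone.
```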

Background
You may recall that I mentioned the FlexMem Bridge technology in a previous post, but I don’t think I did it justice. Before I describe what the FlexMem Bridge technology is, let me provide some background. With the Intel Xeon 7500 CPU (and in fact with all Intel Nehalem architectures), memory is controlled by a memory controller located on the CPU. Therefore, you have to have a CPU in place to access the associated memory DIMMs…up until now. Dell’s innovative approach removes the need to have a CPU installed in order to access that memory.

Introducing Dell FlexMem Bridge
Dell’s FlexMem Bridge sits in CPU sockets #3 and #4 and connects the memory controller of CPU 1 to the memory DIMMs associated with CPU socket #3, and that of CPU 2 to the memory associated with CPU socket #4.

The FlexMem Bridge does two things:

  1. It extends the Scalable Memory Interconnects (SMI) from CPU 1 and CPU 2 to the memory subsystems of CPU sockets 3 and 4.
  2. It reroutes and terminates the second QuickPath Interconnect (QPI) inter-processor links, which would otherwise be left disconnected in a two-CPU configuration, to provide optimal performance.

Sometimes it’s easier to view pictures than read descriptions, so take a look at the picture below for a diagram on how this works.

(A special thanks to Mike Roberts from Dell for assistance with the above info.)

Saving 50% on Virtualization Licensing
So how does this technology from Dell help you save money on virtualization licenses? Simple – with Dell’s FlexMem Bridge technology, you only have to add memory, not more servers, when you need more capacity for VMs. When you add only memory, you’re not increasing your CPU count, so your virtualization licensing stays the same. No more buying extra servers just for the memory, and no more buying more virtualization licenses. In the future, if you find you have run out of CPU resources for your VMs, you can remove the FlexMem Bridges and replace them with CPUs (on models with the Intel Xeon 7500 CPU only).
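Here is a simple cost sketch of that trade-off. All prices are illustrative assumptions of mine, and I’m assuming per-socket virtualization licensing (as vSphere was licensed at the time); the point is only that adding memory leaves the socket count, and therefore the license count, unchanged:

```python
# Hypothetical cost comparison of two ways to add VM capacity to a host.
# All prices are illustrative assumptions; licensing is assumed per CPU socket.

license_per_socket = 3000
server_2_socket = 15000
memory_upgrade = 6000          # extra DIMMs populated behind FlexMem Bridges

# Option A: add a second 2-socket host -> new hardware + 2 more socket licenses
option_a = server_2_socket + 2 * license_per_socket

# Option B: add memory only -> socket count unchanged, no new licenses
option_b = memory_upgrade

print(f"add a host:  ${option_a:,}")
print(f"add memory:  ${option_b:,}")
```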

Dell FlexMem Bridge is available in the Dell PowerEdge 11G R810, R910 and M910 servers running the Intel Xeon 7500 and 6500 CPUs.

Perhaps one of Dell’s best-kept secrets on their 11G servers (blade, rack and tower) is something called the Lifecycle Controller. This innovative offering allows a user to configure the hardware, run diagnostics and prep the server for an operating system. “SO WHAT?” you are probably thinking – “HP and IBM have this with their SmartStart and ServerGuide CDs!” Yes, you are right; however, Dell’s innovation is a flash-based device embedded on the motherboard that does all of this – there are NO CDs to mess with. Out of the box, you turn it on and go.

What can Dell’s Lifecycle Controller do? Here’s a partial list taken from Dell TechCenter:

  • Basic device configuration (RAID, NIC and iDRAC) via simple wizards
  • Diagnose the system using the embedded diagnostics utility
  • OS installation by unpacking the drivers for the user-selected OS
  • Drivers are embedded for systems with iDRAC Express / Lifecycle Controller
  • Drivers are available on the Systems Management Tools and Documentation media for systems with a Baseboard Management Controller (BMC)
  • Advanced device configuration for NIC and BIOS (available only on systems with iDRAC Express)
  • Update BIOS and firmware, and stage updated drivers, by connecting directly to the relevant updates on ftp.dell.com (available only on systems with iDRAC Express)
  • Roll back firmware to the last known good state (available only on systems with iDRAC Express)
  • Supports 7 languages (English, French, German, Spanish, Simplified Chinese, Japanese, Korean)
  • Auto-discovery of bare metal systems – iDRAC can be configured in the factory or using USC to connect and authenticate to a provisioning console
  • Install an OS on the discovered system using drivers resident on the Lifecycle Controller
  • Install a custom OS image – allows users to install an OS whose desired drivers are not on the Lifecycle Controller
  • Install an OS by booting from a service image on a network share
  • Remote out-of-band instant firmware inventory of installed and available firmware images
  • Bare metal out-of-band updates – remotely initiate offline BIOS, firmware and driver pack updates, and schedule updates

I’m a big fan of “seeing is believing,” so to see how easy it is to use, check out this demo from Dell TechCenter on setting up an operating system with no CDs (except the OS media).

Kudos to Dell on this innovation. No CDs means potentially faster deployment. If you’re wondering whether the data on the Lifecycle Controller can be updated, the answer is YES – go to Dell TechCenter and check out the video on “Product Updates” (or click here to view it directly).

Let me know what you think about this. Do you see this as being helpful?