
A white paper released today by Dell shows that the Dell M1000e blade chassis infrastructure offers significant power savings compared to equivalent HP and IBM blade environments. The results were audited by an outside firm, Enterprise Management Associates (http://www.enterprisemanagement.com). After the controversy over the Tolly Group report comparing HP and Cisco, I decided to take the time to investigate these findings a bit more deeply.

The Dell technical white paper titled “Power Efficiency Comparison of Enterprise-Class Blade Servers and Enclosures” was written by the Dell Server Performance Analysis Team. This team normally runs competitive comparisons for internal use; however, Dell decided to publish these findings externally because the results were unexpected. The team used the industry-standard SPECpower_ssj2008 benchmark to compare the power draw and performance per watt of blade solutions from Dell, HP and IBM. SPECpower_ssj2008 is the first industry-standard benchmark created by the Standard Performance Evaluation Corporation (SPEC) to evaluate the power and performance characteristics of volume server class and multi-node class computers. According to the white paper, the purpose of using this benchmark was to establish a level playing field for examining the true power efficiency of the Tier 1 blade server providers using identical configurations.

What Was Tested

Each blade chassis was fully populated with blade servers running a pair of Intel Xeon X5670 CPUs. The Dell configuration used 16 x M610 blade servers, the HP configuration used 16 x BL460c G6 blade servers, and the IBM configuration used 14 x HS22 blade servers, since the IBM BladeCenter H holds a maximum of 14 servers. Each server was configured with 6 x 4GB DIMMs (24GB total) and 2 x 73GB 15K SAS drives, running Microsoft Windows Server 2008 R2 Enterprise. Each chassis used the maximum number of power supplies (Dell: 6, HP: 6, IBM: 4) and was populated with a pair of Ethernet pass-thru modules in the first two I/O bays.

Summary of the Findings

I don’t want to re-write the 48-page technical white paper, so I’ll summarize the results (a quick sketch of how these percentages are derived follows the list).

  • While running the CPUs at 40 – 60% utilization, Dell’s chassis used 13 – 17% less power than the HP C7000 with 16 x BL460c G6 servers
  • While running the CPUs at 40 – 60% utilization, Dell’s chassis used 19 – 20% less power than the IBM BladeCenter H with 14 x HS22s
  • At idle power, Dell’s chassis used 24% less power than the HP C7000 with 16 x BL460c G6 servers
  • At idle power, Dell’s chassis used 63.6% less power than the IBM BladeCenter H with 14 x HS22s
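
If you’re not familiar with how figures like these are calculated, here is a minimal sketch of the arithmetic in Python. The wattage and ssj_ops numbers below are made up purely for illustration and are not taken from the white paper; the real measurements are in the report itself.

```python
# Illustration only: the wattage and ssj_ops figures below are invented,
# not taken from the Dell white paper.

def percent_less_power(chassis_watts: float, competitor_watts: float) -> float:
    """How much less power the chassis draws, as a percentage of the competitor's draw."""
    return (competitor_watts - chassis_watts) / competitor_watts * 100

def perf_per_watt(ssj_ops: float, avg_watts: float) -> float:
    """SPECpower-style metric: throughput (ssj_ops) divided by average power."""
    return ssj_ops / avg_watts

# Hypothetical enclosure-level readings at ~50% CPU utilization
dell_watts, hp_watts, ibm_watts = 4200.0, 5000.0, 5200.0
dell_ops = 2_500_000  # hypothetical ssj_ops for the full enclosure

print(f"Dell vs HP:  {percent_less_power(dell_watts, hp_watts):.1f}% less power")   # 16.0%
print(f"Dell vs IBM: {percent_less_power(dell_watts, ibm_watts):.1f}% less power")  # 19.2%
print(f"Dell perf/watt: {perf_per_watt(dell_ops, dell_watts):,.0f} ssj_ops per watt")
```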

Dell - Blade Solution Chart

Following a review of the findings, I had the opportunity to interview Dell’s Senior Product Manager for Blade Marketing, Robert Bradfield, and ask some questions about the study.

Question – “Why wasn’t Cisco’s UCS included in this test?”

Answer – The Dell testing team didn’t have the right servers. They do have a Cisco UCS, but they don’t have the UCS blade servers that would be equivalent to the BL460c G6 or the HS22.

Question – “Why did you use pass-thru modules for the design, and why only two?”

Answer – Dell wanted to create a level playing field. Each vendor has similar network switches, but there are differences. Dell did not want those differences to impact the testing at all, so they chose to go with pass-thru modules. The same reasoning applies to why they didn’t use more than two. With Dell having 6 I/O bays, HP having 8 I/O bays and IBM having 8 I/O bays, it would have been challenging to create an equal environment to measure the power accurately.

Question – “How long did it take to run these tests?”

Answer – It took a few weeks. Dell placed all 3 blade chassis side-by-side but they only ran the tests on one chassis at a time. They wanted to give the test in progress absolute focus. In fact, the two chassis that were not being tested were not running at all (no power) because the testing team wanted to ensure there were no thermal variations.

Question – “Were the systems on a bench, or did you have them racked?”

Answer – All 3 chassis were racked, each in its own rack. They were properly cooled with perforated doors and vented floor panels. In fact, the temperature never varied by even 1 degree between the enclosures.

Question – “Why do you think the Dell design offered the lowest power in these tests?”

Answer – There are three contributing factors to the Dell M1000e chassis drawing less power than HP and IBM. The first is the 2700W Platinum-certified power supply. It offers greater energy efficiency than previous power supplies, and it now ships as the standard power supply in the M1000e chassis. However, truth be told, the difference between “Platinum” certified and “Gold” certified is only 2 – 3%, so this adds very little to the power savings seen in the white paper. The second is the technology of the Dell M1000e fans. Dell has patent-pending fan control algorithms that help provide better fan efficiency. From what I understand, this helps ensure that at no point does a fan rev up to “high”. (If you are interested in reading about the patent-pending fan control technology, pour yourself a cup of coffee and read all about it at the U.S. Patent Office website – application number 20100087965.) Another interesting fact is that the fans used in the Dell M1000e are balanced by the manufacturer to ensure proper rotation. It is a similar process to the way your car tires are balanced – there are one or two small weights on each fan. (This is something you can validate if you own a Dell M1000e.) The third factor is the overall architecture of the M1000e chassis, which is designed for efficient laminar airflow. In fact (per Robert Bradfield), when you compare the Dell M1000e as tested in this technical white paper to the IBM BladeCenter H, the power saved over a one-year period would be enough to power a single U.S. home for a year.
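
To put the Platinum-versus-Gold point in perspective, here is a rough sketch of the math. The efficiency values are approximations of the 80 PLUS mid-load targets and the DC load figure is invented, so treat this as an illustration rather than numbers from the white paper.

```python
# Why "Platinum vs Gold" only buys a couple of percent:
# wall (AC input) power = DC load / PSU efficiency.
# Efficiency values are approximate mid-load 80 PLUS targets, assumed for illustration;
# the DC load figure is invented.

dc_load_watts = 4000.0          # hypothetical DC load for a fully loaded chassis
gold_efficiency = 0.92          # ~80 PLUS Gold at 50% load (approximate)
platinum_efficiency = 0.94      # ~80 PLUS Platinum at 50% load (approximate)

gold_input = dc_load_watts / gold_efficiency          # ~4348 W at the wall
platinum_input = dc_load_watts / platinum_efficiency  # ~4255 W at the wall

savings_pct = (gold_input - platinum_input) / gold_input * 100
print(f"Gold:     {gold_input:.0f} W input")
print(f"Platinum: {platinum_input:.0f} W input")
print(f"Platinum saves roughly {savings_pct:.1f}% at the wall")  # about 2%
```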

I encourage you, the reader, to review this technical white paper (Power Efficiency Comparison of Enterprise-Class Blade Servers and Enclosures) for yourself and see what your thoughts are. I’ve looked for tricks like the use of solid state drives or power-efficient memory DIMMs, but this seems to be legit. However, I know there will be critics, so voice your thoughts in the comments below. I promise you Dell is watching to see what you think…

Last week at VMworld 2010 I had the opportunity to get some great pictures of HP’s and Dell’s newest blade servers: the HP ProLiant BL620 G7, the HP ProLiant BL680 G7, and the Dell PowerEdge M610x and M710HD. These newest blade servers are exciting offerings from HP and Dell, so I encourage you to take a few minutes to look.

The VMware VMmark web site was recently updated to show Dell’s PowerEdge M910 blade server in the #1 slot (for blades) in the two-socket space. I think the PowerEdge M910 is very intriguing, so I thought I’d spend some time highlighting its features.

They make it look so complicated in the movies: detailed covert operations to hack into a casino’s mainframe, preceded by weeks of staged rehearsals. I’m here to tell you it’s much easier than that.

This is my story of how I had 20 seconds of complete access to The Venetian Casino’s data center, and lived to tell about it.


Dell announced today two new additions to their blade server family – the PowerEdge 11G M710HD and the M610x. The two new servers are just part of Dell’s “Blade 3.0” launch – a campaign highlighting Dell’s ongoing effort to become the leader in blade server technology. Over the next several months, Dell will be making changes to their chassis infrastructure, introducing more efficient power supplies and fans that will require up to 10% less power than the existing chassis. Don’t worry though, there will not be a new chassis. They’ll simply be upgrading the fans and power supplies that ship standard, at no charge to the customer.

Dell also announced a significant upgrade to their Chassis Management Controller (CMC) software. This is great news, as Dell’s chassis management interface had not had an update since the early part of the decade. The CMC 3.0 release offers a better user interface and improved ease of use. One of the key features of CMC 3.0 is the ability to upgrade the iDRAC, BIOS, RAID, NIC and diagnostic firmware on all the blades at one time, offering huge time savings. Expect the CMC 3.0 software to be available in early July 2010. For demos of the new interface, jump over to Dell TechCenter.

PowerEdge 11G M710HD
Ideal for virtualization or applications requiring large amounts of memory, the M710HD is a half-height blade server that offers up:

* Up to 2 Intel 5500 or 5600 Xeon Processors
* 18 memory DIMMs
* 2 hot-swap drives (SAS and Solid State Drive Option)
* 2 mezzanine card slots
* dual SD slots for redundant hypervisor
* 2 or 4 x 1Gb NICs

On paper the Dell M710HD looks like a direct competitor to the HP ProLiant BL490c G6, and it is; however, Dell has added something that could change the blade server market – a flexible embedded network controller. The “Network Daughter Card” (NDC) is the blade server’s LAN on Motherboard (LOM), but on a removable daughter card, very similar to the mezzanine cards. This is really cool stuff because the design allows a user to change their blade server’s on-board I/O as their network grows. For example, today many IT environments are standardized on 1Gb networks for server connectivity; however, 10Gb connectivity is becoming more and more prevalent. When users move from 1Gb to 10Gb in their blade environments, the NDC design gives them the ability to upgrade the onboard network controller from 1Gb to 10Gb, thereby protecting their investment. Any time a manufacturer offers investment protection I get excited. An important note – the M710HD will come with an NDC that provides up to 4 x 1Gb NICs when the Dell PowerConnect M6348 Ethernet switch is used.

PowerEdge 11G M610x
As the industry continues to hype GPGPU (General-Purpose computing on Graphics Processing Units), it’s no surprise to see that Dell has announced the availability of a blade server with dedicated PCIe x16 Gen2 slots. Here are some quick details about this blade server:

* Full-height blade server
* Up to 2 Intel 5500 or 5600 Xeon Processors
* 12 memory DIMMs
* 2 hot-swap drives
* 2 mezzanine card slots
* 2 x PCIe x16 (Gen2) slots

I know the skeptical reader will think, “so what – HP and IBM have PCIe expansion blades,” which is true; however, the M610x differentiates itself by offering 2 x PCIe x16 Gen2 slots that can hold cards of up to 250W, allowing this blade server to handle many of the graphics cards designed for GPGPU or even the latest I/O adapters from Fusion I/O. Although this blade server can handle these niche PCIe cards, don’t overlook the opportunity to take advantage of the PCIe slots for situations like fax modems, dedicated SCSI controller needs, or even dedicated USB requirements.

I’m curious to know what your thoughts are about these new servers. Leave me a comment and let me know.

For your viewing pleasure, here are some more views of the M610x.

NOTE: IDC revised their report on May 28, 2010. This post now includes those changes.

IDC reported on May 28, 2010 that worldwide server factory revenue increased 4.7% year over year to $10.4 billion in the first quarter of 2010 (1Q10). They also reported that the blade server market accelerated and continued its sharp growth in the quarter, with factory revenue increasing 37.2% year over year and shipment growth increasing 20.8% compared to 1Q09. According to IDC, nearly 90% of all blade revenue is driven by x86 systems, a segment in which blades now represent 18.8% of all x86 server revenue.

While the press release did not provide details of the market share for all of the top 5 blade vendors, they did provide data for the following:

#1 market share: HP increased their market share from 52.4% in Q4 2009 to 56.2% in Q1 2010

#2 market share: IBM decreased their market share from 35.1% in Q4 2009 to 23.6% in Q1 2010.

The remaining 20.2% of market share was not broken out, but I imagine it is split between Dell and Cisco. In fact, given that Cisco was not even mentioned in the IDC report, I’m willing to bet a majority of that share belongs to Dell. I’m working on getting clarification on that (if you’re with Dell or Cisco and can help, please shoot me an email).

According to Jed Scaramella, senior research analyst in IDC’s Datacenter and Enterprise Server group, “In the first quarter of 2009, we observed a lot of business in the mid-market as well as refresh activity of a more transactional nature; these factors have driven x86 rack-based revenue to just below 1Q08 value. Blade servers, which are more strategic in customer deployments, continue to accelerate in annual growth rates. The blade segment fared relatively well during the 2009 downturn and have increased revenue value by 13% from the first quarter of 2008.”

For the full IDC report covering the Q1 2010 Worldwide Server Market, please visit the updated link: http://www.idc.com/getdoc.jsp?containerId=prUS22360110 (the original link was http://www.idc.com/getdoc.jsp?containerId=prUS22356410).

Updated 5/24/2010 – I’ve received some comments about expandability and I’ve received a correction about the speed of Dell’s memory, so I’ve updated this post. You’ll find the corrections / additions below in GREEN.

Since I’ve received a lot of comments on my post about the Dell FlexMem Bridge technology, I thought I would do an unbiased comparison of Dell’s FlexMem Bridge technology (via the PowerEdge 11G M910 blade server) versus IBM’s MAX5 + HX5 blade server offering. In summary, both offerings provide the Intel Xeon 7500 CPU plus the ability to add “extended memory,” offering value for virtualization, databases, and any other workloads that benefit from large amounts of memory.

The Contenders

IBM
IBM’s extended memory offering is a two-part solution consisting of the HX5 blade server PLUS the MAX5 memory blade.

  • HX5 Blade Server
    I’ve spent considerable time in previous blogs detailing the IBM HX5, so please jump over to those posts to dig into the specifics. At a high level, the HX5 is IBM’s 2-CPU blade server offering the Intel Xeon 7500 CPU. The HX5 is a 30mm, “single wide” blade server, so you can fit up to 14 in an IBM BladeCenter H blade chassis.
  • MAX5
    The MAX5 offering from IBM can be thought of as a “memory expansion blade.” Offering an additional 24 memory DIMM slots, the MAX5, when coupled with the HX5 blade server, provides a total of 40 memory DIMMs. The MAX5 is a standard “single wide,” 30mm form factor, so when used with a single HX5, two IBM BladeCenter H server bays are required in the chassis.

DELL
Dell’s approach to extended memory is a bit different. Instead of relying on a memory blade, Dell starts with the M910 blade server and allows users to use 2 CPUs plus their FlexMem Bridge to access the memory DIMMs of the 3rd and 4th CPU sockets. For details on the FlexMem Bridge, check out my previous post.

  • PowerEdge 11G M910 Blade Server
    The M910 is a 4-CPU-capable blade server with 32 memory DIMMs. This blade server is a full-height server, so you can fit 8 of them inside the Dell M1000e blade chassis.

The Face-Off

ROUND 1 – Memory Capacity
When we compare the memory DIMM slots available on each, we see that Dell’s offering comes up with 32 DIMMs versus IBM’s 40 DIMMs. However, IBM’s solution of the HX5 blade server plus the MAX5 memory expansion currently supports a maximum DIMM size of 8GB, whereas Dell supports DIMMs up to 16GB. While this may change in the future, as of today Dell has the edge, so I have to claim:

Round 1 Winner: Dell

ROUND 2 – Memory Performance
As many comments on my post about the Dell FlexMem Bridge technology pointed out the other day, memory performance is something that needs to be considered when comparing these technologies. Dell’s FlexMem Bridge offering runs at a maximum memory speed of 1066MHz, but the actual speed is dependent upon the speed of the processor. A processor with a 6.4GT/s QPI supports memory at 1066MHz; a processor with a 5.8GT/s QPI supports memory at 978MHz; and a processor with a QPI speed of 4.8GT/s runs memory at 800MHz. This is a component of Intel’s Xeon 7500 architecture, so it should be the same regardless of the server vendor. Looking at IBM, we see the HX5 blade server memory runs at a maximum of 978MHz. However, when you attach the MAX5 to the HX5 for the additional memory slots, that memory runs at 1066MHz regardless of the speed of the CPU installed. While this appears to be black magic, it’s really the result of IBM’s proprietary eXa scaling – something that I’ll cover in detail at a later date. Although the HX5 blade server memory, when used by itself, does not have the ability to reach 1066MHz, this comparison is based on the Dell PowerEdge 11G M910 versus the IBM HX5+MAX5. With that in mind, the ability to run the expanded memory at 1066MHz gives IBM the edge in this round.
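
To make the dependency on processor speed easier to see, here is the mapping from the paragraph above expressed as a tiny lookup sketch. It simply restates the speeds quoted above (including the MAX5 exception) and is not vendor documentation.

```python
# Mirrors the paragraph above: on the Xeon 7500 architecture, memory speed is a
# function of the CPU's QPI speed, regardless of the server vendor.
QPI_TO_MEMORY_MHZ = {6.4: 1066, 5.8: 978, 4.8: 800}  # GT/s -> MHz

def memory_speed_mhz(qpi_gt_per_s: float, behind_max5: bool = False) -> int:
    # Per the text, DIMMs attached through IBM's MAX5 run at 1066 MHz
    # no matter which CPU is installed (IBM's eXa scaling).
    if behind_max5:
        return 1066
    return QPI_TO_MEMORY_MHZ[qpi_gt_per_s]

print(memory_speed_mhz(4.8))                    # 800
print(memory_speed_mhz(4.8, behind_max5=True))  # 1066
```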

Round 2 Winner: IBM

ROUND 3 – Server Density
This one is pretty straightforward. IBM’s HX5 + MAX5 offering takes up 2 server bays, so in the IBM BladeCenter H you can only fit 7 systems. You can only fit 4 BladeCenter H chassis in a 42u rack, so you can fit a maximum of 28 IBM HX5 + MAX5 systems into a rack.

The Dell PowerEdge 11G M910 blade server is a full-height server, so you can fit 8 servers into the Dell M1000e chassis. 4 Dell chassis will fit in a 42u rack, so you can get 32 Dell M910s into a rack.

Round 3 Winner: Dell

(NEW) ROUND 4 – Expandability
It was mentioned several times in the comments that expandability should have been reviewed as well. When we look at Dell’s design, we see there are two expansion options: run the Dell PowerEdge 11G M910 blade with 2 processors and the FlexMem Bridge, or run it with 4 processors and remove the FlexMem Bridge.

The modular design of the IBM eX5 architecture allows a user to add memory (MAX5), add processors (a 2nd HX5), or both (2 x HX5 + 2 x MAX5). This provides users with a lot of flexibility to choose a design that meets their workload.

Choosing a winner for this round is tough, as there are different ways to look at this (a quick back-of-the-envelope check of these numbers follows the list):

  • Maximum CPUs in a server: TIE – both IBM and Dell can scale to 4 CPUs.
  • Maximum CPU density in a 42u rack: Dell wins with 32 x 4-CPU servers vs IBM’s 12.
  • Maximum memory in a server: IBM wins with 640GB using 2 x HX5 and 2 x MAX5.
  • Maximum memory density in a 42u rack: Dell wins with 16TB.
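
As a quick back-of-the-envelope check of the density figures, here is the arithmetic using the chassis capacities and the 16GB/8GB DIMM sizes quoted earlier in this post (a sketch, not benchmark data):

```python
# Back-of-the-envelope check of the rack-density claims in this post.
CHASSIS_PER_42U_RACK = 4   # both the Dell M1000e and IBM BladeCenter H fit 4 per 42u rack

# Dell: full-height M910 -> 8 per M1000e chassis; 32 DIMMs x 16GB each
dell_servers_per_rack = 8 * CHASSIS_PER_42U_RACK                       # 32 servers
dell_gb_per_server = 32 * 16                                           # 512 GB
dell_tb_per_rack = dell_servers_per_rack * dell_gb_per_server / 1024   # 16.0 TB

# IBM: HX5 + MAX5 takes 2 of 14 bays -> 7 per BladeCenter H; 40 DIMMs x 8GB each
ibm_systems_per_rack = (14 // 2) * CHASSIS_PER_42U_RACK                # 28 systems
ibm_gb_per_system = 40 * 8                                             # 320 GB
ibm_tb_per_rack = ibm_systems_per_rack * ibm_gb_per_system / 1024      # 8.75 TB

print(f"Dell: {dell_servers_per_rack} servers, {dell_tb_per_rack:.2f} TB per rack")
print(f"IBM:  {ibm_systems_per_rack} systems, {ibm_tb_per_rack:.2f} TB per rack")
```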

Round 4 Winner: TIE

Summary
While the fight was close, with two round wins to one (and one tie), the overall winner is Dell. For this comparison I tried to keep the focus on the memory aspect of the offerings.

On a final note, at the time of this writing, the IBM MAX5 memory expansion has not been released for general availability, while Dell is already shipping the M910 blade server.

There may be other advantages relative to processors that were not considered for this comparison; however, I welcome any thoughts or comments you have.

Let’s face it. Virtualization is everywhere.

Odds are there is something virtualized in your data center. If not, there soon will be. As more workloads become virtualized, chances are you are going to run out of “capacity” on your virtualization host. When a host’s capacity is exhausted, 99% of the time it is because the host ran out of memory, not CPU. Typically you would have to add another ESX host server when you run out of capacity. When you do this, you are adding more hardware cost AND more virtualization licensing cost. But what if you could simply add memory when you need it instead of buying more hardware? Now you can, with Dell’s FlexMem Bridge.

Background
You may recall that I mentioned the FlexMem Bridge technology in a previous post, but I don’t think I did it justice. Before I describe what the FlexMem Bridge technology is, let me provide some background. With the Intel Xeon 7500 CPU (and in fact with all Intel Nehalem architectures), memory is controlled by a memory controller located on the CPU. Therefore you have to have a CPU in place to access the associated memory DIMMs… up until now. Dell’s innovative approach removes the need to have a CPU installed in a socket in order to access that socket’s memory.

Introducing Dell FlexMem Bridge
Dell’s FlexMem Bridge sits in CPU sockets #3 and #4, connecting CPU 1 to the memory DIMMs associated with socket #3 and CPU 2 to the memory associated with socket #4.

The FlexMem Bridge does two things:

  1. It extends the Scalable Memory Interconnects (SMI) from CPU 1 and CPU 2 to the memory subsystem of CPU 3 and CPU 4.
  2. It reroutes and terminates the 2nd Quick Path Interconnect (QPI) inter-processor communication links, which would otherwise be left disconnected in a 2-CPU configuration, to provide optimal performance.

Sometimes it’s easier to view pictures than read descriptions, so take a look at the picture below for a diagram on how this works.

(A special thanks to Mike Roberts from Dell for assistance with the above info.)

Saving 50% on Virtualization Licensing
So how does this technology from Dell help you save money on virtualization licenses? Simple – with Dell’s FlexMem Bridge technology, you only have to add memory, not more servers, when you need more capacity for VMs. When you add only memory, you’re not increasing your CPU socket count, so your virtualization licensing stays the same. No more buying extra servers just for the memory, and no more buying more virtualization licenses. In the future, if you find you have run out of CPU resources for your VMs, you can remove the FlexMem Bridges and replace them with CPUs (for models with the Intel Xeon 7500 CPU only).
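
Here is a rough sketch of the economics. Every price below is invented purely to show the shape of the comparison, and per-socket licensing is assumed; plug in your own numbers.

```python
# Illustration only: all prices are invented; licensing is assumed to be per CPU socket.
LICENSE_PER_SOCKET = 3000   # hypothetical hypervisor license per socket
NEW_2S_HOST = 12000         # hypothetical 2-socket blade server
MEMORY_UPGRADE = 5000       # hypothetical DIMMs added behind the FlexMem Bridge

# Option A: add another 2-socket host to get more memory capacity
option_a = NEW_2S_HOST + 2 * LICENSE_PER_SOCKET   # new hardware + 2 more socket licenses

# Option B: add memory to the existing host via the FlexMem Bridge sockets
option_b = MEMORY_UPGRADE                         # socket count (and licensing) unchanged

print(f"Add a host:        ${option_a:,}")        # $18,000
print(f"Add memory only:   ${option_b:,}")        # $5,000
print(f"Licensing avoided: ${2 * LICENSE_PER_SOCKET:,}")
```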

Dell FlexMem Bridge is available in the Dell PowerEdge 11G R810, R910 and M910 servers running the Intel Xeon 7500 and 6500 CPUs.

Perhaps one of Dell’s best-kept secrets on their 11G servers (blade, rack and tower) is something called the Lifecycle Controller. This innovative offering allows a user to configure the hardware, run diagnostics and prep the server for an operating system. “SO WHAT?” you are probably thinking – “HP and IBM have this with their SmartStart and ServerGuide CDs!” Yes, you are right; however, Dell’s innovation is a flash-based device embedded on the motherboard that does all of this – there are NO CDs to mess with. Out of the box, you turn it on and go.

What can Dell’s Lifecycle Controller do? Here’s a partial list taken from Dell TechCenter:

  • Basic device configuration (RAID, NIC and iDRAC) via simple wizards
  • Diagnose the system using the embedded diagnostics utility
  • OS install by unpacking the drivers for the user selected OS
  • Drivers are embedded for systems with iDRAC Express, Lifecycle Controller
  • Drivers are available on Systems Management Tools and Documentation media for systems with Baseboard Management Controller (BMC)
  • Advanced device configuration for NIC and BIOS. This is available only in systems with iDRAC Express
  • Update BIOS, firmware and stage updated drivers by directly connecting to relevant updates on ftp.dell.com. This is available only in systems with iDRAC Express
  • Roll back firmware to a last known good state. This is available only in systems with iDRAC Express
  • Supports 7 languages (English, French, German, Spanish, Simplified Chinese, Japanese, Korean)
  • Auto-discovery of bare metal systems. iDRAC can be configured in factory or using USC to connect and authenticate to a provisioning console
  • Install OS on the discovered system using drivers resident on the Lifecycle Controller
  • Install custom OS image – allows users to install OS that does not have the desired drivers on the Lifecycle Controller
  • Install OS by booting from service image on a network share
  • Remote out-of-band instant Firmware Inventory of installed and available firmware images
  • Bare metal out-of-band updates – Remotely initiate offline BIOS, firmware and driver pack update and schedule updates

I’m a big fan of “seeing is believing,” so to see how easy it is to use, check out this demo on setting up an operating system with no CDs (except the O/S), taken from Dell TechCenter.

Kudos to Dell on this innovation. No CDs means potentially faster deployment. If you’re wondering whether the data on the Lifecycle Controller can be updated, the answer is YES – go to Dell TechCenter and check out the video on “Product Updates” (or click here to view it directly).

Let me know what you think about this. Do you see this as being helpful?

I heard a rumour on Friday that HP has been chosen by another animated movie studio to provide the blade servers to render an upcoming movie. To recount the movies that have used / are using HP blades:

So, as I look at the vast number of movies that have chosen HP for their blade server technology, I have to wonder WHY. HP does have some advantages in the blade marketplace, like its market share lead, but when you compare HP with Dell, you would be surprised at how similar the offerings are:

When you compare the two offerings, HP wins in a few categories, like the ability to have up to 32 CPUs in a single blade chassis – a valuable feature for rendering, accomplished with the HP BL2x220c blade servers. However, Dell shines in some areas, too. Look at their ability to run 512GB of memory on a 2-CPU server using FlexMem Bridge technology. From a pure technology comparison (taking management and I/O out of the equation), I see Dell offering products very similar to HP’s, and I have to wonder why Dell has not been able to get any movie companies to use Dell blades. Perhaps it’s not a focus of Dell marketing. Perhaps it is because HP has a history of movie processing on HP workstations. Perhaps movie companies need 32 CPUs in a chassis. I don’t know. I welcome any comments from Dell or HP, but I’d also like to know what you think. Let me know in the comments below.