
IDC came out with their 2Q 2010 worldwide server market revenue report last month, which shows that HP lost blade server market share to IBM.


A white paper released today by Dell shows that the Dell M1000e blade chassis infrastructure offers significant power savings compared to equivalent HP and IBM blade environments. In fact, the results were audited by an outside firm, Enterprise Management Associates (http://www.enterprisemanagement.com). After the controversy over the Tolly Group report comparing HP and Cisco, I decided to take the time to investigate these findings a bit deeper.

The Dell technical white paper titled “Power Efficiency Comparison of Enterprise-Class Blade Servers and Enclosures” was written by the Dell Server Performance Analysis Team. This team normally runs competitive comparisons for internal use, but Dell decided to publish these findings externally because the results were unexpected. The team used the industry-standard SPECpower_ssj2008 benchmark to compare the power draw and performance per watt of blade solutions from Dell, HP and IBM. SPECpower_ssj2008 is the first industry-standard benchmark created by the Standard Performance Evaluation Corporation (SPEC) that evaluates the power and performance characteristics of volume server class and multi-node class computers. According to the white paper, the purpose of using this benchmark was to establish a level playing field for examining the true power efficiency of the Tier 1 blade server providers using identical configurations.
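For readers unfamiliar with the benchmark, here is a rough sketch of how SPECpower_ssj2008 arrives at its headline “overall ssj_ops/watt” score: it steps through target loads from 100% down to 10% plus active idle, and divides total throughput by total average power. The numbers below are hypothetical, purely for illustration, and are not taken from Dell’s white paper.

```python
# Rough sketch of SPECpower_ssj2008's headline metric.
# The benchmark measures throughput (ssj_ops) and average power at a series
# of target load levels plus active idle. Only a subset of levels is shown,
# and all values here are invented for illustration.

load_levels = {            # target load: (ssj_ops, average watts)
    "100%": (1_000_000, 4200),
    "80%":  (  800_000, 3700),
    "60%":  (  600_000, 3300),
    "40%":  (  400_000, 2900),
    "20%":  (  200_000, 2500),
    "10%":  (  100_000, 2300),
    "idle": (        0, 1900),   # active idle burns power but does no work
}

total_ops = sum(ops for ops, _ in load_levels.values())
total_watts = sum(watts for _, watts in load_levels.values())

# Headline score: total throughput divided by total average power
print(f"overall ssj_ops/watt ≈ {total_ops / total_watts:.1f}")
```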

What Was Tested

Each blade chassis was fully populated with blade servers running a pair of Intel Xeon X5670 CPUs. The Dell configuration used 16 x M610 blade servers, the HP configuration used 16 x BL460c G6 blade servers, and the IBM configuration used 14 x HS22 blade servers, since the IBM BladeCenter H holds a maximum of 14 servers. Each server was configured with 6 x 4GB DIMMs (24GB total) and 2 x 73GB 15k SAS drives, running Microsoft Windows Server 2008 R2 Enterprise. Each chassis used the maximum number of power supplies (Dell: 6, HP: 6, IBM: 4) and was populated with a pair of Ethernet pass-through modules in the first two I/O bays.

Summary of the Findings

I don’t want to re-write the 48-page technical white paper, so I’ll summarize the results.

  • While running the CPUs at 40 – 60% utilization, Dell’s chassis used 13 – 17% less power than the HP C7000 with 16 x BL460c G6 servers
  • While running the CPUs at 40 – 60% utilization, Dell’s chassis used 19 – 20% less power than the IBM BladeCenter H with 14 x HS22s
  • At idle power, Dell’s chassis used 24% less power than the HP C7000 with 16 x BL460c G6 servers
  • At idle power, Dell’s chassis used 63.6% less power than the IBM BladeCenter H with 14 x HS22s

Dell - Blade Solution Chart

Following a review of the findings, I had the opportunity to interview Dell’s Senior Product Manager for Blade Marketing, Robert Bradfield, and ask him some questions about the study.

Question – “Why wasn’t Cisco’s UCS included in this test?”

Answer – The Dell testing team didn’t have the right servers. They do have a Cisco UCS, but they don’t have the UCS blade servers that would be equivalent to the BL460c G6 or the HS22s.

Question – “Why did you use pass-thru modules for the design, and why only two?”

Answer – Dell wanted to create a level playing field. Each vendor has similar network switches, but there are differences, and Dell did not want those differences to impact the testing at all, so they chose to go with pass-thru modules. The same reasoning applies to using only two: with Dell having 6 I/O bays, HP having 8 I/O bays and IBM having 8 I/O bays, it would have been challenging to create an equal environment and measure the power accurately.

Question – “How long did it take to run these tests?”

Answer – It took a few weeks. Dell placed all 3 blade chassis side-by-side but they only ran the tests on one chassis at a time. They wanted to give the test in progress absolute focus. In fact, the two chassis that were not being tested were not running at all (no power) because the testing team wanted to ensure there were no thermal variations.

Question – “Were the systems on a bench, or did you have them racked?”

Answer – All 3 chassis were racked, each in its own rack. They were properly cooled, with perforated doors and vented floor panels. In fact, the temperatures never varied by even 1 degree between the enclosures.

Question – “Why do you think the Dell design offered the lowest power in these tests?”

Answer – There are three contributing factors to the Dell M1000e chassis drawing less power than HP and IBM. The first is the 2700W Platinum-certified power supply. It offers greater energy efficiency than previous power supplies, and it now ships as the standard power supply in the M1000e chassis. However, truth be told, the difference between "Platinum" certified and "Gold" certified is only 2 – 3%, so this adds very little to the power savings seen in the white paper. Second is the technology of the Dell M1000e fans. Dell has patent-pending fan control algorithms that help provide better fan efficiency. From what I understand, this helps ensure that at no point does a fan rev up to "high". (If you are interested in reading about the patent-pending fan control technology, pour yourself a cup of coffee and read all about it at the U.S. Patent Office website – application number 20100087965.) Another interesting fact is that the fans used in the Dell M1000e are balanced by the manufacturer to ensure proper rotation. It is a similar process to the way your car tires are balanced – there are one or two small weights on each fan. (This is something you can verify if you own a Dell M1000e.) Ultimately, though, it comes down to the overall architecture of the Dell M1000e chassis being designed for efficient laminar airflow. In fact (per Robert Bradfield), when you compare the Dell M1000e as tested in this technical white paper against the IBM BladeCenter H, the power saved over a one-year period would be enough to power a single U.S. home for a year.
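That last claim is easy to sanity-check with back-of-the-envelope math. The sketch below uses a hypothetical sustained wattage delta (not a figure from the white paper) and the commonly cited average of roughly 11,000 kWh per year for U.S. household electricity use.

```python
# Back-of-the-envelope check of the "power a U.S. home for a year" claim.
# The wattage delta below is hypothetical, not a figure from the white paper;
# ~11,000 kWh/year is a commonly cited average for U.S. household usage.

watt_delta = 1300                 # hypothetical sustained savings in watts
hours_per_year = 24 * 365         # 8,760 hours

kwh_saved = watt_delta * hours_per_year / 1000     # watt-hours -> kWh
avg_us_home_kwh_per_year = 11_000

print(f"~{kwh_saved:,.0f} kWh saved per year, "
      f"roughly {kwh_saved / avg_us_home_kwh_per_year:.1f}x an average U.S. home")
```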

I encourage you, the reader, to review the technical white paper (Power Efficiency Comparison of Enterprise-Class Blade Servers and Enclosures) for yourself and see what your thoughts are. I looked for tricks like the use of solid-state drives or power-efficient memory DIMMs, but the comparison appears to be legitimate. However, I know there will be critics, so voice your thoughts in the comments below. I promise you Dell is watching to see what you think…

Thanks to fellow blogger M. Sean McGee (http://www.mseanmcgee.com/), I was alerted to the fact that Cisco announced today, Sept. 14, the 13th blade server in the UCS family – the Cisco UCS B230 M1.

This newest addition pulls off a few tricks that no other vendor has managed.

Last week at VMworld 2010 I had the opportunity to get some great pictures of HP’s and Dell’s newest blade servers: the HP ProLiant BL620 G7, the HP ProLiant BL680 G7, and the Dell PowerEdge M610x and M710HD. These newest blade servers are exciting offerings from HP and Dell, so I encourage you to take a few minutes to look.

One of the questions I get the most is, “which blade server option is best for me?” My honest answer is always, “it depends.” The reality is that the best blade infrastructure for YOU is really going to depend on what is important to you. Based on this, I figured it would be a good exercise to do a high-level comparison of the blade chassis offerings from Cisco, Dell, HP and IBM. If you read through my past blog posts, you’ll see that my goal is to be as unbiased as possible when it comes to talking about blade servers. I’m going to attempt to be “vendor neutral” with this post as well, but I welcome your comments, thoughts and criticisms.
(Updated 6/22/2010, 1:00 a.m. Pacific, with updated BL620 image and 2nd Switch pic)

As expected, HP today announced new blade servers for their BladeSystem lineup as well as a new converged switch for their chassis. Everyone expected updates to the BL460 and BL490, the BL2x220c and even the BL680 blade servers, but the BL620 G7 blade server was a surprise (at least to me). Before I highlight the announcements, I have to preface this by saying I don’t have a lot of technical information yet. I attended the press conference at HP Tech Forum 2010 in Las Vegas, but I didn’t get the press kit in advance. I’ll update this post with links to the spec sheets as they become available.

The Details

First up – the BL620 G7

The BL620 G7 is a full-height blade server with 2 CPU sockets designed to handle the Intel Xeon 7500 (and possibly the 6500) series CPUs. It has 32 memory DIMM slots, 2 hot-plug hard drive bays and 3 mezzanine expansion slots.

HP ProLiant BL620 Blade Server

BL680 G7
The BL680 G7 is an upgrade to the previous generation; however, the 7th generation is a double-wide server. This design offers up to 4 servers in a C7000 BladeSystem chassis. This server’s claim to fame is that it will hold 1 terabyte (1TB) of RAM. To put this into perspective, the Library of Congress’s entire library is 6TB of data, so you could put the entire library on 6 of these BL680 G7s!

HP BL680 G7 Blade Server

“FlexFabric” I/O Onboard
Each of the Generation 7 (G7) servers comes with “FlexFabric” I/O on the motherboard of the blade server. These are new NC551i Dual Port FlexFabric 10Gb Converged Network Adapters (CNAs) that support stateless TCP/IP offload, TCP Offload Engine (TOE), Fibre Channel over Ethernet (FCoE) and iSCSI protocols.

Virtual Connect FlexFabric 10Gb/24-Port Module
The final “big” announcement on the blade server front is a converged fabric switch that fits inside the blade chassis. Called the Virtual Connect FlexFabric 10Gb/24-Port Module, it is designed to let you split the Ethernet fabric and the Fibre Channel fabric at the switch module level, inside the blade chassis INSTEAD OF at the top-of-rack switch. You may recall that I previously blogged about this as a rumour, but now it is true.

The image on the left was my rendition of what it would look like.

And here are the actual images.

HP Virtual Connect FlexFabric 10Gb/24-Port Module

HP believes that converged technology is good to the edge of the network, but that it is not yet mature enough to extend into the datacenter core (not ready for multi-hop, end-to-end). When the technology matures and business needs dictate, they’ll offer a converged network offering for the datacenter core.

What do you think about these new blade offerings? Let me know, I always enjoy your comments.

Disclaimer: airfare, accommodations and some meals are being provided by HP; however, the content being blogged is solely my opinion and does not in any way express the opinions of HP.

It’s been a while since I’ve posted what rumours I’m hearing, so I thought I’d dig around and see what I could find out. NOTE: this is purely speculation; I have no definitive information from any vendor, so some of this may turn out to be false. Read at your own risk.

Rumour #1 – GPUs on a Blade Server
I’m hearing more and more discussion around GPUs being used in blade servers. Now, I have to admit, when I hear the term “GPU”, I think of a Graphics Processing Unit – the type of processor that runs a high-end graphics card. So when I hear rumours that there might be blade servers coming out that can handle GPUs, I have to wonder WHY?

Wikipedia defines a GPU as “A graphics processing unit or GPU (also occasionally called visual processing unit or VPU) is a specialized processor that offloads 3D or 2D graphics rendering from the microprocessor. It is used in embedded systems, mobile phones, personal computers, workstations, and game consoles. Modern GPUs are very efficient at manipulating computer graphics, and their highly parallel structure makes them more effective than general-purpose CPUs for a range of complex algorithms.”

NVIDIA, the top maker of GPUs, also points out on their website, “The model for GPU computing is to use a CPU and GPU together in a heterogeneous computing model. The sequential part of the application runs on the CPU and the computationally-intensive part runs on the GPU. From the user’s perspective, the application just runs faster because it is using the high-performance of the GPU to boost performance.”

(For a cool Mythbusters video on GPU vs CPU, check out Cliff’s IBM Blog.)

So if a blade vendor decided to put together the ability to run normal AMD or Intel CPUs in tandem with GPUs from NVIDIA, let’s say by using graphics cards in PCI expansion slots, they would have a blade server ideal for running any application that would benefit from high-performance computing. This seems doable today, since both HP and IBM offer PCI expansion blades; however, the rumour I’m hearing is that there is a blade server coming out that will be specifically designed for running GPUs. Interesting concept. I’m anxious to see how it will be received once it’s announced…
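To make the CPU+GPU model NVIDIA describes a bit more concrete, here is a minimal Python sketch of the pattern – sequential work on the CPU, data-parallel work offloaded to the GPU. It assumes CuPy and a CUDA-capable GPU are available, and the workload is invented purely for illustration.

```python
# Minimal sketch of the heterogeneous CPU+GPU model: sequential work stays
# on the CPU (NumPy), while the data-parallel heavy lifting is offloaded to
# the GPU (CuPy). Assumes CuPy and a CUDA-capable GPU are installed; the
# workload itself is made up for illustration.

import numpy as np
import cupy as cp

# CPU: sequential setup (e.g., loading and preparing frames)
frames = [np.random.rand(1024, 1024).astype(np.float32) for _ in range(8)]

results = []
for frame in frames:
    gpu_frame = cp.asarray(frame)              # copy host -> device
    gpu_out = cp.sqrt(gpu_frame) * 0.5 + 1.0   # data-parallel math on the GPU
    results.append(cp.asnumpy(gpu_out))        # copy device -> host

# CPU: sequential post-processing
print("mean of first result:", float(results[0].mean()))
```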

Rumour #2 – Another Blade Server Dedicated for Memory
My second rumour is less exciting than the first: yet another blade vendor is about to announce a blade server designed for maximum memory density. If you’ll recall, IBM has the HS22V blade and HP has the BL490c G6 blade server – both of which are designed with 18 memory DIMM slots and internal drives. So that leaves either Cisco or Dell to be next in this rumour. Since Cisco has the B250 blade server that can hold 48 DIMMs, I’m willing to believe they wouldn’t need to invest in designing a half-wide blade that holds 18 DIMMs, therefore the only remaining option is Dell. What would Dell gain from introducing a blade server with high memory density? For one, it would give them an option to compete with IBM and HP in the “2 CPU, 18 memory DIMM” space. Another reason is that it would help expand Dell’s blade portfolio. If you examine Dell’s current blade server offerings, you’ll see they can’t meet a requirement for large-memory environments without moving to a full-height blade.

That’s all I have. Let me know if you hear of any other rumours.

I heard a rumour on Friday that HP has been chosen by another animated movie studio to provide the blade servers to render an upcoming movie. To recount the movies that have used, or are using, HP blades:

So, as I look at the number of movies that have chosen HP for their blade server technology, I have to wonder WHY? HP does have some advantages in the blade marketplace, like market share, but when you compare HP with Dell, you might be surprised at how similar the offerings are:

When you compare the two offerings, HP wins in a few categories, like the ability to have up to 32 CPUs in a single blade chassis – a valuable feature for rendering, accomplished with the HP BL2x220c blade servers. However, Dell shines in some areas, too. Look at their ability to run 512GB of memory on a 2-CPU server using FlexMem Bridge technology. From a pure technology comparison (taking management and I/O out of the equation), I see Dell offering very similar products to HP’s, and I have to wonder why Dell has not been able to get any movie companies to use Dell blades. Perhaps it’s not a focus of Dell marketing. Perhaps it is because HP has a history of movie processing on HP workstations. Perhaps movie companies need 32 CPUs in a chassis. I don’t know. I welcome any comments from Dell or HP, but I’d also like to know: what do you think? Let me know in the comments below.

Chalk yet another win up for HP.

It was reported last week on www.itnews.com.au that digital production house Dr. D. Studios is in the early stages of building a supercomputer grid cluster for the rendering of the animated feature film Happy Feet 2 and for visual effects in Fury Road, the long-anticipated fourth film in the Mad Max series. The supercomputer grid, based on HP BL490c G6 blade servers housed within an APC HACS pod, is already running in excess of 1,000 cores and is expected to reach over 6,000 cores during peak rendering by mid-2011.

An earlier rendering cluster boasted 4096 cores, taking it into the top 100 on the list of Top 500 supercomputers in the world in 2007 (it now sits at 447).

According to Doctor D infrastructure engineering manager James Bourne, “High density compute clusters provide an interesting engineering exercise for all parties involved. Over the last few years the drive to virtualise is causing data centres to move down a medium density path.”

Check out the full article, including video at:
http://www.itnews.com.au/News/169048,video-building-a-supercomputer-for-happy-feet-2-mad-max-4.aspx

Intel officially announced today the Xeon 5600 processor, code-named “Westmere.” Cisco, HP and IBM also announced blade servers featuring the new processor. The Intel Xeon 5600 offers:

  • 32nm process technology with 50% more threads and cache
  • Improved energy efficiency with support for 1.35V low power memory

There will be 4-core and 6-core offerings. These processors also provide the option of Hyper-Threading, so you could have up to 8 or 12 threads per processor, or 16 and 24 in a dual-CPU system; a quick sanity check of that math is sketched below. This will be a huge advantage for applications that benefit from multiple threads, like virtualization.
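The core-to-thread arithmetic is simple, but for completeness here is a tiny illustrative Python helper (not vendor code):

```python
# Tiny illustrative helper for the thread math above.
def logical_threads(sockets, cores_per_cpu, hyperthreading=True):
    threads_per_core = 2 if hyperthreading else 1
    return sockets * cores_per_cpu * threads_per_core

print(logical_threads(1, 4))   # 8:  single 4-core Xeon 5600 with Hyper-Threading
print(logical_threads(1, 6))   # 12: single 6-core Xeon 5600 with Hyper-Threading
print(logical_threads(2, 4))   # 16: dual-CPU, 4-core system
print(logical_threads(2, 6))   # 24: dual-CPU, 6-core system
```

Here’s a look at what each vendor has come out with: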

Cisco
The B200 M2 provides Cisco users with the current Xeon 5600 processors. It looks like Cisco will be offering a choice of the following Xeon 5600 processors: Intel Xeon X5670, X5650, E5640, E5620, L5640, or E5506. Because Cisco’s model is a “built-to-order” design, I can’t really provide any part numbers, but knowing what speeds they offer should help.

HP
HP is starting off with the Intel Xeon 5600 by bumping their existing G6 models to include the Xeon 5600 processor. The look, feel and options of the blade servers will remain the same – the only difference will be the new processor. According to HP, “the HP ProLiant G6 platform, based on Intel Xeon 5600 processors, includes the HP ProLiant BL280c, BL2x220c, BL460c and BL490c server blades and HP ProLiant WS460c G6 workstation blade for organizations requiring high density and performance in a compact form factor. The latest HP ProLiant G6 platforms will be available worldwide on March 29.” It appears that HP is waiting until March 29 to provide details on their Westmere blade offerings, so don’t go looking for part numbers or pricing on their website.

IBM
IBM is continuing to stay ahead of the game with details about their product offerings. They’ve refreshed their HS22 and HS22v blade servers:

HS22
7870ECU – Express HS22, 2x Xeon 4C X5560 95W 2.80GHz/1333MHz/8MB L2, 4x2GB, O/Bay 2.5in SAS, SR MR10ie

7870G4U – HS22, Xeon 4C E5640 80W 2.66GHz/1066MHz/12MB, 3x2GB, O/Bay 2.5in SAS

7870GCU – HS22, Xeon 4C E5640 80W 2.66GHz/1066MHz/12MB, 3x2GB, O/Bay 2.5in SAS, Broadcom 10Gb Gen2 2-port

7870H2U -HS22, Xeon 6C X5650 95W 2.66GHz/1333MHz/12MB, 3x2GB, O/Bay 2.5in SAS

7870H4U – HS22, Xeon 6C X5670 95W 2.93GHz/1333MHz/12MB, 3x2GB, O/Bay 2.5in SAS

7870H5U – HS22, Xeon 4C X5667 95W 3.06GHz/1333MHz/12MB, 3x2GB, O/Bay 2.5in SAS

7870HAU – HS22, Xeon 6C X5650 95W 2.66GHz/1333MHz/12MB, 3x2GB, O/Bay 2.5in SAS, Emulex Virtual Fabric Adapter

7870N2U – HS22, Xeon 6C L5640 60W 2.26GHz/1333MHz/12MB, 3x2GB, O/Bay 2.5in SAS

7870EGU – Express HS22, 2x Xeon 4C E5630 80W 2.53GHz/1066MHz/12MB, 6x2GB, O/Bay 2.5in SAS

HS22V
7871G2U HS22V, Xeon 4C E5620 80W 2.40GHz/1066MHz/12MB, 3x2GB, O/Bay 1.8in SAS

7871G4U HS22V, Xeon 4C E5640 80W 2.66GHz/1066MHz/12MB, 3x2GB, O/Bay 1.8in SAS

7871GDU HS22V, Xeon 4C E5640 80W 2.66GHz/1066MHz/12MB, 3x2GB, O/Bay 1.8in SAS

7871H4U HS22V, Xeon 6C X5670 95W 2.93GHz/1333MHz/12MB, 3x2GB, O/Bay 1.8in SAS

7871H5U HS22V, Xeon 4C X5667 95W 3.06GHz/1333MHz/12MB, 3x2GB, O/Bay 1.8in SAS

7871HAU HS22V, Xeon 6C X5650 95W 2.66GHz/1333MHz/12MB, 3x2GB, O/Bay 1.8in SAS

7871N2U HS22V, Xeon 6C L5640 60W 2.26GHz/1333MHz/12MB, 3x2GB, O/Bay 1.8in SAS

7871EGU Express HS22V, 2x Xeon 4C E5640 80W 2.66GHz/1066MHz/12MB, 6x2GB, O/Bay 1.8in SAS

7871EHU Express HS22V, 2x Xeon 6C X5660 95W 2.80GHz/1333MHz/12MB, 6x4GB, O/Bay 1.8in SAS

I could not find any information on what Dell will be offering from a blade server perspective, so if you have information (that is not confidential), feel free to send it my way.