You are currently browsing the tag archive for the ‘blade server’ tag.

What is Cisco’s blade server market share? That seems to be the mystery question that no one can really answer. The previous IDC quarterly worldwide server report mentioned nothing about Cisco, yet readers and bloggers alike claim Cisco is #3, so what IS the true answer? Read the rest of this entry »


Brocade Converged 10GbE Switch for IBM BladeCenter

IBM recently announced the addition of a Brocade Fibre Channel over Ethernet (FCoE) switch module for the IBM BladeCenter, designed to deliver up to 24% savings over a traditional 10Gb Ethernet and 8Gb Fibre Channel setup.

Read the rest of this entry »

IDC came out with their 2Q 2010 worldwide server market revenue report last month which shows that HP lost blade server market share to IBM. Read the rest of this entry »

One of the questions I get the most is, “which blade server option is best for me?” My honest answer is always, “it depends.” The reality is that the best blade infrastructure for YOU is really going to depend on what is important to you. Based on this, I figured it would be a good exercise to do a high level comparison of the blade chassis offerings from Cisco, Dell, HP and IBM. If you read through my past blog posts, you’ll see that my goal is to be as unbiased as possible when it comes to talking about blade servers. I’m going to attempt to be “vendor neutral” with this post as well, but I welcome your comments, thoughts and criticisms. Read the rest of this entry »

The VMware VMmark web site was recently updated to show Dell’s PowerEdge M910 blade server in the #1 slot (for blades) in the two socket space. I think the PowerEdge M910 is very intriguing, so I thought I’d spend some time highlighting the features. Read the rest of this entry »

(Updated 7/27/2010 – 11 am EST – added info on power and tower options)

When you think about blade servers, you probably think, “they are too expensive.” When you think about doing a VMware project, you probably think, “my servers are too old” or “I can’t afford new servers.” For $8 per GB, you can have blade servers preloaded with VMware ESXi 4.1 AND 4TB of storage! Want to know how? Keep reading. Read the rest of this entry »

(Updated 6/22/2010, 1:00 a.m. Pacific, with updated BL620 image and 2nd Switch pic)

As expected, HP today announced new blade servers for its BladeSystem lineup as well as a new converged switch for its chassis. Everyone expected updates to the BL460 and BL490, the BL2x220c and even the BL680 blade servers, but the BL620 G7 blade server was a surprise (at least to me). Before I highlight the announcements, I have to preface this by saying I don’t have a lot of technical information yet. I attended the press conference at HP Tech Forum 2010 in Las Vegas, but I didn’t get the press kit in advance. I’ll update this post with links to the Spec Sheets as they become available.

The Details

First up- the BL620 G7

The BL620 G7 is a full-height blade server with two CPU sockets designed to handle the Intel Xeon 7500 (and possibly the 6500) series processors. It has 32 memory DIMM slots, 2 hot-plug hard drive bays and 3 mezzanine expansion slots.

HP ProLiant BL620 Blade Server

BL680 G7
The BL680 G7 is an upgrade to the previous generation; however, the seventh generation is a double-wide server. This design offers up to 4 servers in a c7000 BladeSystem chassis. This server’s claim to fame is that it will hold 1 terabyte (1TB) of RAM. To put this into perspective, the Library of Congress’s entire library is 6TB of data, so you could put the entire library on 6 of these BL680 G7s!

HP BL680 G7 Blade Server

“FlexFabric” I/O Onboard
Each of the Generation 7 (G7) servers comes with “FlexFabric” I/O on the motherboard of the blade server. These are new NC551i Dual Port FlexFabric 10Gb Converged Network Adapters (CNAs) that support stateless TCP/IP offload, TCP Offload Engine (TOE), Fibre Channel over Ethernet (FCoE) and iSCSI protocols.
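To make the “one converged wire, many personalities” idea more concrete, here is a quick Python sketch of how a single 10Gb FlexFabric port could be partitioned into multiple functions, with one of them taking on a storage personality such as FCoE or iSCSI. The class names, the four-functions-per-port limit and the bandwidth split are my own assumptions for illustration, not HP’s implementation.

```python
# Illustrative sketch only -- models how a 10Gb converged port might be
# carved into multiple functions (assumption: four per port, one of which
# can act as a storage "FlexHBA" for FCoE or iSCSI).

class PortFunction:
    def __init__(self, name, personality, bandwidth_gb):
        self.name = name                  # e.g. "LOM1:a" (hypothetical label)
        self.personality = personality    # "ethernet", "fcoe" or "iscsi"
        self.bandwidth_gb = bandwidth_gb  # share of the 10Gb link

class ConvergedPort:
    MAX_FUNCTIONS = 4
    LINK_SPEED_GB = 10

    def __init__(self):
        self.functions = []

    def add_function(self, name, personality, bandwidth_gb):
        if len(self.functions) >= self.MAX_FUNCTIONS:
            raise ValueError("port is fully partitioned")
        used = sum(f.bandwidth_gb for f in self.functions)
        if used + bandwidth_gb > self.LINK_SPEED_GB:
            raise ValueError("not enough bandwidth left on the 10Gb link")
        self.functions.append(PortFunction(name, personality, bandwidth_gb))

# Example: one CNA port split between LAN traffic and an FCoE "HBA".
port = ConvergedPort()
port.add_function("LOM1:a", "ethernet", 6)   # general network traffic
port.add_function("LOM1:b", "fcoe", 4)       # storage traffic to the SAN
for f in port.functions:
    print(f"{f.name}: {f.personality} at {f.bandwidth_gb}Gb")
```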

Virtual Connect FlexFabric 10Gb/24-Port Module
The final “big” announcement on the blade server front is a converged fabric switch that fits inside the blade chassis. Named the Virtual Connect FlexFabric 10Gb/24-Port Module, it is designed to allow you to split the Ethernet fabric and the Fibre Channel fabric at the switch module level, inside the blade chassis INSTEAD OF at the top-of-rack switch. You may recall that I previously blogged about this as a rumor; now it is official.

The image on the left was my rendition of what it would look like.

And here are the actual images.

HP Virtual Connect FlexFabric 10Gb/24-Port Module

HP believes that converged technology is good at the edge of the network, but that it is not yet mature enough to go into the datacenter core (not ready for multi-hop, end-to-end deployments). When the technology is acceptable and mature and business needs dictate, they’ll offer a converged network offering for the datacenter core.

What do you think about these new blade offerings? Let me know, I always enjoy your comments.

Disclaimer: airfare, accommodations and some meals are being provided by HP; however, the content being blogged is solely my opinion and does not in any way express the opinions of HP.

Last week, Blade.org invited me to their 3rd Annual Technology Symposium – an online event with speakers from APC, Blade Network Technologies, Emulex, IBM, NetApp, Qlogic and Virtensys. Blade.org is a collaborative organization and developer community focused on accelerating the development and adoption of open blade server platforms. This year’s Symposium focused on “the dynamic data center of the future”. While there were many interesting topics (check out the replay here), the one that appealed to me most was “Shared I/O” by Alex Nicolson, VP and CTO of Emulex. Let me explain why. Read the rest of this entry »

Dell announced today two new additions to their blade server family – the PowerEdge 11G M710HD and the M610x. The two new servers are just a part of Dell’s “Blade 3.0 Launch” – a campaign highlighting Dell’s ongoing effort to become the leader in blade server technology. Over the next several months, Dell will be making changes to their chassis infrastructure, introducing more efficient power supplies and fans that will require up to 10% less power than the existing chassis. Don’t worry though, there will not be a new chassis. They’ll simply be upgrading the fans and power supplies that ship standard, at no charge to the customer.

Dell has also announced a significant upgrade to their Chassis Management Controller (CMC) software. This is great news, as Dell’s chassis management software interface had not had an update since the early part of the decade. The CMC 3.0 release offers a better user interface and improved ease of use. One of the key features of CMC 3.0 is the ability to upgrade the iDRAC, BIOS, RAID, NIC and Diagnostic firmware on all the blades at one time, offering huge time savings. Expect the CMC 3.0 software to be available in early July 2010. For demos of the new interface, jump over to Dell TechCenter.
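To picture what that one-to-many firmware update looks like, here is a rough Python sketch of the workflow. The update_component() helper, the component list and the slot count are placeholders of my own for illustration; this is not Dell’s actual CMC 3.0 interface.

```python
# Conceptual sketch of a "one-to-many" update pass like the one CMC 3.0
# promises. update_component() is a hypothetical stand-in for whatever
# mechanism actually pushes the image to a blade.

BLADE_SLOTS = range(1, 17)          # an M1000e chassis holds 16 half-height blades
COMPONENTS = ["iDRAC", "BIOS", "RAID", "NIC", "Diagnostics"]

def update_component(slot, component, firmware_image):
    """Placeholder: flash one component on one blade."""
    print(f"slot {slot:2d}: flashing {component} with {firmware_image}")

def update_chassis(firmware_images):
    """Push every supplied firmware image to every blade slot in one pass."""
    for slot in BLADE_SLOTS:
        for component in COMPONENTS:
            image = firmware_images.get(component)
            if image:
                update_component(slot, component, image)

# One command-equivalent touches the whole chassis instead of 16 blades one by one.
update_chassis({"BIOS": "bios_v2.1.bin", "iDRAC": "idrac_v3.0.bin"})
```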

PowerEdge 11G M710HD
Ideal for virtualization or applications requiring large amounts of memory, the M710HD is a half-height blade server that offers up:

* Up to 2 Intel 5500 or 5600 Xeon Processors
* 18 memory DIMMs
* 2 hot-swap drives (SAS and Solid State Drive Option)
* 2 mezzanine card slots
* dual SD slots for redundant hypervisor
* 2 or 4 x 1Gb NICs

On paper, the Dell M710HD looks like a direct competitor to the HP ProLiant BL490 G6, and it is; however, Dell has added something that could change the blade server market – a flexible embedded network controller. The “Network Daughter Card,” or NDC, is the blade server’s LAN on Motherboard (LOM), but on a removable daughter card, very similar to the mezzanine cards. This is really cool stuff because this design allows a user to change their blade server’s onboard I/O as their network grows. For example, today many IT environments are standardized on 1Gb networks for server connectivity; however, 10Gb connectivity is becoming more and more prevalent. When users move from 1Gb to 10Gb in their blade environments, the NDC design will give them the ability to upgrade the onboard network controller from 1Gb to 10Gb, thereby protecting their investment. Any time a manufacturer offers investment protection I get excited. An important note – the M710HD will come with an NDC that provides up to 4 x 1Gb NICs when the Dell PowerConnect M6348 Ethernet Switch is used.
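Here is a toy Python model of why a removable NDC matters: the onboard LAN becomes just another swappable component, so moving from 1Gb to 10Gb doesn’t mean replacing the blade. The classes and port counts below are my own illustration, not Dell’s design.

```python
# A toy model (mine, not Dell's) of the swappable-LOM idea behind the NDC.

class NetworkDaughterCard:
    def __init__(self, ports, speed_gb):
        self.ports = ports
        self.speed_gb = speed_gb

    def __repr__(self):
        return f"{self.ports} x {self.speed_gb}Gb NDC"

class BladeServer:
    def __init__(self, model, ndc):
        self.model = model
        self.ndc = ndc          # the LOM lives on a swappable card, not the planar

    def upgrade_ndc(self, new_ndc):
        """Swap the onboard network controller without touching the rest of the blade."""
        old = self.ndc
        self.ndc = new_ndc
        return old

blade = BladeServer("M710HD", NetworkDaughterCard(ports=4, speed_gb=1))
print(blade.model, blade.ndc)                                   # 4 x 1Gb today
blade.upgrade_ndc(NetworkDaughterCard(ports=2, speed_gb=10))
print(blade.model, blade.ndc)                                   # 2 x 10Gb later
```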

PowerEdge 11G M610x
As the industry continues to hype GPGPU (General-Purpose computing on Graphics Processing Units), it’s no surprise to see that Dell has announced the availability of a blade server with dedicated PCIe x16 Gen2 slots. Here are some quick details about this blade server:

* Full-height blade server
* Up to 2 Intel 5500 or 5600 Xeon Processors
* 12 memory DIMMs
* 2 hot-swap drives
* 2 mezzanine card slots
* 2 x PCIe x16 (Gen2) slots

I know the skeptical reader will think, “so what – HP and IBM have PCIe expansion blades,” which is true; however, the M610x blade server differentiates itself by offering 2 x PCIe x16 Generation 2 slots that can hold cards drawing up to 250W, allowing this blade server to handle many of the graphics cards designed for GPGPU or even the latest I/O adapters from Fusion-io. Although this blade server can handle these niche PCIe cards, don’t overlook the opportunity to take advantage of the PCIe slots for situations like fax modems, dedicated SCSI controller needs, or even dedicated USB requirements.
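As a back-of-the-envelope illustration, here is a small Python check of whether a candidate card fits the M610x slot envelope described above (x16 PCIe Gen2, up to 250W per slot). The example card wattages and lane counts are rough figures I am assuming for the sketch, not vendor specifications.

```python
# Rough compatibility check against the slot limits described in the post:
# x16 PCIe Gen2, up to 250W per slot. Card figures below are illustrative
# assumptions, not published specs.

SLOT_MAX_WATTS = 250
SLOT_LANES = 16

candidate_cards = [
    {"name": "GPGPU accelerator card", "watts": 225, "lanes": 16},
    {"name": "PCIe flash I/O adapter", "watts": 25, "lanes": 4},
    {"name": "Fax modem / SCSI controller", "watts": 15, "lanes": 1},
]

for card in candidate_cards:
    fits = card["watts"] <= SLOT_MAX_WATTS and card["lanes"] <= SLOT_LANES
    verdict = "fits" if fits else "exceeds the slot envelope"
    print(f'{card["name"]}: {card["watts"]}W, x{card["lanes"]} -> {verdict}')
```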

I’m curious to know what your thoughts are about these new servers. Leave me a comment and let me know.

For your viewing pleasure, here are some more views of the M610x.

Dell PowerEdge 11G M610x

I’ve had a few questions lately about “the best” blade server to use for virtualization – specifically VMware virtualization. While the obvious answer is “it depends”, I thought it would be an interesting approach to identify the blade servers that ranked in the top 5 in VMware’s VMmark benchmark. Before I begin, let me explain what the VMmark testing is about. VMmark enables equipment manufacturers, software vendors, system integrators and other organizations to:

  • Measure virtual machine performance accurately and reliably
  • Determine the performance of different hardware and virtualization platforms
  • Make appropriate hardware decisions for your virtual infrastructure

VMware developed VMmark as a standard methodology for comparing virtualized systems. According to VMware’s VMmark website, the benchmark system in VMmark is composed of a series of “sub-tests” derived from commonly used load-generation tools, as well as from benchmarks developed by the Standard Performance Evaluation Corporation (SPEC®). In parallel to VMmark, VMware is a member of the SPEC Virtualization subcommittee and is working with other SPEC members to create the next-generation virtualization benchmark.

In testing terms, a “tile” is simply a collection of virtual machines (VMs) executing a set of diverse workloads designed to represent a natural work environment. The total number of tiles that a server can handle provides a detailed measurement of that server’s consolidation capacity. The more tiles, the better. The faster the performance, the better.
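To show why a result like “35.83@26 tiles” carries two pieces of information, here is a deliberately simplified Python sketch of a tile-based score: each tile summarizes several workloads, and the overall number grows with both per-tile throughput and the count of tiles a server can sustain. This is my own simplification for illustration, not the actual VMmark scoring formula.

```python
# Simplified tile-scoring sketch -- NOT the real VMmark formula, just the
# shape of the idea: summarize each tile's workloads, then add up the tiles.

from math import prod

def tile_score(normalized_throughputs):
    """Summarize one tile's workloads with a geometric mean."""
    n = len(normalized_throughputs)
    return prod(normalized_throughputs) ** (1.0 / n)

def overall_score(tiles):
    """Overall figure grows with tile count and with per-tile throughput."""
    return sum(tile_score(t) for t in tiles)

# Hypothetical run: 3 tiles, each reporting workload throughputs normalized
# against a reference system (mail, web, database, java, file server).
tiles = [
    [1.30, 1.25, 1.40, 1.35, 1.28],
    [1.22, 1.31, 1.37, 1.29, 1.26],
    [1.18, 1.27, 1.33, 1.30, 1.24],
]
print(f"{overall_score(tiles):.2f}@{len(tiles)} tiles")
```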

THE RESULTS (as of 6/2/2010)

24 Cores (4 Sockets)
HP ProLiant BL685c G6 running VMware ESX v4.0 – 29.19@20 tiles (published 7/14/2009)
HP ProLiant BL680c G5 running VMware ESX v3.5.0 Update 3 – 18.64@14 tiles (published 3/30/2009)

16 Cores (4 Sockets)
Dell PowerEdge M905 running VMware ESX v4.0 – 22.90@17 tiles (published 6/19/2009)
HP ProLiant BL685c G6 running VMware ESX v4.0 – 20.87@14 tiles (published 4/24/2009)

12 Cores (2 Sockets)
Cisco UCS B250 M2 running VMware ESX v4.0 Update 1 – 35.83@26 tiles (published 4/6/2010)
Fujitsu BX922 S2 running VMware ESX v4.0 Update 1 – 32.89@24 tiles (published 4/6/2010)

8 Cores (2 Sockets)
Fujitsu BX922 S2 running VMware ESX v4.0 Update 1 – 27.99@18 tiles (published 5/10/2010)
HP ProLiant BL490c G6 running VMware ESX v4.0 – 25.27@17 tiles (published 4/20/2010)

THE WINNER IS…
Cisco UCS B250 M2
running VMware ESX v4.0 Update 1 – 35.83 with 26 tiles

Cisco’s Winning Configuration
So – how did Cisco reach the top server spot? Here’s the configuration:

Server config:

  • 2 x Intel Xeon X5680 Processors
  • 192GB of RAM (48 x 4GB)
  • 1 x Converged Network Adapter (Cisco UCS VIC M81KR)

Storage config:

  • EMC CX4-240
  • Cisco MDS 9134
  • 1173.48GB Used Disk Space
  • 1024MB Array Cache
  • 50 disks used on 5 enclosures/shelves (1 with 14 disks, 4 with 9 disks)
  • 55 LUNs used
    * 21 at 38GB (file server + mail server) over 20 x 73GB SSDs
    * 5 at 38GB (file server + mail server) over 20 x 73GB SSDs
    * 21 at 15GB (database) + 2 LUNs at 400GB (Standby, Webserver, Javaserver) over 16 x 450GB 15k disks
    * 5 at 15GB (database) over 16 x 450GB 15k disks
    * 1 LUN at 20GB (boot) over 5 x 300GB 15k disks
  • RAID 0 for VMs, RAID 5 for VMware ESX 4.0 O/S

As you can see from the information above, the Cisco UCS B250 M2 is the clear winner among all of the blade server offerings. Note that none of the Xeon 7500 blade servers has been tested yet, but when they are, I’ll be sure to let you know.