As VMworld 2010 is right around the corner, I wanted to take a few minutes to make a plea to all attendees.

This year, IF you receive a bag or backpack that you just don’t want, please don’t throw it away. Instead, take it home, go to the dollar store, fill the backpack with pencils, crayons, paper and erasers, and donate it to your local school system. You would be AMAZED to find out how many children don’t get backpacks and whose families cannot afford the costly school supplies that are required each year. You’ll be making some family happy and you’ll get the name “VMware” marketed throughout the schools, getting the next generation of techno geeks ready to learn all about virtualization.

Thanks for the consideration!

(Updated 7/27/2010 – 11 am EST – added info on power and tower options)

When you think about blade servers, you probably think, “they are too expensive.” When you think about doing a VMware project, you probably think, “my servers are too old” or “I can’t afford new servers.” For $8 per GB, you can have blade servers preloaded with VMware ESXi 4.1 AND 4TB of storage! Want to know how? Keep reading.

The Venetian Hotel and Casino Data Center

They make it look so complicated in the movies: detailed covert operations to hack into a casino’s mainframe, preceded by weeks of staged, planned rehearsals. I’m here to tell you it’s much easier than that.

This is my story of how I had 20 seconds of complete access to The Venetian Casino’s data center, and lived to tell about it.


Along with Tuesday’s Intel blade server announcements, HP also announced two new AMD-based blades, the BL465c G7 and BL685c G7. Although I originally viewed these as a refresh of HP’s existing AMD blade servers, while at the HP Tech Forum in Las Vegas I found a few interesting facts. Let’s take a look.

(Updated 6/22/2010, 1:00 a.m. Pacific, with updated BL620 image and 2nd Switch pic)

As expected, HP today announced new blade servers for its BladeSystem lineup, as well as a new converged switch for its chassis. Everyone expected updates to the BL460 and BL490, the BL2x220c and even the BL680 blade servers, but the BL620 G7 blade server was a surprise (at least to me). Before I highlight the announcements, I have to preface this by saying I don’t have a lot of technical information yet. I attended the press conference at HP Tech Forum 2010 in Las Vegas, but I didn’t get the press kit in advance. I’ll update this post with links to the spec sheets as they become available.

The Details

First up- the BL620 G7

The BL620 G7 is a full-height blade server with 2 CPU sockets designed to handle the Intel Xeon 7500 (and possibly the Xeon 6500) CPUs. It has 32 memory DIMMs, 2 hot-plug hard drive bays and 3 mezzanine expansion slots.

HP ProLiant BL620 Blade Server

BL680 G7
The BL680 G7 is an upgrade to the previous generation; the 7th generation, however, is a double-wide server. This design offers up to 4 servers in a C7000 BladeSystem chassis. This server’s claim to fame is that it will hold 1 terabyte (1TB) of RAM. To put this into perspective, the Library of Congress’s entire library is 6TB of data, so you could put the entire library on 6 of these BL680 G7s!

HP BL680 G7 Blade Server

“FlexFabric” I/O Onboard
Each of the Generation 7 (G7) servers comes with “FlexFabric” I/O on the motherboard of the blade server. These are NEW NC551i Dual Port FlexFabric 10Gb Converged Network Adapters (CNAs) that support stateless TCP/IP offload, TCP Offload Engine (TOE), Fibre Channel over Ethernet (FCoE) and iSCSI protocols.

Virtual Connect FlexFabric 10Gb/24-Port Module
The final “big” announcement on the blade server front is a converged fabric switch that fits inside the blade chassis. Titled the Virtual Connect FlexFabric 10Gb/24-Port Module, it is designed to allow you to split the Ethernet fabric and the Fibre Channel fabric at the switch module level, inside the blade chassis, INSTEAD OF at the top-of-rack switch. You may recall that I previously blogged about this as a rumor, but now it is true.

The image on the left was my rendition of what it would look like.

And here are the actual images.

HP Virtual Connect FlexFabric 10Gb/24-Port Module

HP believes that converged technology is good at the edge of the network, but that it is not yet mature enough for the datacenter core (it’s not ready for multi-hop, end-to-end deployments). When the technology is acceptable and mature, and business needs dictate, they’ll offer a converged network offering to the datacenter core.

What do you think about these new blade offerings? Let me know, I always enjoy your comments.

Disclaimer: airfare, accommodations and some meals are being provided by HP; however, the content being blogged is solely my opinion and does not in any way express the opinions of HP.

Last week, Blade.org invited me to their 3rd Annual Technology Symposium – an online event with speakers from APC, Blade Network Technologies, Emulex, IBM, NetApp, QLogic and Virtensys. Blade.org is a collaborative organization and developer community focused on accelerating the development and adoption of open blade server platforms. This year’s Symposium focused on “the dynamic data center of the future”. While there were many interesting topics (check out the replay here), the one that appealed to me most was “Shared I/O” by Alex Nicolson, VP and CTO of Emulex. Let me explain why.

Dell announced today two new additions to their blade server family – the PowerEdge 11G M710HD and the M610x. The two new servers are just a part of Dell’s “Blade 3.0 Launch” – a campaign highlighting Dell’s ongoing effort to become the leader in blade server technology. Over the next several months, Dell will be making changes in their chassis infrastructure, introducing more efficient power supplies and fans that will require up to 10% less power than existing chassis. Don’t worry though, there will not be a new chassis. They’ll simply be upgrading the fans and power supplies that ship standard, at no charge to the customer.

Dell has also announced a significant upgrade to their Chassis Management Controller (CMC) software. This is great news, as Dell’s chassis management software interface had not had an update since the early part of the decade. The CMC 3.0 release offers a better user interface and improved ease of use. One of the key features CMC 3.0 offers is the ability to upgrade the iDRAC, BIOS, RAID, NIC and Diagnostic firmware on all the blades at one time, offering huge time savings. Expect the CMC 3.0 software to be available in early July 2010. For demos of the new interface, jump over to Dell TechCenter.

PowerEdge 11G M710HD
Ideal for virtualization or applications requiring large amounts of memory, the M710HD is a half-height blade server that offers:

* Up to 2 Intel 5500 or 5600 Xeon Processors
* 18 memory DIMMs
* 2 hot-swap drives (SAS and solid-state drive options)
* 2 mezzanine card slots
* Dual SD slots for redundant hypervisor
* 2 or 4 x 1Gb NICs

On paper, the Dell M710HD looks like a direct competitor to the HP ProLiant BL490 G6, and it is; however, Dell has added something that could change the blade server market – a flexible embedded network controller. The “Network Daughter Card,” or NDC, is the blade server’s LAN on Motherboard (LOM), but on a removable daughter card, very similar to the mezzanine cards. This is really cool stuff because this design allows a user to change their blade server’s on-board I/O as their network grows. For example, today many IT environments are standardized on 1Gb networks for server connectivity; however, 10Gb connectivity is becoming more and more prevalent. When users move from 1Gb to 10Gb in their blade environments, the NDC design will give them the ability to upgrade the onboard network controller from 1Gb to 10Gb, thereby protecting their investment. Any time a manufacturer offers investment protection, I get excited. An important note – the M710HD will come with an NDC that provides up to 4 x 1Gb NICs when the Dell PowerConnect M6348 Ethernet switch is used.

PowerEdge 11G M610x
As the industry continues to hype GPGPU (General-Purpose computing on Graphics Processing Units), it’s no surprise to see that Dell has announced the availability of a blade server with dedicated PCIe x16 Gen2 slots. Here are some quick details about this blade server:

* Full-height blade server
* Up to 2 Intel 5500 or 5600 Xeon Processors
* 12 memory DIMMs
* 2 hot-swap drives
* 2 mezzanine card slots
* 2 x PCIe x16 (Gen2) slots

I know the skeptical reader will think, “so what – HP and IBM have PCIe expansion blades,” which is true; however, the M610x blade server differentiates itself by offering 2 x PCIe x16 Generation 2 slots that can hold up to 250W cards, allowing this blade server to handle many of the graphics cards designed for GPGPU, or even the latest I/O adapters from Fusion-io. Although this blade server can handle these niche PCIe cards, don’t overlook the opportunity to take advantage of the PCIe slots for situations like fax modems, dedicated SCSI controller needs, or even dedicated USB requirements.

I’m curious to know what your thoughts are about these new servers. Leave me a comment and let me know.

For your viewing pleasure, here are some more views of the M610x.

I’ve had a few questions lately about “the best” blade server to use for virtualization – specifically VMware virtualization. While the obvious answer is “it depends”, I thought it would be an interesting approach to identify the blade servers that ranked at the top of VMware’s VMmark benchmark results. Before I begin, let me explain what VMmark testing is about. VMmark enables equipment manufacturers, software vendors, system integrators and other organizations to:

  • Measure virtual machine performance accurately and reliably
  • Determine the performance of different hardware and virtualization platforms
  • Make appropriate hardware decisions for your virtual infrastructure

VMware developed VMmark as a standard methodology for comparing virtualized systems. According to VMware’s VMmark website, the benchmark system in VMmark consists of a series of “sub-tests” derived from commonly used load-generation tools, as well as from benchmarks developed by the Standard Performance Evaluation Corporation (SPEC®). In parallel with VMmark, VMware is a member of the SPEC Virtualization subcommittee and is working with other SPEC members to create the next-generation virtualization benchmark.

In VMmark terms, a “tile” is simply a collection of virtual machines (VMs) executing a set of diverse workloads designed to represent a natural work environment. The total number of tiles a server can handle provides a measurement of that server’s consolidation capacity. The more tiles, the better; the faster the performance, the better.
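To make the scoring notation concrete, here’s a minimal Python sketch of how I read the results below. This is purely my own illustration (not a VMware tool): it parses the “score@tiles” format and ranks systems by score, with tile count as the consolidation-capacity tiebreaker. The sample values are taken from the published results listed below.

```python
# Illustration only: parse and rank VMmark results in "score@tiles" form.

def parse_vmmark(result: str) -> tuple[float, int]:
    """Split a result like '35.83@26' into (score, tiles)."""
    score, tiles = result.split("@")
    return float(score), int(tiles)

results = {
    "Cisco UCS B250 M2": "35.83@26",
    "Fujitsu BX922 S2": "32.89@24",
    "HP ProLiant BL685c G6": "29.19@20",
}

# Higher score first; more tiles breaks ties (greater consolidation capacity).
for name, raw in sorted(results.items(), key=lambda kv: parse_vmmark(kv[1]), reverse=True):
    score, tiles = parse_vmmark(raw)
    print(f"{name}: {score} with {tiles} tiles")
```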

THE RESULTS (as of 6/2/2010)

24 Cores (4 Sockets)
HP ProLiant BL685c G6 running VMware ESX v4.0 – 29.19@20 tiles (published 7/14/2009)
HP ProLiant BL680c G5 running VMware ESX v3.5.0 Update 3 – 18.64@14 tiles (published 3/30/2009)

16 Cores (4 Sockets)
Dell PowerEdge M905 running VMware ESX v4.0 – 22.90@17 tiles (published 6/19/2009)
HP ProLiant BL685c G6 running VMware ESX v4.0 – 20.87@14 tiles (published 4/24/2009)

12 Cores (2 Sockets)
Cisco UCS B250 M2 running VMware ESX v4.0 Update 1 – 35.83@26 tiles (published 4/6/2010)
Fujitsu BX922 S2 running VMware ESX v4.0 Update 1 – 32.89@24 tiles (published 4/6/2010)

8 Cores (2 Sockets)
Fujitsu BX922 S2 running VMware ESX v4.0 Update 1 – 27.99@18 tiles (published 5/10/2010)
HP ProLiant BL490c G6 running VMware ESX v4.0 – 25.27@17 tiles (published 4/20/2010)

THE WINNER IS…
Cisco UCS B250 M2
running VMware ESX v4.0 Update 1 – 35.83 with 26 tiles

Cisco’s Winning Configuration
So – how did Cisco reach the top server spot? Here’s the configuration:

server config:

  • 2 x Intel Xeon X5680 Processors
  • 192GB of RAM (48 x 4GB)
  • 1 x Converged Network Adapter (Cisco UCS VIC M81KR)

storage config:

  • EMC CX4-240
  • Cisco MDS 9134
  • 1173.48GB Used Disk Space
  • 1024MB Array Cache
  • 50 disks used on 5 enclosures/shelves (1 with 14 disks, 4 with 9 disks)
  • 55 LUNs used
    * 21 at 38GB (file server + mail server) over 20 x 73GB SSDs
    * 5 at 38GB (file server + mail server) over 20 x 73GB SSDs
    * 21 at 15GB (database) + 2 LUNs at 400GB (Standby, Webserver, Javaserver) over 16 x 450GB 15k disks
    * 5 at 15GB (database) over 16 x 450GB 15k disks
    * 1 LUN at 20GB (boot) over 5 x 300GB 15k disks
  • RAID 0 for VMs, RAID 5 for VMware ESX 4.0 O/S

As you can see from the information above, the Cisco UCS B250 M2 is the clear winner among all of the blade server offerings. Note that none of the Xeon 7500 blade servers have been tested yet; when they are, I’ll be sure to let you know.

NOTE: IDC revised their report on May 28, 2010. This post now includes those changes.

IDC reported on May 28, 2010 that worldwide server factory revenue for the first quarter of 2010 (1Q10) increased 4.7% year over year (revised from the originally reported 4.6%) to $10.4 billion. They also reported that the blade server market accelerated and continued its sharp growth in the quarter, with factory revenue increasing 37.2% year over year (revised from 37.1%) and shipments growing 20.8% compared to 1Q09. According to IDC, nearly 90% of all blade revenue is driven by x86 systems, a segment in which blades now represent 18.8% of all x86 server revenue.

While the press release did not provide details of the market share for all of the top 5 blade vendors, they did provide data for the following:

#1 market share: HP increased their market share from 52.4% in Q4 2009 to 56.2% in Q1 2010

#2 market share: IBM decreased their market share from 35.1% in Q4 2009 to 23.6% in Q1 2010.

The remaining 20.2% of market share was not mentioned, but I imagine it is split between Dell and Cisco. In fact, given that Cisco was not even mentioned in the IDC report, I’m willing to bet a majority of that share belongs to Dell. I’m working on getting some clarification on that (if you’re with Dell or Cisco and can help, please shoot me an email).

According to Jed Scaramella, senior research analyst in IDC’s Datacenter and Enterprise Server group, “In the first quarter of 2009, we observed a lot of business in the mid-market as well as refresh activity of a more transactional nature; these factors have driven x86 rack-based revenue to just below 1Q08 value. Blade servers, which are more strategic in customer deployments, continue to accelerate in annual growth rates. The blade segment fared relatively well during the 2009 downturn and has increased revenue value by 13% from the first quarter of 2008.”

For the full IDC report covering the Q1 2010 Worldwide Server Market, please visit the revised release at http://www.idc.com/getdoc.jsp?containerId=prUS22360110 (the original release is at http://www.idc.com/getdoc.jsp?containerId=prUS22356410).

Updated 5/24/2010 – I’ve received some comments about expandability and I’ve received a correction about the speed of Dell’s memory, so I’ve updated this post. You’ll find the corrections / additions below in GREEN.

Since I’ve received a lot of comments from my post on the Dell FlexMem Bridge technology, I thought I would do an unbiased comparison of Dell’s FlexMem Bridge technology (via the PowerEdge 11G M910 blade server) vs IBM’s MAX5 + HX5 blade server offering. In summary, both offerings provide the Intel Xeon 7500 CPU plus the ability to add “extended memory”, offering value for virtualization, databases and any other workloads that benefit from large amounts of memory.

The Contenders

IBM
IBM’s extended memory offering is a two-part solution consisting of the HX5 blade server PLUS the MAX5 memory blade.

  • HX5 Blade Server
    I’ve spent considerable time on previous blogs detailing the IBM HX5, so please jump over to those links to dig into the specifics. At a high level, the HX5 is IBM’s 2-CPU blade server offering the Intel Xeon 7500 CPU. The HX5 is a 30mm, “single-wide” blade server, so you can fit up to 14 in an IBM BladeCenter H blade chassis.
  • MAX5
    The MAX5 offering from IBM can be thought of as a “memory expansion blade.” Offering an additional 24 memory DIMM slots, the MAX5, when coupled with the HX5 blade server, provides a total of 40 memory DIMMs. The MAX5 is a standard “single-wide”, 30mm form factor, so when used with a single HX5, two IBM BladeCenter H server bays are required in the chassis.

DELL
Dell’s approach to extended memory is a bit different. Instead of relying on a memory blade, Dell starts with the M910 blade server and allows users to use 2 CPUs plus their FlexMem Bridge to access the memory DIMMs of the 3rd and 4th CPU sockets. For details on the FlexMem Bridge, check out my previous post.

  • PowerEdge 11G M910 Blade Server
    The M910 is a 4-CPU-capable blade server with 32 memory DIMMs. This blade server is a full-height server, so you can fit 8 servers inside the Dell M1000e blade chassis.

The Face-Off

ROUND 1 – Memory Capacity
When we compare the memory DIMMs available on each, we see that Dell’s offering comes up with 32 DIMMs vs IBM’s 40 DIMMs. However, IBM’s solution of the HX5 blade server + the MAX5 memory expansion currently supports a maximum DIMM size of 8GB, whereas Dell offers 16GB DIMMs – so Dell tops out at 512GB (32 x 16GB) versus IBM’s 320GB (40 x 8GB). While this may change in the future, as of today Dell has the edge, so I have to claim:

Round 1 Winner: Dell

ROUND 2 – Memory Performance
As many comments on my posting of the Dell FlexMem Bridge technology the other day pointed out, memory performance is something that needs to be considered when comparing these technologies. Dell’s FlexMem Bridge offering was initially reported to run at a maximum memory speed of 833MHz; in fact, it runs at a maximum of 1066MHz, dependent upon the speed of the processor. A processor with a 6.4GT/s QPI supports memory at 1066MHz; a processor with a 5.8GT/s QPI supports memory at 978MHz; and a processor with a 4.8GT/s QPI runs memory at 800MHz. This is a component of Intel’s Xeon 7500 architecture, so it should be the same regardless of the server vendor. Looking at IBM, we see the HX5 blade server memory runs at a maximum of 978MHz. However, when you attach the MAX5 to the HX5 for the additional memory slots, that memory runs at 1066MHz regardless of the speed of the CPU installed. While this appears to be black magic, it’s really the result of IBM’s proprietary eXA scaling – something that I’ll cover in detail at a later date. Although the HX5 blade server memory, when used by itself, does not have the ability to achieve 1066MHz, this comparison is based on the Dell PowerEdge 11G M910 vs the IBM HX5+MAX5. With that in mind, the ability to run the expanded memory at 1066MHz gives IBM the edge in this round.
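To summarize those speed pairings in one place, here’s a small Python sketch. It’s just my restatement of the numbers cited above, not vendor documentation, and it treats the MAX5 as the one exception that is decoupled from CPU QPI speed.

```python
# Xeon 7500 memory speed is tied to the CPU's QPI rate (per the figures above).
QPI_TO_MEMORY_MHZ = {
    6.4: 1066,  # 6.4 GT/s QPI -> 1066MHz memory
    5.8: 978,   # 5.8 GT/s QPI -> 978MHz memory
    4.8: 800,   # 4.8 GT/s QPI -> 800MHz memory
}

def memory_speed_mhz(qpi_gt_s: float, on_max5: bool = False) -> int:
    """Effective memory speed for a given QPI rate, on either vendor's blade."""
    if on_max5:
        # IBM's MAX5 expansion DIMMs run at 1066MHz regardless of CPU speed.
        return 1066
    return QPI_TO_MEMORY_MHZ[qpi_gt_s]

print(memory_speed_mhz(5.8))                # 978  (CPU-attached DIMMs)
print(memory_speed_mhz(5.8, on_max5=True))  # 1066 (MAX5 DIMMs)
```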

Round 2 Winner: IBM

ROUND 3 – Server Density
This one is pretty straightforward. IBM’s HX5 + MAX5 offering takes up 2 server bays, so in the IBM BladeCenter H you can only fit 7 systems. Only 4 BladeCenter H chassis fit in a 42u rack, so you can fit a maximum of 28 IBM HX5 + MAX5 systems into a rack.

The Dell PowerEdge 11G M910 blade server is a full-height server, so you can fit 8 servers into the Dell M1000e chassis. Four Dell chassis will fit in a 42u rack, so you can get 32 Dell M910s into a rack.
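For anyone who wants to check my math, here’s the rack-density arithmetic as a quick sketch, using only the figures above (4 chassis per 42u rack for either vendor):

```python
CHASSIS_PER_RACK = 4       # a 42u rack holds 4 blade chassis (either vendor)

ibm_per_chassis = 14 // 2  # HX5 + MAX5 uses 2 of the BladeCenter H's 14 bays -> 7
dell_per_chassis = 8       # full-height M910s per Dell M1000e chassis

print(ibm_per_chassis * CHASSIS_PER_RACK)   # 28 IBM HX5 + MAX5 systems per rack
print(dell_per_chassis * CHASSIS_PER_RACK)  # 32 Dell M910s per rack
```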

Round 3 Winner: Dell

(NEW) ROUND 4 – Expandability
It was mentioned several times in the comments that expandability should have been reviewed as well. When we look at Dell’s design, we see there are two expansion options: run the Dell PowerEdge 11G M910 blade with 2 processors and the FlexMem Bridge, or run it with 4 processors and remove the FlexMem Bridge.

The modular design of the IBM eX5 architecture allows a user to add memory (MAX5), add processors (a 2nd HX5) or both (2 x HX5 + 2 x MAX5). This provides users with a lot of flexibility to choose a design that meets their workload’s needs.

Choosing a winner for this round is tough, as there are different ways to look at it:

Maximum CPUs in a server: TIE – both IBM and Dell can scale to 4 CPUs.
Maximum CPU density in a 42u rack: Dell wins with 32 x 4-CPU servers vs IBM’s 12.
Maximum memory in a server: IBM wins with 640GB using 2 x HX5 and 2 x MAX5.
Maximum memory density in a 42u rack: Dell wins with 16TB (the arithmetic is sketched below).
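Here’s the memory-density arithmetic behind that list, again as a rough sketch. The Dell figures follow from the DIMM counts and sizes cited above; the per-rack IBM number is my own derivation (the comparison only cites Dell’s 16TB), so treat it as an estimate.

```python
dell_server_gb = 32 * 16       # M910: 32 DIMMs x 16GB = 512GB per server
ibm_server_gb = (40 + 40) * 8  # 2 x (HX5 + MAX5): 80 DIMMs x 8GB = 640GB per system

print(dell_server_gb * 32 / 1024)  # 16.0 TB per rack (32 M910s)
print(ibm_server_gb * 12 / 1024)   # 7.5 TB per rack (12 four-bay systems) - my estimate
```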

Round 4 Winner: TIE

Summary
While the fight was close, with a 2-to-1 win (plus one round a tie) it is clear the overall winner is Dell. For this comparison, I tried to keep it focused on the memory aspect of the offerings.

On a final note, at the time of this writing the IBM MAX5 memory expansion has not been released for general availability, while Dell is already shipping their M910 blade server.

There may be other advantages, relative to processors, that were not considered in this comparison; however, I welcome any thoughts or comments you have.