You are currently browsing the category archive for the ‘IBM’ category.

IBM recently announced the Brocade Converged 10GbE Switch Module for the IBM BladeCenter, a Fibre Channel over Ethernet (FCoE) switch designed to deliver up to 24% savings over a traditional 10Gb Ethernet and 8Gb Fibre Channel setup.



IBM announced on Sept. 27, 2010 that it has entered into a definitive agreement to acquire BLADE Network Technologies (BLADE), a privately held company based in Santa Clara, CA. BLADE specializes in software and devices that route data and transactions to and from servers. The acquisition is anticipated to close in the fourth quarter of 2010, subject to the satisfaction of customary closing conditions and applicable regulatory reviews. Financial terms were not disclosed.


(Updated 7/27/2010 – 11 am EST – added info on power and tower options)

When you think about blade servers, you probably think, “they are too expensive.” When you think about doing a VMware project, you probably think, “my servers are too old” or “I can’t afford new servers.” For $8 per GB, you can have blade servers preloaded with VMware ESXi 4.1 AND 4TB of storage! Want to know how? Keep reading.

They make it look so complicated in the movies: detailed covert operations to hack into a casino’s mainframe, preceded by weeks of staged, planned rehearsals. I’m here to tell you it’s much easier than that.

This is my story of how I had 20 seconds of complete access to The Venetian Casino’s data center, and lived to tell about it.


NOTE: IDC revised their report on May 28, 2010. This post now includes those changes.

IDC reported on May 28, 2010 that worldwide server factory revenue increased 4.7% year over year to $10.4 billion in the first quarter of 2010 (1Q10). IDC also reported that the blade server market accelerated and continued its sharp growth in the quarter, with factory revenue increasing 37.2% year over year and shipments growing 20.8% compared to 1Q09. According to IDC, nearly 90% of all blade revenue is driven by x86 systems, a segment in which blades now represent 18.8% of all x86 server revenue.

While the press release did not provide details of the market share for all of the top 5 blade vendors, they did provide data for the following:

#1 market share: HP increased their market share from 52.4% in Q4 2009 to 56.2% in Q1 2010

#2 market share: IBM decreased their market share from 35.1% in Q4 2009 to 23.6% in Q1 2010.

The remaining 20.2% of market share was not mentioned, but I imagine it is split between Dell and Cisco. In fact, given that Cisco was not even mentioned in the IDC report, I’m willing to bet a majority of it belongs to Dell. I’m working on getting some clarification on that (if you’re with Dell or Cisco and can help, please shoot me an email.)

According to Jed Scaramella, senior research analyst in IDC’s Datacenter and Enterprise Server group, “In the first quarter of 2009, we observed a lot of business in the mid-market as well as refresh activity of a more transactional nature; these factors have driven x86 rack-based revenue to just below 1Q08 value. Blade servers, which are more strategic in customer deployments, continue to accelerate in annual growth rates. The blade segment fared relatively well during the 2009 downturn and have increased revenue value by 13% from the first quarter of 2008.”

For the full IDC report covering the Q1 2010 Worldwide Server Market, please visit the revised release at http://www.idc.com/getdoc.jsp?containerId=prUS22360110 (the original release was at http://www.idc.com/getdoc.jsp?containerId=prUS22356410).

Updated 5/24/2010 – I’ve received some comments about expandability and I’ve received a correction about the speed of Dell’s memory, so I’ve updated this post. You’ll find the corrections / additions below in GREEN.

Since I’ve received a lot of comments on my post about the Dell FlexMem Bridge technology, I thought I would do an unbiased comparison between Dell’s FlexMem Bridge technology (via the PowerEdge 11G M910 blade server) and IBM’s MAX5 + HX5 blade server offering. In summary, both offerings provide the Intel Xeon 7500 CPU plus the ability to add “extended memory,” offering value for virtualization, databases and any other workloads that benefit from large amounts of memory.

The Contenders

IBM
IBM’s extended memory solution is a two part solution consisting of the HX5 blade server PLUS the MAX5 memory blade.

  • HX5 Blade Server
    I’ve spent considerable time on previous blogs detailing the IBM HX5, so please jump over to those links to dig into the specifics, but at a high level, the HX5 is IBM’s 2 CPU blade server that offers the Intel Xeon 7500 CPU. The HX5 is a 30mm, “single wide” blade server therefore you can fit up to 14 in an IBM BladeCenter H blade chassis.
  • MAX5
    The MAX5 offering from IBM can be thought of as a “memory expansion blade.” Offering an additional 24 memory DIMM slots, the MAX5, when coupled with the HX5 blade server, provides a total of 40 memory DIMMs. The MAX5 is a standard “single wide,” 30mm form factor, so when used with a single HX5, two IBM BladeCenter H server bays are required in the chassis.

DELL
Dell’s approach to extended memory is a bit different. Instead of relying on a memory blade, Dell starts with the M910 blade server and allows users to use 2 CPUs plus their FlexMem Bridge to access the memory DIMMs of the 3rd and 4th CPU sockets. For details on the FlexMem Bridge, check out my previous post.

  • PowerEdge 11G M910 Blade Server
    The M910 is a 4-CPU-capable blade server with 32 memory DIMMs. It is a full-height server, so you can fit 8 of them inside the Dell M1000e blade chassis.

The Face-Off

ROUND 1 – Memory Capacity
When we compare the memory DIMMs available on each, we see that Dell’s offering comes in at 32 DIMMs vs. IBM’s 40 DIMMs. However, IBM’s solution of the HX5 blade server + the MAX5 memory expansion currently supports a maximum DIMM size of 8GB, whereas Dell offers DIMMs up to 16GB. While this may change in the future, as of today Dell has the edge, so I have to claim:

Round 1 Winner: Dell

ROUND 2 – Memory Performance
As many comments came across on my posting of the Dell FlexMem Bridge technology the other day, several people pointed out that memory performance needs to be considered when comparing these technologies. Dell’s FlexMem Bridge offering runs at a maximum memory speed of 1066MHz, but the actual speed is dependent upon the speed of the processor: a processor with a 6.4GT/s QPI supports memory at 1066MHz; a processor with a 5.8GT/s QPI supports memory at 978MHz; and a processor with a 4.8GT/s QPI runs memory at 800MHz. This is a component of Intel’s Xeon 7500 architecture, so it is the same regardless of the server vendor. Looking at IBM, we see the HX5 blade server’s memory runs at a maximum of 978MHz. When you attach the MAX5 to the HX5 for the additional memory slots, however, that memory runs at 1066MHz regardless of the speed of the CPU installed. While this appears to be black magic, it’s really the result of IBM’s proprietary eXA scaling – something that I’ll cover in detail at a later date. Although the HX5 blade server by itself cannot reach 1066MHz, this comparison is based on the Dell PowerEdge 11G M910 vs. the IBM HX5 + MAX5, and the ability to run the expanded memory at 1066MHz gives IBM the edge in this round.

Round 2 Winner: IBM
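The QPI-to-memory-speed relationship described above can be sketched as a simple lookup. This is purely illustrative: the table values come from the post itself, and the function name is my own invention, not any vendor tool.

```python
# Maximum DDR3 memory speed (MHz) supported at each QPI link rate (GT/s),
# per the Intel Xeon 7500 behavior described in the post (illustrative only).
QPI_TO_MEMORY_MHZ = {
    6.4: 1066,  # 6.4 GT/s QPI -> 1066 MHz memory
    5.8: 978,   # 5.8 GT/s QPI -> 978 MHz memory
    4.8: 800,   # 4.8 GT/s QPI -> 800 MHz memory
}

def max_memory_speed(qpi_gts: float) -> int:
    """Return the maximum memory speed (MHz) for a given QPI rate (GT/s)."""
    return QPI_TO_MEMORY_MHZ[qpi_gts]
```

The point of the round is that IBM’s MAX5 sidesteps this table entirely: its expansion DIMMs run at 1066MHz no matter which CPU is installed.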

ROUND 3 – Server Density
This one is pretty straightforward. IBM’s HX5 + MAX5 offering takes up 2 server bays, so in the IBM BladeCenter H you can only fit 7 systems. You can only fit 4 BladeCenter H chassis in a 42U rack, so you can fit a maximum of 28 IBM HX5 + MAX5 systems into a rack.

The Dell PowerEdge 11G M910 blade server is a full-height server, so you can fit 8 servers into the Dell M1000e chassis. 4 Dell chassis will fit in a 42U rack, so you can get 32 Dell M910s into a rack.

Round 3 Winner: Dell
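The Round 3 density math above reduces to one formula, sketched below. The function is my own shorthand for the arithmetic, using the bay counts stated in the post.

```python
def servers_per_rack(bays_per_chassis: int, bays_per_server: int,
                     chassis_per_rack: int) -> int:
    """Back-of-the-envelope count of servers that fit in a 42U rack."""
    return (bays_per_chassis // bays_per_server) * chassis_per_rack

# IBM: HX5 + MAX5 consumes 2 of the BladeCenter H's 14 bays; 4 chassis/rack.
ibm_density = servers_per_rack(14, 2, 4)   # 28 systems per rack
# Dell: full-height M910 takes 1 of the M1000e's 8 full-height slots; 4/rack.
dell_density = servers_per_rack(8, 1, 4)   # 32 systems per rack
```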

(NEW) ROUND 4 – Expandability
It was mentioned several times in the comments that expandability should have been reviewed as well. When we look at Dell’s design, we see there are two configuration options: run the Dell PowerEdge 11G M910 blade with 2 processors and the FlexMem Bridge, or run it with 4 processors and remove the FlexMem Bridge.

The modular design of the IBM eX5 architecture allows a user to add memory (MAX5), add processors (a 2nd HX5) or both (2 x HX5 + 2 x MAX5). This provides users with a lot of flexibility to choose a design that meets their workload’s needs.

Choosing a winner for this round is tough, as there are different ways to look at it:

Maximum CPUs in a server: TIE – both IBM and Dell can scale to 4 CPUs.
Maximum CPU density in a 42U rack: Dell wins with 32 x 4-CPU servers vs. IBM’s 12.
Maximum memory in a server: IBM wins with 640GB using 2 x HX5 and 2 x MAX5.
Maximum memory density in a 42U rack: Dell wins with 16TB.

Round 4 Winner: TIE
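The capacity figures in this round follow directly from the DIMM counts and sizes given earlier; a quick sketch, using a helper of my own naming, shows the arithmetic.

```python
def max_memory_gb(dimm_slots: int, dimm_size_gb: int) -> int:
    """Total memory (GB) from a number of DIMM slots of a given size."""
    return dimm_slots * dimm_size_gb

# IBM: 2 x HX5 (16 DIMMs each) + 2 x MAX5 (24 DIMMs each) = 80 slots of 8GB.
ibm_server_gb = max_memory_gb(2 * 16 + 2 * 24, 8)    # 640 GB per server
# Dell: 32 slots of 16GB per M910, and 32 servers per 42U rack.
dell_rack_tb = max_memory_gb(32, 16) * 32 / 1024     # 16 TB per rack
```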
Summary
While the fight was close, with a 2-to-1 round win it is clear the overall winner is Dell. For this comparison, I tried to keep the focus on the memory aspect of the offerings.

On a final note, at the time of this writing the IBM MAX5 memory expansion has not been released for general availability, while Dell is already shipping the M910 blade server.

There may be other advantages relative to processors that were not considered for this comparison, however I welcome any thoughts or comments you have.

UPDATED 4/14/2010 – IBM announced today its newest blade servers using the POWER7 processor. The BladeCenter PS700, PS701 and PS702 are IBM’s latest additions to the blade server family, following last month’s announcement of the BladeCenter HX5, based on the Nehalem EX processor. The POWER7-based PS700, PS701 and PS702 blades support the AIX, IBM i, and Linux operating systems. (For Windows operating systems, stick with the HS22 or the HX5.) For those of you not familiar with the POWER processor, the POWER7 is a 64-bit, 4-core processor with 256KB of L2 cache per core and 4MB of L3 cache per core. Today’s announcement also reflects IBM’s new naming scheme. Instead of being labeled “JS” blades as in the past, the new POWER family blade servers are titled “PS” – for Power Systems. Finally, a naming scheme that makes sense. (Will someone explain what IBM’s “LS” blades stand for?) Included in today’s announcement are the PS700, PS701 and PS702 blades. Let’s review each.

IBM BladeCenter PS700
The PS700 blade server is a single-socket, single-wide, 4-core 3.0GHz POWER7 processor-based server that has the following:

  • 8 DDR3 memory slots (available DIMM sizes: 4GB at 1066MHz or 8GB at 800MHz)
  • 2 onboard 1Gb Ethernet ports
  • integrated SAS controller supporting RAID levels 0, 1, or 10
  • 2 onboard disk drives (SAS or solid state)
  • one PCIe CIOv expansion card slot
  • one PCIe CFFh expansion card slot

The PS700 is supported in the BladeCenter E, H, HT and S chassis. (Note: support in the BladeCenter E requires an Advanced Management Module and a minimum of two 2000-watt power supplies.)

IBM BladeCenter PS701
The PS701 blade server is a single-socket, single-wide, 8-core 3.0GHz POWER7 processor-based server that has the following:

  • 16 DDR3 memory slots (available DIMM sizes: 4GB at 1066MHz or 8GB at 800MHz)
  • 2 onboard 1Gb Ethernet ports
  • integrated SAS controller supporting RAID levels 0, 1, or 10
  • 1 onboard disk drive (SAS or solid state)
  • one PCIe CIOv expansion card slot
  • one PCIe CFFh expansion card slot

The PS701 is supported in the BladeCenter H, HT and S chassis only.

IBM BladeCenter PS702
The PS702 blade server is a dual-socket, double-wide, 16-core (2 x 8-core CPUs) 3.0GHz POWER7 processor-based server that has the following:

  • 32 DDR3 memory slots (available DIMM sizes: 4GB at 1066MHz or 8GB at 800MHz)
  • 4 onboard 1Gb Ethernet ports
  • integrated SAS controller supporting RAID levels 0, 1, or 10
  • 2 onboard disk drives (SAS or solid state)
  • 2 PCIe CIOv expansion card slots
  • 2 PCIe CFFh expansion card slots

The PS702 is supported in the BladeCenter H, HT and S chassis only.

For more technical details on the PS blade servers, please visit IBM’s redbook page at: http://www.redbooks.ibm.com/redpieces/abstracts/redp4655.html?Open

Cisco recently announced its first blade offering with the Intel Xeon 7500 processor, the “Cisco UCS B440 M1 High-Performance Blade Server.” This new full-width blade offers 2 to 4 Xeon 7500 processors and 32 memory slots, for up to 256GB of RAM, as well as 4 hot-swap drive bays. Since the server is a full-width blade, it can handle 2 dual-port mezzanine cards for up to 40Gbps of I/O per blade.

Each Cisco UCS 5108 Blade Server Chassis can house up to four B440 M1 servers (maximum 160 per Unified Computing System).

How Does It Compare to the Competition?
Since I like to talk about all of the major blade server vendors, I thought I’d take a look at how the new Cisco B440 M1 compares to IBM and Dell. (HP has not yet announced their Intel Xeon 7500 offering.)

Processor Offering
Both Cisco and Dell offer models with 2 to 4 Xeon 7500 CPUs as standard. Each has variations on speeds: Dell has 9 processor speed offerings; Cisco hasn’t released its speeds; and IBM’s BladeCenter HX5 blade server will have 5 processor speed offerings initially. Of the 3 vendors’ blades, however, IBM’s is the only one designed to scale from 2 CPUs to 4 CPUs by connecting 2 x HX5 blade servers. Along with this comes IBM’s “FlexNode” technology, which enables users to split the 4-processor blade system back into 2 x 2-processor systems at specific points during the day. Although not announced, and purely my speculation, IBM’s design also opens the possibility of connecting 4 x 2-processor HX5s for a future 8-way design. Since each of the vendors offers up to 4 x Xeon 7500s, I’m going to give the advantage in this category to IBM. WINNER: IBM

Memory Capacity
Both IBM and Cisco offer 32 DIMM slots in their blade solutions; however, they are certifying only 4GB and 8GB DIMMs – not 16GB DIMMs – so their offerings scale to 256GB of RAM. Dell claims 512GB of memory capacity on the PowerEdge 11G M910 blade server, but that requires 16GB DIMMs. Realistically, I think the M910 would mostly be used with 8GB DIMMs, in which case Dell’s design would equal IBM’s and Cisco’s. I’m not sure who has the money to buy 16GB DIMMs, but if they do – WINNER: Dell (or a TIE)

Server Density
As previously mentioned, Cisco’s B440 M1 blade server is a “full-width” blade, so 4 will fit into a 6U-high UCS 5108 chassis. Theoretically, you could fit 7 UCS 5108 blade chassis into a rack, for a total of 28 B440 M1s per 42U rack.

Dell’s PowerEdge 11G M910 blade server is a “full-height” blade, so 8 will fit into a 10U-high M1000e chassis. This means that 4 M1000e chassis fit into a 42U rack, so 32 Dell PowerEdge M910 blade servers should fit into a 42U rack.

IBM’s BladeCenter HX5 blade server is a single-slot blade server; however, to make it a 4-processor blade, it takes up 2 server slots. The BladeCenter H has 14 server slots, so the IBM solution can hold 7 x 4-processor HX5 blade servers per chassis. Since the chassis is 9U high, only 4 fit into a 42U rack, so you would be able to fit a total of 28 IBM HX5 (4-processor) servers into a 42U rack.
WINNER: Dell

Management
The final category I’ll look at is management. Both Dell and IBM have management controllers built into their chassis, so managing many chassis in the maximum server-per-rack scenarios described above could add some additional burden. Cisco’s design, however, allows management to be performed through the UCS 6100 Fabric Interconnect modules; in fact, up to 40 chassis can be managed by 1 pair of 6100s. There are additional features this design offers, but for the sake of this discussion, I’m calling WINNER: Cisco.

Overall, Cisco’s new offering is a nice addition to its existing blade portfolio. While IBM has some interesting innovation in CPU scalability and Dell appears to have the advantage in server density, Cisco leads on the management front.
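As a footnote on the management point, the UCS domain sizing is easy to sanity-check against the chassis capacity quoted earlier. The constants and function below are my own shorthand for the figures in the post, not a Cisco sizing tool.

```python
CHASSIS_PER_UCS_DOMAIN = 40  # chassis manageable by one pair of 6100s
B440_PER_CHASSIS = 4         # full-width B440 M1 blades per UCS 5108

def blades_per_domain(chassis: int, blades_per_chassis: int) -> int:
    """Total servers manageable in one UCS management domain."""
    return chassis * blades_per_chassis

# 40 chassis x 4 full-width blades = 160 servers, matching the
# "maximum 160 per Unified Computing System" figure quoted earlier.
domain_size = blades_per_domain(CHASSIS_PER_UCS_DOMAIN, B440_PER_CHASSIS)
```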

Cisco’s UCS B440 M1 is expected to ship in the June time frame. Pricing is not yet available. For more information, please visit Cisco’s UCS web site at http://www.cisco.com/en/US/products/ps10921/index.html.

(Updated 4/22/2010 at 2:48 p.m.)
IBM officially announced the HX5 on Tuesday, so I’m going to take the liberty of digging a little deeper into the details of the blade server. I previously provided a high-level overview of the blade server in this post, so now I want to get a little more technical, courtesy of IBM. It is my understanding that the “general availability” of this server will be in the mid-June time frame, however that is subject to change without notice.

Block Diagram
Below are the details of the actual block diagram of the HX5. There are no secrets here, as they’re using the Intel Xeon 6500 and 7500 chipset that I blogged about previously.

As previously mentioned, the value that the IBM HX5 blade server brings is scalability. A user can buy a single blade server with 2 CPUs and 16 DIMMs, then expand it to 40 DIMMs with a 24-DIMM MAX5 memory blade. Or, in the near future, a user could combine 2 x HX5 servers to make a 4-CPU server with 32 DIMMs, or add a MAX5 memory blade to each server and have a 4-CPU server with 80 DIMMs.

The diagrams below provide a more technical view of the HX5 + MAX5 configs. Note that the “sideplanes” referenced below are actually the “scale connector.” As a reminder, this connector physically connects 2 HX5 servers across the tops of the servers, allowing the internal communications to extend to each other’s nodes. The easiest way to think of this is like a Lego brick: it allows an HX5 or a MAX5 to be connected together. There will be 2-connector, 3-connector and 4-connector offerings.

(Updated) Since the original posting, IBM has released the “eX5 Portfolio Technical Overview: IBM System x3850 X5 and IBM BladeCenter HX5,” so I encourage you to go download it and give it a good read. David’s Redbook team always does a great job answering all the questions you might have about an IBM server inside those documents.

If there’s something about the IBM BladeCenter HX5 you want to know about, let me know in the comments below and I’ll see what I can do.

Thanks for reading!

InfoWorld.com posted on 3/22/2010 the results of a blade server shoot-out between Dell, HP, IBM and Super Micro. I’ll save you some time and summarize the results for Dell, HP and IBM.

The Contenders
Dell, HP and IBM each provided blade servers with the Intel Xeon X5670 2.93GHz CPUs and at least 24GB of RAM in each blade.

The Tests
InfoWorld designed a custom suite of VMware tests as well as several real-world performance metric tests. The VMware tests were composed of:

  • a single large-scale custom LAMP application
  • a load-balancer running Nginx
  • four Apache Web servers
  • two MySQL servers

InfoWorld designed the VMware workloads to mimic a real-world Web app usage model that included a weighted mix of static and dynamic content and randomized database updates, inserts, and deletes, with the load generated at specific concurrency levels, starting at 50 concurrent connections and ramping up to 200. InfoWorld started with the VMware tests on one blade server, then ran them across two blades. Each blade under test was running VMware ESX 4 and controlled by a dedicated vCenter instance.

The other real-world tests included several tests of common single-threaded tasks run simultaneously at levels that met and eclipsed the logical CPU count on each blade, running all the way up to an 8x oversubscription of physical cores. These tests included:

  • LAME MP3 conversions of 155MB WAV files
  • MP4-to-FLV video conversions of 155MB video files
  • gzip and bzip2 compression tests
  • MD5 sum tests

The Results

Dell
Dell did very well, coming in 2nd in overall scoring. The blades used in this test were Dell PowerEdge M610 units, each with two 2.93GHz Intel Westmere X5670 CPUs, 24GB of DDR3 RAM, and two Intel 10G interfaces to two Dell PowerConnect 8024 10G switches in the I/O slots on the back of the chassis.

Some key points made in the article about Dell:

  • Dell does not offer a lot of “blade options.” There are several models available, but they are the same type of blades with different CPUs. Dell does not currently offer any storage blades or virtualization-centric blades.
  • Dell’s 10Gb design does not offer any virtualized network I/O. The 10G pipe to each blade is just that, a raw 10G interface. No virtual NICs.
  • The new CMC (chassis management controller) is a highly functional and attractive management tool, offering new capabilities like pushing actions – such as BIOS updates and RAID controller firmware updates – to multiple blades at once.
  • Dell has implemented more efficient dynamic power and cooling features in the M1000e chassis. Such features include the ability to shut down power supplies when the power isn’t needed, or ramping the fans up and down depending on load and the location of that load.

According to the article, “Dell offers lots of punch in the M1000e and has really brushed up the embedded management tools. As the lowest-priced solution…the M1000e has the best price/performance ratio and is a great value.”

HP
Coming in at 1st place, HP continues to shine in blade leadership. HP’s test equipment consisted of a c7000 chassis with nine BL460c blades, each running two 2.93GHz Intel Xeon X5670 (Westmere-EP) CPUs and 96GB of RAM, as well as embedded 10G NICs and a dual 1G mezzanine card. As an important note, HP was the only server vendor with 10G NICs on the motherboard. Some key points made in the article about HP:

  • With the 10G NICs standard on the newest blade server models, InfoWorld says “it’s clear that HP sees 10G as the rule now, not the exception.”
  • HP’s embedded Onboard Administrator offers detailed information on all chassis components from end to end. For example, HP’s management console can provide exact temperatures of every chassis or blade component.
  • HP’s console cannot perform global BIOS and firmware updates (unlike Dell’s CMC), nor power more than one blade up or down at a time.
  • HP offers “multichassis management” – the ability to daisy-chain several chassis together and log into any of them from the same screen as well as manage them. This appears to be a unique feature to HP.
  • The HP c7000 chassis also has power controlling features like dynamic power saving options that will automatically turn off power supplies when the system energy requirements are low or increasing the fan airflow to only those blades that need it.

InfoWorld’s final thoughts on HP: “the HP c7000 isn’t perfect, but it is a strong mix of reasonable price and high performance, and it easily has the most options among the blade system we reviewed.”

IBM
Finally, IBM came in at 3rd place, missing a tie with Dell by a small fraction. Surprisingly, I was unable to find details on the configuration used for IBM’s testing. I’m not sure if I just missed it or if InfoWorld left the information out, but I know IBM’s blade server had the same Intel Xeon X5670 CPUs that Dell and HP used. Some of the points InfoWorld mentioned about IBM’s BladeCenter H offering:

  • IBM’s pricing is higher.
  • IBM’s chassis only holds 14 servers whereas HP can hold 32 servers (using BL2x220c servers) and Dell holds 16 servers.
  • IBM’s chassis doesn’t offer a heads-up display (like HP and Dell.)
  • IBM had the only redundant internal power and I/O connectors on each blade. It is important to note the lack of redundant power and I/O connectors is why HP and Dell’s densities are higher. If you want redundant connections on each blade with HP and Dell, you’ll need to use their “full-height” servers, which decrease HP and Dell’s overall capacity to 8.
  • IBM’s Management Module is lacking graphical features – there’s no graphical representation of the chassis or any images. From personal experience, IBM’s management module looks like it’s stuck in the ’90s – very text based.
  • The IBM BladeCenter H lacks dynamic power and cooling capabilities. Instead of using smaller independent regional fans for cooling, IBM uses two blowers, so the ability to reduce cooling in specific areas, as Dell and HP offer, is lacking.

InfoWorld summarizes the IBM results saying, “if you don’t mind losing two blade slots per chassis but need some extra redundancy, then the IBM BladeCenter H might be just the ticket.”

Overall, each vendor has its own pros and cons. InfoWorld does a great job summarizing the benefits of each offering, so please make sure to visit the InfoWorld article and read all of the details of their blade server shoot-out.