
I continuously find myself re-visiting the same links to find additional information regarding blade servers, so I finally came to a resolution: why not consolidate the links and put them on my site? Introducing a new addition to my blog – the “Helpful Links” page. Located at the top of every page, the Helpful Links page is designed to be a one-stop shop for the best links related to blade servers. My goal is to continue to update every page on my primary site, so if you have links you want to see, or if something is broken, please let me know. This is YOUR site. I built this site for you, so in the words of Jerry Maguire, “Help me, help you.” Thanks for your continued support.

Go to Helpful Links Section »

Wow. HP continues its “blade everything” campaign from 2007 with a new offering today – HP Integrity Superdome 2 on blade servers. Touting a message of “Mission-Critical Converged Infrastructure,” HP is combining the mission-critical Superdome 2 architecture with its successful scalable blade architecture. According to HP’s datasheet for the Integrity Superdome, “The Integrity Superdome is an ideal system to handle online transaction processing (OLTP), data mining, customer relationship management, enterprise resource planning, database hosting, telecom billing, human resources, financial applications, data warehousing and high-performance computing.” In other words – it’s a mission-critical workhorse. Previous generations of the HP Integrity Superdome required a dedicated enclosure, but with this new announcement HP will enable users to combine Superdome, Itanium and x86 servers all in the same rack.

Superdome 2 Modular Architecture

The Chassis

The HP Superdome 2 blade will consist of a new blade chassis which will be used exclusively for the HP Superdome 2 architecture. From what I can tell, it appears this new blade chassis will hold 12 power supplies and up to 8 HP Integrity Superdome 2 blade servers. It also appears the new chassis will offer an Insight Display like the HP BladeSystem c3000 and c7000. Based on the naming schema, I’m willing to bet HP will call this the HP BladeSystem c9000 chassis once it is released later this year, but for now it’s known as the “SD2 Enclosure”.

The Server

Based on the Intel Itanium 9300, HP has chosen the name “CB900 i2” for this cell blade. Little is known about the specs for this server, but HP has announced it will be offered in 8 and 16 socket “building blocks.”

I/O Expanders

Adding to the scalability, HP will also offer “I/O Expanders” that will be available for the CB900s to access. Traditionally, I/O expanders provide the PCI Express slots for the servers to access. HP’s approach of breaking the I/O slots out into a dedicated chassis provides large I/O expansion, even for blade servers. Again, the specifics of this component have not been provided by HP, but I expect HP to release this info when the server becomes available later this year.

The Value

Why blades? The blade architecture that HP has built provides a modular design, so users can build a Superdome platform once and grow it as needed by adding I/O and cells (blade servers). Looking at the bigger picture, by using the already established blade architecture, HP is able to create a “common platform” so everything from workstations to mission-critical systems can use the same design.

Using a common platform means lower manufacturing costs for HP and faster time-to-market for future products. Not only will the blade chassis enclosure share common components (power supplies, etc.) between the x86 and Superdome 2 offerings, but there will be common networking and common management as well.

Also, by using the blade architecture, the HP Integrity Superdome 2 blades can take advantage of “FlexFabric” – part of HP’s Converged Infrastructure messaging that references the flexibility of having granular control over your Ethernet and Fibre Channel environment with components like Virtual Connect.

While I have doubts about the future of Intel’s Itanium 9300 processor, I commend HP for this Superdome 2 announcement. It will be interesting to see how it is adopted. Let me know what your thoughts are in the Comments section below.

I’ve recently posted some rumours about IBM’s upcoming announcements in their blade server line; now it’s time to let you know some rumours I’m hearing about HP. NOTE: this is purely speculation; I have no definitive information from HP, so this may be false info. That being said – here we go:

Rumour #1: Integration of “CNA” like devices on the motherboard.
As you may be aware, with the introduction of the “G6”, or Generation 6, of HP’s blade servers, HP added “FlexNICs” onto the servers’ motherboards instead of the 2 x 1Gb NICs that are standard on most of the competition’s blades. FlexNICs allow the user to carve up a 10Gb NIC into 4 virtual NICs when using the Flex-10 modules inside the chassis. (For a detailed description of Flex-10 technology, check out this HP video.) The idea behind Flex-10 is that you have 10Gb connectivity that allows you to do more with fewer NICs.
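To make the carving idea concrete, here’s a rough Python sketch of the constraint Flex-10 enforces: up to 4 FlexNICs per 10Gb port, with their speeds summing to at most the port’s bandwidth. The function name and validation rules are my own mock-up for illustration, not HP’s actual Virtual Connect configuration interface.

```python
# Illustrative sketch of Flex-10 bandwidth carving: one 10Gb port is
# split into up to 4 FlexNICs whose speeds sum to at most 10Gb.
# This is a mock-up, not HP's actual configuration API.

def carve_flexnics(allocations_gb, port_speed_gb=10, max_flexnics=4):
    """Validate a proposed split of one 10Gb port into FlexNICs."""
    if len(allocations_gb) > max_flexnics:
        raise ValueError(f"at most {max_flexnics} FlexNICs per port")
    if sum(allocations_gb) > port_speed_gb:
        raise ValueError("FlexNIC speeds exceed the physical port speed")
    return {f"FlexNIC-{i}": f"{gb}Gb"
            for i, gb in enumerate(allocations_gb, start=1)}

# Example: management, vMotion, VM traffic and iSCSI sharing one port
print(carve_flexnics([0.5, 2, 5.5, 2]))
```

The point of the sketch is simply that the four FlexNICs are carved out of a single physical 10Gb device rather than being four separate NICs.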

SO – what’s next? Rumour has it that the “G7” servers, expected to be announced on March 16, will have an integrated CNA, or Converged Network Adapter. With a CNA on the motherboard, both the Ethernet and the Fibre Channel traffic will travel over a single integrated device. This is a VERY cool idea because this announcement could lead to a blade server that eliminates the additional daughter card or mezzanine expansion slots, thereby freeing up valuable real estate for newer Intel CPU architectures.

Rumour #2: Next generation Flex-10 Modules will separate Fibre and Network traffic.

Today, HP’s Flex-10 ONLY handles Ethernet traffic. There is no support for FCoE (Fibre Channel over Ethernet), so if you have a Fibre Channel network, then you’ll also have to add a Fibre Channel switch to your BladeSystem chassis design. If HP does put a CNA onto their next-generation blade servers that carries Fibre Channel and Ethernet traffic, wouldn’t it make sense that there would need to be a module in the BladeSystem chassis that allows the storage and Ethernet traffic to exit?

I’m hearing that a new version of the Flex-10 module is coming, very soon, that will allow the Ethernet AND the Fibre Channel traffic to exit out of the switch. (The image to the right shows what it could look like.) The switch would allow 4 of the 8 uplink ports on the next-generation Flex-10 switch to go to the Ethernet fabric, while the other 4 ports could either be dedicated to a Fibre Channel fabric OR used as 4 additional ports to the Ethernet fabric.

If this rumour is accurate, it could shake things up in the blade server world. Cisco UCS uses 10Gb Data Center Ethernet (Ethernet plus FCoE); IBM BladeCenter can use a 10Gb plus Fibre Channel switch fabric (like HP) or 10Gb Enhanced Ethernet plus FCoE (like Cisco); however, no one currently has a device that splits the Ethernet and Fibre Channel traffic at the blade chassis. If this rumour is true, then we should see it announced around the same time as the G7 blade servers (March 16).

That’s all for now. As I come across more rumours, or information about new announcements, I’ll let you know.

As I mentioned previously, the next few weeks are going to be filled with new product / technology announcements. Here’s a list of some dates that you may want to mark on your calendar (and make sure to come back here for details):

Feb 9 – Big Blue new product announcement (hint: in the BladeCenter family)

Mar 2 – Big Blue non-product announcement (hint: it’s not the eX4 family)

Mar 16 – Intel Westmere (Intel Xeon 5600) Processor Announcement (expect HP and IBM to announce their Xeon 5600 offerings)

Mar 30 – Intel Nehalem EX (Xeon 7500) Processor Announcement (expect HP and IBM to announce their Intel Xeon 7500 offerings)

As always, you can expect for me to give you coverage on the new blade server technology as it gets announced!

2-2-10 CORRECTION Made Below

Okay, I’ve seen the details on IBM’s next-generation 4-processor blade server based on the Intel Nehalem EX CPU, and I can tell you that IBM’s about to change the way people look at workloads for blade servers. Out of respect for IBM (and at the risk of getting in trouble), I’m not going to disclose any confidential details, but I can tell you a few things:

1) My previous post about what the server will look like is not far off. In fact, it was VERY close. However, IBM upped the ante and made a few additions I didn’t expect, which will make it appealing for customers who need the ability to run large workloads.

2) the scheduled announce date for this new 4 processor IBM blade server based on the Nehalem EX (whose name I guessed correctly) will be before April 1, 2010 but after March 15, 2010. Ship date is currently scheduled sometime after May but before July.

As a final teaser, there’s another IBM blade server announcement scheduled for tomorrow. Once it’s officially announced on Feb 9th (corrected from Feb 3), I’ll let you know and give you some details.

Okay, I can’t hold back any longer – I have more rumours. The next 45 days are going to be EXTREMELY busy, with Intel announcing their Westmere EP processor, the successor to the Nehalem EP CPU, and the Nehalem EX CPU, the successor to the Xeon 7400 CPU. I’ll post more details on these processors in the future as information becomes available, but for now I want to talk about some additional rumours that I’m hearing from IBM. As I’ve mentioned in my previous rumour post: this is purely speculation; I have no definitive information from IBM, so this may be false info. That being said, here we go:

Rumour #1: As I previously posted, IBM has announced they will have a blade server based on their eX5 architecture – the next generation of their eX4 architecture found in the IBM System x3850 M2 and x3950 M2. I’ve posted what I think this new blade server will look like (you can see it here) and I had previously speculated that the server would be called HS43 – however, it appears that IBM may be changing their nomenclature for this class of blade to “HX5”. I can see this happening – it’s a blend of “HS” and “eX5”. It is a new class of blade server, so it makes sense. I like the HX5 blade server name, although if you Google HX5 right now, you’ll get a lot of details about the Sony CyberShot DSC-HX5 digital camera. (Maybe IBM should reconsider using HS43 instead of HX5 to avoid any lawsuits.) It also makes it very clear that it is part of their eX5 architecture, so we’ll see if it gets announced that way.

Speaking of announcements…

Rumour #2: While it is clear that Intel is waiting until March (31, I think) to announce the Nehalem EX and Westmere EP processors, I’m hearing rumours that IBM will be announcing their product offerings around the new Intel processors on March 2, 2010 in Toronto. It will be interesting to see if this happens so soon (4 weeks away) but when it does, I’ll be sure to give you all the details!

That’s all I can talk about for now as “rumours”. I have more information on another IBM announcement that I can not talk about, but come back to my site on Feb. 9 and you’ll find out what that new announcement is.

UPDATED 1/22/2010 with new pictures
Cisco UCS B250 M1 Extended Memory Blade Server

Cisco’s UCS server line is already getting lots of press, but one of the biggest interests is their upcoming Cisco UCS B250 M1 blade server. This server is a full-width blade occupying two of the 8 server slots available in a single Cisco UCS 5108 blade chassis. The server can hold up to 2 x Intel Xeon 5500 Series processors and 2 x dual-port mezzanine cards, but the magic is in the memory – it has 48 memory slots.

This means it can hold 384GB of RAM using 8GB DIMMs. This is huge for the virtualization marketplace, as everyone knows that virtual machines LOVE memory. No other vendor in the marketplace offers a blade server (or any 2 socket Intel Xeon 5500 server, for that matter) that can achieve 384GB of RAM.

So what’s Cisco’s secret? First, let’s look at what Intel’s Xeon 5500 architecture looks like.

[Image: Intel Xeon 5500 memory architecture]

As you can see above, each Intel Xeon 5500 CPU has its own memory controller, which in turn has 3 memory channels. Intel’s design limit is 3 memory DIMMs (DDR3 RDIMMs) per channel, so the most a traditional 2-socket server can have is 18 memory slots, or 144GB of RAM with 8GB DDR3 RDIMMs.

With the UCS B250 M1 blade server, Cisco adds an additional 15 memory slots per CPU (5 per channel), or 30 slots per server, for a total of 48 memory slots, which yields 384GB of RAM with 8GB DDR3 RDIMMs.


How do they do it? Simple – they put 5 more DIMM slots on each memory channel, then present the 8 DIMMs on each channel to an ASIC that sits between the memory controller and the memory channels. Each ASIC presents a group of four 8GB DIMMs to the memory controller as a single 32GB DIMM. For every 8 memory DIMMs there’s an ASIC – 3 ASICs per CPU, representing 192GB of RAM (or 384GB in a dual-CPU config).
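If you want to sanity-check the math, here’s a quick Python sketch of the slot counts (the per-channel split of 3 standard plus 5 added DIMM slots is my reading of the design):

```python
# Slot and capacity math from the post: a standard 2-socket Xeon 5500
# design vs. the B250 M1. The per-channel breakdown (3 standard DIMM
# slots plus 5 added by Cisco, for 8 per channel) is my assumption.

dimm_gb = 8          # 8GB DDR3 RDIMMs
cpus = 2
channels_per_cpu = 3

# Traditional design: Intel's limit of 3 DIMMs per channel
traditional_slots = cpus * channels_per_cpu * 3
traditional_gb = traditional_slots * dimm_gb

# Cisco B250 M1: the ASICs allow 8 DIMMs per channel
b250_slots = cpus * channels_per_cpu * 8
b250_gb = b250_slots * dimm_gb

print(traditional_slots, traditional_gb)  # 18 slots, 144GB
print(b250_slots, b250_gb)                # 48 slots, 384GB
```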

It’s quite an ingenious approach, but don’t get caught up in thinking about 384GB of RAM – think about 48 memory slots. In the picture below I’ve grouped the 8 DIMMs served by each ASIC in a green square.

Cisco UCS B250 ASICs Grouped with 8 Memory DIMMs

With that many slots, you can get to 192GB of RAM using 4GB DDR3 RDIMMs – which currently cost about 1/5th as much as 8GB DIMMs. That’s the real value of this server.

Cisco has published a white paper on this patented technology, so if you want to get more details, I encourage you to check it out.

The first blade server with the upcoming Intel Nehalem EX processor has finally been unveiled. While it is known that IBM will be releasing a 2 or 4 socket blade server with the Nehalem EX, no other vendor had revealed plans until now. SGI recently announced they will be offering the Nehalem EX on their Altix® UV platform.

Touted as “The World’s Fastest Supercomputer,” the UV line features the fifth generation of the SGI NUMAlink interconnect, which offers a whopping 15 GB/sec transfer rate, as well as direct access to up to 16 TB of shared memory. The system can be configured with up to 2,048 Nehalem-EX cores (via 256 processors, or 128 blades) in a single federation with a single global address space.

According to the SGI website, the UV will come in two flavors:

SGI Altix UV 1000

Altix UV 1000 – designed for maximum scalability, this system ships as a fully integrated cabinet-level solution with up to 256 sockets (2,048 cores) and 16TB of shared memory in four racks.

Altix UV 100 (not pictured) – same design as the UV 1000, but designed for the mid-range market; based on an industry-standard 19″ rackmount 3U form factor. Altix UV 100 scales to 96 sockets (768 cores) and 6TB of shared memory in two racks.
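A quick back-of-the-napkin check on the quoted core counts, assuming 8 cores per Nehalem-EX socket (the assumption that makes 256 sockets come out to 2,048 cores):

```python
# Checking the quoted Altix UV figures, assuming 8 cores per
# Nehalem-EX socket; both models are consistent with that ratio.

cores_per_socket = 8

uv1000_cores = 256 * cores_per_socket  # UV 1000: 256 sockets
uv100_cores = 96 * cores_per_socket    # UV 100: 96 sockets

print("UV 1000:", uv1000_cores, "cores")  # 2048
print("UV 100:", uv100_cores, "cores")    # 768
```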

SGI has given quite a bit of technical information about these servers in this whitepaper, including details about the Nehalem EX architecture that I haven’t even seen from Intel. SGI has also published several customer testimonials, including one from the University of Tennessee – so check it out here.

Hopefully, this is just the first of many announcements to come around the Intel Nehalem EX processor.

I recently heard some rumours about IBM’s BladeCenter products that I thought I would share – but FIRST let me be clear: this is purely speculation; I have no definitive information from IBM, so this may be false info. That said, my source is pretty credible, so…

4 Socket Nehalem EX Blade
I posted a few weeks ago my speculation about IBM’s announcement that they WILL have a 4 socket blade based on the upcoming Intel Nehalem EX processor – so today I got a bit of an update on this server.

Rumour 1: It appears IBM may call it the HS43 (not HS42 like I first thought.) I’m not sure why IBM would skip the “HS42” nomenclature, but I guess it doesn’t really matter. This is rumoured to be released in March 2010.

Rumour 2: It seems that I was right that the 4 socket offering will be a double-wide server; however, it appears IBM is working with Intel to provide a 2 socket Intel Nehalem EX blade as the foundation of the HS43. This means that you could start with a 2 socket blade, then “snap on” a second to make it a 4 socket offering – but wait, there’s more… It seems that IBM is going to enable these blade servers to grow to up to 8 sockets by snapping 4 x 2 socket servers together. If my earlier speculations are accurate and each 2 socket blade module has 12 DIMMs, this means you could have an 8 socket, 64 core, 48 DIMM server with 768GB of RAM (using 16GB per DIMM slot), all in a single BladeCenter chassis. This, of course, would take up 4 blade server slots. Now the obvious question around this bit of news is WHY would anyone do this? The current BladeCenter H only holds 14 servers, so you would only be able to fit 3 of these monster servers into a chassis. Feel free to offer up some comments on what you think about this.
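For what it’s worth, here’s the back-of-the-napkin math on that rumoured snap-together design, using the speculative figures above (2 sockets and 12 DIMMs per module, 8 cores per Nehalem-EX socket, 16GB DIMMs); none of these numbers are confirmed by IBM:

```python
# Scaling math for the rumoured snap-together blade, using the
# speculative figures from the post. None of this is confirmed specs.

modules = 4                # 4 x 2-socket modules snapped together
sockets = modules * 2      # 8 sockets
cores = sockets * 8        # 64 cores (8-core Nehalem-EX parts)
dimms = modules * 12       # 48 DIMM slots
ram_gb = dimms * 16        # 768GB with 16GB DIMMs
slots_used = modules       # one BladeCenter slot per module

print(sockets, cores, dimms, ram_gb, slots_used)
```

A 24-DIMM-per-module design would double that to 96 slots and 1.5TB of RAM, which shows how sensitive the totals are to the per-module DIMM count.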

Rumour 3: IBM’s BladeCenter S chassis currently uses local drives that are 3.5″. The industry is obviously moving to smaller 2.5″ drives, so it’s only natural that the BladeCenter S drive cage will need to be updated to provide 2.5″ drives. Rumour is that this is coming in April 2010 and it will offer up to 24 x 2.5″ SAS or SATA drives.

Rumour 4: What’s missing from the BladeCenter S right now that HP currently offers? A tape drive. Rumour has it that IBM will be adding a “TS Family” tape drive offering to the BladeCenter S in upcoming months. This makes total sense and is much needed. Customers buying the BladeCenter S are typically smaller offices or branch offices, so a local backup device is a critical component to ensuring data protection. I’m not sure if this will take up a blade slot (like HP’s model) or be a replacement for one of the 2 drive cages. I would imagine it will be the latter, since the BladeCenter S architecture allows all servers to connect to the drive cages, but we’ll see.

That’s all I have. I’ll continue to keep you updated as I hear rumours or news.

Since the hit movie AVATAR surpassed the $1 billion revenue mark this weekend, I thought it would be interesting to post some information about how the movie was put together – especially since the hardware behind the magic was the HP BL2x220c.

According to a recent article, AVATAR was put together at a visual effects production house called Weta Digital, located in Miramar, New Zealand. Weta’s datacenter sits in a 10,000 square foot facility; however, the film’s computing core ran on 2,176 HP BL2x220c blade servers. This added up to over 40,000 processors and 104 terabytes of RAM. (Check out my post on the HP BL2x220c blade server for details on this 2-in-1 server design by HP.)

The HP blades read and wrote data against 3 petabytes of fast Fibre Channel disk storage from BluArc and NetApp. According to the article, all of the gear was connected by multiple 10-gigabit network links. “We need to stack the gear closely to get the bandwidth we need for our visual effects, and, because the data flows are so great, the storage has to be local,” says Paul Gunn, Weta’s data center systems administrator.

The article also highlights the fact that the datacenter uses water-cooled racks to keep the servers and storage cooled. Surprisingly, the water-cooled design, along with a cool local climate, allows Weta to run their datacenter for less cost than running air conditioning (all they pay for is the cost of running water). In fact, they recently won an energy excellence award for building a smaller footprint that came with 40% lower cooling costs.

Summary of Hardware Used for AVATAR:

  • 34 racks – each with 4 HP BladeSystem chassis; each chassis held 16 BL2x220c blades (32 server nodes)
  • over 40,000 processors
  • 104 TB RAM
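Running the published numbers through a quick Python sketch (the per-node RAM figure is derived by me, not quoted in the article):

```python
# Tallying the published AVATAR render-farm numbers: 34 racks, 4
# chassis per rack, 16 dual-node BL2x220c blades per chassis.

racks = 34
chassis_per_rack = 4
blades_per_chassis = 16

blades = racks * chassis_per_rack * blades_per_chassis  # 2,176 blades
nodes = blades * 2            # each BL2x220c holds 2 server nodes

total_ram_tb = 104
ram_per_node_gb = total_ram_tb * 1024 / nodes  # roughly 24GB per node

print(blades, nodes, round(ram_per_node_gb, 1))
```

The blade count works out to exactly the 2,176 servers quoted in the article, which is a nice consistency check on the rack breakdown.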

Since I don’t want to re-write the excellent article, I encourage you to click here to read it in full.