You are currently browsing the tag archive for the ‘UCS’ tag.

What is Cisco’s blade server market share? That seems to be the mystery question that no one can really answer. The previous IDC quarterly worldwide server report mentioned nothing about Cisco, yet readers and bloggers alike claim Cisco is #3 – so what IS the true answer?


Thanks to fellow blogger M. Sean McGee (http://www.mseanmcgee.com/), I was alerted to the fact that Cisco announced today, Sept. 14, the 13th blade server in the UCS family – the Cisco UCS B230 M1.

This newest addition performs a few tricks that no other vendor has been able to perform.

UPDATED 1/22/2010 with new pictures
Cisco UCS B250 M1 Extended Memory Blade Server

Cisco’s UCS server line is already getting lots of press, but one of the biggest points of interest is the upcoming Cisco UCS B250 M1 Blade Server. This is a full-width server occupying two of the 8 server slots available in a single Cisco UCS 5108 blade chassis. The server can hold up to 2 Intel Xeon 5500 Series processors and 2 dual-port mezzanine cards, but the magic is in the memory – it has 48 memory slots.

This means it can hold 384GB of RAM using 8GB DIMMs. This is huge for the virtualization marketplace, as everyone knows that virtual machines LOVE memory. No other vendor in the marketplace is able to provide a blade server (or any 2-socket Intel Xeon 5500 server, for that matter) that can achieve 384GB of RAM.

So what’s Cisco’s secret? First, let’s look at what Intel’s Xeon 5500 architecture looks like.

Intel Xeon 5500 memory architecture

As you can see above, each Intel Xeon 5500 CPU has its own memory controller, which in turn has 3 memory channels. Intel’s design limitation is 3 memory DIMMs (DDR3 RDIMM) per channel, so the most a traditional 2-socket server can have is 18 memory slots, or 144GB of RAM with 8GB DDR3 RDIMMs.

With the UCS B250 M1 blade server, Cisco adds an additional 15 memory slots per CPU, or 30 slots per server, for a total of 48 memory slots – which leads to 384GB of RAM with 8GB DDR3 RDIMMs.

Cisco UCS B250 memory architecture

How do they do it? Simple – they add 5 more DIMM slots per channel (8 instead of Intel’s 3), then present all 24 DIMMs per CPU to ASICs that sit between the memory controller and the memory channels. Each ASIC handles a group of 8 physical DIMMs and presents them to the memory controller as larger logical DIMMs. With 3 ASICs per CPU, that represents 192GB of RAM per CPU (or 384GB in a dual-CPU configuration) using 8GB DIMMs.
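The slot arithmetic above is easy to sanity-check. This is just a sketch of the math as described in the post (the variable names are mine, not Cisco’s):

```python
# Capacity math for a standard Xeon 5500 server vs. the B250's extended design.
DIMM_GB = 8                    # 8GB DDR3 RDIMM
CHANNELS_PER_CPU = 3           # Xeon 5500: one memory controller, 3 channels
DIMMS_PER_CHANNEL_STD = 3      # Intel's per-channel limit
DIMMS_PER_CHANNEL_B250 = 8     # per channel, behind the ASIC
CPUS = 2

std_slots = CPUS * CHANNELS_PER_CPU * DIMMS_PER_CHANNEL_STD
b250_slots = CPUS * CHANNELS_PER_CPU * DIMMS_PER_CHANNEL_B250

print(std_slots, std_slots * DIMM_GB)    # 18 slots, 144 GB
print(b250_slots, b250_slots * DIMM_GB)  # 48 slots, 384 GB
```

The two results match the 144GB traditional ceiling and the B250’s 384GB figure quoted above.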

It’s quite an ingenious approach, but don’t get caught up in thinking about 384GB of RAM – think about 48 memory slots. In the picture below I’ve grouped the 8 DIMMs served by each ASIC in a green square (click to enlarge).

Cisco UCS B250 ASICS Grouped with 8 Memory DIMMs


With that many slots, you can get to 192GB of RAM using 4GB DDR3 RDIMMs – which currently cost about 1/5th as much as 8GB DIMMs. That’s the real value in this server.
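To see why the extra slots matter more than the headline capacity, here is a hedged cost sketch. It assumes a 4GB RDIMM costs 1 unit and an 8GB RDIMM costs 5 units (the rough “1/5th” ratio from the post; actual street prices vary):

```python
# Cost of reaching 192GB with cheap 4GB DIMMs (48 slots needed)
# vs. with 8GB DIMMs (24 slots needed) at ~5x the per-DIMM price.
price_4gb, price_8gb = 1.0, 5.0
target_gb = 192

cost_with_4gb = (target_gb // 4) * price_4gb   # 48 DIMMs
cost_with_8gb = (target_gb // 8) * price_8gb   # 24 DIMMs

print(cost_with_4gb, cost_with_8gb)  # 48.0 120.0
```

At that price ratio, the 48-slot design reaches 192GB for well under half the memory cost – which is the point of the paragraph above.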

Cisco has published a white paper on this patented technology at http://www.cisco.com/en/US/prod/collateral/ps10265/ps10280/ps10300/white_paper_c11-525300.html so if you want to get more details, I encourage you to check it out.

Cisco’s own Omar Sultan and Brian Schwarz recently blogged about Cisco’s Unified Computing System (UCS) Manager software and offered up a pair of videos demonstrating its capabilities. In my opinion, the management software of Cisco’s UCS is the magic that is going to push Cisco out of the Visionary quadrant of the Gartner Magic Quadrant for Blade Servers to the “Leaders” quadrant.

The Cisco UCS Manager is the centralized management interface that integrates the entire set of Cisco Unified Computing System components. The management software not only participates in UCS blade server provisioning, but also in device discovery, inventory, configuration, diagnostics, monitoring, fault detection, auditing, and statistics collection.

On Omar’s Cisco blog (http://blogs.cisco.com/datacenter), Omar and Brian posted two videos. Part 1 offers a general overview of the management software, whereas Part 2 highlights the capabilities of profiles.

I encourage you to check out the videos – they did a great job with them.

Previously known as “Palo”, Cisco’s virtualized adapter allows a server to split its 10Gb pipes into numerous virtual pipes (see the image below) – such as multiple NICs or multiple Fibre Channel HBAs.  Although the card shown in the image is a normal PCIe card, the initial launch of the card will be in the Cisco UCS blade server.

So, What’s the Big Deal?

When you look at server workloads, their needs vary – web servers need a pair of NICs, whereas database servers may need 4+ NICs and 2+ HBAs.  By having the ability to split the 10Gb pipe into virtual devices, you can set up profiles inside Cisco’s UCS Manager to match a specific server’s needs.  An example would be a server used for VMware VDI (6 NICs and 2 HBAs) during the day that is repurposed at night as a computational server needing only 4 NICs.
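The day/night example above is essentially a lookup of adapter counts by workload. Here is an illustrative-only sketch of that idea – these names are hypothetical and this is not the UCS Manager API:

```python
# Hypothetical per-workload adapter profiles, as described in the post.
profiles = {
    "vdi_day":       {"nics": 6, "hbas": 2},   # VMware VDI by day
    "compute_night": {"nics": 4, "hbas": 0},   # computational server by night
    "web":           {"nics": 2, "hbas": 0},
    "database":      {"nics": 4, "hbas": 2},
}

def apply_profile(server, name):
    """Pretend to carve the server's 10Gb pipe into virtual devices."""
    p = profiles[name]
    return {"server": server, "vNICs": p["nics"], "vHBAs": p["hbas"]}

print(apply_profile("blade-1", "vdi_day"))
```

Repurposing the server at night is then just applying a different profile to the same physical blade.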

Another thing to note: although the image shows 128 virtual devices, that is only the theoretical limit.  In reality, the number of virtual devices depends on the number of connections to the Fabric Interconnects.  As I previously posted, the server chassis has a pair of 4-port Fabric Extenders (aka FEX) that uplink to the UCS 6100 Fabric Interconnect.  If only 1 of the 4 ports is uplinked to the UCS 6100, then only 13 virtual devices will be available.  If 2 FEX ports are uplinked, then 28 virtual devices will be available.  If all 4 FEX uplink ports are used, then 58 virtual devices will be available.
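The uplink-to-virtual-device figures quoted above can be captured as a simple lookup. These are the numbers from the post as stated; I have not derived them independently:

```python
# Virtual devices available per number of FEX uplinks (figures from the post).
vdevs_by_uplinks = {1: 13, 2: 28, 4: 58}

def max_virtual_devices(fex_uplinks):
    try:
        return vdevs_by_uplinks[fex_uplinks]
    except KeyError:
        raise ValueError("FEX uplink count must be 1, 2, or 4") from None

print(max_virtual_devices(2))  # 28
```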

Will the ability to carve up your 10Gb pipes into smaller ones make a difference?  It’s hard to tell.  I guess we’ll see when this card starts to ship in December of 2009.

eWeek recently posted snapshots of Cisco’s Unified Computing System (UCS) Software on their site: http://www.eweek.com/c/a/IT-Infrastructure/LABS-GALLERY-Cisco-UCS-Unified-Computing-System-Software-199462/?kc=rss

Take a good look at the software, because the software is the reason this blade system will be successful: Cisco treats the physical blades as a pool of resources – just CPUs, memory and I/O.  “What” the server should be and “how” the server should act is a feature of the UCS Management software.  It will show you the physical layout of the blades to the UCS 6100 Interconnect, it can show you the configurations of the blades in the attached UCS 5108 chassis, it can set the boot order of the blades, etc.  Quite frankly, there are too many features to mention and I don’t want to steal their fire, so take a few minutes to check out the gallery at the link above.

News Flash: Cisco is now selling servers!

Okay – perhaps this isn’t news anymore, but the reality is Cisco has been getting a lot of press lately – from their overwhelming presence at VMworld 2009 to their ongoing cat fight with HP.  Since I work for a Solutions Provider that sells HP, IBM and now Cisco blade servers, I figured it might be good to “try” and put together a comparison between Cisco and IBM.  Why IBM?  Simply because at this time, they are the only blade vendor who offers a Converged Network Adapter (CNA) that will work with the Cisco Nexus 5000 line.  At this time Dell and HP do not offer a CNA for their blade server lines, so IBM is the closest we can come to Cisco’s offering.  I don’t plan on spending time educating you on blades, because if you are interested in this topic, you’ve probably already done your homework.  My goal with this post is to show the pros (+) and cons (-) that each vendor has with their blade offering – based on my personal, neutral observations.

Chassis Variety / Choice: winner in this category is IBM. 
IBM currently offers 5 types of blade chassis: BladeCenter S, BladeCenter E, BladeCenter H, BladeCenter T and BladeCenter HT.   Each of the IBM blade chassis has unique offerings: the BladeCenter S is designed for small or remote offices with local storage capabilities, whereas the BladeCenter HT is designed for Telco environments with options for NEBS-compliant features, including DC power.  At this time, Cisco offers only a single blade chassis (the UCS 5108).

IBM BladeCenter H


 
Cisco UCS 5108 


Server Density and Server Offerings: winner in this category is IBM.  IBM’s BladeCenter E and BladeCenter H chassis offer up to 14 blade servers, with servers using Intel, AMD and PowerPC processors.  In comparison, Cisco’s UCS 5108 chassis offers up to 8 server slots and currently offers servers with Intel Xeon processors only.  As an honorable mention, Cisco does offer a “full-width” blade (the Cisco UCS B250 server) that provides up to 384GB of RAM in a single blade server across 48 memory slots, offering the ability to reach higher memory at a lower price point.

 Management / Scalability: winner in this category is Cisco. 
This is where Cisco is changing the blade server game.  The traditional blade server infrastructure calls for each blade chassis to have its own dedicated management module to access the chassis’ environmentals and to remotely control the blade servers.  As you grow your blade chassis environment, you end up managing more and more of these separate management points.
Beyond the ease of management, the software on the Cisco 6100 series gives users the ability to manage server service profiles, which consist of things like MAC addresses, NIC firmware, BIOS firmware, WWN addresses and HBA firmware (just to name a few).
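To make the service profile idea concrete, here is a hedged sketch of the kind of identity a profile carries, per the list above. The field names and values are mine for illustration, not the UCS Manager schema:

```python
# Hypothetical service profile: the server's identity lives in software,
# not on the physical blade.
service_profile = {
    "name": "esx-host-01",
    "mac_addresses": ["00:25:B5:00:00:01", "00:25:B5:00:00:02"],
    "wwn_addresses": ["20:00:00:25:B5:00:00:01"],
    "nic_firmware": "1.0(2)",
    "hba_firmware": "1.0(2)",
    "bios_firmware": "S5500.1.2",
}

def associate(profile, blade_slot):
    """Moving a workload is just re-associating the profile with a blade."""
    return dict(profile, blade=blade_slot)

print(associate(service_profile, "chassis-1/slot-3")["blade"])
```

Because the MACs, WWNs and firmware policy travel with the profile, a failed blade can be replaced without reconfiguring the network or SAN.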

Cisco UCS 6100 Series Fabric Interconnect


With Cisco’s UCS 6100 Series Fabric Interconnects, you are able to manage up to 40 blade chassis with a single pair of redundant UCS 6140XP units (each with 40 ports).

If you are familiar with the Cisco Nexus 5000 product, then understanding the role of the Cisco UCS 6100 Fabric Interconnect should be easy.  The UCS 6100 Series Fabric Interconnects do for the Cisco UCS servers what the Nexus does for other servers: unify the fabric.   HOWEVER, it’s important to note the UCS 6100 Series Fabric Interconnect is NOT a Cisco Nexus 5000.  The UCS 6100 Series Fabric Interconnect is only compatible with UCS servers.


Cisco UCS I/O Connectivity Diagram (UCS 5108 Chassis with 2 x 6120 Fabric Interconnects)

If you have other servers, with CNAs, then you’ll need to use the Cisco Nexus 5000.   

The diagram on the right shows a single connection from the FEX to the UCS 6120XP; however, the FEX has 4 uplinks, so if you want (or need) more throughput, you can have it.  This design provides each half-width Cisco B200 server with 2 CNA ports with redundant pathways.  If you are satisfied with using a single FEX connection per chassis, then you have the ability to scale up to 20 blade chassis with a Cisco UCS 6120 Fabric Interconnect, or 40 chassis with the Cisco UCS 6140 Fabric Interconnect.  As hinted in the previous section, the management software for all connected UCS chassis resides in the redundant Cisco UCS 6100 Series Fabric Interconnects.   This design offers a highly scalable infrastructure that enables you to grow simply by dropping in a chassis and connecting the FEX to the 6100 switch.  (Kind of like Lego blocks.)
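The chassis-count arithmetic above is just interconnect ports divided by FEX uplinks per chassis. This sketch ignores any ports a real design would reserve for LAN/SAN uplinks, so treat it as an upper bound:

```python
# Upper bound on chassis per fabric interconnect:
# total ports // FEX uplinks consumed per chassis.
def max_chassis(interconnect_ports, uplinks_per_chassis):
    return interconnect_ports // uplinks_per_chassis

print(max_chassis(20, 1))  # UCS 6120 (20 ports), 1 uplink per chassis -> 20
print(max_chassis(40, 1))  # UCS 6140 (40 ports), 1 uplink per chassis -> 40
print(max_chassis(40, 4))  # UCS 6140 with all 4 FEX uplinks used -> 10
```

Note the trade-off: using all 4 FEX uplinks for bandwidth cuts the number of chassis a pair of interconnects can manage.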

 

On the flip side, while this architecture is simple, it’s also limited.  There is currently no way to add additional I/O to an individual server.  You get 2 x CNA ports per Cisco B200 server or 4 x CNA ports per Cisco B250 server. 

As previously mentioned, IBM has a strategy that is VERY similar to the Cisco UCS strategy using the Cisco Nexus 5000 product line.  IBM’s solution consists of:

  • IBM BladeCenter H Chassis
  • 10Gb Pass-Thru Module
  • CNAs on the blade servers

 

Until IBM and Cisco design a Cisco Nexus switch that integrates into the IBM BladeCenter H chassis, using a 10Gb pass-thru module is the best option to get true Data Center Ethernet (or Converged Enhanced Ethernet) from the server to the Nexus switch.  The performance of the IBM solution should equal the Cisco UCS design, since it’s just passing the signal through; however, the connectivity cost is going to be higher with the IBM solution.  Passing signals through means NO cable consolidation – for every server you’re going to need a connection to the Nexus 5000.  For a fully populated IBM BladeCenter H chassis, you’ll need 14 connections to the Cisco Nexus 5000.  If you are using the Cisco Nexus 5010 (20 ports), you’ll eat up all but 6 ports.  Add a 2nd IBM BladeCenter chassis and you’re buying more Cisco Nexus switches.  Not quite the scalable design that the Cisco UCS offers.

BladeCenter H Diagram with Nexus 5010 (using 10Gb Pass-thru Modules)
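The port math above is worth spelling out, since it drives the scalability complaint:

```python
# Pass-thru design: one Nexus port consumed per blade.
NEXUS_5010_PORTS = 20
BLADES_PER_BCH = 14   # fully populated BladeCenter H

free_after_one_chassis = NEXUS_5010_PORTS - BLADES_PER_BCH
print(free_after_one_chassis)                  # 6 ports left
print(2 * BLADES_PER_BCH > NEXUS_5010_PORTS)   # True: a 2nd chassis overflows the switch
```

One chassis nearly fills a Nexus 5010; a second chassis forces the purchase of another switch, which is the contrast with the UCS FEX design above.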

IBM offers a 10Gb Ethernet switch option from BNT (Blade Network Technologies) that will work with converged switches like the Nexus 5000, but at this time that upgrade is not available.  Once it becomes available, it would reduce the connectivity requirement down to a single cable per uplink; however, adding a switch between the blade chassis and the Nexus switch could bring additional management complications.  That remains to be seen.

 

IBM’s BladeCenter H (BCH) does offer something Cisco doesn’t – additional I/O expansion.  Since this solution uses two of the high-speed bays in the BCH, bays 1, 2, 3 & 4 remain available.  Bays 1 & 2 are mapped to the onboard NICs on each server, and bays 3 & 4 are mapped to the 1st expansion card on each server.  This means that 2 additional NICs and 2 additional HBAs (or NICs) could be added alongside the 2 CNAs on each server.  Based on this, IBM potentially offers more I/O scalability.
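As a reading aid, the bay mapping described above can be summarized like this (a sketch of the post’s description, not IBM’s official spec):

```python
# I/O available per blade in this BladeCenter H design, per the paragraph above.
bch_io_per_blade = {
    "high-speed bays (in use)": 2,  # CNA ports via the 10Gb pass-thru
    "bays 1 & 2": 2,                # onboard NICs
    "bays 3 & 4": 2,                # 1st expansion card: 2 NICs or 2 HBAs
}

total_ports_per_blade = sum(bch_io_per_blade.values())
print(total_ports_per_blade)  # 6 ports per blade, vs. 2 on a Cisco B200
```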

 

And the Winner Is…

It depends.  I love the concept of the Cisco UCS platform.  Servers are seen as processors and memory – building blocks that are centrally managed.  Easy to scale, easy to size.  However, is it for the average datacenter that only needs 5 servers with high I/O?  Probably not.  I see the Cisco UCS as a great platform for datacenters with more than 14 servers needing high I/O bandwidth (like virtualization or database servers).  If your datacenter doesn’t need that type of scalability, then perhaps IBM’s BladeCenter solution is the choice for you.  Going the IBM route gives you the flexibility to choose from multiple processor types and the ability to scale into a unified solution in the future.  That said, the IBM solution is currently more complex and potentially more expensive than the Cisco UCS solution.

Let me know what you think.  I welcome any comments.