You are currently browsing the category archive for the ‘HP’ category.

Last week at VMworld 2010 I had the opportunity to get some great pictures of HP’s and Dell’s newest blade servers: the HP ProLiant BL620 G7, the HP ProLiant BL680 G7, and the Dell PowerEdge M610X and M710HD. These newest blade servers are exciting offerings from HP and Dell, so I encourage you to take a few minutes to look. Read the rest of this entry »


The Venetian Hotel and Casino Data Center

They make it look so complicated in the movies: detailed covert operations to hack into a casino’s mainframe, preceded by weeks of staged rehearsals. But I’m here to tell you it’s much easier than that.

This is my story of how I had 20 seconds of complete access to The Venetian Casino’s data center, and lived to tell about it.

Read the rest of this entry »

Along with the Intel blade server announcements, on Tuesday HP also announced two new AMD-based blades, the BL465c G7 and BL685c G7. I originally viewed these as a refresh of HP’s existing AMD blade servers, but while at the HP Tech Forum in Las Vegas I found a few interesting facts. Let’s take a look. Read the rest of this entry »

(Updated 6/22/2010, 1:00 a.m. Pacific, with updated BL620 image and 2nd Switch pic)

As expected, HP today announced new blade servers for its BladeSystem lineup as well as a new converged switch for its chassis. Everyone expected updates to the BL460, the BL490, the BL2x220c and even the BL680 blade servers, but the BL620 G7 blade server was a surprise (at least to me). Before I highlight the announcements, I have to preface this by saying I don’t have a lot of technical information yet. I attended the press conference at HP Tech Forum 2010 in Las Vegas, but I didn’t get the press kit in advance. I’ll update this post with links to the spec sheets as they become available.

The Details

First up- the BL620 G7

The BL620 G7 is a full-height blade server with 2 CPU sockets designed to handle the Intel Xeon 7500 (and possibly the 6500) series CPUs. It has 32 memory DIMM slots, 2 hot-plug hard drive bays and 3 mezzanine expansion slots.

HP ProLiant BL620 Blade Server


BL680 G7
The BL680 G7 is an upgrade to the previous generation; however, the 7th generation is a double-wide server. This design allows up to 4 of these servers in a c7000 BladeSystem chassis. This server’s claim to fame is that it will hold 1 terabyte (1TB) of RAM. To put this into perspective, the Library of Congress’s entire library is 6TB of data, so you could put the entire library on 6 of these BL680 G7s!
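Taking that 6TB figure at face value, the back-of-the-envelope math is simple (a quick sketch; the 1TB-per-blade and 4-blades-per-chassis numbers come from the announcement above):

```python
import math

ram_per_blade_tb = 1    # BL680 G7 maximum RAM, per the announcement
blades_per_chassis = 4  # double-wide blades in a c7000 chassis
library_tb = 6          # the oft-quoted Library of Congress figure

blades_needed = math.ceil(library_tb / ram_per_blade_tb)
chassis_needed = math.ceil(blades_needed / blades_per_chassis)
print(blades_needed, chassis_needed)  # 6 blades, spread across 2 chassis
```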

HP BL680 G7 Blade Server

“FlexFabric” I/O Onboard
Each of the Generation 7 (G7) servers comes with “FlexFabric” I/O on the motherboard of the blade server. These are the new NC551i Dual Port FlexFabric 10Gb Converged Network Adapters (CNAs) that support stateless TCP/IP offload, TCP Offload Engine (TOE), Fibre Channel over Ethernet (FCoE) and iSCSI protocols.

Virtual Connect FlexFabric 10Gb/24-Port Module
The final “big” announcement on the blade server front is a converged fabric switch that fits inside the blade chassis. Called the Virtual Connect FlexFabric 10Gb/24-Port Module, it is designed to let you split the Ethernet fabric and the Fibre Channel fabric at the switch module level, inside the blade chassis INSTEAD OF at the top-of-rack switch. You may recall that I previously blogged about this as a rumour; now it is official.

The image on the left was my rendition of what it would look like.

And here are the actual images.

HP Virtual Connect FlexFabric 10Gb/24-Port Module


HP believes that converged technology is good at the edge of the network, but that it is not yet mature enough for the datacenter core (not ready for multi-hop, end-to-end). When the technology is acceptable and mature and business needs dictate, they’ll offer a converged network offering to the datacenter core.

What do you think about these new blade offerings? Let me know, I always enjoy your comments.

Disclaimer: airfare, accommodations and some meals are being provided by HP; however, the content being blogged is solely my opinion and does not in any way express the opinions of HP.

NOTE: IDC revised their report on May 28, 2010. This post now includes those changes.

IDC reported on May 28, 2010 that worldwide server factory revenue increased 4.7% year over year to $10.4 billion in the first quarter of 2010 (1Q10). They also reported that the blade server market accelerated and continued its sharp growth in the quarter, with factory revenue increasing 37.2% year over year and shipment growth increasing by 20.8% compared to 1Q09. According to IDC, nearly 90% of all blade revenue is driven by x86 systems, a segment in which blades now represent 18.8% of all x86 server revenue.

While the press release did not provide details of the market share for all of the top 5 blade vendors, they did provide data for the following:

#1 market share: HP increased their market share from 52.4% in Q4 2009 to 56.2% in Q1 2010.

#2 market share: IBM decreased their market share from 35.1% in Q4 2009 to 23.6% in Q1 2010.

The remaining 20.2% of market share was not mentioned, but I imagine it is split between Dell and Cisco. In fact, given that Cisco was not even mentioned in the IDC report, I’m willing to bet a majority of that share belongs to Dell. I’m working on getting some clarification on that (if you’re with Dell or Cisco and can help, please shoot me an email.)
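That remainder is just the arithmetic left over after the two reported shares:

```python
hp_share = 56.2   # Q1 2010 blade market share, per IDC
ibm_share = 23.6  # Q1 2010 blade market share, per IDC
others = round(100 - hp_share - ibm_share, 1)
print(others)  # 20.2 -> the unattributed share, presumably Dell and Cisco
```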

According to Jed Scaramella, senior research analyst in IDC’s Datacenter and Enterprise Server group, “In the first quarter of 2009, we observed a lot of business in the mid-market as well as refresh activity of a more transactional nature; these factors have driven x86 rack-based revenue to just below 1Q08 value. Blade servers, which are more strategic in customer deployments, continue to accelerate in annual growth rates. The blade segment fared relatively well during the 2009 downturn and have increased revenue value by 13% from the first quarter of 2008.”

For the full IDC report covering the Q1 2010 Worldwide Server Market, please visit the revised press release at http://www.idc.com/getdoc.jsp?containerId=prUS22360110 (the original release was at http://www.idc.com/getdoc.jsp?containerId=prUS22356410).

What if you could run a graphics-intensive application, like CAD, from Chicago while you were sitting in Atlanta? What if you could work on a multi-million dollar animated movie feature from the luxury of your home? These and more could be possible with the HP WS460c G6 Workstation Blade.

The HP WS460c G6 Workstation Blade technically isn’t a new product. Inside, it holds similar features to the BL460c G6 blade server. It has the same form factor and uses the same processors, memory, storage and certain mezzanine cards. In fact, the mezzanine card is where the difference really lies. What’s the difference between a “workstation” and a “server”? Two things: the operating system and the graphics cards. Traditionally, workstations use desktop operating systems and require very intensive graphics adapters with dedicated graphics processing units (GPUs) and dedicated graphics memory. With this in mind, HP designed the WS460c G6 Workstation Blade to both run a desktop O/S and support graphics adapters designed to handle heavy graphics workloads. HP also designed an expansion unit, called the WS460c G6 Graphics Expansion Blade, to enable the WS460c to handle the same full-size graphics adapters found in workstation desktops. Let’s take a quick look at the WS460c Workstation Blade first.

Workstation Details
Processor: Up to two (2) Intel® Xeon® 5500 or 5600 Series processors
Memory: Twelve (12) DIMM slots; up to 192GB
Storage Controller: HP Smart Array P410i Controller (RAID 0/1) with optional 256MB or 512MB Battery-Backed Write Cache (BBWC)
Internal Drive Support: Up to two (2) small form factor (SFF) SAS hot-plug hard disk drives
Network Controller: Embedded NC532i Dual Port Flex-10 10GbE Multifunction Server Adapter – note: the device driver will only support 1Gbps speed and Flex-10 is not supported at this time; requires 1Gbps-only interconnect switches in the enclosure
Mezzanine Support: Two (2) I/O expansion or graphics adapter mezzanine slots to support a graphics adapter mezzanine (NVIDIA Quadro FX 770M, FX 880M, FX 2800M or FX 3600M) or dual-port Fibre Channel mezzanine options for SAN connectivity (choice of Emulex or QLogic)

Graphic Card Options
Professional 2D & 3D graphics with hardware acceleration via graphics subsystem

  • NVIDIA Quadro FX 770M (256MB) graphics single-card kit
  • NVIDIA Quadro FX 770M (256MB) graphics dual-card kit
  • NVIDIA Quadro FX 770M (512MB) graphics single-card kit
  • NVIDIA Quadro FX 880M (1GB) graphics single-card kit
  • NVIDIA Quadro FX 2800M (1GB) graphics single-card kit
  • NVIDIA Quadro FX 3600M (512MB) graphics single-card kit
  • NVIDIA Quadro FX 5800 (4.0GB) graphics kit – supported on HP WS460c G6 Graphics Expansion Blade only
  • NVIDIA Quadro FX 4800 (1.5GB) graphics kit – supported on HP WS460c G6 Graphics Expansion Blade only
  • NVIDIA Quadro FX 3800 (1.0GB) graphics kit – supported on HP WS460c G6 Graphics Expansion Blade only

Supported Operating Systems

1. Windows Vista® Business Blade PC Edition with 1 RDL (Remote Desktop License), 32-bit— A custom-installed downgrade to Windows® XP Professional 32-bit SP2 can be ordered; Windows XP Pro SP2 is the only operating system that can currently be ordered factory-installed.

2. Windows® XP Professional x64 Edition— This OS is obtained from Microsoft, often through the customer’s volume licensing agreement.

3. Windows Vista Business Blade PC Edition, 32-bit version— Recovery media for this OS is included with the blade workstation.

4. Windows Vista Business Blade PC Edition, 64-bit version— Recovery media for this OS can be obtained from HP.

5. Red Hat Enterprise Linux® 4.5 (and later), 64-bit— This OS is acquired by the customer from Red Hat, while HP provides the required Linux drivers.

6. Red Hat Enterprise Linux 5.2 (and later), 64-bit— This OS is acquired by the customer from Red Hat, while HP provides the required Linux drivers.

Form Factor
The HP ProLiant WS460c G6 and WS460c G6 Graphics Expansion Blade are both half-height server blades that plug into the HP BladeSystem c3000 and c7000 enclosures. When the Graphics Expansion Blade is used with the WS460c G6 (shown on the right), the pair takes up two bays, so the maximum density per enclosure is reduced. As a side note, HP does have a 2nd workstation blade, the HP ProLiant xw2x220c Blade Workstation, which offers two workstation nodes per blade; however, it only has a Xeon 5400 processor, so I don’t see it sticking around unless HP does a technology refresh, at which time I’ll post an update.

So, at this point, you may be thinking – there’s a workstation blade, which sits in the HP BladeSystem c3000 or c7000 enclosure, but how do you use it? This is where the value of HP comes to light. The HP WS460c G6 Workstation Blade is just a small piece of the overall puzzle. There are a few other components needed to make it a “complete workstation solution.” Let’s take a look at what this overall solution looks like.

In summary, the graphics are compressed at the workstation blade, then sent, over Ethernet, to the client, which then decompresses the graphics signal and displays it on the monitor. Keyboard and mouse movements are captured and sent back over Ethernet to the workstation blade and the cycle repeats. (Click on the image for a larger view.)
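As a rough illustration of that round trip, here is a minimal sketch in Python. It uses zlib purely as a stand-in for RGS’s actual codec; the frame data and function names are invented for illustration only:

```python
import zlib

def render_frame(width, height):
    # Stand-in for the framebuffer rendered on the workstation blade's GPU
    return bytes((x * y) % 256 for y in range(height) for x in range(width))

def blade_compress(frame):
    # The blade compresses each rendered frame before sending it over Ethernet
    return zlib.compress(frame)

def client_decompress(payload):
    # The client decompresses the stream and would blit it to the monitor
    return zlib.decompress(payload)

frame = render_frame(64, 48)
payload = blade_compress(frame)          # travels blade -> client over Ethernet
displayed = client_decompress(payload)   # keyboard/mouse events travel back
assert displayed == frame                # the user sees exactly what was rendered
```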

As you can see, there are some extra pieces required:

  1. HP Remote Graphics Software (RGS) – this software handles compression/decompression of the graphics between the blade device and the client device. For more on this software, check out this HP whitepaper (Adobe PDF).
  2. A client device that can work with the software. While nearly any PC will work, HP recommends the HP gt7725 Thin Client – an HP thin client device with an AMD Turion Dual Core 2.3 GHz processor, 2GB of memory, 1GB of flash memory and RGS factory-installed.

Advantages of Running Workstation Blades
At this point, you may be asking, what’s the purpose of running workstations on blades? Why wouldn’t you just buy desktop workstations? Well, there are a few reasons to use a workstation blade environment.

  • Security – if the workstations are on blades, the data resides in the datacenter, where it is protected from the security exposures of local drives and USB ports, as well as from system theft or loss.
  • Multi-location flexibility – the design of the workstation blade + client device enables the user to be local or remote. This provides unprecedented flexibility to work where you need to be, not where you have to be.
  • Multi-user access – the workstation blades can be dedicated to an individual user, or they can be shared across users. This also allows a single user to have multiple workstations, a feature that is very costly with traditional desktop workstations.

Let me know your thoughts of the HP WS460c G6 Workstation Blade. Are you using it, or do you know anyone using it? What recommendations would you offer to HP for future workstation blades?

I heard a rumour on Friday that HP has been chosen by another animated movie studio to provide the blade servers to render an upcoming movie. To recount the movies that have used / are using HP blades:

So, as I look at the vast number of movies that have chosen HP for their blade server technology, I have to wonder WHY. HP does have some advantages in the blade marketplace, like market share, but when you compare HP with Dell, you would be surprised at how similar the offerings are:

When you compare the two offerings, HP wins in a few categories, like the ability to have up to 32 CPUs in a single blade chassis (a valuable feature for rendering, accomplished with the HP BL2x220c blade servers). However, Dell shines in some areas, too. Look at their ability to run 512GB of memory on a 2-CPU server using FlexMem Bridge technology. From a pure technology comparison (taking management and I/O out of the equation), I see Dell offering very similar products to HP, and I have to wonder why Dell has not been able to get any movie companies to use Dell blades. Perhaps it’s not a focus of Dell’s marketing. Perhaps it is because HP has a history of movie processing on HP workstations. Perhaps movie companies need 32 CPUs in a chassis. I don’t know. I welcome any comments from Dell or HP, but I’d also like to know: what do you think? Let me know in the comments below.

Wow. HP continues its “blade everything” campaign from 2007 with a new offering today: HP Integrity Superdome 2 on blade servers. Touting a message of a “Mission-Critical Converged Infrastructure,” HP is combining the mission-critical Superdome 2 architecture with its successful, scalable blade architecture. According to HP’s datasheet for the Integrity Superdome, “The Integrity Superdome is an ideal system to handle online transaction processing (OLTP), data mining, customer relationship management, enterprise resource planning, database hosting, telecom billing, human resources, financial applications, data warehousing and high-performance computing.” In other words, it’s a mission-critical workhorse. Previous generations of the HP Integrity Superdome required a dedicated enclosure, but with this new announcement HP will enable users to combine Superdome, Itanium and x86 servers all in the same rack.

Superdome 2 Modular Architecture

The Chassis

The HP Superdome 2 blade will use a new blade chassis exclusive to the HP Superdome 2 architecture. From what I can tell, this new blade chassis will hold 12 power supplies and up to 8 HP Integrity Superdome 2 blade servers. It also appears the new chassis will offer an Insight Display like the HP BladeSystem c3000 and c7000. Based on the naming schema, I’m willing to bet HP will call this the HP BladeSystem c9000 chassis once it is released later this year, but for now it’s known as the “SD2 Enclosure”.

The server

Based on the Intel Itanium 9300, the blade server HP has chosen to name the “HP CB900 i2” cell blade. Little is known about the specs for this server, but HP has announced it will be offered in 8- and 16-socket “building blocks.”

I/O Expanders

Adding to the scalability, HP will also offer “I/O Expanders” for the CB900s to access. Traditionally, I/O expanders provide the PCI Express slots for servers to access. HP’s approach of breaking the I/O slots out into a dedicated chassis provides large I/O expansion, even for blade servers. Again, HP has not provided the specifics of this component, but I expect HP to release this info when the server becomes available later this year.

The Value

Why blades? The blade architecture HP has built is modular, so users can build a Superdome platform once and grow it as needed by adding I/O and cells (blade servers). Looking at the bigger picture, by using the already established blade architecture, HP is able to create a “common platform” so everything from workstations to mission-critical systems can use the same design.

Using a common platform means lower manufacturing cost for HP and faster time-to-market for future products. Not only will common chassis components (power supplies, etc.) be shared between the x86 and Superdome 2 offerings, but there will be common networking and common management as well.

Also, by using the blade architecture, the HP Integrity Superdome 2 blades can take advantage of “FlexFabric”, part of HP’s Converged Infrastructure messaging that references the flexibility of granular control over your Ethernet and Fibre Channel environment with components like Virtual Connect.

While I have doubts about the future of Intel’s Itanium 9300 processor, I commend HP for this Superdome 2 announcement. It will be interesting to see how it is adopted. Let me know what your thoughts are in the comments section below.

Chalk yet another win up for HP.

It was reported last week on www.itnews.com.au that digital production house Dr. D. Studios is in the early stages of building a supercomputer grid cluster for rendering the animated feature film Happy Feet 2 and the visual effects in Fury Road, the long-anticipated fourth film in the Mad Max series. The supercomputer grid, based on HP BL490 G6 blade servers housed within an APC HACS pod, is already running in excess of 1,000 cores and is expected to reach over 6,000 cores during peak rendering by mid-2011.

This cluster boasted 4096 cores, taking it into the top 100 on the list of Top 500 supercomputers in the world in 2007 (it now sits at 447).

According to Doctor D infrastructure engineering manager James Bourne, “High density compute clusters provide an interesting engineering exercise for all parties involved. Over the last few years the drive to virtualise is causing data centres to move down a medium density path.”

Check out the full article, including video at:
http://www.itnews.com.au/News/169048,video-building-a-supercomputer-for-happy-feet-2-mad-max-4.aspx

InfoWorld.com posted on 3/22/2010 the results of a blade server shoot-out between Dell, HP, IBM and Super Micro. I’ll save you some time and help summarize the results of Dell, HP and IBM.

The Contenders
Dell, HP and IBM each provided blade servers with the Intel Xeon X5670 2.93GHz CPUs and at least 24GB of RAM in each blade.

The Tests
InfoWorld designed a custom suite of VMware tests as well as several real-world performance metric tests. The VMware tests were composed of:

  • a single large-scale custom LAMP application
  • a load-balancer running Nginx
  • four Apache Web servers
  • two MySQL servers

InfoWorld designed the VMware workloads to mimic a real-world Web app usage model that included a weighted mix of static and dynamic content and randomized database updates, inserts, and deletes, with load generated at specific concurrency levels, starting at 50 concurrent connections and ramping up to 200. InfoWorld ran the VMware tests first on one blade server, then across two blades. Each blade under test ran VMware ESX 4 and was controlled by a dedicated vCenter instance.
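A toy version of that ramp-and-mix logic might look like the sketch below. The endpoint names and weights are my own invention, purely to illustrate the shape of the test, not InfoWorld’s actual harness:

```python
import random

def concurrency_ramp(start=50, stop=200, step=50):
    # Ramp the concurrent-connection count as described: 50 up to 200
    return list(range(start, stop + 1, step))

def pick_request(rng):
    # Weighted mix of static/dynamic content and randomized database writes
    kinds = ["static_page", "dynamic_page", "db_update", "db_insert", "db_delete"]
    weights = [50, 30, 10, 5, 5]  # invented weights, for illustration
    return rng.choices(kinds, weights=weights, k=1)[0]

rng = random.Random(42)
for level in concurrency_ramp():
    # In a real harness each "request" would be an HTTP call against the blades
    batch = [pick_request(rng) for _ in range(level)]
    print(level, len(batch))
```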

The other real-world tests included several tests of common single-threaded tasks run simultaneously at levels that met and eclipsed the logical CPU count on each blade, running all the way up to an 8x oversubscription of physical cores. These tests included:

  • LAME MP3 conversions of 155MB WAV files
  • MP4-to-FLV video conversions of 155MB video files
  • gzip and bzip2 compression tests
  • MD5 sum tests

The Results

Dell
Dell did very well, coming in 2nd in overall scoring. The blades used in this test were Dell PowerEdge M610 units, each with two 2.93GHz Intel Westmere X5670 CPUs, 24GB of DDR3 RAM, and two Intel 10G interfaces to two Dell PowerConnect 8024 10G switches in the I/O slots on the back of the chassis.

Some key points made in the article about Dell:

  • Dell does not offer a lot of “blade options.” There are several models available, but they are the same type of blades with different CPUs. Dell does not currently offer any storage blades or virtualization-centric blades.
  • Dell’s 10Gb design does not offer any virtualized network I/O. The 10G pipe to each blade is just that, a raw 10G interface: no virtual NICs.
  • The new CMC (Chassis Management Controller) is a highly functional and attractive management tool offering new capabilities like pushing actions, such as BIOS updates and RAID controller firmware updates, to multiple blades at once.
  • Dell has implemented more efficient dynamic power and cooling features in the M1000e chassis. Such features include the ability to shut down power supplies when the power isn’t needed, or to ramp the fans up and down depending on load and the location of that load.

According to the article, “Dell offers lots of punch in the M1000e and has really brushed up the embedded management tools. As the lowest-priced solution…the M1000e has the best price/performance ratio and is a great value.”

HP
Coming in at 1st place, HP continues to shine in blade leadership. HP’s test equipment consisted of a c7000 chassis with nine BL460c blades, each running two 2.93GHz Intel Xeon X5670 (Westmere-EP) CPUs and 96GB of RAM, as well as embedded 10G NICs with a dual 1G mezzanine card. Notably, HP was the only server vendor with 10G NICs on the motherboard. Some key points made in the article about HP:

  • With the 10G NICs standard on the newest blade server models, InfoWorld says “it’s clear that HP sees 10G as the rule now, not the exception.”
  • HP’s embedded Onboard Administrator offers detailed information on all chassis components from end to end. For example, HP’s management console can provide exact temperatures of every chassis or blade component.
  • HP’s console cannot perform global BIOS and firmware updates (unlike Dell’s CMC) or power more than one blade up or down at a time.
  • HP offers “multichassis management” – the ability to daisy-chain several chassis together and log into any of them from the same screen as well as manage them. This appears to be a unique feature to HP.
  • The HP c7000 chassis also has power-controlling features, like dynamic power saving options that will automatically turn off power supplies when the system energy requirements are low, or increase the fan airflow to only those blades that need it.
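Conceptually, that kind of dynamic power saving is a small control loop: keep only as many supplies active as the load (plus redundancy) requires. Here is a hypothetical sketch with made-up wattage numbers, not HP’s or Dell’s actual algorithm:

```python
import math

def active_psus(load_watts, psu_capacity_watts=1200, redundant=1, installed=6):
    # Number of supplies the current load actually needs (always at least one)...
    needed = max(1, math.ceil(load_watts / psu_capacity_watts))
    # ...plus N redundant supplies, capped at what's installed in the chassis
    return min(installed, needed + redundant)

print(active_psus(800))    # light load: 2 supplies (1 + 1 redundant)
print(active_psus(4000))   # heavy load: 5 supplies (4 + 1 redundant)
```

The idle supplies stay powered off until load rises, which is where the efficiency gain comes from; running fewer supplies closer to capacity keeps each one in a more efficient part of its load curve.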

InfoWorld’s final thoughts on HP: “the HP c7000 isn’t perfect, but it is a strong mix of reasonable price and high performance, and it easily has the most options among the blade system we reviewed.”

IBM
Finally, IBM came in at 3rd place, missing a tie with Dell by a small fraction. Surprisingly, I was unable to find the details of IBM’s test configuration. I’m not sure if I just missed it or if InfoWorld left the information out, but I know IBM’s blade server had the same Intel Xeon X5670 CPUs that Dell and HP used. Some of the points InfoWorld mentioned about IBM’s BladeCenter H offering:

  • IBM’s pricing is higher.
  • IBM’s chassis only holds 14 servers, whereas HP can hold 32 servers (using BL2x220c servers) and Dell holds 16 servers.
  • IBM’s chassis doesn’t offer a heads-up display (like HP’s and Dell’s do).
  • IBM had the only redundant internal power and I/O connectors on each blade. It is important to note that this lack of redundant power and I/O connectors is why HP’s and Dell’s densities are higher; if you want redundant connections on each blade with HP or Dell, you’ll need to use their “full-height” servers, which decreases their overall capacity to 8.
  • IBM’s Management Module is lacking graphical features – there’s no graphical representation of the chassis or any images. From personal experience, IBM’s management module looks like it’s stuck in the ’90s – very text-based.
  • The IBM BladeCenter H lacks dynamic power and cooling capabilities. Instead of using smaller independent regional fans for cooling, IBM uses two blowers. Because of this, the ability to reduce cooling in specific areas, as Dell and HP offer, is lacking.

InfoWorld summarizes the IBM results saying, “if you don’t mind losing two blade slots per chassis but need some extra redundancy, then the IBM BladeCenter H might be just the ticket.”

Overall, each vendor has its own pros and cons. InfoWorld does a great job summarizing the benefits of each offering, so please make sure to visit the InfoWorld article and read all the details of their blade server shoot-out.