LightTrends


Intel Buys Cray's Interconnect Business Unit - Why?


Intel (NASDAQ: INTC) is buying Cray's (NASDAQ: CRAY) high-speed networking assets, adding to the recent flurry of acquisitions—Oclaro (NASDAQ: OCLR) merging with Opnext (NASDAQ: OPXT) and Sumitomo Electric Device Innovations USA, Inc. (SEDU) buying Emcore's (NASDAQ: EMKR) parallel optics assets. The deal, set to close in June for $140 million, will strengthen Intel's position in traditional data centers through investment in high-performance computing (HPC) technologies.

What does Intel's purchase mean for the optical transceiver and copper cabling business going forward? Having already entered the optical transceiver business once and exited, Intel is now inching back into the interconnect space through acquisitions, its silicon photonics group, and its Thunderbolt efforts at the consumer level. With each acquisition and product announcement, the puzzle pieces of Intel's strategy begin to fall into place.

The current deal includes Cray's Gemini technology and the Aries interconnect products being developed for Cray's next-generation Cascade supercomputer, which is based on Intel Xeon microprocessors. Gemini is an interconnect scheme capable of transferring tens of millions of messages per second and is designed for multicore systems—Intel's forte. The Aries chip is part of the new Cascade HPC system for the Defense Advanced Research Projects Agency's (DARPA) HPC program. Aries enables hundreds of thousands of x86 cores and hundreds of teraflops to be aggregated into a couple hundred cabinets of servers. The Aries interconnect does not use the AMD HyperTransport or Intel QuickPath Interconnect bus schemes but rather the PCI Express bus, which Intel also developed. LightCounting believes 12-channel active optical cable (AOC) products are involved with this technology.

Intel has been rapidly redesigning its business strategy to address smartphones and tablets, where it has virtually no play; ARM microprocessors currently dominate that market. Low-power ARM microprocessors, sometimes arrayed with as many as 600 cores per device, have been threatening Intel's business in the data center, albeit in a small way. Recently, Intel has become significantly more aggressive in protecting its dominant position in the data center, and its latest investment in HPC technologies is part of this strategy.

Why InfiniBand?

Ethernet is a protocol capable of data center reaches as well as long-haul telecom links circling the globe. InfiniBand is all about low latency in processing the protocol stack. InfiniBand is primarily used to connect servers to memory systems where very low latency over short reach is needed, such as inside a computer. InfiniBand has an end-to-end latency of less than 1 µs, whereas Ethernet's is about 2.5 to 4 µs, depending on whom you talk to and their vested interests. Linking computers, large memory subsystems, PCI Express flash, and SSD storage all draw on technologies squarely within Intel's skill set.
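To see why that latency gap matters, the back-of-the-envelope sketch below (in Python) compares the cumulative latency overhead of a tightly coupled job under the round-number figures cited above. The message count is a hypothetical assumption for illustration, not a measured workload.

    # Illustrative comparison of per-message latency overhead, using the
    # figures cited above: InfiniBand ~1 us, Ethernet ~2.5-4 us end to end.
    # The message count is a hypothetical assumption, not measured data.
    MESSAGES = 10_000_000  # small-message exchanges in a tightly coupled job

    LATENCY_US = {
        "InfiniBand": 1.0,
        "Ethernet (low end)": 2.5,
        "Ethernet (high end)": 4.0,
    }

    for fabric, lat_us in LATENCY_US.items():
        total_s = MESSAGES * lat_us * 1e-6  # seconds spent purely on latency
        print(f"{fabric:20s}: {total_s:5.1f} s of cumulative latency")

Even at these coarse numbers, the Ethernet cases spend two and a half to four times as long simply waiting on the fabric—exactly the overhead HPC designers are trying to squeeze out.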

With high-end microprocessors such as the Xeon commanding prices of $1,000 or more, and with competition coming only from AMD, it is no wonder that mergers and acquisitions in InfiniBand-related technologies are at the heart of the strategy. Intel's core driving engine is to maximize microprocessor sales and remove any reason why a customer, from consumer to corporation, would not purchase them. If associated technologies become a limiting factor, Intel invents or buys its way into the game to control its destiny.

More Pieces to the Puzzle

Intel's Data Center and Connected Systems Group manages the microprocessors and chipsets for servers, storage, and networking equipment. The group has been acquiring interconnect silicon and software assets over the last 18 months. In January, Intel acquired the InfiniBand-related software and silicon assets of QLogic for $125 million. In July, Intel bought Fulcrum Microsystems for its 10GbE and 40GbE switch silicon used in top-of-rack switches and routers. Now, it has acquired the high-speed interconnect assets of Cray.

Although most of the server industry is content with the more typical high-volume microprocessors, the big money is in high-end data centers and high-performance computers (HPCs), or supercomputers, where speed matters more than price—exactly the reverse of traditional mainstream data centers.

Advanced technologies are often developed with government funding, with budgets in the billions of dollars. Making the HPC Top 10 list proves to the world that a country is a technical leader. China put a massive effort into claiming the number-one slot in the Top 500 in 2010; Japan won it last year. This area carries substantial political weight in the world economy.

AOCs started in HPCs and are now poised to move down into the mainstream data center, targeting InfiniBand, Ethernet, Fibre Channel, SAS, and PCI Express links. The next product poised to migrate from advanced-technology HPC-land is the embedded optical module (EOM). Used extensively in IBM's Blue Gene and Blue Waters HPCs, the EOM technologies from Avago and US Conec have been on the trade show circuit for the last two years, and many designers of switches, routers, and high-speed servers have been impressed with their potential use in data center and telecom applications.

HPC designers have been wrestling with computer architectures for a long time and have developed many that look promising for use in traditional high-end data centers (these developments are likely what sparked Intel's interest). In advanced computer architectures, the interconnect scheme plays a critical role and has become a limiting factor in system design. Although a few highly customized systems have $200-million budgets, the bulk of HPC systems outside of the Top 10 are simply standard data center servers and memory systems linked together by an InfiniBand top-of-rack switch instead of an Ethernet switch. Additionally, one of the newest and most popular approaches to high-speed computing is to use NVIDIA and AMD PC-gaming graphics chips with parallel compilers for parallel processing. In fact, the Chinese Tianhe-1A made extensive use of standard servers, NVIDIA GPUs, and 35,000 AOCs to achieve its number-one position in the Top 500 in 2010.

More and more HPCs are being built from standard data center and PC components, and unlike custom HPCs they can be upgraded easily. So it makes sense for Intel to buy into the game and try to dominate the HPC arena with high-priced, standard products that can make their way into the next-generation data center and target the high-end systems where the big money is. The microprocessor and SSD technologies, coupled with the InfiniBand and top-of-rack switch silicon, are key elements and hence have been at the core of Intel's recent M&A strategy. LightCounting believes the M&A activity is not over yet.

The majority of HPCs are implemented with Intel processors today—some with AMD, and the rest as custom architectures. Considering prices of over $1,000 per microprocessor and requirements of 50,000 to 100,000 processors per advanced data center, it is clear why Intel is executing this strategy. As an example, the Oak Ridge National Lab will spend $200 million "upgrading" its HPC. At SC2011, Intel showed the first working samples of its 50-core Knights Corner microprocessor.
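The arithmetic behind that claim is simple; the short Python sketch below multiplies the per-chip price and processor counts cited above. The figures are purely illustrative, not a LightCounting forecast.

    # Rough sizing of the microprocessor spend in a single large HPC
    # installation, using the numbers cited above ($1,000+ per chip,
    # 50,000-100,000 processors). Illustrative arithmetic only.
    PRICE_PER_CPU = 1_000  # dollars, lower bound cited above

    for cpu_count in (50_000, 100_000):
        cpu_spend = cpu_count * PRICE_PER_CPU
        print(f"{cpu_count:>7,} processors x ${PRICE_PER_CPU:,} "
              f"= ${cpu_spend / 1e6:.0f} M in CPUs alone")

In other words, a single advanced installation can represent $50 million to $100 million of microprocessor revenue before any interconnect, storage, or switch silicon is counted.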

Intel makes a popular 10GBASE-T controller chip that links the PCI Express bus to the 10GBASE-T silicon. Intel also has a relationship with Aquantia, the last 10GBASE-T PHY startup not yet acquired (Broadcom has its own parts, Marvell bought SolarFlare's assets, and PLX bought Teranetics). See LightCounting's report on 10GBASE-T: http://www.lightcounting.com/10gbase.cfm

Now, add Intel's behind-the-scenes efforts to commercialize the Thunderbolt consumer interconnect technology, formerly known as Light Peak. Light Peak was announced at the Intel Developer Forum (IDF) in 2010 and created a stir in the optical interconnect industry. Thunderbolt is the new and improved version and is at the core of Apple's PC, display, and notebook product lines. Intel makes a router chip that resides on the motherboard, allows two 10-Gbps channels to be daisy-chained, and packs the DisplayPort protocol on top of the PCI Express protocol. While DisplayPort is a video transfer protocol, this chip is most likely capable of putting Ethernet, SATA, SAS, and other high-speed protocols onto the PCI Express bus as well. Rumors suggest that an AOC will be released in 2012. See LightCounting's in-depth report on AOCs: http://www.lightcounting.com/activeoptical_2012.cfm

Will Intel try to transfer the Thunderbolt technology into the data center, linking servers to top-of-rack switches and competing with expensive and power-hungry 10GBASE-T?

This would compete with the 10GBASE-T effort and impact the direct-attach copper cabling industry, which has enjoyed enormous popularity while 10GBASE-T vendors strive to develop 28-nm, low-power versions of their chips. LightCounting expects the 28-nm 10GBASE-T products to hit volume production in 2014. But will this be too late? 10GBASE-T is surrounded by direct-attach copper at very low power and cost on one side, low-cost AOCs on another, and perhaps a Thunderbolt version on yet another. Intel also has a massive silicon photonics effort behind the scenes.

LightCounting Analysis

Is Intel gearing up to introduce silicon for many of the key elements of a server/switch rack? It already has server microprocessors, SSDs and flash, top-of-rack switch silicon, and 10GBASE-T controllers interconnecting servers to switches, not to mention Thunderbolt in the consumer space. What will be the reaction from Cisco, Dell, IBM, and HP when Intel starts getting into their game by enabling a flood of low-cost "white box" vendors to prosper in servers and switches?

Intel is clearly gearing up for a big play in the next-generation data center and interconnect space. Having entered the optical transceiver and interconnect space in 2000, it later sold its assets to Emcore and others and exited the business. While we are speculating a bit in trying to formulate the complete picture from only a few puzzle pieces, an image is starting to form that will change the landscape in the copper cable, optical component, and optical transceiver interconnect spaces.

The strategy appears to be to start at the technology high ground with very high selling prices, derive lots of profit, transfer the technology to the next-generation data center, and blow away the competition, developing the interconnect technologies along the way. Lastly, it is clear that at very high data rates, the optical interconnect will have to be embedded inside the same package as the high-speed microprocessor and other circuits. Intel's overall strategy will likely impact optical component and transceiver companies in the future, as increasing data rates make copper interconnects less viable and eventually move transceivers onto large chips.

By Brad Smith, Senior Vice President, LightCounting

Upcoming Event Participation:

The LightCounting team will be at the following industry events:

To set up a briefing with one of our analysts at these industry events, please contact Renee Isley (Renee@LightCounting.com).

About LightCounting:

LightCounting, LLC, a leading optical communications market research company, offers semi-annual market update, forecast, and state-of-the-industry reports based on analysis of publicly available information and confidential data provided by more than 20 leading module and component vendors. LightCounting is the optical communications market's source for the accurate, detailed, and relevant information necessary for doing business in today's highly competitive market environment. Privately held, LightCounting is headquartered in Eugene, Oregon. For more information, go to http://www.LightCounting.com or follow us on Twitter at http://www.twitter.com/lightcounting


Media Contact:

Rebecca B. Andersen
Pacific Bridge Marketing
(202) 596-2652
RAndersen@PacificBridgeMarketing.com