RESEARCH NOTES FROM THE IEEE 802.3 STANDARDS MEETING AND SUPERCOMPUTING 2013

Ethernet Celebrates 30 Years of the 802.3 Standards Process and Heads Off into Exciting New Directions

With the help of Dr. Robert Metcalfe (inventor of Ethernet in 1973), the IEEE 802.3 Working Group celebrated thirty years of work on the Ethernet standard at its November plenary meeting.  After spending some time looking back and reflecting, Dr. Metcalfe and the group articulated a vision of the future of Ethernet in vehicles.  It won't come quickly, but it will be big.  Initiatives for 1GbE over a single twisted pair (P802.3bp), Power over Data Lines (PoDL Study Group) and Interspersing Express Traffic (P802.3br PAR) will become the enablers to revolutionize intra-vehicle and inter-vehicle networking.  Why should LightCounting’s clients care?  A presentation in the 400GbE Study Group suggests that 80 million networked cars in North America alone could add 240 Tbps of traffic to backhaul networks by 2023.  (We hope those networked drivers will be keeping their eyes on the road!)  Optical backhaul is a top growth market tracked by LightCounting, and it is included in the preview of the Market Forecast database to be released on December 17th.
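A quick back-of-the-envelope check of what that figure implies per vehicle (our own arithmetic in Python, not from the presentation):

    # Back-of-the-envelope: 240 Tbps of backhaul traffic across 80 million cars.
    backhaul_bps = 240e12   # 240 Tbps, from the 400GbE Study Group presentation
    cars = 80e6             # 80 million networked cars in North America

    per_car_bps = backhaul_bps / cars
    print(f"Average per-car rate: {per_car_bps / 1e6:.1f} Mbps")  # -> 3.0 Mbps

Roughly 3 Mbps per car on average: modest per vehicle, but striking in aggregate.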

Did we mention 400GbE?  Are we getting ahead of ourselves when 100G switches are not yet available?  The 175 attendees of the 400GbE Study Group do not think so.  Mega data center builders, telecom companies and service providers want to get moving on 400G solutions for router-to-router and router-to-transport links.  LightCounting was pleased to support the presentations with market data on transceiver deployments.

Step one: agree on objectives.  Check.  The Study Group adopted four PMD objectives: 100m over multimode fiber, and 500m, 2km and 10km over single-mode fiber.  Objectives for 400G over copper had insufficient support, but it’s still early and this could change.  Much of the Study Group discussion revolved around supporting breakout functionality after LightCounting noted in earlier ad hoc meetings that most 40GigE modules were split out to 4x10G.  Breakout functionality was not adopted, but a tighter bit error rate objective of 10^-13 was adopted to avoid more frequent errors at this higher data rate.
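To see why the tighter BER objective matters at 400G, here is a short sketch (our own arithmetic, assuming errors arrive at the average rate implied by the BER):

    # Mean time between bit errors at a given BER and line rate.
    def seconds_between_errors(ber, rate_bps):
        return 1.0 / (ber * rate_bps)

    for rate in (100e9, 400e9):        # 100GbE vs. 400GbE
        for ber in (1e-12, 1e-13):     # legacy objective vs. adopted objective
            t = seconds_between_errors(ber, rate)
            print(f"{rate / 1e9:.0f}G at BER {ber:.0e}: one error every {t:.0f} s")

At 10^-12, a 400G link would average one bit error every 2.5 seconds; the 10^-13 objective stretches that to 25 seconds.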

Next step: support the objectives with agreement on “The 5 Criteria”: broad market potential, compatibility, distinct identity, technical feasibility and economic feasibility.  There is much work to do here.  An optimistic schedule would transition the team from a Study Group to a Task Force in May.  That’s when the real fun begins: deciding how to implement the objectives.  Will we have 16 lanes of 25G?  How about 8 lanes of 50G?  Should we simply combine four 100G solutions, as 40GigE did by combining four 10GigE lanes?  (The sketch below lays out the lane arithmetic.)  Will this be the last hurrah for optical NRZ, or is this the time to make the big jump to advanced modulation?  LightCounting expects interesting technical contributions and some fireworks in the coming meetings.  Stay tuned!
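The lane arithmetic behind those questions is easy to lay out (an illustrative sketch of the configurations named above):

    # Candidate lane configurations for 400GbE.
    target_gbps = 400
    for lane_rate in (25, 50, 100):    # Gbps per lane
        lanes = target_gbps // lane_rate
        print(f"{lanes} lanes x {lane_rate}G = {lanes * lane_rate}G")
    # -> 16 x 25G, 8 x 50G, 4 x 100G

Fewer, faster lanes mean fewer fibers and connectors but harder electronics, which is exactly where the NRZ-versus-advanced-modulation debate comes in.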

Supercomputing Show Emphasizes Speed, Density and Power

SC13 brought 11,000 people together for the latest in high performance computing (HPC), networking, storage and analysis.  HPC is no longer simply about computing power for scientific simulation.  Big Data analytics and cloud computing are merging with HPC, and architectures have never been more varied.

InfiniBand has been the staple of HPC interconnects, and Active Optical Cables (AOCs) are the path to long-reach InfiniBand connections, as many of the SC13 exhibitors showed or explained.  But it’s really all about low latency and RDMA (Remote Direct Memory Access) across processors, so cloud computing companies are considering RoCE (RDMA over Converged Ethernet) to stay within an Ethernet ecosystem.  Either way, AOCs will continue to be a cost-effective and convenient way to transport these protocols in HPC systems.

[Photo: Arista 12x100G line card with Finisar EOMs]

HPC is also about the density of processors and interconnects.  Switch vendors have run out of room on their faceplates with QSFP+ modules, and Embedded Optical Modules (EOMs) are one answer.  Arista Networks’ founder Andy Bechtolsheim showed LightCounting their new 100GbE line card with twelve MPO connectors on the faceplate, fed by EOMs from Finisar.  These are 12x10G devices that can be split out multiple ways at the connector interface, as the sketch below illustrates.
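For illustration, the standard ways a 12-lane 10G module can be regrouped at the MPO connector (our own enumeration; the specific splits Arista supports were not detailed):

    # Breakout options for a 12-lane x 10G embedded optical module.
    lanes, lane_g = 12, 10
    for lanes_per_port, name in ((1, "10GigE"), (4, "40GigE"), (10, "100GigE")):
        ports = lanes // lanes_per_port
        used = ports * lanes_per_port * lane_g
        print(f"{ports} x {name} ({used}G used of {lanes * lane_g}G)")
    # -> 12 x 10GigE, 3 x 40GigE, or 1 x 100GigE (100GBASE-SR10 uses 10 lanes)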


[Photo: Fujitsu next-gen CPU board with EOMs]

Proprietary interconnects need maximum speed and width.  Cray showed their XC30 with up to 120 CXP ports running 12x12.5G AOCs connecting groups of blades.  Fujitsu showed their next-gen supercomputer blade using eight of Finisar’s new 12x25G EOMs (Finisar calls them BOAs) fed by three SPARC64 processors.
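The aggregate bandwidth implied by those configurations is worth spelling out (our own multiplication from the port counts and lane rates above):

    # Aggregate optical bandwidth implied by the exhibits above.
    xc30_ports, xc30_lanes, xc30_lane_g = 120, 12, 12.5   # Cray XC30 CXP AOC ports
    blade_eoms, eom_lanes, eom_lane_g = 8, 12, 25         # Fujitsu blade, Finisar BOAs

    xc30_total_g = xc30_ports * xc30_lanes * xc30_lane_g
    blade_total_g = blade_eoms * eom_lanes * eom_lane_g
    print(f"Cray XC30: {xc30_total_g / 1000:.0f} Tbps of AOC bandwidth")   # -> 18 Tbps
    print(f"Fujitsu blade: {blade_total_g / 1000:.1f} Tbps from 8 EOMs")   # -> 2.4 Tbps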

[Photo: Samtec EOM links, optical and copper]

Samtec was seeing interest in their FireFly™ interconnect system, which offers mix-and-match embedded copper or optical links plugging onto the board with a common connector.  The 12x28G copper link can reach 13 inches, or 0.5m at 14G per lane.  The EOM version extends the reach to 100m at 14G.



Some newer approaches really minimize cabling.  Clustered Systems Company showed a high-density rack of up to 260 Intel Xeon processors, all interconnected by PCI-Express and an orthogonal copper backplane.  InfiniBand is used to connect multiple chassis if needed.  Numascale has the technology to connect 216 servers (10,000 cores) with 240GB of memory per server, with no cable longer than 1.5 meters.  Their shared-memory architecture means one Linux image sees all the processors and all the memory as contiguous.
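For scale, the totals implied by those Numascale figures (our own arithmetic):

    # Totals for the Numascale shared-memory configuration above.
    servers, mem_per_server_gb, cores = 216, 240, 10_000
    total_mem_tb = servers * mem_per_server_gb / 1024
    print(f"{servers} servers -> {total_mem_tb:.1f} TB of shared memory")  # -> 50.6 TB
    print(f"~{cores // servers} cores per server")                         # -> ~46

A single Linux image addressing some 50 TB of contiguous memory is exactly what that shared-memory architecture buys.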


[Image: Intel Xeon fabric integration roadmap]

Finally, Intel briefed analysts on their intention to integrate network adapters and fabrics into their next-gen Xeon processors and Xeon Phi coprocessors.  10GigE will be integrated next year, and their Next-Gen Fabric (InfiniBand) will see integration in 2015 and beyond.  The coming Xeon Phi coprocessor, known as Knights Landing, will use the Cray HPC interconnect controller, which has 100Gbps interconnect specs and 32 PCI-Express Gen 3 lanes.

Source: Intel
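A quick check that 32 lanes of PCI-Express Gen 3 can feed a 100Gbps fabric port (our own arithmetic; Gen 3 runs at 8 GT/s per lane with 128b/130b encoding):

    # Usable PCIe Gen 3 bandwidth vs. the 100Gbps fabric it must feed.
    lanes = 32
    gt_per_s = 8                # PCIe Gen 3: 8 GT/s per lane
    encoding = 128 / 130        # 128b/130b line coding overhead

    usable_gbps = lanes * gt_per_s * encoding
    print(f"PCIe Gen3 x{lanes}: {usable_gbps:.0f} Gbps usable")   # -> ~252 Gbps
    print(f"Headroom over 100Gbps: {usable_gbps / 100:.1f}x")     # -> 2.5x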

About LightCounting Market Research

LightCounting is a leading optical communications market research company, offering semiannual market updates, forecasts, and state-of-the-industry reports based on analysis of primary research with dozens of leading module, component, and system vendors as well as service providers and other users. LightCounting is the optical communications market’s source for accurate, detailed, and relevant information necessary for doing business in today’s highly competitive environment. Privately held, LightCounting is headquartered in Eugene, Oregon. For more information, go to http://www.LightCounting.com, or follow us on Twitter at http://www.twitter.com/lightcounting.