Moving from HPEC to HPESC

HPEC (High Performance Embedded Computing) is an acronym used by many VPX product vendors today. With the availability of fast processors such as DSPs, GPGPUs, and Intel's new Core i7 devices, we have the “cycles” to process mountains of data. But the first few generations of high-speed serial links still left us I/O-bound. The newest generations of fabric silicon are finally giving us the bandwidth to take advantage of the powerful processors on the market, bringing HPEC to many advanced applications.

Enter multiprocessing

When you put multiple processors in slots on a backplane, you increase the demand for bandwidth on the local network for interprocessor communication (IPC) and data sharing. Some fabrics have squirrelly protocols and tree structures that add latency. As fabric frequencies increase, they introduce serious signal integrity (SI) problems for board and backplane designers. You run into synchronization problems between processors attached to the backplane network. The software architecture becomes a nightmare. So we are still I/O-bound.
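The scaling pressure described above can be sketched with some back-of-the-envelope arithmetic. The numbers below are illustrative assumptions, not figures from any particular VPX system: they simply show that in a full-mesh exchange, board-to-board flows grow roughly with the square of the board count while each slot's link bandwidth stays fixed.

```python
# Illustrative sketch (hypothetical numbers): why backplane IPC demand
# outpaces per-slot bandwidth as processor boards are added.

def pairwise_flows(n_boards: int) -> int:
    """Distinct board-to-board flows in a full-mesh data exchange."""
    return n_boards * (n_boards - 1) // 2

def per_flow_bandwidth(slot_bw_gbps: float, n_boards: int) -> float:
    """Bandwidth left per flow if a slot's links are shared evenly
    among its n-1 peers (assumed 10 Gbps/slot, for scaling only)."""
    return slot_bw_gbps / (n_boards - 1)

for n in (2, 4, 8, 16):
    print(n, pairwise_flows(n), round(per_flow_bandwidth(10.0, n), 2))
```

Doubling the board count roughly quadruples the flows but leaves each flow with less than half the bandwidth, which is why adding processors alone leaves a system I/O-bound.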

More links and faster fabrics

To solve these problems, we are seeing VPX systems designed with many more serial links between boards on the backplane to increase the aggregate bandwidth of the system. We are running out of pins on 3U VPX, and the 6U implementations are getting pretty saturated with multiple links on the data plane. Higher-frequency copper-based fabrics may give us more throughput, but they bring those pesky SI problems with them. To get to High Performance Embedded Super Computing (HPESC), we must go to optical links (either single-mode or multi-mode).
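The trade-off between adding links and raising lane rates can be made concrete with a simple payload-bandwidth estimate. The lane counts, baud rates, and coding schemes below are generic assumptions for illustration, not figures from any specific VPX profile:

```python
# Hedged sketch: estimating usable data-plane bandwidth after line-coding
# overhead. All parameter values here are illustrative assumptions.

def payload_gbps(lanes: int, lane_rate_gbps: float, coding_efficiency: float) -> float:
    """Aggregate payload bandwidth = lanes x lane rate x coding efficiency."""
    return lanes * lane_rate_gbps * coding_efficiency

# Example: 8 lanes at 5 Gbaud with 8b/10b coding (80% efficient)...
gen2 = payload_gbps(8, 5.0, 0.8)        # 32.0 Gbps payload
# ...versus the same 8 lanes at 10 Gbaud with 64b/66b coding (~97%).
gen3 = payload_gbps(8, 10.0, 64 / 66)   # ~77.6 Gbps payload
print(round(gen2, 1), round(gen3, 1))
```

Doubling the lane rate and moving to a more efficient code more than doubles payload bandwidth without consuming more pins, but at those frequencies the copper SI problems noted above become the limiting factor.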

Figure 1: The VITA 66 VPX Fiber Optical Interconnect specification is establishing a baseline for optical technology in backplanes.

Recent developments in optical

In the past few months, we have seen some exciting developments on the optical front. In May, Mellanox (the purveyor of InfiniBand) bought Kotura (a maker of silicon-photonics optical engines), and Mellanox has announced that InfiniBand will go optical in the near future. Early last year, ST Micro (a semiconductor maker) teamed with Luxtera (a maker of high-volume silicon-photonics chips) to manufacture its optical devices. In February, Cisco (a networking company) bought Lightwire (a silicon-photonics chip maker). Intel is said to be working furiously on its own silicon-photonics chips. And the PCI-SIG has announced OCuLink, an optical and copper cable link for use inside and outside the chassis. In February, IBM and Dow Corning announced new techniques and materials for making optical backplanes.

Who is driving HPESC?

The traditional embedded markets (military, industrial, medical, and telecom) are not driving this transition. It's the Data Center and Cloud Computing markets that are driving us to optical: their servers are more I/O-bound than our VPX applications. But those data center servers operate in climate-controlled environments with no significant shock and vibration exposure. Silicon-photonics (SP) chips (as opposed to today's hybrid optical engines, which use silicon for the logic and InP and InGaAs for the emitters and detectors) promise much better operational temperature ranges. But the SP chip packaging engineers have no real incentive to design for the heavy shock and vibration extremes we find in most critical VPX system applications.

The basic technology philosophy at VITA is to adopt, adapt, and, as a last resort, create. We will adopt these new optical links to move to HPESC. As we do now with commercial silicon, we may have to screen (adapt) SP chips for the operating temperature ranges we need. But getting those commercial chips to handle extreme shock and vibration environments could be a significant challenge (create).