Hurry Up and Wait? Why the Rush to 40G?

I am sitting here writing this blog entry from another airport after spending the week talking to customers. Once again one of the prime topics has been 40G ATCA, and in particular, “What’s driving the need?”, “When will it be really ready for deployment in carrier networks?” and “What are the related technologies that all need to come together around it?”

Let’s not fall into the trap of thinking that 40G ATCA is just the chassis backplane and switch; there’s much more to it, including the processing blades (packet processors, for the most part) and enough external port density to actually fill a 40G backplane.

ATCA tends to sit behind routers/switches today, and that market has only just deployed 10G connections. At this point in the network, 40GbE connections are few and far between, and 100GbE (CXP-type connections) is a pipe dream today, costing far more than 10 x 10G.

40G backplanes have been around for a while now, and based on our test cards the BERs (bit error rates) are looking acceptable…first tick: backplanes and test chips with representative drive characteristics are working OK.

So, let’s move on to the 40G hub switches. There are a couple of vendors claiming to have hub switches, and I have no reason to disbelieve them. But as with the transition to 10G ATCA, these first movers had to select less capable 40G switching silicon, with compromises in port count (meaning not all slots can be 40G, or your uplink capacity is reduced), limited packet manipulation capabilities (e.g., load balancing, de-tunneling, etc.), or other limitations such as internal cross-connect bandwidth. This may present challenges when it comes to deployment – either delays waiting for final silicon, or limited end-system capabilities.

Looking at the application landscape, which applications really need 40G per slot, and what processing technologies can really make use of it? In general, it’s only the packet processors such as the NetLogic XLP that start to significantly exceed 10G today. These devices are only just starting to sample now, with boards coming to market in Q1 2011.

Contrast this to the 10G ATCA transition, which was a move from 1G – a speed really only suitable for the control plane. When 10G ATCA hit the market, the processing technology to make use of it was ready, mature and shipping…but that’s not the case today. I will accept that there are exceptions to this rule, but in general 10G is suitable for many applications today, and most customers and apps can live with it until 40G matures.

In summary, 40G provides a great story for the marketing guys and something to talk about. But my time in the field is telling me that customers are taking a somewhat more pragmatic view. Yes, they want it; yes, there’s a need; but it needs to come together at a solution level and still has some way to go before it reaches maturity. Apart from start-ups looking to differentiate their products, or niche applications that just cannot live with 10G, for the most part developers are looking to evaluate 40G in 1H 2011 and deploy in 2H 2011 or even 2012. And in fact their main interest is to understand how much performance the new processors such as the NetLogic XLP832, Cavium Octeon II and future Intel devices will give them. Only then can they make realistic estimates of system architectures and data flows.

Unlike the 10G transition, there is a little more time to make selections. This will be a step-wise process: find your processors for 40G, benchmark them, and then select your switch and platforms. I expect this will provide a more effective and efficient path to 40G than compromising on capabilities, especially around the switch – which, let’s face it, is the heart of the system.
