I recently wrote an article for EDN describing the challenges of deploying DPI in relation to SDN (Software Defined Networking) and NFV (Network Functions Virtualization). In the article, I outlined five guidelines to consider when developing DPI solutions for SDN and NFV. Since writing the article, we have completed additional benchmarking on the latest processors, comparing x86 processors with packet processors. The results were very interesting, so I wanted to provide an update on this and other items covered in the article.
1) Homogeneous vs. Heterogeneous: While Intel x86 processors can be a cost-effective solution for the low-to-medium performance range, when DPI performance grows into hundreds of Gbps, packet processing becomes a bottleneck. Alternative technologies, particularly for functions such as load balancing and packet classification, are essential to scale to these levels of performance.
2) Performance and Balance: Dual-socket x86 servers typically reach 20 to 30 Gbps of application processing, driving blade architectures in a single chassis, such as the Radisys T-Series platforms, towards 500+ Gbps. However, when looking at headline figures, be clear on exactly what is being claimed. It is important to balance connectivity, load balancer throughput and application processing to suit the needs of your DPI application. And remember, as the intelligence within the load balancer increases, these elements may not scale in a 1:1 ratio, because not all traffic needs to reach the DPI engines. For instance, a system may combine 1000 Gbps of aggregate I/O with 800 Gbps of load balancing and front-end filtering, yet need only 400 Gbps of DPI processing. External links are never 100% loaded, but because packet loss is unacceptable the load balancer must be provisioned close to port capacity to absorb statistical bursts; only a fraction of that traffic is then forwarded to the DPI blades. Service chaining is a particularly hot topic for NFV and relates directly to this point. We will cover it in a future blog, but if you want to know more about data plane load balancing as it relates to service chaining, please contact us for information.
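The capacity budget above can be sketched as a simple calculation. The figures below (1000 Gbps aggregate I/O, an 80% provisioning factor for the load balancer, and 50% of balanced traffic reaching DPI) are illustrative values taken from the example in the text; real ratios depend entirely on your traffic mix and filtering rules.

```python
def dpi_capacity_gbps(aggregate_io_gbps, lb_provisioning, dpi_fraction):
    """Return (load balancer, DPI) throughput budgets in Gbps.

    aggregate_io_gbps: total external port capacity
    lb_provisioning:   fraction of line rate the load balancer must absorb
                       (kept close to 1.0 to allow for statistical bursts)
    dpi_fraction:      share of balanced traffic forwarded to DPI blades
    """
    lb_gbps = aggregate_io_gbps * lb_provisioning
    dpi_gbps = lb_gbps * dpi_fraction
    return lb_gbps, dpi_gbps

# The worked example from the text: 1000G I/O -> 800G LB -> 400G DPI
lb, dpi = dpi_capacity_gbps(1000, 0.8, 0.5)
print(lb, dpi)  # 800.0 400.0
```

The point of separating the two factors is that they are driven by different forces: the load-balancer budget tracks port count and burst tolerance, while the DPI budget tracks how aggressively front-end filtering prunes the traffic.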
3) Scalability: Most vendors today offer four tiers of scalable performance: 1) individual dual-socket server appliance; 2) clustered appliances; 3) bladed chassis with centralized load balancer; and 4) clustered bladed chassis. For very large scale DPI needs, the bladed chassis with centralized load balancer not only offers greater scalability and efficiency but also makes service chaining a real possibility. In particular, by decoupling the load balancing function from the end application, the two functions can scale independently.
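One reason a centralized load balancer can be decoupled from the DPI application is that flow-to-blade assignment needs nothing from the application itself, only the packet headers. The sketch below illustrates the general technique of hashing a flow's 5-tuple to pick a blade; the function name, blade count, and use of SHA-256 are illustrative assumptions, not any particular product's implementation.

```python
import hashlib

def blade_for_flow(src_ip, dst_ip, src_port, dst_port, proto, n_blades):
    """Map a flow's 5-tuple to a DPI blade index.

    Hashing the whole 5-tuple sends every packet of a flow to the same
    blade, preserving the flow affinity that stateful DPI requires,
    without the load balancer knowing anything about the DPI application.
    """
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    digest = hashlib.sha256(key).digest()
    return int.from_bytes(digest[:8], "big") % n_blades

# Two packets of the same flow always land on the same blade:
b1 = blade_for_flow("10.0.0.1", "192.0.2.7", 49152, 443, "tcp", 14)
b2 = blade_for_flow("10.0.0.1", "192.0.2.7", 49152, 443, "tcp", 14)
assert b1 == b2
```

Because the mapping depends only on headers, the blade pool can grow (or the DPI software can change entirely) without touching the balancing tier, which is what lets the two scale independently.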
4) Integration with SDN and NFV Network Management: The need to integrate with OpenStack and OpenFlow is mandatory as mobile operators push for NFV and SDN to become a reality. However, the flow table sizes and update rates required for large DPI systems (scaling to hundreds of millions of entries) do not align with the performance of today's OpenStack and OpenFlow orchestration layer tools. Future DPI applications must be designed so that table sizes and update rates are not compromised, while also ensuring that the platform can be effectively managed as a virtual function in the network. This is an area of expertise for Radisys, where we have demonstrated table sizes of hundreds of millions of flows while adhering to the concepts of SDN and NFV.
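A quick back-of-envelope calculation shows why tables of this size stress orchestration tools. The 64-byte entry size below is a hypothetical figure for illustration (a 5-tuple key plus counters and an action); real OpenFlow entry sizes vary widely with the match fields and the implementation.

```python
def table_memory_gib(n_entries, bytes_per_entry=64):
    """Approximate flow-table memory footprint in GiB.

    bytes_per_entry is an assumed, illustrative figure; actual entry
    sizes depend on match fields, counters, and the switch implementation.
    """
    return n_entries * bytes_per_entry / 2**30

# 200 million entries, in the "hundreds of millions" range cited above:
print(f"{table_memory_gib(200_000_000):.1f} GiB")  # 11.9 GiB
```

Even at this modest assumed entry size the table alone approaches 12 GiB, and keeping it current implies sustained update rates far beyond what general-purpose orchestration layers were designed to push.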
5) Application Processors: x86 or Packet Processors? Traditionally, DPI vendors used packet processors for the DPI function, but some developers have since switched to Intel x86 platforms, augmented with the DPDK (Data Plane Development Kit) or in-house equivalents, to simplify development compared with bare-metal programming. However, the new packet processors now run Linux to avoid the need for complex bare-metal tool chains, and are even being virtualized in readiness for the NFV era. We recently ran Snort benchmarks (a good DPI test case) on a packet processor and were amazed by the performance achieved using only the cores. (The next step is re-running the test with the DPI hardware accelerator enabled.) We also ran back-to-back forwarding performance tests, and once again the results significantly exceeded our expectations. We are making these results available under NDA; contact us to find out more.
For more details on the challenges involved in deploying DPI in relation to SDN and NFV, check out the full article here.