Add Network Based Programmable Intelligence to Real-time Voice and Video
Recent advances in Artificial Intelligence (AI) and Machine Learning (ML) are game changers, enabling Communications Service Providers (CSPs) and Systems Integrators (SIs) to analyze and monetize live video and audio traffic in their networks. Rather than building media processing into end devices, service providers can process media within the network itself, keeping media analytics centralized, cost-effective, and scalable.
The Radisys Engage portfolio adds another dimension to media processing in communications services: programmable voice and video analytics. CSPs and SIs can program the Engage platform with instructions on “what to look for” in an existing real-time video stream – without any upgrade to endpoint equipment. With its high-performance NFV acceleration capabilities, the Engage Media Server processes large volumes of media, enabling a wide variety of applications that serve customers with “on-demand” media processing while simultaneously analyzing the media to adapt services and resources.
Process high volumes of media streams for real-time and off-line analysis
Apply speech analytics and computer vision to media applications cost-effectively
Leverage best-in-class hardware assist technology optimized for AI and ML frameworks and algorithms with high density and scalability
Eliminate the need to send sensitive content offsite to cloud-based processing solutions with on-premise support for local processing of media content with security or privacy requirements
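As an illustrative sketch only (the Engage programming interface itself is not shown in this text), a programmable “what to look for” rule on a video stream can be thought of as a per-frame analytics function. The example below uses simple frame-differencing motion detection over synthetic grayscale frames; all names and thresholds are hypothetical:

```python
# Illustrative sketch only: a hypothetical per-frame analytics rule,
# not the Radisys Engage API. Frames are modeled as flat lists of
# grayscale pixel values; a real deployment would decode live video.

def frame_diff(prev, curr):
    """Mean absolute pixel difference between two same-sized frames."""
    return sum(abs(a - b) for a, b in zip(prev, curr)) / len(curr)

def detect_motion(frames, threshold=10.0):
    """Return indices of frames whose change from the previous frame
    exceeds the threshold -- the programmed 'what to look for'."""
    events = []
    for i in range(1, len(frames)):
        if frame_diff(frames[i - 1], frames[i]) > threshold:
            events.append(i)
    return events

if __name__ == "__main__":
    still = [50] * 16                 # a static 4x4 frame
    moved = [50] * 8 + [200] * 8      # half the pixels change
    frames = [still, still, moved, moved]
    print(detect_motion(frames))      # prints [2]: only frame 2 triggers
```

The same shape generalizes: the network operator supplies the rule (here, a motion threshold), and the media server applies it to streams already flowing through the network, with no change to endpoints.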
Computer Vision for IoT, Security and Authentication Use Cases
Today's 5G and private IoT networks bring significant increases in uplink performance, bandwidth and speed, along with the ability to support edge computing.
5G Media Processing
Enhance User Interactivity and Enable Immersive AR/VR Experiences by Moving Advanced Media Processing to the Edge
With the promise of 5G and the explosion of connected devices, communication is evolving into an era of hyper-connectivity and immersive AR/VR experiences. Operators must deliver low latency, very high bandwidth, and massive localized connection densities – all cost-effectively. Distributing media processing to the edge dramatically improves network bandwidth utilization and helps service providers meet the low-latency expectations of 5G while enabling rich new applications such as augmented reality-based retail merchandising and 4K live streaming on a 5G network.
Our virtual Media Server software is highly optimized to deliver exceptional performance in virtualized and cloud environments at the fixed broadband and mobile access edge. By integrating a mix of technologies including 5G, IoT, cloud edge computing, speech analytics, augmented reality, computer vision, conversational artificial intelligence, and machine learning, our Media Server software enables immersive multi-media user interactions and ensures a high-quality customer experience.
Integrated Speech Recognition Reduces Cost and Improves User Experience
Speech interaction over the Internet and with devices has taken off with the capabilities of Siri, Cortana, Alexa, and Google in smartphones, smart speakers, automobiles, and more. For many interactions, speech is more efficient than other input/output (I/O) methods; in the context of phone calls, particularly for mobile users, it is also safer. Our Engage portfolio can process speech ranging from a small vocabulary of keywords and commands to natural language interaction, in the context of voice and video calls or conversational AI-based self-service interactions. Because keyword detection is an integrated feature, it is a far more cost-effective alternative to traditional speech recognition approaches: it lowers the cost and complexity of deploying speech features and improves application performance through reduced response latency.
Enjoy lower costs in comparison to traditional speech recognition approaches
Increase the accuracy of speech recognition by improving the quality of the media
Support in-network recognition cost-effectively with high scalability for millions of subscribers
Optimize performance by avoiding latency, complexity, and extra processing of sending media to an external recognizer
Enable new services without requiring additional software to be downloaded or installed on a subscriber’s mobile device
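As a rough illustration of the integrated keyword-detection idea (hypothetical names and data; the Engage interfaces themselves are not shown here), in-call keyword spotting reduces to matching a small command vocabulary against recognized words and their timestamps:

```python
# Illustrative sketch only: hypothetical in-call keyword spotting,
# not the Engage Media Server API. Input is a list of (word, seconds)
# pairs as they might arrive from an in-network recognizer.

KEYWORDS = {"transfer", "operator", "record"}  # small command vocabulary

def spot_keywords(words, vocabulary=KEYWORDS):
    """Return (keyword, timestamp) hits, preserving call order."""
    return [(w.lower(), t) for w, t in words if w.lower() in vocabulary]

if __name__ == "__main__":
    call = [("please", 1.2), ("transfer", 1.8), ("me", 2.1),
            ("to", 2.3), ("an", 2.4), ("Operator", 2.9)]
    print(spot_keywords(call))  # prints [('transfer', 1.8), ('operator', 2.9)]
```

Because the vocabulary is small and the matching runs in the network alongside the media, there is no round trip to an external recognizer – which is the latency and cost advantage the bullets above describe.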
Media Server Integrated In-Call Speech Recognition Solution Powers Next-Gen Service Provider Products and Services
On-the-Fly Transcoding at the Edge
Edge Media Adaptation for Real-time User-Generated Content
User-generated audio and video content is reshaping media viewing and listening habits. Our Engage Media Server platform enables service providers to optimize media for the best user experience based on the device, network conditions, codec, and service level. It can improve media quality in real time or offline, depending on whether the content will be streamed or broadcast. User-generated content also requires media adaptation so it can be viewed on multiple devices. The economics of offline transcoding and storage do not always make sense, sometimes mandating transcoding on the fly. Our Media Server supports both offline and on-demand transcoding, ensuring the lowest-cost approach. The Engage Media Server can be deployed both in the core of the network and at the edge, adding another dimension of cost efficiency by processing media where it is most bandwidth-efficient and latency-optimized.
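The offline-versus-on-the-fly trade-off described above can be made concrete with a toy cost policy (the cost model and thresholds are hypothetical, not Engage configuration): pre-transcode and store a rendition only when expected demand amortizes the one-time work, otherwise transcode each request on demand:

```python
# Illustrative sketch only: a toy policy for choosing between offline
# (pre-transcoded, stored) and on-the-fly transcoding of one rendition
# of one asset. Costs are in arbitrary units and purely hypothetical.

def transcode_policy(expected_requests, storage_cost, cpu_cost_per_request):
    """Pick the cheaper option for a single rendition."""
    offline_cost = storage_cost + cpu_cost_per_request        # transcode once, store
    on_the_fly_cost = cpu_cost_per_request * expected_requests  # transcode per view
    return "offline" if offline_cost < on_the_fly_cost else "on-the-fly"

if __name__ == "__main__":
    # Popular clip: many expected views amortize the one-time transcode.
    print(transcode_policy(expected_requests=1000, storage_cost=5.0,
                           cpu_cost_per_request=0.1))   # prints offline
    # Long-tail clip: cheaper to serve the rare request on demand.
    print(transcode_policy(expected_requests=2, storage_cost=5.0,
                           cpu_cost_per_request=0.1))   # prints on-the-fly
```

In practice the decision also weighs network conditions and device capabilities, which is why placing the transcoder at the edge (close to both the request and the viewer) helps on both the bandwidth and latency axes.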