LatenceTech live at major trade shows

LatenceTech recently participated in two major telecom trade shows, where we officially presented our solution to the “world”, performed live demos of our real-time monitoring solution for 5G and Private Cellular Networks, and gave a glimpse of our new advanced analytics.

The first event we participated in was Mobile World Congress in Las Vegas on September 28th-30th, where we had our own “startup booth”. We met with Carriers, Telecom Vendors, Private Network Vendors, Consultants, System Integrators, Drive Test Equipment Vendors, Modem Vendors and many others. Thanks to all our visitors for coming to our booth and for the great conversations.

LatenceTech Mobile World Congress Las Vegas 2022 booth with Benoit Gendron, CEO and Nicolas Gorse, CTO.

The second event we participated in was Arch Summit in Luxembourg on October 26th and 27th. This event is mainly sponsored by Vodafone and its innovation arm, Tomorrow Street. Similarly, we had our own booth where we showcased our solution and performed live demos using live measurements from the Encqor 5G network back in Montreal.

LatenceTech Arch Summit Booth with Benoit Gendron, CEO and Emmanuel Audousset, CRO.

Both events helped us get valuable feedback on our solution and its positioning to support 5G networks and Private LTE/CBRS networks. We also got positive comments on our new advanced AI/ML-based analytics focused on anomaly detection of latency variations, near-term latency prediction and latency “root cause” analytics.

If you missed us at these events, do not hesitate to contact us for a private presentation and a live demo.

LatenceTech at Collision!

The LatenceTech team was present at the Collision 2022 event last week in Toronto. We met prospective customers, technology partners and potential investors, as we have now opened our first fundraising campaign.

Here’s a short video clip (<1 min) of our CEO presenting our booth:

Benoit Gendron, LatenceTech’s CEO presenting our booth at Collision

Contact us if you are interested in having a similar live demonstration or if you wish to trial our software enabling precise and continuous measurement of network and application end-to-end latency.

LatenceTech partnering with Ori Industries

Enjoy latency-optimised networks globally, powered by real-time 5G analytics

Get details by clicking on the link above or below.

Download Solution Brief here

Latency is a big challenge for enterprises; it makes it difficult to deliver at scale, and it can be expensive to mitigate. Ori Industries and LatenceTech understand the unique challenges of scaling software businesses, and are partnering to provide a joint edge-native solution that lets network owners manage, analyse and improve latency in private networks for edge deployments.

Solution Overview

Enterprises require scalable, highly reliable and low-latency networks to operate seamlessly at scale. Ori Global Edge offers a single orchestration layer for multiple deployments to be managed across any infrastructure environment. By leveraging LatenceTech’s edge insights and analytics, the solution provides guaranteed low-latency, high-performance connectivity for enterprises. LatenceTech in turn tracks ultra-low latency behaviours in changing environments to analyse variations, detect anomalies, prevent degradations and obtain insights to improve private & 5G networks.

LatenceTech is a 5G-ready solution that provides the intelligence needed to understand and support end-to-end network performance, including network latency and quality of service (QoS). LatenceTech helps businesses improve network speed and latency, gain real-time analytics of Quality of Experience (QoE), deliver optimised networks globally and decrease mobile data usage with predictive network analytics. In short, LatenceTech enables latency-optimised networks with real-time 5G analytics.

Guaranteed Low Latency at Scale

The best way to increase reliability and resilience is to combine edge compute capabilities with advanced analytics. LatenceTech runs best-in-class real-time AI & ML models to identify the best locations across distributed networks, leveraging Ori Global Edge’s highly available platform to orchestrate workloads with guaranteed low latencies.

LatenceTech is an innovative way to track, predict and deploy latency-optimised 5G networks. Using SaaS and AI, and in partnership with Ori, LatenceTech helps mobile operators, telecom vendors and enterprises to track, predict and secure the new benefits of 5G and Private LTE cellular networks. LatenceTech collects real-time 5G network data, performs aggregation and advanced analytics using AI & ML tools, and presents key 5G indicators to accurately track customer commitments & SLAs. The joint solution offers a new approach to monetising low-latency 5G connectivity by providing tools to monitor contractual commitments related to quality of service.

Let’s Talk

The joint solution is an advanced analytics and edge computing platform that combines the best of both computing worlds. It’s a cloud-based service that provides real-time data analysis and advanced visualisations, while also allowing you to store data in the cloud or on servers at your location.

To find out more about how LatenceTech and Ori Industries enable operators to run private networks with unparalleled performance, extracting key data from their infrastructure to minimise costly lag and to enable use cases to operate at scale securely, download the Solution Brief.

Mobile apps have “PING” for breakfast!

Why operators should measure what the customer sees…

Mobile operators are under pressure to offer super-responsive connectivity with low latency: whether it’s for your favourite multi-player game or to ensure that Zoom call with your boss runs smoothly.

The problem is, there’s quite a gap between the network latency measured with ICMP (via the “ping” command) and the effective latency experienced by an application, which never actually uses ICMP.

ICMP (specified in RFC 792) operates in a layer just above IP, meaning its messages are carried as IP payload. ICMP is often used to signal errors between hosts, but in addition to the error messages it defines one of the Internet’s most famous applications: ping. Most TCP/IP implementations support ping directly in the operating system, thereby avoiding the time taken to decapsulate the transport-level segment, identify the correct socket and send the message on to the application process.

These steps are necessary when interacting with applications such as HTTP (the application protocol used to request a web page) or HTTPS (when performing a secured transaction). These applications, and others such as FTP, DNS or SMTP, use either TCP or UDP as a transport-layer protocol, which ICMP does not.

As wireless low-latency specialists, we make sure that our real-time latency tracking tool addresses these issues by performing different types of measurement campaigns. First, we actively measure network latency using ICMP (knowing that ICMP may be routed differently than payload traffic). We also use other protocols, like TWAMP and PTPd, to get the most accurate results: important when we are measuring in milliseconds!
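For illustration, here is a minimal sketch of this first kind of campaign. It shells out to the system ping command rather than opening raw ICMP sockets (which require elevated privileges), assumes Linux/macOS-style “time=” output, and is our sketch rather than LatenceTech’s tooling:

```python
import re
import subprocess

def icmp_rtt_ms(host: str, count: int = 5) -> list[float]:
    """Round-trip times, in ms, as reported by the system ping command."""
    out = subprocess.run(["ping", "-c", str(count), host],
                         capture_output=True, text=True, check=True).stdout
    return [float(m) for m in re.findall(r"time=([\d.]+)", out)]

print(icmp_rtt_ms("8.8.8.8"))   # e.g. [12.4, 11.9, 12.1, 12.6, 12.0]
```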

Second, we aim to measure the latency as the application sees it. We use an intelligent mix of the HTTP/HTTPS/TCP/UDP protocols to mimic the application’s behaviour. We measure multiple times per second and perform statistical analysis to generate an aggregated view. We accumulate these results to perform advanced analysis like anomaly and trend detection – all in near real-time – and send notifications back to the application (and to OSS/NMS systems) if something is wrong. To complement this, we also perform real-time measurements of bandwidth and reliability (using packet loss counts), in addition to taking snapshots of the path(s) taken by the data. This allows us to perform root cause analyses whenever the latency results are outside accepted boundaries for a prolonged period.
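To make the distinction concrete, here is a hedged sketch (ours, not LatenceTech’s code) that times a bare TCP handshake and a full HTTPS request, repeats the application-level measurement and aggregates the samples; example.com is just a placeholder target:

```python
import http.client
import socket
import statistics
import time

def tcp_connect_ms(host: str, port: int = 443) -> float:
    """Time only the TCP handshake, as a transport-level baseline."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000

def https_request_ms(host: str) -> float:
    """Time a full HTTPS round trip, as the application sees it."""
    start = time.perf_counter()
    conn = http.client.HTTPSConnection(host, timeout=5)
    conn.request("HEAD", "/")
    conn.getresponse().read()
    conn.close()
    return (time.perf_counter() - start) * 1000

samples = [https_request_ms("example.com") for _ in range(10)]
print(f"tcp connect: {tcp_connect_ms('example.com'):.1f} ms")
print(f"https mean:  {statistics.mean(samples):.1f} ms, "
      f"p95: {sorted(samples)[int(0.95 * len(samples))]:.1f} ms")
```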

The results of both these campaigns can then be displayed and used for comparison and evaluation.

The diagram below shows a snapshot of LatenceTech’s Compact Dashboard presenting latency results for a mobile gaming application running on the Encqor 5G (NSA) network, using a mmWave RAN site and a Mobile EDGE platform in Montréal, Canada in early 2022.

Real-time dashboard showing live low-latency results from mmWave measurements

Here we can see the difference between network-level latency (10.6 ms) and application-level latency (17.2 ms), aggregated over a 5-minute period. The gap – (17.2 − 10.6) / 17.2 ≈ 38% – is significant! Knowing the network latency is great, but having factual and continuous knowledge of application-level latency is a prerequisite to deploying a time-critical mobile application.

Below is another dashboard example, showing results from a low-frequency band and this time testing the latency from the device to a server residing in the cloud.

Real-time dashboard showing live low-latency results from low-band measurements

We can see here the impact of the lower frequencies and of the longer distance to the data processing, underlining the important role that mmWave and edge computing will play in low-latency communications.

Time-sensitive mobile applications will require reliable, sustained low-latency connectivity, which is best served by 5G technology. But 5G alone will not be enough to deliver the service users need. Tools that perform continuous tracking of quality of service – in particular network and application latency levels – will be central to providing a reliable, high-performance service. Being able to rely on this quality of service means that new applications can be developed, and these will generate new economic benefits and value for end customers. With the right network performance, think of all the innovative services that will be developed…

A short list of time-critical applications requiring ultra-low latency connectivity

“When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it… your knowledge is of a meagre and unsatisfactory kind.”

–Lord Kelvin

We would like to hear your remarks and comments.

Contact us for more information

The Secrets Hidden in CPRI

Written by Parsa Alamzadeh and Siân Morgan

If you want to start a riot in a room full of radio engineers, start talking about CPRI and watch those conference muffins fly.   

The Common Public Radio Interface (CPRI) was defined by an industry consortium of major radio vendors to connect two functions in the cellular radio network: it links the Remote Radio Units (RRU) with the Base Band Units (BBU) over fibre.

CPRI’s first challenge is the extremely strict jitter and latency budgets – it’s a serial interface and round-trip latency must be kept in the range of 100 µs.

The second issue is that it has become a largely proprietary interface, making mixing and matching BBU and RRU vendors a nearly impossible task – leading to complaints about vendor lock-in.

The final problem is 5G. LTE already requires high-bandwidth links: for example, 10 MHz of spectrum feeding a 2×2 MIMO antenna requires about 1.2 Gbps. In 5G – and especially in the mmWave range – the CPRI data streams are exploding in size, and the network will soon run out of fibre.
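As a sanity check on that figure, here is a back-of-envelope calculation; the sampling rate, 15-bit I/Q words, control-word overhead and 8b/10b line coding are commonly cited CPRI parameters that we are assuming here, not figures from the article:

```python
# Back-of-envelope CPRI line rate for 10 MHz LTE with 2x2 MIMO.
sample_rate = 15.36e6       # samples/s for a 10 MHz LTE carrier
bits_per_sample = 2 * 15    # 15-bit I + 15-bit Q per sample
antenna_carriers = 2        # 2x2 MIMO -> two antenna-carriers
control_overhead = 16 / 15  # one control word per 15 IQ words per basic frame
line_coding = 10 / 8        # 8b/10b encoding on the serial link

rate_bps = (sample_rate * bits_per_sample * antenna_carriers
            * control_overhead * line_coding)
print(f"{rate_bps / 1e9:.2f} Gbps")   # 1.23 Gbps -> the ~1.2 Gbps quoted above
```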

Luckily, eCPRI (enhanced CPRI) was introduced to address these problems – and it turns out that the data carried over both CPRI and eCPRI can be used to detect and predict network issues, as we discovered when we applied high-performance, unsupervised learning to huge volumes of power measurements extracted from the interface.

All this data came from the Encqor 5G testbed network: a state-of-the-art 5G platform deployed in five cities in Canada using mid-band (N78 / 3.5 GHz) and experimental mmWave (N261 / 28 GHz) spectrum.

The CPRI data was flying at over 1 Gbps, and to be able to predict anomalies in the network we needed to extract, process, and analyze it in under 10 ms. As a basis for comparison, that’s over 20 times faster than the average person’s reaction time. (Want proof? Check out this little test: https://humanbenchmark.com/tests/reactiontime.)

The useful power measurements we needed to extract were in the IQ portion of the interface (In-Phase and Quadrature signal samples). With some bitshifting we were able to filter out and discard over 850 Mbps of data and distribute what was left into 1000 payloads of over 98 000 datapoints each – at 1 payload per millisecond (ms).
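The exact frame layout is vendor-specific, so the following is only an illustrative guess at the mask-and-shift step: it assumes 15-bit I and Q samples packed into 32-bit words, an assumption of ours rather than the framing we actually parsed:

```python
import numpy as np

def extract_iq(words: np.ndarray) -> np.ndarray:
    """Pull 15-bit I/Q pairs out of packed 32-bit words (layout assumed)."""
    def sign_extend(v: np.ndarray) -> np.ndarray:
        # interpret a 15-bit field as two's complement
        return (v.astype(np.int32) ^ 0x4000) - 0x4000
    i = sign_extend((words >> 15) & 0x7FFF)   # bits 15..29: in-phase
    q = sign_extend(words & 0x7FFF)           # bits 0..14: quadrature
    return i + 1j * q                         # complex IQ samples

words = np.random.randint(0, 2**30, size=8).astype(np.uint32)
print(extract_iq(words))
```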

The fun part came once the raw data was in the payloads. It involved converting the datapoints into complex numbers, reshaping the data, and performing a Fourier transform to convert it from the time domain to the frequency domain.
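A minimal numpy sketch of that pipeline follows; the interleaved 16-bit I/Q layout and the 1024-point FFT are illustrative assumptions, not our production parameters:

```python
import numpy as np

def preprocess_payload(raw: np.ndarray, fft_size: int = 1024) -> np.ndarray:
    """Interleaved I/Q samples -> normalized per-sub-frequency power."""
    samples = raw[0::2].astype(np.float32) + 1j * raw[1::2]  # to complex
    usable = len(samples) - len(samples) % fft_size
    frames = samples[:usable].reshape(-1, fft_size)          # reshape into frames
    spectrum = np.fft.fft(frames, axis=1)                    # time -> frequency
    power = np.abs(spectrum) ** 2
    return power / power.max()                               # normalize

# one stand-in payload: ~98 000 complex datapoints = 196 000 raw values
payload = np.random.randint(-2**14, 2**14, size=196_000).astype(np.int16)
print(preprocess_payload(payload).shape)   # (95, 1024) power spectra
```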

Fig. 1, preprocessed IQ Data, where the x-axis represents the frequency and y-axis represents the normalized power measurement. This graph was obtained after using the Autoencoder to select the anomaly candidates. Blue dots represent normal data and red dots represent anomaly candidates.

Detecting which points were truly anomalies was a challenge because we did not have any labels or classifications. Hence, we needed an unsupervised learning approach that allowed for high performance and parallel processing. We settled on a deep neural network model, using autoencoders on data that had been brought down to a lower dimension. By itself, it captured too many points, so we then used binning and thresholding to improve the output, as shown in the figures below.
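Below is a minimal sketch of that idea, not our production model: a small dense autoencoder learns to reconstruct the power spectra, and the points it reconstructs worst become anomaly candidates. The layer sizes, training budget and 99th-percentile cutoff are illustrative assumptions:

```python
import torch
import torch.nn as nn

class AE(nn.Module):
    """Dense autoencoder squeezing each spectrum into a 16-dim code."""
    def __init__(self, dim: int = 1024):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(),
                                     nn.Linear(128, 16))
        self.decoder = nn.Sequential(nn.Linear(16, 128), nn.ReLU(),
                                     nn.Linear(128, dim))
    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_candidates(power: torch.Tensor, epochs: int = 20) -> torch.Tensor:
    model = AE(power.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(epochs):              # unsupervised: reconstruct the inputs
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(power), power)
        loss.backward()
        opt.step()
    with torch.no_grad():                # score by reconstruction error
        err = ((model(power) - power) ** 2).mean(dim=1)
    return err > err.quantile(0.99)      # flag the worst-reconstructed 1%

spectra = torch.rand(512, 1024)          # stand-in for preprocessed IQ power
print(int(anomaly_candidates(spectra).sum()), "candidate spectra")
```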

Fig. 2, On the left, the histogram of anomalies, where every 32 sub-frequencies are grouped together and a threshold is indicated with a dashed red line. Bins with a value greater than the threshold are considered anomalous sub-frequencies. On the right-hand side, the original data with the anomalous bins overlaid.
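The binning step from Fig. 2 can be expressed in a few lines; the bin size of 32 comes from the caption, while the threshold value here is made up for illustration:

```python
import numpy as np

def anomalous_bins(flags: np.ndarray, bin_size: int = 32, threshold: int = 5):
    """Group per-sub-frequency anomaly flags into bins; keep the busy bins."""
    usable = len(flags) // bin_size * bin_size
    counts = flags[:usable].reshape(-1, bin_size).sum(axis=1)  # histogram
    return np.where(counts > threshold)[0]   # bins above the dashed red line

flags = np.random.rand(1024) > 0.97          # stand-in per-sub-frequency flags
print("anomalous sub-frequency bins:", anomalous_bins(flags))
```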

This model allowed us to detect anomalous power measurements and associate them with the specific antenna frequencies causing the problems. Because we can process the data so quickly (<10 ms), problems that would normally go undetected can be identified, and alerts can be raised or corrective action taken before any customers or industrial applications are affected.

Fig. 3, Favouring reactiveness vs. accuracy in ML-based anomaly detection

Our general approach has been to favour reactiveness over accuracy. This generates warnings immediately as issues occur, thereby keeping 5G applications safe. Warnings (or alerts) can be used to instruct the 5G application to revert to a “Safe Mode”. This is especially important in transport (e.g., robotaxis), manufacturing (teleoperation of remote equipment) and other industrial use cases. Furthermore, a network orchestrator could easily subscribe to these warnings and, after receiving a high volume of them, send self-healing commands to the affected 5G RAN sites, as sketched below.
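As a toy illustration of that pattern (the topic name, event fields and thresholds are invented; this is not LatenceTech’s API), a warning bus might look like this:

```python
from collections import defaultdict, deque

subscribers = defaultdict(list)

def subscribe(topic, handler):
    subscribers[topic].append(handler)

def publish(topic, event):
    for handler in subscribers[topic]:
        handler(event)

def app_handler(event):
    # the 5G application reverts to a degraded-but-safe behaviour
    print(f"app: latency {event['ms']} ms above budget -> entering Safe Mode")

recent = deque(maxlen=10)
def orchestrator_handler(event):
    # after a burst of warnings from one site, trigger self-healing
    recent.append(event["site"])
    if recent.count(event["site"]) >= 3:
        print(f"orchestrator: self-healing RAN site {event['site']}")

subscribe("latency/warning", app_handler)
subscribe("latency/warning", orchestrator_handler)
for _ in range(3):
    publish("latency/warning", {"site": "MTL-01", "ms": 42})
```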

It turns out that both the CPRI and eCPRI interfaces hold a wealth of information – it’s just a matter of having the high-performance AI-based algorithms to extract and process their hidden secrets.