LatenceTech at Collision!

The LatenceTech team was present at the Collision 2022 event last week in Toronto. We met prospective customers, technology partners and potential investors, as we have now opened our first fundraising campaign.

Here’s a short video clip (<1 min) of our CEO presenting our booth:

Benoit Gendron, LatenceTech’s CEO presenting our booth at Collision

Contact us if you are interested in a similar live demonstration or if you wish to trial our software, which enables precise and continuous measurement of network and application end-to-end latency.

LatenceTech partnering with Ori Industries

Enjoy latency-optimised networks globally, powered by real-time 5G analytics

Get details by clicking on the above link or below.

Download Solution Brief here

Latency is a big challenge for enterprises; it makes it difficult to deliver at scale, and it can be expensive to mitigate. Ori Industries and LatenceTech understand the unique challenges of scaling software businesses, and are partnering to provide a joint edge-native solution that lets network owners manage, analyse and improve latency in private networks for edge deployments.

Solution Overview

Enterprises require scalable, highly reliable and low-latency networks to operate seamlessly at scale. Ori Global Edge offers a single orchestration layer for multiple deployments to be managed across any infrastructure environment. By leveraging LatenceTech’s edge insights and analytics, the solution provides guaranteed low-latency, high-performance connectivity for enterprises. LatenceTech in turn tracks ultra-low latency behaviours in changing environments to analyse variations, detect anomalies, prevent degradations and obtain insights to improve private and 5G networks.

LatenceTech is a 5G-ready solution that provides the intelligence needed to understand and support end-to-end network performance, including network latency and quality of service (QoS). LatenceTech helps businesses improve network speed and latency, deliver real-time analytics of Quality of Experience (QoE), provide optimised networks globally and decrease mobile data usage with predictive network analytics. In short, LatenceTech enables latency-optimised networks with real-time 5G analytics.

Guaranteed Low Latency at Scale

The best way to increase reliability and resilience is to combine edge compute capabilities with advanced analytics. LatenceTech runs best-in-class real-time AI & ML models to identify the best locations across distributed networks, leveraging Ori Global Edge’s highly available platform to orchestrate workloads with guaranteed low latencies.

LatenceTech is an innovative way to track, predict and deploy latency-optimised 5G networks. Using SaaS and AI, and in partnership with Ori, LatenceTech helps mobile operators, telecom vendors and enterprises track, predict and secure the new benefits of 5G and Private LTE cellular networks. LatenceTech collects real-time 5G network data to perform aggregation and advanced analytics using AI & ML tools, and presents key 5G indicators to accurately track customer commitments and SLAs. The joint solution offers a new approach to monetising low-latency 5G connectivity by providing tools to monitor contractual commitments related to quality of service.

Let’s Talk

The joint solution is an advanced analytics and edge computing platform that combines the best of both computing worlds. It’s a cloud-based service that provides real-time data analysis and advanced visualisations, while also allowing you to store data in the cloud or on servers at your location.

To find out more about how LatenceTech and Ori Industries enable operators to run private networks with unparalleled performance, extracting key data from their infrastructure to minimise costly lag and to enable use cases to operate at scale securely, download the Solution Brief.

Mobile apps have “PING” for breakfast!

Why operators should measure what the customer sees…

Mobile operators are under pressure to offer super-responsive connectivity with low latency: whether it’s for your favourite multi-player game or to ensure that Zoom call with your boss runs smoothly.

The problem is, there’s quite a gap between the network latency measured with ICMP (via the “ping” command) and the effective latency experienced by an application, which never actually uses ICMP.

ICMP (specified in RFC 792) operates in a layer just above IP, meaning its messages are carried as IP payload. ICMP is often used to signal errors between hosts, but in addition to the error messages, it defines one of the Internet’s most famous applications: ping. Most TCP/IP implementations support ping directly in the operating system, thereby avoiding the time taken to decapsulate the transport-level segment, identify the correct socket and send the message on to the application process.

These steps are necessary when interacting with applications such as HTTP (the protocol used to request a web page) or HTTPS (when performing a secured transaction). These applications and others, such as FTP, DNS or SMTP, also use either TCP or UDP as a transport-layer protocol, which ICMP does not.
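To make the gap concrete, here is a minimal Python sketch (not LatenceTech’s tool) that measures the ICMP round-trip time with the operating system’s ping utility and compares it with the time an application would spend just completing a TCP handshake. The target host and port are placeholders, and the -c flag assumes a Linux/macOS ping.

```python
# Illustrative sketch: ICMP "ping" latency vs. application-level (TCP) latency.
import socket
import subprocess
import time

HOST = "example.com"   # placeholder target host
PORT = 443             # application port (HTTPS)

def icmp_rtt_ms(host):
    """Round-trip time reported by the OS ping utility (ICMP echo)."""
    out = subprocess.run(["ping", "-c", "1", host],
                         capture_output=True, text=True, check=True).stdout
    # Parse the "time=12.3 ms" field from the ping output.
    for token in out.split():
        if token.startswith("time="):
            return float(token.split("=")[1])
    raise RuntimeError("could not parse ping output")

def tcp_connect_rtt_ms(host, port):
    """Time to complete a TCP three-way handshake, as an application would."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return (time.perf_counter() - start) * 1000

if __name__ == "__main__":
    print(f"ICMP latency: {icmp_rtt_ms(HOST):.1f} ms")
    print(f"TCP handshake latency: {tcp_connect_rtt_ms(HOST, PORT):.1f} ms")
```

On most paths the two numbers will differ noticeably, which is exactly the gap discussed above.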

As wireless low-latency specialists, we make sure that our real-time latency tracking tool addresses these issues by performing different types of measurement campaigns. First, we actively measure network latency using ICMP (knowing that ICMP may be routed differently than payload traffic). We also use other protocols like TWAMP and PTPd to get the most accurate results: important when we are measuring in milliseconds!

Second, we aim to measure the latency as the application sees it. We use an intelligent mix of the HTTP/HTTPS/TCP/UDP protocols to mimic the application’s behaviour. We measure multiple times per second and perform statistical analysis to generate an aggregated view. We accumulate these results to perform advanced analysis like anomaly and trend detection – all in near real-time – and send notifications back to the application (and to OSS/NMS systems) if something is wrong. To complement this, we also perform real-time measurements of bandwidth and reliability (using packet loss counts), in addition to taking snapshots of the path(s) taken by the data. This allows us to perform root cause analyses whenever the latency results are outside accepted boundaries for a prolonged period.
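For illustration, here is a simplified sketch of the kind of per-second aggregation and budget check described above; the function names, the 95th-percentile statistic and the 20 ms budget are assumptions for the example, not the product’s actual logic.

```python
# Simplified sketch: aggregate one second of latency samples and check a budget.
import statistics

def aggregate(samples_ms):
    """Collapse one second of latency samples into summary statistics."""
    return {
        "min": min(samples_ms),
        "avg": statistics.mean(samples_ms),
        "p95": statistics.quantiles(samples_ms, n=20)[-1],  # 95th percentile
        "max": max(samples_ms),
    }

def check_budget(summary, budget_ms=20.0):
    """Return a warning message if the aggregated latency exceeds the budget."""
    if summary["p95"] > budget_ms:
        return f"latency p95 {summary['p95']:.1f} ms exceeds {budget_ms} ms budget"
    return None

# Example: ten application-level samples collected within one second
samples = [16.8, 17.1, 17.4, 16.9, 18.2, 17.0, 25.3, 17.2, 17.5, 16.7]
summary = aggregate(samples)
print(summary, check_budget(summary))
```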

The results of both these campaigns can then be displayed and used for comparison and evaluation.

The diagram below shows a snapshot of LatenceTech’s Compact Dashboard presenting latency results for a mobile gaming application running on the Encqor 5G (NSA) network, using a mmWave RAN site and a Mobile EDGE platform in Montréal, Canada in early 2022.

Real-time dashboard showing live low-latency results from mmWave measurements

Here we can see the difference between network-level latency (10.6 ms) and application-level latency (17.2 ms), using results aggregated over a 5-minute period. The 38% difference ((17.2 − 10.6) / 17.2 ≈ 0.38) is significant! Knowing the network latency is great, but having factual and continuous knowledge about application-level latency is a prerequisite to deploying a time-critical mobile application.

Below is another dashboard example, showing results from a low-frequency band and testing the latency from the device to a server residing, this time, in the cloud.

Real-time dashboard showing live low-latency results from low-band measurements

We can see here the impact of the lower frequencies and longer distances to data processing, underlining the important role that MmWave and Edge computing will have in low-latency communications.

Time-sensitive mobile applications will require reliable, sustained low-latency connectivity, which is best served by 5G technology. But 5G alone will not be enough to deliver the service users need. Tools to perform continuous tracking of quality of service, and in particular network and application latency levels, will be central to providing a reliable, high-performance service. Being able to rely on this quality of service means that new applications can be developed, and these will generate new economic benefits and value for end customers. With the right network performance, think of all the innovative services that will be developed…

Short list of time-critical applications requiring ultra-low latency connectivity

“When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it… your knowledge is of a meager and unsatisfactory kind.”

–Lord Kelvin

We would like to hear your remarks and comments.

Contact us for more information

The Secrets Hidden in CPRI

Written by Parsa Alamzadeh and Siân Morgan

If you want to start a riot in a room full of radio engineers, start talking about CPRI and watch those conference muffins fly.   

Photo by Maurício Mascaro on Pexels.com

The Common Public Radio Interface (CPRI) was defined by a cooperation of major radio equipment vendors to connect two functions in the cellular radio network: it links the Remote Radio Units (RRUs) with the Baseband Units (BBUs) over fibre.

CPRI’s first challenge is the extremely strict jitter and latency budgets – it’s a serial interface and round-trip latency must be kept in the range of 100 µs.

The second issue is that it has become a largely proprietary interface, making mixing and matching BBU and RRU vendors a nearly impossible task – leading to complaints about vendor lock-in.

It turns out that the data carried in the CPRI interface can be used to detect and predict network issues, as long as the correct high-performance unsupervised learning algorithms are applied.

The final problem is with 5G. LTE already requires high-bandwidth links: for example, 10 MHz of spectrum to a 2×2 MIMO antenna requires 1.2 Gbps. In 5G – and especially in the mmWave range – the CPRI data streams are exploding in size, and the network will soon run out of fibre.

Luckily, eCPRI (enhanced CPRI) was introduced to address these problems – and it turns out it can also be used to detect and predict network issues, as we discovered when we applied high-performance, unsupervised learning to huge volumes of power measurements extracted from the interface.

All this data came from the Encqor 5G testbed network: a state-of-the-art 5G platform deployed in five cities in Canada using mid-band (N78 / 3.5 GHz) and experimental mmWave (N261 / 28 GHz) spectrum.

Over 1 Gbps of data was filtered and distributed into 1000 payloads of over 98 000 datapoints each – at a rate of 1 payload per millisecond (ms).

The CPRI data was flying at over 1 Gbps, and to be able to predict anomalies in the network we needed to extract, process, and analyze it in under 10 ms. As a basis for comparison, that’s over 20 times faster than the average person’s reaction time. (Want proof? Check out this little test: https://humanbenchmark.com/tests/reactiontime.)

The useful power measurements we needed to extract were in the IQ portion of the interface (In-Phase and Quadrature signal samples). With some bitshifting we were able to filter out and discard over 850 Mbps of data and distribute what was left into 1000 payloads of over 98 000 datapoints each – at 1 payload per ms.

The fun part came once the raw data was in the payloads. It involved converting the datapoints into complex numbers, reshaping the data, and performing a Fourier transform on it to convert it from the time domain to the frequency domain.

Fig. 1: Preprocessed IQ data, where the x-axis represents the frequency and the y-axis represents the normalized power measurement. This graph was obtained after using the autoencoder to select the anomaly candidates. Blue dots represent normal data and red dots represent anomaly candidates.
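As a rough illustration of that preprocessing step, the following numpy sketch de-interleaves I/Q samples, reshapes them and moves them into the frequency domain; the array sizes, subcarrier count and normalisation are illustrative assumptions rather than the actual CPRI payload layout.

```python
# Illustrative preprocessing sketch: interleaved I/Q samples -> normalized
# power per frequency bin. Sizes and scaling are assumptions, not the real
# CPRI payload format.
import numpy as np

def preprocess_payload(raw, n_subcarriers=1024):
    """Convert interleaved I/Q samples to normalized power per frequency bin."""
    i, q = raw[0::2], raw[1::2]               # de-interleave I and Q samples
    iq = (i + 1j * q).reshape(-1, n_subcarriers)
    spectrum = np.fft.fft(iq, axis=1)         # time domain -> frequency domain
    power = np.abs(spectrum) ** 2
    return power / power.max()                # normalize for the model

# Example with synthetic data standing in for one ~98 000-datapoint payload
raw = np.random.randn(2 * 49 * 1024).astype(np.float32)
power = preprocess_payload(raw)
print(power.shape)   # (49, 1024) frequency-domain power vectors
```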

Detecting which points were truly anomalies was a challenge because we did not have any labels or classifications. Hence, we needed to take an unsupervised learning approach that allowed for high performance and parallel processing. We settled on a Deep Neural Network model, using autoencoders on data that had been brought to a lower dimension. By itself, it was capturing too many points, so we then used binning and thresholding to improve the output as shown in the figures below.

Fig. 2: On the left is the histogram of anomalies, where every 32 sub-frequencies are grouped together, and a threshold is indicated with a dashed red line. Bins with a value greater than the threshold are then considered anomalous sub-frequencies. On the right-hand side is the original data with the anomalous bins overlaid.
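Here is a small sketch of that binning-and-thresholding step, assuming the autoencoder has already produced a per-sub-frequency anomaly score (reconstruction error); the bin size of 32 follows the description above, while the percentile and threshold values are illustrative.

```python
# Illustrative binning-and-thresholding over autoencoder anomaly scores.
import numpy as np

def anomalous_bins(errors, bin_size=32, threshold=5):
    """Group anomaly candidates into bins of sub-frequencies and keep only
    the bins whose candidate count exceeds the threshold."""
    candidates = errors > np.percentile(errors, 99)        # autoencoder flags
    n_bins = candidates.size // bin_size
    counts = candidates[: n_bins * bin_size].reshape(n_bins, bin_size).sum(axis=1)
    return np.where(counts > threshold)[0]                 # indices of bad bins

# Example: synthetic reconstruction errors for 1024 sub-frequencies, with a
# burst of anomalies injected around sub-carriers 512-543
errors = np.random.rand(1024)
errors[512:544] += 10.0
print(anomalous_bins(errors))   # -> the bin covering the injected burst
```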

This model allowed us to detect anomalous power measurements and associate them with the specific antenna frequencies that were causing the problems. Because we can process the data so quickly (<10ms), problems that would normally go undetected can be identified and alerts or corrective action can be taken before any customers or industrial applications are affected.

Fig. Favoring reactiveness vs. accuracy in ML-based anomaly detection

Our general approach has been to favour reactiveness over accuracy. This generates warnings immediately as issues occur, thereby keeping 5G applications fully secured. Warnings (or alerts) can be used to instruct the 5G application to revert into “Safe Mode”. This is especially important in transport (e.g., robotaxis), manufacturing (teleoperation of remote equipment), and other industrial use cases. Furthermore, a network orchestrator could easily subscribe to these warnings and, after receiving a high volume of them, send self-healing commands to the affected 5G RAN sites.
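As a purely illustrative example (not LatenceTech’s API), this sketch shows how an application or orchestrator might consume such warnings and fall back to a “Safe Mode” once too many arrive in a short window; the class name, warning count and time window are assumptions.

```python
# Illustrative consumer of latency warnings with a simple Safe Mode fallback.
import time
from collections import deque

class SafeModeGuard:
    def __init__(self, max_warnings=3, window_s=10.0):
        self.max_warnings = max_warnings
        self.window_s = window_s
        self.events = deque()
        self.safe_mode = False

    def on_warning(self, message):
        """Record a warning and switch to Safe Mode if the rate is too high."""
        now = time.monotonic()
        self.events.append(now)
        # Drop warnings that fell outside the sliding window.
        while self.events and now - self.events[0] > self.window_s:
            self.events.popleft()
        if len(self.events) >= self.max_warnings and not self.safe_mode:
            self.safe_mode = True
            print(f"Safe Mode engaged after warning: {message}")

guard = SafeModeGuard()
for w in ["p95 latency > 20 ms"] * 3:   # three warnings in quick succession
    guard.on_warning(w)
```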

It turns out that both the CPRI and eCPRI interfaces carry a wealth of information – it’s just a matter of having the high-performance AI-based algorithms to extract and process their hidden secrets.

Shopping for latency?

Why IT’s wish-list just got a little longer

Written by Siân Morgan

When the IT department starts shopping around for a telecommunication services provider, it has a lot of things on its mind.  It’s looking at connectivity requirements: Wi-Fi on the shop floor, mobile phones for the sales team, wired Ethernet in the office. Then come the critical applications it needs to run the business:  soft-phones, on-line collaboration, and network analytics.  Finally, it is balancing quality against price before laying down the cash. What does quality mean to the IT department today?  Coverage.  Throughput.  Reliability. Chances are that low latency is not on the list…yet.

Photo by Pixabay on Pexels.com

Pretty soon, low latency is going to work its way up the list of factors influencing telecom purchase decisions – especially for enterprises whose businesses are becoming simpler and more productive thanks to 5G.

The standards are promising…

The 5G standards promised low latency (75% lower than LTE) but that’s just the beginning: 6G is coming up with even more drastic targets (1 microsecond in the radio portion!). When businesses start to revolutionize their operations with wireless connectivity and new services such as Augmented or Virtual Reality (think assisted trouble-shooting and virtual training experiences), latency problems in the network are going to become glaringly obvious – and potentially damaging to the bottom line.

…but what about the reality?

Operators have mostly been deploying a first version of 5G – anchoring the radio to LTE and using the LTE core network (called Non-Standalone, or NSA). But even with these early-technology deployments there is evidence that wireless network latency is creeping down.

Benchmarking has shown that 5G can improve round-trip latency by 15 ms, but depending on the distance to the nearest server, the total latency can be over 100ms.  With cloud gaming applications requiring under half of that for a decent experience, let’s say there’s room for improvement.

In an August 2021 paper, researchers from the Universities of Minnesota and Michigan performed tests of commercial 5G networks and were able to demonstrate that 5G improved round-trip latency by between 6 and 15ms, depending on the radio band.   

RootMetrics performed other 5G network benchmarking and measured latencies of between 22 and 42 ms in Seoul and between 46 and 127 ms in Los Angeles. To put this in context, a decent cloud gaming experience needs a latency of between 10 and 50 ms. Let’s say there’s still room for improvement! In a bid to help commercial networks reduce their end-to-end latency, new technologies are being deployed and tested in leading-edge labs across the world.

mmWave is not available in all countries, but testing has shown that it can reduce latency by between six and eight milliseconds, compared to a low band 5G deployment

In Montreal’s outdoor 5G lab, Encqor, LatenceTech recently performed latency tests with experimental mmWave spectrum. The results from late 2021 demonstrate a sustained average ultra-low latency of 10 ms measured from the device to the edge, using ICMP. The application-level latency was also measured using TCP/UDP and HTTP and averaged about 17 ms, all while offering over 1 Gbps of throughput, as shown below:

The results highlight the great performance of mmWave. These high-frequency bands are above 20 GHz and have a lower end-to-end delay than the low bands because of wider carriers and shorter transmission time intervals. (For a more in-depth discussion on why, check out this blog.)

Not all countries have auctioned off their mmWave spectrum yet, but as the research paper from the Universities of Minnesota and Michigan discovered, commercial deployment of these high frequency bands can make a real difference. They found that mmWave provided a six to eight millisecond reduction compared to a low-band 5G deployment.  Qualcomm estimates that there are over 120 5G mmWave devices, including smartphones, PCs, CPEs and hotspot devices.

Meanwhile, there are other cutting-edge technologies that will soon lower 5G latency even further, fueling both critical business applications and the cloud-gamer experience.

Closer is better

Multi-Access Edge Computing (MEC) involves installing computing resources at distributed geographical locations, closer to where the users need their service. When some of the critical data processing is done near the end-user, network response times become lower. It is still early days and both the technology and the business case for MEC need to be fine-tuned. StarlingX  is a great example of an initiative (from the Open Infrastructure Foundation) designed to solve the technical problems with delivering ultra-low-latency use cases on a virtualized edge infrastructure.

Photo by Moose Photos on Pexels.com

Streamlining and optimizing

To be “truly 5G”, operators are going to move away from the LTE network and deploy 5G Standalone (SA) cores. In fact, T-Mobile and Rogers already have, but since it also requires support in the handset, the impact on end-users has been limited so far. The standalone core will allow for network slicing, or the dedication of network processing to services with similar traffic profiles. This means that low-latency services can make use of network functions optimized for their strict performance requirements. During testing, T-Mobile stated that they were able to obtain 40% improvements in latency on their SA core.

“…with cutting-edge AI technologies…enterprises will be able to trust that the network can deliver the performance they need” says Benoit Gendron, LatenceTech’s CEO.

What gets measured gets… paid

Of course, this latency revolution means more than just gluing together a bunch of technical acronyms. Companies whose critical operations depend on low-latency communications need consistent performance from one end of the pipe to the other and they need it at the application layer. Most importantly they need proof that the service they are paying for is delivering the quality they were promised: the throughput, the up-time, and the end-to-end latency. Measuring and reporting on these performance indicators is going to be an important piece of the solution.  Says Benoit Gendron, CEO of LatenceTech, “with cutting-edge AI technologies we can identify trends in 5G latency which will help operators apply predictive maintenance and resolve issues before they become apparent. Ultimately, being able to measure, predict and report on latency will allow operators to sell it more effectively, and enterprises will be able to trust that the network can deliver the performance they need”.

References

[1]          A. Narayanan et al., “A variegated look at 5G in the wild: performance, power, and QoE implications,” in Proceedings of the 2021 ACM SIGCOMM 2021 Conference, Virtual Event USA, Aug. 2021, pp. 610–625. doi: 10.1145/3452296.3472923.

[2]          “RootMetrics_South_Korea-5G_report-1H-2021-final.pdf.” Accessed: Mar. 14, 2022. [Online]. Available: https://assets.ctfassets.net/ob7bbcsqy5m2/5iBWRRoiEgnEVxVp9WcXMK/59725d8bb80c3240b2d9ea1c382ccbea/RootMetrics_South_Korea-5G_report-1H-2021-final.pdf

[3]          “RootMetrics_Gaming_Report_Final.pdf.” Accessed: Jan. 11, 2022. [Online]. Available: https://downloads.ctfassets.net/ob7bbcsqy5m2/4xIeqsGvxfw4fejLt2ChdV/e07972594acb5f9f86b4cfac322d4cee/RootMetrics_Gaming_Report_Final.pdf

[4]          “Open Source Edge Cloud Computing Architecture – StarlingX.” https://starlingx.io/ (accessed Mar. 14, 2022).

[5]          “T-Mobile Launches World’s First Nationwide Standalone 5G Network,” Aug. 04, 2020. https://www.businesswire.com/news/home/20200804005636/en/T-Mobile-Launches-World%E2%80%99s-First-Nationwide-Standalone-5G-Network (accessed Jan. 11, 2022).

[6]          Qualcomm, Fierce Wireless, “Millimeter wave is the missing piece of the 5G puzzle,” Jan. 25, 2022.