
GUEST NOTE: New and emerging digital experiences highlight the importance of understanding and optimizing how traffic arrives at, and departs from, an end user’s device.

A lot has happened in the last two years that has collectively put latency – and especially low and ultra-low latency – on the map.

Latency – the lag between when a packet leaves the streaming source and when it arrives at a device – manifests in many forms, the most common being lag, dropped frames, buffering and, with these, downscaled content quality.

To be clear, latency was already a topic of interest for quite a few cohorts of users.


On the consumer side, this included gamers, viewers of ultra-high-definition streaming video such as live sports, and regional and remote users on satellite connections.

On the enterprise side, latency could be an issue for real-time use cases, such as high-frequency or algorithmic trading, or for data collection at remote “edge” locations such as mining sites and gas platforms.

However, latency was really brought into the mainstream by the work-from-home revolution, which introduced new metrics for understanding the complex, interconnected networks used to send and receive user traffic – in particular, the impact of different times of day on latency.

Latency and its impact on web applications are even regularly measured by the Australian Competition and Consumer Commission.

What you need to understand about latency is that it is important not only for the performance of today’s applications, but also for those of tomorrow.

In particular, one goal shared by network engineers and end users is low – or, ideally, ultra-low – latency connections to the Internet. As PwC Australia remarks, this is already happening to some extent thanks to the growing footprint of 5G networks.

The truth is, ultra-low latency connections will have to become even more ubiquitous for the era of the Metaverse. High-performance connections are likely to be crucial for broad participation in environments that rely heavily on real-time interaction.

Any performance lag between users would not only be very noticeable, but would also undermine the promise of the experience.

If latency impinges on these experiences, customers will switch to other providers, watch different events, or engage with other people and businesses. The real benefit of ultra-low latency, then, is that content, user experiences, and data are delivered in near real-time, forming the foundation for better user experiences and enabling new business models.

What does latency look like?

For the purposes of this article, I will explain latency, its challenges, and its opportunities, primarily in terms of consumption.

The content pipeline, from creation to transmission to eventual reception by the consumer device, requires processing, bandwidth and time.

Viewing a live event on a consumer device can often lag the live action by up to 10 seconds.

While the average HD cable broadcast experiences 4-5 seconds of latency, about a quarter of content networks experience 10-45 seconds.

In the absence of a current standard, low-latency streaming generally means that video is delivered to a consumer’s screen less than 4-5 seconds after the live action, while ultra-low latency means anything below that.

So-called “glass-to-glass” latency – from camera lens to viewer screen – is often around 20 seconds, while high-definition cable TV content sets the benchmark for low latency at about 5 seconds.
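As a rough illustration, that 20-second figure can be broken into a simple latency budget. The stage timings below are hypothetical assumptions for a typical HTTP live stream, not measured values:

```python
# Hypothetical glass-to-glass latency budget for a typical HTTP live stream.
# Stage timings are illustrative assumptions, not measurements.
stages = {
    "capture and production": 0.5,   # camera, switching, graphics
    "encoding": 2.0,                 # compressing video into segments
    "packaging and origin": 1.5,     # segmenting, manifest updates
    "CDN propagation": 2.0,          # edge caches pulling fresh segments
    "player buffer": 3 * 4.0,        # players commonly buffer ~3 segments
    "decode and render": 0.5,
}

total = sum(stages.values())
for stage, seconds in stages.items():
    print(f"{stage:<24} {seconds:>5.1f} s")
print(f"{'total (glass-to-glass)':<24} {total:>5.1f} s")  # ~18.5 s, near the ~20 s figure
```

In this sketch the player buffer dominates the total, which is why segment and buffer sizing feature so heavily in the latency-reduction discussion below.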

Reducing latency

There are many causes of latency in broadcast and distribution networks. Simply encoding a live video stream into packets to be sent over a network introduces delay. Add delivery through a variety of third-party networks to the end user’s device, and the latency escalates. Different protocols also have different strengths and weaknesses, and reducing latency isn’t always the primary consideration.

Latency can be reduced by adjusting the encoding workflow for faster processing, but this pushes inefficiencies – and higher costs – elsewhere. Smaller network packets and video segments reduce latency but incur proportionally more overhead and use bandwidth less efficiently, while larger segments improve overall bandwidth efficiency at the expense of a real-time experience.
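A minimal sketch of that trade-off, assuming an HLS-like player that buffers three segments before playback and a fixed, hypothetical per-segment overhead:

```python
# Sketch: how segment duration moves the latency/overhead dial.
# Assumes an HLS-like player that buffers three segments before playback,
# and a fixed per-segment overhead (requests, headers, manifest updates).
PER_SEGMENT_OVERHEAD_BYTES = 2_000   # illustrative assumption
BITRATE_BPS = 5_000_000              # 5 Mbps video (assumption)
BUFFERED_SEGMENTS = 3

for segment_seconds in (1, 2, 4, 6):
    payload_bytes = BITRATE_BPS / 8 * segment_seconds
    overhead_pct = 100 * PER_SEGMENT_OVERHEAD_BYTES / payload_bytes
    min_latency = BUFFERED_SEGMENTS * segment_seconds
    print(f"{segment_seconds}s segments: "
          f"~{min_latency}s minimum buffering latency, "
          f"~{overhead_pct:.2f}% per-segment overhead")
```

Shrinking segments lowers the buffering floor roughly linearly, while the relative overhead grows – exactly the balance a content publisher has to weigh.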

The media capture and encoding workflow is a good place to look for latency reduction opportunities. A well-tuned workflow can quickly deliver encoded video segments, but minimizing processing time is not the only goal: spending more time on encoding can often produce more compact streams, reducing overall network latency. There is thus a dial between processing efficiency and transport-network efficiency, and content publishers need to find the right balance.
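To make that dial concrete, here is a sketch comparing hypothetical encoder presets; the speed and bitrate figures are illustrative assumptions, loosely modelled on the kind of speed-versus-size trade-off x264-style presets expose:

```python
# Sketch of the processing-vs-transport dial: slower encoder presets spend
# more time compressing but produce smaller streams that cross the network
# faster. All preset figures below are hypothetical.
NETWORK_BPS = 10_000_000  # 10 Mbps delivery path (assumption)

presets = {
    # name: (encode seconds per 1s of video, encoded bits per 1s of video)
    "ultrafast": (0.05, 8_000_000),
    "medium":    (0.20, 5_000_000),
    "veryslow":  (0.60, 3_500_000),
}

for name, (encode_s, bits) in presets.items():
    transfer_s = bits / NETWORK_BPS
    print(f"{name:<9} encode {encode_s:.2f}s + transfer {transfer_s:.2f}s "
          f"= {encode_s + transfer_s:.2f}s per second of video")
```

In this illustration the middle preset delivers a second of video soonest: the fastest encode loses on transfer time, and the slowest produces the smallest stream but not by enough to repay its extra processing cost.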

While an efficient method of recording, encoding, and initially delivering content can help eliminate inefficiencies and latency early in the process, much of the actual latency occurs during delivery. Minimizing delivery latency requires planning and optimization, as well as accepting the trade-offs between latency and cost.

Content companies need to find solutions for both the system front-end and the network delivery components to achieve the lowest possible latencies.