8 posts tagged with "technical"

Cloud Video Content Protection: 3 Proven Methods (2023)

· 11 min read
Marcello Violini
Founder at Teyuto

Did you know that cloud video content consumption has jumped by 71% in the past year alone? With this rapid growth, protecting your video content is more important than ever. In fact, according to a recent study, 45% of businesses experienced a cloud data breach in 2022. Keep your company from becoming a statistic: protect your valuable digital assets now.

With the wholesale shift to online education in the post-pandemic era, demand for short videos and for film and television dramas has also grown significantly.


For example, course enrollments on the e-learning platform Coursera increased by 444% in 2020 compared to the previous year.

Similarly, according to a survey conducted by PwC in 2021, 46% of respondents said they had subscribed to at least one new streaming service during the pandemic.

The COVID-19 pandemic has significantly increased demand for online education and OTT (over-the-top) platforms, transforming how we learn and consume content. In addition, millions of new subscribers and billions of dollars in revenue are anticipated for the OTT market in the future.

During the coronavirus lockdown, online video consumption set new records. And now this trend is gaining momentum.

According to a report by Datareportal, the global consumption of online video has increased by 92 percent in 2023.

75% of American consumers surveyed by Deloitte said they had increased streaming video services during the pandemic.

To build high-quality video services that viewers are willing to pay for, it is necessary not only to offer an unrivalled user experience but also to ensure that video distribution is both convenient and secure.

The Evolution of Content Protection

Mass video-on-demand (VoD) distribution began long before browsers could play secure video streams. Cable networks were the first to offer access-controlled content and the first to face the problem of protecting it. The protection mechanisms created for them have demonstrated a reasonable degree of reliability over 30 years.

“At least 290,000 jobs and $29 billion in lost revenue in the film and television industry alone are directly attributable to digital theft, which harms creative industries and the people who work in them.”

At that time, the main problem was that people could freely connect to cable networks and access content that was initially closed to them or available only by subscription. As a solution, set-top boxes with unique cards were released; they identified the user and decoded the encrypted signal.

The need to protect content on the Internet appeared only later. At first, the network hosted only free video content, and large copyright holders were reluctant to upload their material, fearing the lack of reliable protection systems. With the development of paid video streaming, each platform had to organize access control on its own.

Why do companies prefer CDN protection?


Today, almost any service that provides access to content uses a CDN (Content Delivery Network): a geographically distributed infrastructure that delivers content quickly to the users of websites and web services.

“Teyuto's streaming security solutions protect VOD content from piracy and other illicit use, which can be crucial for content creators and distributors.”

The servers that make up a CDN are placed so as to minimize response times for service users. The CDN is often a third-party solution, and several providers (Multi-CDN) are frequently used simultaneously. In that case, anyone with a link can access the content hosted on the provider's nodes.

This is where the need arises to differentiate access rights to content in a distributed, loosely coupled system that is, moreover, open to everyone on the Internet.

Best Solutions for Protecting Streaming Video Content

Unauthorized Access to Content

There are several options for accessing restricted content. The most popular methods can be divided into two categories.

At the authorization level

  • Sharing account credentials with third parties: relatives, friends, colleagues, and the audiences of public communities and forums.
  • Selling or leasing, where users resell or temporarily grant access to their accounts.
  • Leaks and theft of account data.

At the level of content delivery

  • Unauthorized video download: via a direct link from the page code or the developer tools console; using browser plugins for downloading from video hosting sites; or using separate software (for example, VLC or FFmpeg);
  • Screencast using both software and hardware.

If, at the authorization level, it is clear how to provide protection (for example, by blocking two or more simultaneous sessions from one user, using single sign-on (SSO), or monitoring suspicious activity), then problems may arise with the second option. Let's take a closer look at content protection itself.

One of the well-known solutions for protecting streaming video content is HLS protocol encryption (HTTP Live Streaming). Apple called it HLS AES and proposed it for securely transferring media files over HTTP.

Encrypted video stream transmission


Encrypted transmission of video streams is a method to protect video content from illegal acquisition and tampering, usually through the following steps:

  1. Select an encryption algorithm: Choose a suitable algorithm to encrypt the video stream so the data cannot be stolen and decrypted in transit. Commonly used algorithms include AES and RSA.

  2. Signed URLs: Signed URLs are a prevalent security measure for protecting online video content. When a user requests access to a video, the server generates a unique URL with a time-limited signature, allowing the user to view the video for a limited time. After the time limit expires, the URL becomes invalid, and the user can no longer view the video (a code sketch follows this list).

  3. Generate key: Generate an encryption key according to the selected encryption algorithm. The server usually completes this process, and the generated key will be distributed to the sender and receiver of the video stream.

  4. Encrypted video stream: The sender encrypts the cloud video with an encryption algorithm and key to protect the video content from illegal acquisition.

  5. Transmission of encrypted video stream: The encrypted video stream is transmitted through a secure channel, and protocols such as HTTPS and SSL can be used to ensure the security of the transmission process.

  6. Decrypt the video stream: The receiving end uses the same algorithm and key to decrypt the received video stream and restore the original content.
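As a rough illustration of step 2, here is a minimal sketch of signing and verifying a time-limited URL in TypeScript. The signing key, domain, and parameter names are hypothetical, not any specific CDN's scheme:

import { createHmac } from "crypto";

// Hypothetical secret shared between the signing service and the CDN edge.
const SIGNING_KEY = "replace-with-a-real-secret";

// Build a time-limited signed URL for a video manifest.
function signVideoUrl(path: string, ttlSeconds: number): string {
  const expires = Math.floor(Date.now() / 1000) + ttlSeconds;
  const sig = createHmac("sha256", SIGNING_KEY)
    .update(`${path}:${expires}`)
    .digest("hex");
  return `https://cdn.example.com${path}?expires=${expires}&sig=${sig}`;
}

// The edge re-computes the HMAC and rejects expired or tampered links.
function verifySignedUrl(url: URL): boolean {
  const expires = Number(url.searchParams.get("expires"));
  const sig = url.searchParams.get("sig");
  if (!expires || !sig || expires < Math.floor(Date.now() / 1000)) return false;
  const expected = createHmac("sha256", SIGNING_KEY)
    .update(`${url.pathname}:${expires}`)
    .digest("hex");
  return sig === expected;
}

console.log(signVideoUrl("/video/test.m3u8", 300)); // link valid for 5 minutes

Because the signature covers both the path and the expiry time, changing either one invalidates the link.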

Although encrypted transmission protects the video content, it also has some impact on transmission efficiency and latency, so a trade-off must be made between security and efficiency.

At the same time, factors such as transmission quality, network bandwidth, and device performance need to be considered in practical applications. Subsequently, an appropriate encryption scheme should be selected comprehensively.

Three Giant DRM Technologies in the Multimedia Industry

Three main DRM (Digital Rights Management) technologies have firmly taken positions in multimedia content copy protection technologies:

  • Microsoft's PlayReady
  • Google's Widevine
  • Apple's FairPlay

Two streaming protocols are widely used today. These are HLS, introduced by Apple in 2009, and the more recent MPEG-DASH, the first adaptive bitrate video streaming solution to achieve international standard status.

The coexistence of the two protocols and the increased need to play online videos in browsers have pushed for the unification of the content protection system.

Therefore, in September 2017, the World Wide Web Consortium (W3C) approved Encrypted Media Extensions (EME), a specification for the interaction between browsers and content decryption modules (CDMs), based on five years of development by Netflix, Google, Apple, and Microsoft. EME provides the player with a standardized set of APIs for interacting with the CDM.

A complete DRM system and the three pillars of content encryption

Introducing DRM and authorization mechanisms is the most reliable way to protect against unauthorized access to video content. Follow three recommendations to avoid compatibility issues and minimize security holes.

Encrypt video content with multiple keys

The original video file is divided into several small parts, each encrypted with a separate key. Content freely intercepted from a CDN is therefore not enough on its own: to decrypt it, the keys must be obtained, and the device has to request them.


In this case, each received key is valid only for certain fragments, not for the whole catalog. Nothing changes on the user side: the player receives decryption keys as the user watches the video.

This alone is enough to stop the video from being downloaded in the ways most obvious to ordinary users, for example, through the VLC player, FFmpeg, or the corresponding browser extensions.
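To make the idea concrete, this is roughly what key rotation looks like inside an HLS media playlist; the segment names and license-server URIs are illustrative. Each #EXT-X-KEY tag tells the player to fetch a new key before decrypting the segments that follow:

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:6
#EXT-X-KEY:METHOD=AES-128,URI="https://license.example.com/keys/k1"
#EXTINF:6.0,
segment-001.ts
#EXTINF:6.0,
segment-002.ts
#EXT-X-KEY:METHOD=AES-128,URI="https://license.example.com/keys/k2"
#EXTINF:6.0,
segment-003.ts
#EXT-X-ENDLIST

Intercepting the playlist and segments is useless without the responses from the key URIs.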

Receive keys to decrypt content through the license server.

At a minimum, all requests to the license server must go over a secure HTTPS channel to prevent MITM attacks (key interception). Ideally, a one-time password (OTP) should be used as well.

Issuing keys to anyone who simply requests a content or key ID is poor security. To restrict access, the user must be authorized on the site and identified, and that identity must be transmitted along with the content or key ID. This can be a session ID or any other token that uniquely identifies the user.

The license server checks whether that token has access to the content and issues the encryption keys only on a positive response. Usually, the license server calls your API and caches the result in its session to reduce the load on the service.
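A minimal sketch of that flow in TypeScript, assuming a hypothetical entitlement endpoint and an in-memory cache (a production DRM license server wraps the key in a license message, but the authorization logic is the same):

// Hypothetical key-store lookup; in practice the packager or KMS provides this.
async function lookupKey(contentId: string, keyId: string): Promise<Uint8Array> {
  return new Uint8Array(16); // placeholder 128-bit content key
}

const entitlementCache = new Map<string, boolean>();

async function isEntitled(token: string, contentId: string): Promise<boolean> {
  const cacheKey = `${token}:${contentId}`;
  const cached = entitlementCache.get(cacheKey);
  if (cached !== undefined) return cached; // cached to reduce load on your API

  // Ask the platform's own API whether this session may watch this content.
  const res = await fetch(
    `https://api.example.com/entitlements?token=${encodeURIComponent(token)}&content=${encodeURIComponent(contentId)}`
  );
  const allowed = res.ok && (await res.json()).allowed === true;
  entitlementCache.set(cacheKey, allowed);
  return allowed;
}

async function handleKeyRequest(token: string, contentId: string, keyId: string) {
  if (!(await isEntitled(token, contentId))) {
    throw new Error("403: this token has no access to the content");
  }
  return lookupKey(contentId, keyId);
}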

Limit the lifetime of keys.

A non-persistent license is used, valid only within the current session. The user's device requests a license before each playback or as the video plays.

With persistent keys, the user keeps access to the content even after that access has been revoked, so they should be reserved for videos that are meant to be available offline.

Other ways to protect content

Protecting video content is not limited to encryption and introducing a DRM system. You can add more ways to them.

Overlay dynamic watermarks in the player or when transcoding the video. This can be a company logo or a user ID by which the leaker can be identified. The technique cannot prevent video content from leaking, but it acts as a psychological deterrent.

DNA coding, where the video is encoded in 4-5 different variants and each user gets their own sequence of variants. The process can be divided into two parts. Initially, the video chain is generated by embedding characters in each frame of the original uncompressed content.

The frames are encoded and sent to the server for storage. Next, the user requests secure content from the provider, which associates a digital fingerprint with the client. It can be created in real-time or taken from a database containing a character string related to the video chains.

These symbols are used to create watermarked videos by switching between groups of images from video chains.

In a Nutshell

Safeguarding streaming video content prevents piracy, theft, and illegal access. Ways to protect streaming video:


DRM: Protect your video content with DRM, which makes screen recording and other forms of piracy much harder.

Watermarking: Put a visible or invisible watermark on your video footage to identify the source and track illicit distribution.

Geoblocking: Restrict your videos to specified countries or regions. This blocks users outside your licensed territories.

Safe Video Hosting: Choose a platform with encrypted data transfer, multi-factor authentication, and frequent security updates.

Restricted Access: Require login credentials, access codes, or other authentication to restrict streaming video material to approved users.

Legal Protections: Secure copyrights, trademarks, and legal action against piracy and infringement to protect your streaming video material.

Summarizing

Security technologies are becoming more advanced without getting in the user's way, while also simplifying life for developers. Teyuto provides its users with signed URLs as a security measure, ensuring that only authorized users can access their videos and that their content is protected against unauthorized sharing and distribution.

Teyuto offers a comprehensive video streaming solution for businesses and individuals looking to broadcast their content to a larger audience. It provides encryption, digital rights management (DRM), and streaming privacy to protect VOD content.

How to Create a Mobile Streaming App for Android? DIY Guide

· 16 min read
Marcello Violini
Founder at Teyuto

Live broadcasts from mobile devices allow you to keep in touch with your audience wherever you are. But developing an application is a challenging task. It involves several different processes, professionals, and technologies.

If you have an idea and want to implement it, we have the perfect action plan for you. In this article, we'll look deeper at how to create your own mobile streaming or live streaming app on Android.

Streaming Protocols

Streaming protocols are used to send video and audio over public networks. One of the most popular protocols for delivering streams is RTMP. Most streaming platforms support its reception.

It is reliable and great for live broadcasts due to its low latency and TCP-based data packet relaying.

Streaming platforms offer popular and scalable broadcast formats - HLS and DASH - to distribute and play content on users' devices. Android devices have a native Media Player that supports HLS playback. So, let's focus on this protocol.

Continue reading the article to learn everything you need to build your app from scratch.

How to create a live streaming application: a step-by-step guide?

First of all, it doesn't matter if you are a big company or a small one. App development is relatively easy now. And to help you with that, we've prepared a detailed step-by-step guide of everything you need to know to learn how to create a successful app.

1. Define Application Goals

Ultimately, each product is designed to be a solution. So, what problem will your app solve? This is the basic answer to understanding your app's value proposition, which is why your future users will install it on their smartphones.

It doesn't matter if another solution exists to the same problem. The goal is to make your proposal unique to stand out from competitors.

Therefore, study the market and competition before building a solution. Analyze the competitive potential of other solutions related to your objective. This step ensures that you gain essential information to understand future users better.

2. Define Your App's Target Audience

For the uninitiated, your target audience is the most likely to be interested in your product or service. If you have a Delivery Application, your target audience is restaurant owners, food delivery people, and your customers.

If you have a toy sales application, your target audience is parents, grandparents, or anyone wanting to buy a child a gift. However, if you are building an urban mobility application, you must target commuters and drivers seeking to work on platforms. They will be your target audience.

What is video streaming delivery?


Video streaming delivery refers to how video content is transmitted and played in real-time over the internet without downloading the entire file before viewing. Simply put, you don’t have to wait for the entire movie to download before you can watch it. You can view it as it buffers.

There are two main ways of thinking about streaming video.

1. Live Streaming

2. Progressive download

Live Streaming involves the real-time delivery of video content over the internet. Several streaming protocols are commonly used for this purpose, including HTTP Live Streaming (HLS), Dynamic Adaptive Streaming over HTTP (DASH), and Real-Time Messaging Protocol (RTMP).

Below is a brief overview of each protocol.

1. HTTP Live Streaming (HLS)

HLS is an adaptive bitrate streaming protocol developed by Apple. It segments video files into smaller chunks and serves them over HTTP. HLS can adapt to network conditions by switching between different quality levels during playback.

Pros
  • Adaptive bitrate streaming, which provides a better viewing experience
  • Broad compatibility with various devices and platforms
  • It uses standard HTTP infrastructure, which simplifies content delivery
Cons
  • Slightly higher latency compared to protocols like RTMP
  • It may require additional encoding processes to create multiple quality levels

2. Dynamic Adaptive Streaming over HTTP (DASH)

DASH is another adaptive bitrate streaming protocol that uses HTTP for video delivery. Like HLS, DASH allows video content to be served in different quality levels, adapting to the viewer's network conditions.

Pros
  • Adapts to the viewer's network conditions, providing a better streaming experience
  • Compatible with a wide range of devices and platforms
  • Codec-agnostic, which enables content providers to use various video codecs
Cons
  • It may require additional encoding processes to create multiple quality levels
  • Not as widely supported as HLS on specific devices, such as Apple products

3. Real-Time Messaging Protocol (RTMP)

RTMP is a protocol for low-latency video streaming initially developed by Adobe Systems. RTMP maintains a persistent connection between the server and the client, allowing for faster video content delivery. However, RTMP is being replaced by modern HTTP-based protocols like HLS and DASH.

Pros
  • Low-latency streaming, ideal for real-time applications such as live events and gaming
  • Reliable video delivery, even over poor network connections
Cons
  • Limited compatibility with modern browsers and devices, as it requires Flash or additional software/plugins for playback
  • Less efficient in terms of bandwidth usage compared to adaptive bitrate protocols like HLS and DASH

Each streaming protocol has advantages and disadvantages, so choosing the one that best fits your needs and audience is essential.

Progressive Download

Progressive download, also known as pseudo-streaming, delivers video content that allows viewers to start watching the video while it's still being downloaded. The video is progressively downloaded and buffered, enabling playback to start before receiving the entire file.

Pros

  • Faster initial playback, as viewers don't need to wait for the entire file to download
  • Compatible with most media formats and players

Cons

  • Since the entire video is downloaded, it is easier to redistribute copyrighted content without permission, which raises copyright concerns.
  • The playback quality is not adaptive to the viewer's network conditions, which may result in buffering or poor video quality if the connection is slow.

Overall, Progressive Download is suitable for short videos or when adaptive streaming is not required. However, for live events and adaptive streaming experiences, using streaming protocols like HLS, DASH, or RTMP is recommended.

In the next section, we will discuss video streaming using HLS, which is the most widely supported video streaming protocol.

How to integrate video streaming using HTTP Live Streaming?

HLS streams video by creating a media playlist that splits the video content into smaller segments. These chunks are indexed in m3u8 files. In other words, an m3u8 file is like a playlist of streaming video segments.


However, even if you download the m3u8 file, you cannot play it offline. This is because it simply contains the location of each segment (a URL or absolute path) and points the player to it.
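For reference, a minimal on-demand m3u8 playlist looks roughly like this (the segment names are illustrative):

#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:10
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:10.0,
segment-000.ts
#EXTINF:10.0,
segment-001.ts
#EXTINF:9.5,
segment-002.ts
#EXT-X-ENDLIST

The player downloads each .ts segment in order, so the playlist itself contains no video data.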

Integration using m3u8 files

HTTP Live Streaming is suitable for both live and on-demand delivery.

Live distribution means seeing the content in real time as it gets generated and distributed. For example, a live concert is streamed online.

On-demand delivery means watching ready-to-stream content irrespective of when it was generated. For example, streaming a recorded or post-production version of a concert after it has been concluded.

You can play the streamed file using HTML by specifying the file test.m3u8 as the source.

<video src="./video/test.m3u8" controls></video>

This assumes the file test.m3u8 is placed in the video folder at the same level as the HTML file.

*To operate the following code, it is necessary to prepare the necessary files in the location of ./video/test.m3u8.

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title></title>
</head>
<body>
<video src="./video/test.m3u8" controls>
</video>
</body>
</html>
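Note that bare video-tag playback of m3u8 works natively mainly in Safari and on iOS; other browsers typically need a small player library. Here is a minimal sketch using the open-source hls.js library (the CDN URL shown is one common way to load it):

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<script src="https://cdn.jsdelivr.net/npm/hls.js@latest"></script>
</head>
<body>
<video id="player" controls></video>
<script>
  var video = document.getElementById('player');
  if (video.canPlayType('application/vnd.apple.mpegurl')) {
    // Safari and iOS play HLS natively.
    video.src = './video/test.m3u8';
  } else if (Hls.isSupported()) {
    // Other browsers play HLS through Media Source Extensions.
    var hls = new Hls();
    hls.loadSource('./video/test.m3u8');
    hls.attachMedia(video);
  }
</script>
</body>
</html>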

Pseudo-streaming using progressive download is possible by specifying the MP4 video file in the source like this.

<source src="./video/test.mp4">

Large videos need to be split, but short videos can be played as mp4 files using the video tag.

*For the following code to work, you need to prepare the necessary files at the location of ./video/test.mp4.

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title></title>
</head>
<body>
<video controls>
<source src="./video/test.mp4">
</video>
</body>
</html>

Using QuickTime Player

It can also be played with QuickTime Player.

<source src="./video/test.mov">

As before, the video tag can be embedded directly in the HTML.

*To operate the following code, it is necessary to prepare the necessary files at the location of ./video/test.mov.

<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="utf-8">
<title>QuickTimePlayer</title>
</head>
<body>
<video controls>
<source src="./video/test.mov">
</video>
</body>
</html>

Note: The “.mov” format is often associated with QuickTime Player. However, it can be played by other media players as well.

Application Platform Choice

This is a question based on your audience: Android or iOS? Check which operating system your target audience uses the most to gauge which platform to build on. The platform of choice can vary significantly based on your target audience's region and socioeconomic classes.


So, double-check before moving ahead in a particular direction. Being available on all platforms increases your application's reach and makes it accessible to more people.

iOS or Android?

While your goal may be to release on both platforms eventually, it's risky and expensive to build an iOS and Android app simultaneously. It’s because you’d not only have to develop both of these applications without a viable proof of concept but also maintain them after that and issue constant updates.

Most developers choose to build an application for one platform to launch and release the application on the other later once the first version of the application is established and successful. Here are some other points to consider when choosing between the two platforms.

Making an iOS App is Faster and Less Expensive.

It is faster, easier, and cheaper to develop for iOS. According to research, iOS app development time is 30-40% shorter than Android's. One reason is that Android apps are usually written in Java, which involves writing more code than Swift, Apple's official programming language.

Another reason is that Android is an open-source platform. The lack of standardization means more devices, components, and software fragmentation to consider.

Apple's closed ecosystem means you're developing for a small set of standardized devices and operating systems. The Apple App Store has stricter quality rules and expectations and a longer review process, so apps can take longer to get approved, and your app will only be accepted if it meets Apple's standards.

Developing an Android App Allows for More Flexibility with Features

What features will you offer through your business app? Since Android is open source, there is more flexibility to customize your app – building the features and functions your audience wants.

Of course, this open environment makes Android more susceptible to pirated apps and malware.

Apple is generally perceived as more secure due to its closed nature, which is one reason iOS has a larger audience in the corporate market.

Maintaining an app on Android or iOS is easier when users upgrade their operating systems.

Developing for Android can mean spending more time ensuring your app remains platform-compatible and preventing bugs and crashes for users running older operating systems.

Android users take longer to adopt new operating systems. A study shows over 50% of Android users used an Android OS launched over two years ago.

Costs Estimation to Build an App

How much does an app cost? The costs of creating an application are nothing new, but we need to know where these costs come from.


Hiring developers or other third-party services comes at a cost, and that cost covers employee salaries, benefits, office rent, software payments, and the functional APIs built into the app.

To estimate the software development price, you need to provide the company with some basic information about your project. Customers who want to know how to build an app often face the following questions:

  • Idea. For example, you want to create an application like Netflix, so you explain your idea to the company's technical experts.
  • Feature list. It is essential to discuss the vital features that need to be implemented, ideally with a description of each (e.g., a map with pins, detecting the user's location, etc.).
  • Examples of competitors' apps or websites. Examples help you show developers which features you love and which you don't.
  • Design. Even rough ideas of what you like are enough, and engineers will be grateful for your design insight.
  • Specification.

Many companies, ours included, help their customers collect all the necessary data. You come up with the idea, and we'll do the rest.

Analyze Software Cost Factors

Factors such as the number of platforms, architecture complexity, and animations can completely change the final price of software development. All these factors should be considered and double-checked beforehand.

UI / UX Design

People are visual creatures, so design is a vital line item when breaking down software development costs.

UI/UX design can grab and engage users' attention. Developing the design can take a long time, depending on the type of website and its complexity.

Development

First, you should know that there are two sides to web development: front end and back end. The front end, or client side, is everything users can see and interact with. The back end, or server side, is the engine of the app.

For example, when a user clicks the register button, the application connects to the server to verify the data and then returns a value to the user (e.g., wrong credentials, user already exists, successful registration). This is where the back end does its work.

It is also necessary to support many versions of each operating system and a variety of screen resolutions.

Define the Application's Functionalities.

We already know how to create an app and what problems it will solve, but how? State very clearly what functions the application will perform. As each application must have its own MVP version, mandatory and complementary functions must be separated.

Therefore, clearly define how the application will run, as developers can more easily map out all the technologies needed for implementation.

The correct way to capture an application's functionalities is through a software requirements specification. In requirements analysis and engineering, prototypes and descriptions, functional or otherwise, are produced to cover the entire project.

The client and systems development team work together to align their ideas and turn them into something tangible.

Type of Development

Now we get more technical. The first step is to understand what types of applications can be developed and their particularities, as well as the tools and languages application developers use.

  1. Native: The application program developed especially for the platform adopts the programming language predetermined by the manufacturer;
  2. Webapp: a mobile responsive website;
  3. Hybrid: Applications developed for Android and iOS using a single source code using a specific framework.

What is a native application?

A native app is exactly what comes to mind when talking about an app. It's the type of app commonly found in app stores. Native apps are built in a language specific to a given operating system.

Two types of operating systems are dominant on smartphones: Android and iOS.


The difference between them is more than just aesthetic, as an app developed for one only works for the respective platform. After all, each platform has its own tools and interface elements.

A native app is programmed in the language of its respective operating system, such as Java and Kotlin on Android and Objective-C and Swift on iOS, but there are also other languages for each system.

Features of Native Apps

Because they are programmed exclusively for the operating system, native applications are faster and more reliable than the alternatives, and they deliver a better user experience by using all the features smartphones offer, such as cameras, GPS, and push notifications.

This custom programming for the operating system makes the performance of the native application optimal. Native apps also have a longer usage time than others because they can work without an internet connection.

When programming a native application, developers adhere to guidelines provided for each operating system, such as the Android and iOS design guides, which contain best practices for providing a good user experience.

Some examples of great native apps that you probably use are WhatsApp, Netflix, Facebook Messenger, and Uber.

A native app only works on the platform it was developed for. If you want it on multiple platforms, you must plan development for each platform's own stack. Costs can also be higher because you have to maintain an app in each app store. But letting users download your app, use it offline, and enjoy excellent performance is worth the investment.

What are Web Apps?

A web app is a website designed to emulate a mobile app experience in the web browser. It is programmed to recognize that the user is accessing it via a smartphone and to adapt accordingly.

Mobile-optimized codes provide a good user experience. These are excellent options when presenting content or having a mobile presence online because they are cheaper, easier to develop, and can operate on Android and iOS devices. At some level, they involve HTML5, Cascading Style Sheets (CSS), and Javascript programming.

However, since they are not “native” to the device, web apps require an internet connection and cannot use all of the device's features. They are also slower than native applications because they are not integrated into the operating system.

As the web app will not be in the app stores, you lose an essential source of traffic and downloads. Your logo does not always stay on the user's screen, and its access is usually shorter than that of a native application. Also, your returning user base will be smaller, and they need to log in to access the app.

In addition, web apps do not have the same security as other applications, which can compromise your device.

What is a hybrid app?

The hybrid app is a mixture of a native app and a web app. These applications are built using HTML5, CSS, and Javascript language. This code is placed inside a container, integrating your device's functionalities and offering a better user experience than web apps.

Conclusion

Analyze how much you have to invest, the planned development time, and the application's features. Remember, the focus on ensuring a good user experience will return maximum benefit.

Using ready-made native apps with white-label options like Teyuto can be a convenient and cost-effective solution for businesses looking to enter the mobile or smart TV app market quickly.

Low Latency Streaming For Business: 3 Case Studies

· 11 min read
Marcello Violini
Founder at Teyuto

Low latency in live video streaming is vital to the user experience in scenarios such as second-screen use, live reporting, and online video games.

Here's a big secret: when it comes to media, "live" rarely really means "live". Let's say you're at home watching a live show and seeing an audience member jump onstage. The audience at the venue saw it happen at least 30 seconds before you did.

This is because it takes time to move chunks of data (the pieces of information used in numerous multimedia formats) from one place to another. This delay between a camera capturing video and that video being displayed is called LATENCY.

What is low latency?

So, what is low latency if several seconds of latency is considered normal? It's a subjective term. By default, the latency of the famous Apple HLS streaming protocol is 30-45 seconds.

When people talk about low latency, they often talk about getting it down to single-digits. However, the term low latency also encompasses what is often called real-time streaming (we're talking milliseconds here).

Also read: What are streaming protocols and how do they work?

When is low latency important?

No one wants noticeably high latency, of course, but in what contexts does low latency really matter?

The typical 30-45 second delay is manageable for most streaming scenarios. Returning to our concert example, it's irrelevant if the guitarist broke a string 36 seconds ago and you just found out.

But for some streaming use cases, latency is a critical business consideration. For instance, Amazon found that users' purchases dropped by 1% for every additional 100 milliseconds of waiting.

Similarly, According to Google's calculations, they could lose 8 million daily searches if they slowed down their search results by just four-tenths of a second.

Let's look at some streaming use cases where low latency is undeniably essential.

Second screen Experiences

The second screen is the simultaneous consumption of television and the Internet. Example: watching TV programs or commercials while using smartphone or tablet apps to interact with the content (opinions, polls, etc.).

If you're watching an event on TV and a second-screen app, you can tell at a glance if there's a latency issue, which will cause discomfort.

Imagine that a sports channel offers a second-screen application so that you can see alternate camera angles and exchange comments with other users. The game's winning score is shown on the TV but doesn't reach the app until a minute later. By then, the moment for exchanging feedback in the app has passed.

However, the sweet spot here isn't the ultra-low "real-time" latency we'll discuss next. This is because there is also latency for the television broadcast. If you're watching on digital cable, as most families do, the transmission latency can be up to six seconds. Your second screen app only needs to match this level of latency to deliver a fantastic experience in sync with your TV content.

Video Chat

This is where ultra-low latency live streaming comes into play. We've all seen televised interviews where the reporter is talking to someone at a remote location. The latency in the exchange of messages results in long pauses, sometimes with the two parties talking over each other.


This is because latency acts in both directions: it may take a second for the reporter's question to reach the respondent and another second for the respondent's answer to return to the reporter.

This conversation can quickly become uncomfortable. When prompt responses are important, the acceptable limit is about 150 milliseconds of latency in each direction. This time frame is short enough for smooth conversation without awkward pauses.

Bets and Bids

Activities like auctions and sports betting are exciting because of their fast pace. And that speed requires real-time streaming. For example, horse racing tracks have traditionally been shared via satellite with other tracks around the world and allow their viewers to place bets online.

Satellite delays can be costly. Ultra-low latency streaming eliminates these troublesome delays and reduces dropouts. Likewise, online auctions are big business, and any delay could mean that bids fail to be recorded in time. Fractions of a second make all the difference.

Online Video Game


Anyone who has ever screamed, “This game is cheating!” at a screen knows that timing is critical for players. A latency of less than 100 milliseconds is mandatory. No one wants to use a streaming service only to find they're shooting at enemies that aren't there anymore.

How does low-latency streaming work?

Now that you know what low latency is and when it matters, you're probably wondering: how do you provide low latency streaming? As with most things in life, low-latency streaming involves trade-offs.

You will have to balance three factors to find the right mix:

  • Encoding protocol and compatibility between device and player
  • Audience size and geographic distribution
  • Video resolution and complexity

The streaming protocol you choose makes a big difference. Let's analyze this:

Apple HLS is among the most widely used streaming protocols due to its reliability, but it is unsuitable for true low-latency streaming. This is because HLS is an HTTP-based protocol that transmits chunks of data: every video file is converted into smaller video “chunks” so playback can buffer adequately and stay smooth.


This means that at least 6 seconds of chunks must be generated, encoded, transmitted, decoded, and buffered on the viewer's video player, so the latency will be at least 6 seconds in this case. Since each chunk must be produced before it can be viewed, chunk size plays a vital role in latency.

The default Apple HLS chunk size is 10 seconds, and players typically buffer about three chunks before starting playback, which is how latencies of up to 45 seconds arise. Customization can reduce this significantly, but not enough for ultra-low latency scenarios. Exacerbating the problem, the smaller you make these chunks, the more buffering your viewers will experience (as not enough video gets buffered on the device).
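On the player side, this tuning is exposed through configuration. As a sketch, assuming the open-source hls.js player, a low-latency setup might look like this (the values are illustrative, not recommendations):

import Hls from "hls.js";

const hls = new Hls({
  lowLatencyMode: true,         // enable LL-HLS partial-segment playback where the stream offers it
  liveSyncDurationCount: 2,     // hold ~2 segment durations behind the live edge instead of the default 3
  maxLiveSyncPlaybackRate: 1.1, // catch up by playing slightly faster when drifting behind
});

const video = document.querySelector("video") as HTMLVideoElement;
hls.loadSource("https://stream.example.com/live/playlist.m3u8");
hls.attachMedia(video);

Shorter segments and a smaller sync window cut latency, at the price of having less buffer to absorb network jitter.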

RTMP and WebRTC are the standards for low-latency streaming.

  • RTMP offers good low-latency streaming but requires a Flash-based player – a format no longer supported by web browsers.
  • WebRTC is the standard deployed on many platforms, allowing for low-latency delivery in an HTML5-based non-Flash environment. WebRTC, however, is a lossy protocol and tends to lose data, which impacts quality.

Another important consideration is your streaming server. You'll need streaming technology that gives you fine-grained control over latency and video quality along with as much flexibility as possible.

Case Studies on Low Latency Streaming for Business

Case Study 1: ESPN

ESPN is a prime example of a business that has embraced low-latency streaming to improve customer engagement. With the rise of online streaming services, traditional cable TV providers like ESPN have had to adapt to keep up with changing consumer habits.

To stay competitive, ESPN introduced low latency streaming for their live sports events, allowing viewers to watch games in real-time with minimal delay. This provided a more immersive and engaging experience for viewers, who could interact with the game and each other in real-time.

The result was a significant increase in engagement and viewership for ESPN. By providing a seamless streaming experience, ESPN was able to keep its customers engaged and interested in its content, ultimately driving revenue and growth for the business.

Case Study 2: Zoom

Zoom is another business that has embraced low-latency streaming to enhance its customer support services. With the rise of remote work, video conferencing has become essential for businesses to communicate with their employees and customers.

However, traditional video conferencing technologies can have significant latency times, resulting in delays and poor-quality calls. To address this issue, Zoom introduced low latency streaming for their video conferencing services, providing users with a more seamless and immersive experience.

The result was a significant increase in user adoption and satisfaction with Zoom. By providing a real-time and high-quality video conferencing experience, Zoom increased productivity and reduced the need for in-person meetings, ultimately driving growth for the business.

Case Study 3: Amazon Prime Video

Amazon Prime Video is a business that integrated low latency streaming to improve customer engagement and retention. With the rise of streaming services like Netflix and Hulu, Amazon Prime Video has had to compete to retain its customers and attract new ones.

Amazon Prime Video introduced low latency streaming for their live streaming services to stay competitive, allowing viewers to watch events in real-time with minimal delay. This provided a more immersive and engaging experience for viewers, who could interact with the event and each other in real-time.

The result was a significant increase in engagement and retention for Amazon Prime Video. By providing a seamless and immersive streaming experience, Amazon Prime Video kept its customers engaged and interested in its content, ultimately driving revenue and growth for the business.

How to reduce latency?

Even in the cloud computing era, server latency remains a problem for many companies. To keep this issue from taking on larger proportions and damaging the business, it is worth paying attention to a few tips.


1. Review the communication infrastructure

To send data — be it images, music, videos, or documents — you need to have a good communication infrastructure. In this case, the internet.

This network's infrastructure comprises devices and constraints such as routers, cables, and available bandwidth. Since this verification can be a complicated task for many people, it is always recommended to get help from a qualified IT professional.

2. Know the type of latency that is disturbing the connection

In addition to finding out where the system latency is, it is essential to know its type, as this problem can have several causes.

Sometimes the issue can be resolved quickly, but in other scenarios a specialist is essential for identifying the problem the connection faces.

3. Count on automatic scaling

A server typically slows down when many users make requests simultaneously. A common situation involving this problem is that of an e-commerce company in times like Black Friday.

To avoid slowdowns or crashes during spikes in simultaneous access, the ideal solution is a service that offers automatic scaling, with bandwidth and performance increased in case of a sudden rise in demand.

4. Implement distribution networks

The use of CDN technology is an excellent ally in reducing the latency of web applications.

Through it, it is possible to store copies of the data close to where it will be consumed, connecting users to the closest possible server and increasing the speed of data transfer.

Thus, it is understood that latency is inevitable. However, through the correct techniques and tools, it is possible to reduce this factor in online services and increase the quality of customer service.

Using a CDN and Reducing Latency

In practical terms, latency is the time between a user accessing a file or service and the server's response. This latency can be very long when the server is far from the user or the network is congested.

A Content Delivery Network reduces the distance by triggering the closest server, providing greater network bandwidth. Both factors combine to reduce latency significantly.

Also read: What is a Video CDN?

Overall, a CDN yields a better viewing experience for the user, with snappy effects, fast loading, and little delay between clicking and getting results.

Teyuto Offers Live Streaming Solutions Without Delay

Teyuto provides a seamless, low-latency streaming solution with HLS encryption, signed URLs, and DRM delivery. It offers the following features:

  • White-label Streaming
  • Monetization
  • Optimum Security Features
  • Video API and Multi Bitrate Streaming

Teyuto is an excellent streaming service that provides powerful and cutting-edge features. Achieving ultra-low latency and satisfying viewers is simple with Teyuto's rich platform, which includes industry-leading HLS for robust and efficient streaming.

Summary

Live OTT delivery is gaining in popularity and usage. Among them, an increasing number of major media companies are adding live streaming to their service menu to differentiate themselves from their competitors in the OTT field. Media distributors can differentiate their OTT platforms by offering low-latency, high-quality video.

With so many low-latency streaming options, there is no one-size-fits-all solution for low-latency video delivery in every workflow. The best solution depends on the type of content you stream and the demands of your video workflow. To know more, book a consultation with our experts and get a personalized demo today.

What is an API: API Types & Applications (2024)

· 13 min read
Marcello Violini
Founder at Teyuto

In simple terms, we explain how programs communicate with each other and practice some API calls. You go to a job site looking for work as a back-end developer, and almost every vacancy says you need to be able to work with the REST API, SOAP API, or just an API. What does all this mean, and why does a programmer need it? Let's figure it out.

What is API?

API (Application Programming Interface) is a set of ways and rules by which various programs communicate with each other and exchange data. All of these interactions occur through functions, classes, methods, structures, and sometimes constants from one program that others access. This is the basic principle of how the API works.

Let's say you buy a movie ticket with a bank card. During the purchase, the terminal accesses the API of the bank that issued your card and sends a payment request. And if you order a taxi through the application, it also accesses the payment system through the API.

A software interface is similar to a contract between a customer and a seller, only here the client is an application that needs data, and the seller is the server or resource from which we take this data. The agreement prescribes how and what data the client can receive.

The API is found almost everywhere:

  1. In programming languages, it helps functions communicate correctly with each other. The calling function must respect the data type and sequence of the parameters of the called function.
  2. An operating system helps programs retrieve data from memory or change OS settings. Therefore, you need to know its API to develop applications for a specific operating system.
  3. On the web, services communicate through a programming interface. If the API is open, the creators of the source service publish official documentation for working with it; Telegram's API documentation is one such example.

Even though the term is quite broad, most often in vacancies, we are talking about the third option.

Why is an API called an interface?

The interface is the boundary between two functional systems interacting and exchanging information. At the same time, the processes within each system are hidden from each other.


Using the interface, you can use the capabilities of different systems without thinking about how they process our requests and what they have “under the hood.” For example, you do not need to know how the smartphone handles touchscreen presses in order to make a call.

The only important thing is that the gadget has a “button” that always returns the same result in response to specific actions. Similarly, you can perform certain program functions using API calls without knowing how it works. That is why API is called an interface.

What is a Video API?

A video streaming API provides programmatic access to a video platform. Video APIs support a wide range of functionality to create, customize, and control workflows from encoding to playback.

This allows developers:

  • Insert video into the system;
  • Process this video;
  • Configure security options;
  • Deliver content to end users;
  • Manage recorded assets;
  • View analytics throughout their workflow.

While many video platforms also provide management capabilities through a user interface, this format does not offer the same level of control and customization as an API. User interfaces often restrict developers to pre-built tools and vendors, limiting access to more advanced settings.

For example, your video content management system (CMS) may come with simple analytics, but what if you need more advanced insights?

With an API, you can choose external services and functionality to integrate into your application, which means you are not tied to the specific analytics tool offered. API access to the raw data also allows you to customize how the analysis is presented.

How does the API work?

The structure of the interface is usually viewed from the perspective of the client and the server. The program that makes the request is called the client, and the program that sends the response is called the server. You can draw an analogy with the weather forecast.

In this case, the meteorological company's database is the server, and the browser that displays the results is the client.

There are four types of APIs. Each is designed for specific purposes and has its own characteristics.

SOAP API

Short for “Simple Object Access Protocol.” Information between the program and the server is exchanged in XML. It is rarely used today, as more flexible interfaces exist.

RPC API

Remote procedure call. The client asks the server to perform the necessary action, which causes the corresponding function to be executed, and receives the result in response.

WebSocket API

Another modern web variant. Information is sent between client and server in JSON text format. A distinctive feature of this API type is that the server can also push messages to the client on its own, increasing the efficiency of program interaction.
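A minimal browser-side sketch (the endpoint and message shape are hypothetical):

const ws = new WebSocket("wss://api.example.com/events");

// The client can send requests...
ws.onopen = () => ws.send(JSON.stringify({ subscribe: "encoding-status" }));

// ...but, unlike plain request/response APIs, the server can also push at any time.
ws.onmessage = (event) => {
  const message = JSON.parse(event.data);
  console.log("server pushed:", message);
};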

REST API

Today, this is the most popular variant. The client sends a request to the server, which executes the corresponding functions and returns the final data to the client.
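As a sketch, a REST call from a TypeScript client could look like this; the endpoint, fields, and key are hypothetical rather than any specific vendor's API:

interface Video {
  id: string;
  title: string;
  durationSeconds: number;
}

// Fetch one video's metadata over plain HTTPS.
async function getVideo(id: string): Promise<Video> {
  const res = await fetch(`https://api.example.com/v1/videos/${id}`, {
    headers: { Authorization: "Bearer YOUR_API_KEY" }, // placeholder credential
  });
  if (!res.ok) throw new Error(`API error: ${res.status}`);
  return (await res.json()) as Video;
}

getVideo("abc123").then((video) => console.log(video.title));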

APIs differ in structure and purpose of use. You should choose such an API so that its specifics correspond to your task.

  • Private: They are part of the companies' systems and work only in them.
  • Public: They are freely available and can be used by any Internet user. But among them, there are also paid options that preserve the confidentiality of data.
  • Affiliate (partner): Usable only by authorized external developers, to help organizations collaborate; a personal account is also provided.
  • Composite: Combines several APIs for cases where the developer faces a complex task.

Composing a Set of Functions in APIs of an Application

A software interface's functions are defined by a specialist when it is created. Before choosing an API, you need to consider three things:

  • Features and capabilities of the client.
  • The information needed to call the function.
  • The data the software will receive from the server when interacting with the API.

The user receives one or more hidden functions that process and return program information. All internal workflows remain invisible to other people thanks to encapsulation. From the inside, software interfaces can differ greatly from one another.

There are, of course, specific standards accepted among developers. But by and large, a programmer in writing code is not limited by anything.

Some specialists include registration and login functions in the general set. Others combine several tools that let you integrate a site into someone else's application or other web resources. And some prefer to group APIs: for example, functions for connecting a card in one set and functions for handling payments in another.

If you add everything to one group, the end user of the API will have the opportunity to choose on their own how to use the available functionality.

Why are APIs actively used in programming?


Programming interfaces help you work more productively.

Encapsulation, for example, makes web development much more accessible. Some of the necessary components are already contained in the API. Thus, there is no need to understand the code of elementary functions. At the same time, this helps to ensure the safety of the program's functionality, excluding the human factor. This is best seen in large-scale projects such as Windows or Linux.

How does an API help write reliable programs?

Usually, we do not need to know how programs work internally, and often we do not care. That is why a software implementation is called a “black box” and is hidden behind several levels of abstraction that make it convenient to use.

Abstraction layers significantly speed up the development process because the programmer can use ready-made API functions in other applications. This is standard practice. For example, most operating systems expose their APIs to other programs so that they can:

  • Work with the file system,
  • Draw graphics
  • Store data,
  • Use networking opportunities,
  • Play audio, and so on.

Windows, Linux, or macOS determine which functions to call and which parameters to pass to perform specific actions. All this is described in the documentation for the API, which developers of other programs work with.

If some cloud computing API becomes faster at extracting the square root, then all programs using it - from online calculators to neural networks - will also start to work faster.

The APIs of services and libraries let developers stay in the driver's seat. Why write code when ready-made code exists?


Here are the possibilities provided by the API:

  • For example, the Teyuto API assists in publishing video on demand and live streaming to any screen, globally. It is extremely helpful for building web and mobile video platforms.
  • Increases security: The API allows you to move functionality that must be protected into a separate application. This reduces the possibility of incorrect use of these functions by other programs.
  • Links different systems: if you need to connect a payment system or social network authorization to a site, you can only do it through an API.
  • Reduces development cost: Using a paid API is often cheaper than creating functionality from scratch.

A third-party API is usually safe because a commercial organization or a whole community of developers is working on it. And, of course, with its help, even working on complex projects becomes more accessible and more enjoyable.

What functions can be included in the API?

There are no special rules or restrictions on the set of functions for the API. Developers include methods useful for client applications to interact with their service.

For example, a text analysis API will have functions to find cognate words, count conjunctions, and identify frequently occurring phrases. API functions can solve more than just the practical tasks of specific applications: offering API access separately can also be a marketing asset.

How do companies make money with APIs?

Companies—especially those developing complex applications—often provide customers access to their product APIs. For example, video editor creators may charge extra for rendering videos on their servers.

What is the API used for?

The software interface allows developers to:

  • Simplify and speed up the release of new products, as you can use ready-made APIs for standard functions;
  • Make development more secure by bringing several functions into a separate application, where they will be hidden;
  • Simplify the configuration of links between different services and programs, without having to coordinate with the creators of other applications to develop your product;
  • Save money by not having to develop all software solutions from scratch.

Before the advent of Windows and other graphical operating systems, programmers had to write thousands of lines of code to create windows on a computer screen. When Microsoft released the Windows API to developers, creating windows took just a few minutes.

Business APIs are needed to:

  • Conduct transactions;
  • Integrate data flows with client and partner systems;
  • Improve the safety of automated processes;
  • Develop their own applications;
  • Innovate, for example, when working with clients.

In the 1990s, an organization that wanted to launch a customer relationship management (CRM) system was forced to invest heavily in software, hardware, and people. Companies now use cloud services like Salesforce. API-level access to Salesforce functionality allows businesses to enable critical elements of CRM functionality, such as viewing customer history.
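The pattern is plain HTTPS calls. Here is a hedged sketch of fetching a customer's history through such a CRM-style REST API; the base URL, path, and token are hypothetical placeholders rather than Salesforce's actual endpoints:

```python
import requests  # third-party HTTP client: pip install requests

API_BASE = "https://crm.example.com/api/v1"  # hypothetical endpoint
TOKEN = "YOUR_ACCESS_TOKEN"                  # obtained via the vendor's auth flow

resp = requests.get(
    f"{API_BASE}/customers/42/history",
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()
for event in resp.json().get("events", []):
    print(event.get("date"), event.get("summary"))
```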

With APIs, governments can:

  • Exchange data between departments;
  • Interact with citizens and receive feedback.

In 40 US cities, the free Open311 API is used, allowing residents to report and track issues based on their location.

A person only needs to send the city system a photo of a pothole and its geolocation.
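In the style of Open311's GeoReport API, that report boils down to a single HTTP POST; a minimal sketch, assuming a placeholder city endpoint and API key:

```python
import requests

# Each city hosts its own Open311 endpoint; this base URL is a placeholder.
BASE = "https://city.example.gov/open311/v2"

report = {
    "api_key": "YOUR_API_KEY",   # issued by the city
    "service_code": "pothole",   # valid codes come from GET {BASE}/services.json
    "lat": 40.7128,
    "long": -74.0060,
    "description": "Deep pothole in the right lane",
    "media_url": "https://example.com/photos/pothole.jpg",
}
resp = requests.post(f"{BASE}/requests.json", data=report, timeout=10)
resp.raise_for_status()
print(resp.json())  # typically returns the created service request id
```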

Video API Examples in Our Life

E-Learning: Video APIs can be used to create interactive e-learning platforms that offer students a rich multimedia experience. By integrating video hosting and playback functionality, e-learning platforms can offer a variety of video-based courses, webinars, and tutorials that students can access anytime, anywhere.

Live streaming: Video APIs can also power live streaming platforms for sports events, music concerts, and other live events. With video APIs, businesses and organizations can stream live content to audiences worldwide with high-quality video playback, interactive features, and real-time analytics.

Video conferencing: We can use Video APIs to build video conferencing applications for remote meetings, interviews, and consultations. By integrating video hosting, recording, and playback functionality, businesses and organizations can offer their clients and employees a seamless video conferencing experience.

Authorization Buttons

Many sites have buttons for registering through existing accounts on popular services and social networks. This is possible thanks to the APIs of Google, Facebook, Apple, and Twitter.
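Under the hood, these buttons typically use OAuth 2.0. Here is a simplified sketch of the first step, building the provider's consent URL (Google's endpoint shown; the client id and redirect URI are placeholders you register with the provider):

```python
from urllib.parse import urlencode

# Step 1 of the OAuth 2.0 authorization-code flow: send the user to the
# provider's consent page.
params = {
    "client_id": "YOUR_CLIENT_ID.apps.googleusercontent.com",
    "redirect_uri": "https://yoursite.example/oauth/callback",
    "response_type": "code",
    "scope": "openid email profile",
}
login_url = "https://accounts.google.com/o/oauth2/v2/auth?" + urlencode(params)
print(login_url)

# Step 2 (on your server): exchange the returned ?code=... for tokens with a
# POST to the provider's token endpoint, then read the user's identity from it.
```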

What to Look for in a Video API?

When choosing a video API, you should look for a solution that meets your workflow needs first and foremost.

video apis

You should also ensure the API has complete documentation updated with any changes and a good set of examples to guide you through simple workflows, ideally written in multiple languages.

Specifically, you want to find a video API that offers:

  1. Comprehensive functionality across your entire video stream workflow, including live streaming, VOD, playback, and more. Your best bet is with an integrated video platform like Teyuto.
  2. Informative resources, including documentation, forums, and video tutorials, to get you started quickly.
  3. Developer tools such as sample code, options, GitHub repositories, custom modules, and testing tools to streamline the process.

Try Core by Teyuto

Core by Teyuto allows you to deliver your videos securely, provide secure user-level access, manage metadata, and track all video sessions and views at the user level. You don't have to use different vendors.

Supported Languages

  1. Shell
  2. Node
  3. Python
  4. Ruby
  5. PHP

Teyuto offers a complete and ready-to-use video API and technology set.

Core by Teyuto allows developers to integrate the platform's video encoding and hosting capabilities into their applications, so they can upload, encode, and host video content without building these capabilities themselves.

It offers the following features:

  • Upload Video
  • Live Streaming
  • Video Player
  • Analytics
  • CDN
  • HLS Encryption & Signed URLs
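For instance, uploading a video through such an API usually boils down to one authenticated HTTP request. The sketch below is illustrative only; the base URL, path, and field names are hypothetical, not Teyuto's actual endpoints (see the API documentation for those):

```python
import requests

API_BASE = "https://api.example-video.com/v1"   # illustrative base URL
HEADERS = {"Authorization": "Bearer YOUR_API_TOKEN"}

with open("lesson-01.mp4", "rb") as video:
    resp = requests.post(
        f"{API_BASE}/videos",
        headers=HEADERS,
        files={"file": video},                       # multipart upload
        data={"title": "Lesson 01", "visibility": "private"},
        timeout=120,
    )
resp.raise_for_status()
print(resp.json())  # the platform typically returns a video id and status
```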

Final Remarks

Video hosting APIs are a powerful tool for developers and businesses to integrate video hosting functionality into their websites and applications. They offer a range of features, including video management, uploading, transcoding, playback, and analytics.

What are streaming protocols, and how do they work?

· 13 min read
Marcello Violini
Founder at Teyuto

Have you ever thought about how you can contribute to your company's strategies by investing in streaming? Technology is being increasingly used for various purposes: remote work, training, courses, entertainment, and sales, among many others.

Most of us rarely go even one day without watching streaming videos. The rise in popularity of this kind of consumer behavior towards content is due to the availability of video streaming protocols.

Streaming video protocols are special standardized rules and methods that break video files into smaller pieces to be delivered to the end user for reassembly and viewing.

Files must be compressed for transport; this process is achieved using a "codec" such as the most common H.264. Before files can be transferred, they must also be saved in a "container format" such as .mp4 or .avi.

The source of the video file can be directly from the camera of the broadcasting user in the case of a live broadcast or static files in the case of video on demand (VoD).

Development of Streaming Video Protocols

As the demand for video streaming continues to grow, thanks in part to increased internet penetration, the number of video streaming platforms is also on the rise. In the 1990s, streaming was mostly limited to sports broadcasts; in the 2000s, the technology began to take off with Flash and RTMP-based streaming. Then came YouTube, Netflix, and other platforms in the 2010s.

Live streaming as a format took off in the mid-2010s, when Periscope launched and Facebook Live followed shortly after.

The video streaming market is vibrant today, with multiple platforms, providers, and uses, including live audio, movie and game streaming. Along with these developments, the capabilities of video streaming protocols have also expanded.

There are several video streaming protocols in existence today. Some can be called obsolete standards, but they are still in use. Others, on the contrary, are developing rapidly, primarily thanks to open source.

Some of the protocols are relatively recent and will take time to become widespread, but they are the ones with the most significant potential to shape the video streaming pattern of the future. Not all protocols support the same codecs.

Below we consider the most common of them.

HTTP Live Streaming (HLS)

HLS is the most commonly used protocol for streaming today. Apple originally released it in 2009 as part of its move away from Flash on the iPhone. The protocol is compatible with many devices, from desktop browsers, smart TVs, set-top boxes, and Android and iOS mobile devices to HTML5-based video players. Naturally, this allows streaming companies to reach the broadest possible audience.

HLS also supports adaptive bitrate streaming. It is a technology that allows dynamic video delivery to provide the best video quality for end users.

The only serious drawback of the HLS protocol is its considerable delay. Latency refers to the time it takes for delivered content to travel from the source to the viewer, and it grows especially when large amounts of data are transferred.

Dynamic Adaptive Streaming over HTTP (MPEG-DASH)

MPEG-DASH is one of the latest streaming protocols, developed by the Moving Picture Experts Group (MPEG) as an alternative to the HLS standard. It is an open-source standard that can be configured for any audio or video codec.

Like HLS, MPEG-DASH supports adaptive bitrate streaming, allowing viewers to receive the highest quality video, depending on the level their network can support.

WebRTC

WebRTC is also an open-source project that aims to deliver streaming with real-time response. Originally developed for VoIP applications, it became popular in video chat and conferencing applications after Google acquired the underlying technology and open-sourced it.

webrtc

Some of the most common consumer applications today, such as Google Meet, Discord, Houseparty, GoToMeeting, WhatsApp, and Messenger, use the WebRTC protocol.

What makes WebRTC unique is that it is based on peer-to-peer streaming. This method can be called the preferred solution if low latency is required for streaming.

Secure Reliable Transport (SRT)

SRT is another open-source protocol developed by streaming technology provider Haivision. This protocol is the preferred protocol for members of the SRT Alliance: a group of companies that includes technology and telecommunications providers. The main advantages that SRT is known for are security, reliability, high compatibility and low latency streaming.

SRT can stream high-quality video even if network conditions are unstable. It is also independent of a single codec, allowing it to be used with any audio or video codec.

Real-Time Messaging Protocol (RTMP)

RTMP is a protocol already known to many. It was developed by Macromedia (now known as Adobe) to transfer audio and video files between a streaming server and Adobe Flash Player.

But with the phasing out of Flash in 2020, its use has become less about delivering video content and more about uploading live streams to the platform through RTMP-enabled encoders. This means that the video stream from the encoder is sent to the streaming platform via the RTMP protocol and then delivered to the end user via the standard HLS protocol.
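For example, a common way to push a live stream to an RTMP ingest point is FFmpeg. Here is a minimal sketch, assuming FFmpeg is installed, with a placeholder ingest URL and stream key:

```python
import subprocess

# Push a local file (or camera feed) to an RTMP ingest endpoint with FFmpeg.
# The ingest URL and stream key below are placeholders from your platform.
INGEST = "rtmp://ingest.example.com/live/STREAM_KEY"

subprocess.run([
    "ffmpeg",
    "-re",                # read input at its native frame rate (live pacing)
    "-i", "input.mp4",    # source: a file here; could be a camera device
    "-c:v", "libx264",    # H.264 video, the codec RTMP/FLV expects
    "-c:a", "aac",        # AAC audio
    "-f", "flv",          # RTMP carries an FLV container
    INGEST,
], check=True)
```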

Real-Time Streaming Protocol (RTSP)

RTSP is another legacy protocol developed for the entertainment industry, and it is primarily used to establish and manage multimedia sessions between endpoints. Unlike HLS, it does not itself carry the streaming media data.

RTSP servers must work alongside RTP and other protocols to perform their streaming tasks.

Although it supports low-latency streaming, incompatibility with most standard devices and browsers may be an issue. You can think of it as a protocol capable of delivering low-latency streaming to a select group of small audiences from a dedicated server.

Because most IP cameras still support RTSP, it remains the standard in video surveillance systems.

What is the difference between RTSP and RTMP?

The Real-Time Messaging Protocol (RTMP) is a technology that works with the Transmission Control Protocol (TCP). Like RTSP, it was initially developed to transmit audio, video and other data in real-time. Its TCP compatibility allows advanced communication between the recording device and the server where the data is transmitted. Users can enjoy a consistent and reliable stream through their recording devices.

RTMP is commonly used as a protocol for live-streaming platforms. It converts streams into playable formats by leveraging low-cost encoders.

RTSP and RTMP share many common characteristics and do not compete. The decision to use one over the other depends on the demands of your platform and streaming operation in general.

What's excellent about RTMP and RTSP is that they are both low latency and can control streams by providing media on demand, in real-time, over a stable connection.

However, RTSP is perfect as a cheaper and simpler streaming alternative. It spread widely among engineers while RTMP remained a proprietary technology. As mentioned earlier, RTSP is the default with most IP cameras. It's excellent for localized streams and as an input to conferencing or monitoring systems.

What is RTSP for WebRTC?

While RTSP is beneficial, it has its drawbacks. Streams must be repackaged for friendlier playback, which can introduce latency and delays. Given how often IP cameras serve critical surveillance situations, it is important to overcome latency issues so playback stays crisp and clear enough to identify what's happening on screen.

One of the best ways to ensure better video delivery is to use Web Real-Time Communications (WebRTC). WebRTC converts RTSP feeds into real-time streams displayed in clear quality with no playback issues.

WebRTC is compatible with most browsers and keeps delivery under a second. It provides a more consistent viewing experience than RTSP, which can cause up to 20 seconds of latency.

In this setup, WebRTC relays the RTSP content: an effective media server ingests the IP camera stream and repackages it into WebRTC. You can then access your web-hosted playback page whenever you want.

RTSP: An in-depth look

RTSP uses commands to send requests from the client to the server. This is all part of controlling and negotiating media streams.

RTSP uses the following commands:

  • Options
  • Announce
  • Describe
  • Setup
  • Play
  • Pause
  • Record
  • Redirect

These are coordinated to present the media in its best possible form. Users can access the content via a generated link once the data is transferred and repackaged on the server. The ability to play files on demand, without physically storing them on your device, is one of the biggest reasons why RTSP will continue to play a prominent role in the streaming world.
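Because RTSP is a plain-text protocol, the command exchange is easy to observe with a raw socket. Here is a minimal sketch, assuming a camera at a placeholder address on the default RTSP port 554:

```python
import socket

CAMERA = ("camera.example.local", 554)   # placeholder host, default RTSP port
URL = "rtsp://camera.example.local/stream1"

request = (
    f"OPTIONS {URL} RTSP/1.0\r\n"
    "CSeq: 1\r\n"
    "User-Agent: rtsp-sketch\r\n"
    "\r\n"
)

with socket.create_connection(CAMERA, timeout=5) as s:
    s.sendall(request.encode("ascii"))
    # The camera answers with the commands it supports, e.g.
    # "Public: OPTIONS, DESCRIBE, SETUP, PLAY, PAUSE, TEARDOWN"
    print(s.recv(4096).decode("ascii", errors="replace"))
```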

RTSP uses the following audio and video codecs:

  • AAC, AAC-LC, HE-AAC+ v1 and v2, MP3, Speex, Vorbis, and Opus
  • H.265, VP9, VP8, H.264

As a protocol system, RTSP is rarely used for playback because it is not formatted to create a physical file that plays on a device. However, it is compatible with QuickTime Player, 3GPP-compatible mobile devices, and VLC media player.

RTSP is great for low-latency streaming, but it's not optimized for quality of experience and scalability. For this reason, adaptive bitrate streaming is widely used in other contexts, especially when IP cameras are not in operation.

Differences Between Live and On-demand Streaming

VOD 2023

Streaming transmission can be done in two different ways, and combining the two models is usually part of a complete strategy for delivering content to customers.

Live streaming is one in which the generated signal is sent in real-time to the public. In this case, there is no need for storage. Audio and video are captured and converted using the encoder, then streamed directly over the internet from servers.

With on-demand, the transmission happens when the viewer requests it. The recorded content (such as video lessons or podcasts) is stored on servers; as soon as the consumer presses play, the file starts streaming immediately, all with low latency.

What should I consider when choosing a video streaming protocol?

The choice of video streaming protocol depends on certain factors that may be important to your business needs. You may want to reach as wide an audience as possible or minimize latency. Of course, you need to pay attention to the security and confidentiality of streams.

Below is a rough guide on how to make a choice based on these factors.

Compatibility

If you want to reach the broadest possible audience with streaming content, choose a protocol compatible with most devices, platforms, and browsers. HLS is the best option in this case; you can even pick it as the default solution if in doubt.

Delay

HLS provides the most comprehensive coverage for streaming but introduces the most latency in the transmission process. RTMP provides low latency streams but is not compatible with HTML5 video players.

SRT supports low-latency streams, while WebRTC provides real-time latency. If you choose one of these options, be aware that audience reach may suffer, as these protocols are less widely supported in the streaming technology environment.

If you can't compromise on either coverage or latency, one option is Low-Latency HLS (LL-HLS): you keep HLS's broad compatibility and can still stream with very low latency.

Privacy & Security

If the most important thing is to ensure the integrity and safety of streams on the way to the end user, it is worth using a protocol that provides security features. Most protocols, including the widely used HLS standard, provide secure streaming, but SRT is the protocol with best-in-class security and privacy features.

Adaptive Bitrate

As discussed earlier, adaptive bitrate allows for the best possible video quality based on network, device, and end-user software capabilities. HLS and MPEG-DASH are the protocols that support this feature. To learn more about adaptive bitrate streaming, you can read our blog.

Developing Successful Multimedia Applications

develop media application

If you are planning to develop your own video platform, consider in advance:

  1. The costs associated with infrastructure
  2. Transcoding
  3. Delivery and playback of content

In such a case, consider a cloud-based VoD content management system or an all-in-one real-time streaming solution that integrates receiving, managing, processing, publishing, and other aspects of video streaming on a single platform.

Developing successful multimedia applications for the internet is a highly challenging problem. Most current streaming protocols are based on either TCP or UDP. Both have advantages and disadvantages.

TCP provides reliable service, packet retransmission, and congestion and flow control. While reliable service is desirable, with TCP it comes with disadvantages such as increased latency and throttled throughput.

At each packet loss, TCP's congestion control decreases the transmission rate; the rate then grows gradually until the next packet loss occurs. For this reason, TCP is avoided for real-time streaming applications.

Thus, UDP became the transport protocol of choice for real-time protocols such as RTP and RTCP, although it is unreliable and has no congestion control. Multicasting techniques can efficiently distribute live audio and video to many receivers.

Some techniques currently used to improve the quality of streaming are:

  1. Delay receiver playback by one hundred milliseconds to lessen the effects of jitter (see the sketch after this list).
  2. Use audio and video over UDP to avoid TCP's slow start.
  3. Pre-fetch data during playback of stored media.
  4. Send redundant information to compensate for losses.
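Here is a toy illustration of the first technique: arrivals are irregular, but because playback is held back by a fixed 100 ms, nearly every packet is still in hand when its turn comes (all timings are invented for illustration):

```python
JITTER_DELAY = 0.100     # hold playback back 100 ms to absorb arrival jitter
PACKET_INTERVAL = 0.020  # one packet every 20 ms (illustrative)

# (sequence_number, arrival_time_s): arrivals are irregular due to jitter
arrivals = [(0, 0.003), (1, 0.041), (2, 0.038), (3, 0.161), (4, 0.102)]

for seq, arrived in arrivals:
    # Each packet plays at its ideal time plus the fixed jitter delay.
    play_at = seq * PACKET_INTERVAL + JITTER_DELAY
    status = "plays on time" if arrived <= play_at else "too late, dropped"
    print(f"packet {seq}: arrived {arrived:.3f}s, scheduled {play_at:.3f}s -> {status}")
```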

And some problems remain, such as:

  • All packets receive the same best-effort service.
  • Voice and video quality can drop to unacceptable levels when streams traverse moderately congested links.

Summary

In this article, we have talked about protocols that allow multimedia applications to run over networks where conventional protocols struggle to deliver the characteristics these applications need. There are several solutions with very different mechanisms for today's challenges. Multiple data sources, selective packet dropping, congestion control, redundancy in data transmission, and packet forwarding are just a few of the tools of the new streaming protocols.

At Teyuto, we support the most popular streaming protocols, including RTMP for ingest and HLS and DASH for output, to ensure the best possible video playback experience for viewers across a wide range of devices and platforms.

What is MPEG-DASH? History, Pros, and Cons of DASH

· 9 min read
Marcello Violini
Founder at Teyuto

The year was 2010. Over the past few years, digital video viewing had increased exponentially. And it led to an unexpected challenge. The growing demand for videos was met with a flurry of proprietary protocols and formats. Apple HLS. Adobe HDS. Microsoft Smooth Streaming. And they all had one thing in common — each was designed to work only with their specific players or devices. There wasn’t a way to deliver a single stream that could play on all devices.

This led to the birth of MPEG-DASH, an open standard for adaptive bitrate streaming over HTTP. For the uninitiated, MPEG-DASH is not a format like H.264 or AAC, but a delivery method that can be used with any number of codecs and containers, such as MP4 (H.264/AAC), WebM (VP8/Vorbis), or MPEG-2 TS.

Let's dive deeper into what is MPEG-DASH and how it works. But let's begin from where it all started.

A Brief History of MPEG-DASH

In the late 1990s, two new technologies emerged that would change how we consume videos forever — broadband internet and mobile devices. The release of the first iPhone in 2007 then became a watershed moment. It popularized on-the-go video consumption and created an insatiable appetite for mobile content.

Even in the early days of the internet, video was becoming a popular format. Portals like Newgrounds, Albino Blacksheep, and eBaum’s World were receiving decent traffic. Then, in 2005, YouTube was founded, and it changed everything. By 2006, the video-viewing platform was delivering 100 million video views per day. As broadband speeds increased and more people had access to high-speed internet, online video consumption grew at an exponential pace.

But there was a problem. Video delivery over the internet was not designed for this level of demand: the primary streaming protocols (such as HTTP and RTSP) didn’t offer any kind of quality control or guaranteed delivery [Note: transmission often involves packet losses that need to be detected and retransmitted]. The viewer experience suffered as a result.

RTMP was great for streaming videos on web browsers. However, it was a proprietary protocol exclusive to Flash players and wasn’t optimized for mobile devices. This led to the development of new protocols by the early 2010s, including Apple HLS (HTTP Live Streaming) and Microsoft Smooth Streaming. Both of these formats evolved from the HP Laboratories' demonstration of SProxy in 2006, which converted a video into segments and streamed them using an HTTP web server. The new protocols, meanwhile, furthered this approach by incorporating Adaptive Bitrate Streaming (ABR) technology as well. ABR is the ability of a video player to switch between streams (from high definition to low definition or vice versa) based on the network conditions.

But such formats brought us back to square one as they were still proprietary and didn’t resolve the challenge of cross-platform streaming.

While video and TV companies were trying to figure out methods of delivering the best viewing experience, they also had little control over how consumers received their content. For instance, in 2011, Netflix found that almost half of its users were watching videos on their gaming consoles. Each console used a different format. So, it was hard to deliver a consistent experience across all platforms.

The same went for other digital video platforms like Hulu. They all had to design their services around the limitations of the various devices that their end-users used. Thankfully, a few people were cognizant of the situation before the faultlines started showing.

3GPP (3rd Generation Partnership Project) got the ball rolling for a non-proprietary, cross-platform standard in 2009 by developing Adaptive HTTP Streaming (AHS). In 2010, MPEG issued a call for proposals to standardize an adaptive bitrate streaming solution for the delivery of IP-based multimedia services. The proposal by 3GPP was accepted (3GPP AHS), and MPEG-DASH (Dynamic Adaptive Streaming over HTTP) was born.

By January 2011, it became a draft international standard, and by December 2011, an international standard. It was published as ISO/IEC 23009-1 in April 2012. Since then, the streaming protocol has been revised twice, in 2019 and 2022.

How Does MPEG-DASH Work?

MPEG-DASH is a delivery method that streams media via HTTP and works with any codec and container. This makes it different from other streaming protocols that are format specific, such as HLS and RTSP.

Different components of the MPD syntax

The idea behind using an adaptable container is to have a single manifest file that can work with multiple streams. The player then chooses the most appropriate stream based on network conditions and the capabilities of the device.

For instance, if you’re trying to watch a video on your mobile phone with a slow internet connection, the player will switch to a lower-bitrate video so that it doesn’t keep buffering. And if you move to an area with better network coverage or connect your phone to Wi-Fi, it will automatically switch back to the higher-bitrate video. This results in a much smoother viewing experience that doesn’t interrupt the video playback while switching between streams.

Network architecture of MPEG DASH

DASH streaming also involves a segmented file format. This means that videos are divided into small segments, typically 2 to 10 seconds long. These files are then stored on a web (HTTP) server using regular HTTP-based protocols. When a viewer wants to watch a video, they send an HTTP request for the manifest file (.mpd). The manifest file contains information about all the available streams, their respective bitrates, and their location on the server.

Working model of MPEG DASH

Based on this information, the player chooses an appropriate video and starts fetching video segments from the server. A predetermined number of segments are loaded in the client to avoid excessive bandwidth usage.
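As a toy illustration of this selection logic, the sketch below fetches a manifest and picks the best rendition that fits the measured throughput (the manifest URL is a placeholder; production players such as dash.js or Shaka Player do far more):

```python
import urllib.request
import xml.etree.ElementTree as ET

MPD_URL = "https://cdn.example.com/video/manifest.mpd"  # placeholder URL
NS = {"mpd": "urn:mpeg:dash:schema:mpd:2011"}           # standard MPD namespace

with urllib.request.urlopen(MPD_URL, timeout=10) as resp:
    root = ET.fromstring(resp.read())

# Collect the advertised renditions: each Representation lists its bitrate.
reps = [
    (int(r.get("bandwidth")), r.get("id"))
    for r in root.findall(".//mpd:Representation", NS)
]

measured_throughput = 2_500_000  # bits/s, estimated from recent downloads
# Choose the highest bitrate that still fits the measured throughput,
# falling back to the lowest rendition when nothing fits.
fitting = [r for r in reps if r[0] <= measured_throughput]
print(max(fitting) if fitting else min(reps))
```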

Although MPEG-DASH works with any type of video content and is codec-agnostic, the most commonly used codecs with MPEG-DASH are H.264/MPEG-4 AVC and H.265/HEVC for video, and AAC and MP3 for audio.

Advantages of MPEG-DASH

MPEG-DASH has a number of advantages over other streaming protocols. These include:

1. Interoperability

DASH is an interoperable solution that can work with any type of video content. So, you can use the same manifest file (.mpd) for videos encoded in H.264 as well as VP9 (a Google open-source video compression format). All you need is to have multiple streams for each type of encoding and specify the respective locations in the manifest file. The player will then automatically choose the appropriate stream based on network conditions and device capabilities.

2. Enhanced Viewing Experience

As mentioned earlier, one of the key features of MPEG-DASH is Adaptive Bitrate Streaming (ABR). ABR allows players to switch between different streams seamlessly without interrupting the video playback. This results in a much smoother viewing experience, especially on mobile devices where network conditions can change frequently.

Segmenting files into small chunks also makes MPEG-DASH more efficient than some other streaming protocols. When using RTSP/RTP, if a user wants to seek ahead or rewind a video stream, they have to issue a command back to the server, which then sends the appropriate data packets. This backchannel communication can add significant latency. It also increases the load on the server.

With MPEG-DASH, however, videos are already divided into small segments. So, if a user wants to fast-forward or rewind a video, they can directly fetch the required segment from the server without any backchannel communication. This helps reduce latency, decreases bandwidth requirement, and results in a better viewing experience.

3. Improved Scalability

DASH involves stateless HTTP servers. This means that there’s no need to maintain any session state information, which helps improve scalability. MPEG-DASH can also integrate into existing CDNs (Content Delivery Networks) easily as it uses standard HTTP protocols.

4. Reduced Costs

As MPEG-DASH is an open standard, you don’t have to pay any licensing fees to use it. It is also compatible with standard HTTP servers, so no expensive servers are required. Such servers can, however, improve your overall performance. Here, it must be noted that the cost advantage is directly linked to the implementation you seek. If you want a high-performance implementation, you'll need proprietary solutions that are built on top of DASH. Contact Teyuto’s experts today to know what suits your needs better.

Disadvantages of MPEG-DASH

While MPEG-DASH has many advantages over other streaming protocols, there are a few disadvantages as well:

1. Limited Support

This one comes as a surprise. One of the core ideas behind developing DASH was to ensure cross-platform compatibility. But DASH is still not compatible with a range of devices, especially Apple products. At times, even browsers that do support DASH may need a separate player or plugin to play videos based on it.

2. Lack of Standards

While DASH is an international standard itself, it does not specify how files should be encoded, how segments should be created, how DRM should be signaled, and so on. As a result, each content provider has to develop its own solution, which can lead to inconsistencies across different platforms and players.

3. Fragmented Ecosystem

The lack of standards has also led to a fragmented ecosystem where some companies are using proprietary methods to encode and segment their videos. This makes it difficult for other providers to use these videos on their platforms as they would need to invest in developing new solutions specifically for them.

4. Security

One of the key disadvantages of MPEG-DASH is that it uses standard HTTP protocols for streaming videos. This makes it vulnerable to various types of cyberattacks, such as man-in-the-middle attacks and Denial-of-Service (DoS) attacks.

5. First-Mile Delivery

Although DASH is great for last-mile delivery of video streams, using it as an ingest protocol (or first-mile delivery) can lead to sizable latency. To overcome this limitation, other protocols such as RTMP are used to ingest videos (first-mile delivery) and DASH for server-to-client video distribution (last-mile delivery).

Conclusion

MPEG-DASH is one of the most popular streaming protocols today. It offers a number of advantages over other protocols, such as interoperability, enhanced viewing experience, reduced latency, and improved scalability. Though there are some challenges associated with the protocol as well, you can easily resolve them with a leading video streaming solutions provider.

If you have any queries or want to develop your customized streaming solution using DASH, get in touch with our experts today.

LL-HLS vs HLS vs LL-DASH: Low-Latency Streaming Compared in 2024

· 11 min read
Marcello Violini
Founder at Teyuto

In 2009, Apple introduced HTTP Live Streaming (HLS) as a way to stream live and on-demand audio and video content over the internet. It is now the most widely used video streaming protocol across the globe, with support for all major browsers and devices.

In this blog, we will dive into why LL-HLS was created, what it is, how it differs from standard HLS, what its salient features are, how it fares against LL-DASH, and a few things to keep in mind while implementing it.

LL-HLS vs HLS: What is the difference?

As discussed above, HLS is a streaming media delivery protocol that uses HTTP to deliver video and audio content over the internet. It is a popular protocol used by OTT service providers and supported by all major browsers and devices.

On the other hand, LL-HLS is a variant of HLS that is optimized for low-latency streaming. It reduces the time between when a user initiates playback and when they see or hear the content (known as "latency"). This can be especially important for live streams, where even a few seconds of delay can make the experience less enjoyable for viewers.

What are the technical differences between HLS and low-latency HLS?

Buffering Protocols

The main technical difference between HLS and LL-HLS is how each handles buffering. Standard HTTP Live Streaming waits for an entire segment to be downloaded before playback. This adds latency, because viewers have to wait for the whole segment to download before they can watch.

LL-HLS instead uses server push of partial segments, which helps reduce latency. The server pushes parts of the video to the receiving end, and playback starts as soon as the first parts arrive. This ultimately reduces latency because viewers can watch the video while the remaining parts are still being downloaded.

Latency

Since LL-HLS uses chunked transfer encoding, it reduces latency: LL-HLS latency is typically around 2-5 seconds, compared to 6-30 seconds for standard HLS. This makes LL-HLS a better choice for live streaming applications where latency is critical.

Now, let's get down to business.

Why Low-Latency HLS?

HLS, the predecessor of LL-HLS, was launched to stream high-quality content at scale across devices and platforms. However, its scale-oriented streaming architecture came at a price, i.e., latency. For the uninitiated, latency is the time it takes from the video creation (on a camera) to its final playback (on a user's device), also called "glass-to-glass latency". In between, this video stream has to be encoded (both audio and video), segmented, packaged, listed, downloaded, delivered, decoded, lip-synced, and buffered before its playback. The streaming protocol (like HLS) handles all of this heavy lifting.

While HLS did a great job in terms of quality and compatibility, over the years, its development consistently compromised on latency. And, it made sense. Back then, latency wasn't a problem. However, it is no longer the case. With the advent of social media and live streaming, people now want content in real time. They don't want to wait much longer. Here, a delay of 30-50 seconds is simply unbearable.

It only makes sense that Apple (which maintains HLS) would eventually come up with an optimized solution for low latency streaming. So, they did! In 2019, at WWDC, Apple announced Low-Latency HLS or LL-HLS. It was built on top of existing HLS specifications with some modifications to achieve low latency (<5s). Let's take a look at how LL-HLS does this without compromising quality or compatibility:

How Does Low Latency HTTP Live Streaming (LL-HLS) Work?

LL-HLS makes some major changes to the existing HLS specification. These changes include:

1. HLS Partial Segments

In LL-HLS, segments are further divided into parts (HLS partial segments), which decrease individual file sizes. This makes it possible to start playback even before the entire segment is downloaded (as opposed to HLS where you have to wait for the complete segment).

2. Delta Playlist Update

The playlist is updated in LL-HLS with less transfer cost as compared to HLS. This is done by requesting the server to provide delta updates, which update the relevant portions of the playlist already available with the client.

3. Update Blocking

The HTTP GET request of a player can contain "Delivery Directives" in LL-HLS. These are special query parameters requesting a future segment in the playlist response. The server then blocks this request until the specified segment is available. It eliminates playlist polling and, as a result, frees up the server and network bandwidth.
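From the client's side, such a blocking request is just a playlist GET with the Delivery Directives attached as query parameters; a rough sketch with a placeholder playlist URL:

```python
import requests

PLAYLIST = "https://cdn.example.com/live/stream.m3u8"  # placeholder URL

# Ask for the playlist that contains Media Sequence Number 1800, part 2.
# The server holds ("blocks") this request until that part exists, so the
# client never has to poll the playlist on a timer.
resp = requests.get(
    PLAYLIST,
    params={"_HLS_msn": 1800, "_HLS_part": 2},
    timeout=30,  # generous timeout, since the server may hold the request
)
resp.raise_for_status()
print(resp.text[:200])  # the updated playlist, returned the moment it is ready
```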

4. Preload Hints

To further reduce latency, LL-HLS introduces preload hints. They are special tags in the playlist that tell the player to start fetching a segment even before it is required for playback. So, the segment can be played immediately without any delay when needed.

5. Rendition Reports

LL-HLS minimizes the number of roundtrips during bit-rate adaptation. This is done by adding EXT-X-RENDITION-REPORT tags for all media playlists in a multivariant playlist. These tags provide information, such as the last Media Sequence Number and Part currently in the Media Playlist. This way, the client can request required parts from the server without having to fetch an entirely new Media Playlist.

LL-HLS vs HLS: What's The Difference?

There are some key differences between them that you should know about before deciding which one is better for your use case.

Schematic diagram of segment streaming in LL-HLS and HLS

Here are some differences and similarities between LL-HLS and HLS:

1. Latency

As we've seen, the biggest difference between LL-HLS and HLS is latency. With LL-HLS, Apple has managed to reduce it significantly (to sub-5 seconds) compared to regular HLS (which has a latency of around 30 seconds). This latency is even lower than that of HD cable TV streaming. As a result, LL-HLS gives users a near-real-time viewing experience and should be prioritized when latency matters for a given use case.

2. Quality

There is no noticeable difference in quality between LL-HLS and HLS streams. Both provide high-quality video streaming at scale. However, LL-HLS is not the best for low network bandwidth conditions.

3. Compatibility

One of the best things about both HLS and LL-HLS is their compatibility with all major browsers and devices. Some of the popular players that support LL-HLS include AVPlayer (iOS), ExoPlayer (Android), THEOplayer, JW Player, HLS.js, Video.js, and AgnoPlay. So, unlike with other protocols, you don't have to worry about whether your viewers will be able to watch your stream or not.

4. Cost

Deploying regular HLS is cheaper than deploying LL-HLS.

5. Implementation

Implementing LL-HLS is more complex than HLS because of its additional features (like preload hints and rendition reports). So, you'll need to have a good understanding of how it works before you can implement it.

Now, let's look at some advantages and disadvantages of using LL-HLS for low latency streaming:

Advantages Of Low Latency HTTP Live Streaming (LL-HLS)

The advantages of using LL-HLS for low latency streaming include:

1. Low Latency

As it is clear from its name, LL-HLS was designed with latency in mind. The streaming protocol delivers a near-real-time, glass-to-glass viewing experience. In certain scenarios, using LL-HLS, a latency of <2 seconds can also be achieved. This makes it ideal for live streams, such as live sports, news, game streaming, etc. where every second matters.

2. High Quality

Another advantage of using LL-HLS is that it doesn't sacrifice quality for latency. It uses the same codecs (like H.264 and H.265) as regular HLS and provides a high-quality video streaming experience under the desired network conditions.

3. Scalability

The challenge with most streaming protocols, especially the ones involving low latency, is that they are hard to scale. This is not the case with LL-HLS. It builds upon HLS and uses standard HLS packaging, which makes it considerably easy to implement and scale. As a result, you can engage thousands of concurrent users without any hassle.

4. Compatibility

One of the best things about LL-HLS is that it is compatible with all major browsers and devices, including iOS, Android, macOS, Windows, tvOS, and so on. This compatibility makes it possible to reach a larger audience with your live streams without having to worry about whether they will be able to watch it or not.

The disadvantages of using LL-HLS include:

1. New Protocol

LL-HLS is a newer streaming protocol and hence doesn't enjoy as extensive support as its predecessor. This can make it difficult to find information or troubleshoot problems you might face while deploying the protocol.

2. Complex Implementation

Another disadvantage of using LL-HLS is that its implementation is more complex than regular HLS because of its additional features. Apart from the major workarounds already mentioned, LL-HLS has several optimizations that can at times become quite overwhelming.

3. Cost

The cost involved in implementing LL-HLS is also higher than regular HLS because of the extra infrastructure required for low latency streaming. However, this cost is worth it if your use case demands real-time content delivery.

LL-HLS vs. LL-DASH

Although LL-HLS is often also compared with WebRTC, its only fair comparison is with LL-DASH.

Here's a quick comparison of the two streaming protocols:

1. Proprietary Protocol

LL-HLS uses HTTP Live Streaming (HLS) which is a proprietary Apple protocol, while LL-DASH uses the open standard Dynamic Adaptive Streaming over HTTP (DASH).

2. Primarily Based on iOS

LL-HLS is designed specifically for Apple devices. However, since it's backward compatible with HLS players, it enjoys cross-platform and cross-device support as well. LL-DASH is not supported by Apple devices.

3. Latency

The latency of LL-HLS and LL-DASH is comparable. However, depending on the use case and computation required, either of them can have higher or lower latency.

4. Individually Addressable Parts

While LL-HLS "parts" are individually addressable (as tiny files or byte ranges in the entire segment), LL-DASH "chunks" (or "fragments") are not. This means that in LL-DASH, the client doesn't have to wait for the server to completely encode the segment before sending the preceding chunks across.

5. Playlist Update

In HLS as well as DASH protocol, the client polls the server at regular intervals (say 10 seconds) to check for updates in order to fetch new content. However, it is possible to achieve playlist update without any polling from clients in both LL-HLS and LL-DASH. While LL-HLS does so with its Delivery Directives (_HLS_msn=<M>, _HLS_part=<N>, & _HLS_skip=YES|v2), LL-DASH does not depend on manifest update for a player to make sense of a new chunk.

A comparison between LL-HLS and LL-DASH

6. Codecs and Encryption

In both LL-HLS and LL-DASH, content protection uses the MPEG-CENC (Common Encryption) standards. Both protocols also support the Common Media Application Format (CMAF). In terms of codecs, while LL-DASH is codec-agnostic, LL-HLS only allows specific codecs for encoding.

7. Quality Switching

Both protocols offer adaptive bitrate streaming. They help players automatically switch between multiple renditions based on changing network conditions without interrupting playback experience for viewers. However, LL-HLS is different in that it has multiple streams for different bitrates and resolutions. LL-DASH only has one stream for a particular bitrate and resolution.

8. Security

One more advantage that LL-HLS has over LL-DASH relates to the content protection mechanism, i.e., how do you know that your encoder produced an encrypted file with valid signatures? To check this, the HLS protocol uses EXT-X-KEY tags, whereas DASH relies on PSSH boxes inside MP4 files or separate init segments outside of MP4s called xlinks. Both methods require extra network roundtrips, which can introduce significant delays during live streaming events. To make things simpler and more efficient, Apple included the KEY ID and IV values directly in the m3u8 playlist, so players can validate them before downloading any segment, with no extra request/response needed.

To Wrap It Up

LL-HLS is a great choice for low-latency streaming if you are looking for a protocol that is compatible with all major browsers and devices. However, it is important to keep in mind that its implementation is more complex as compared to regular HLS.

In case you need any help, feel free to reach out to us.

What is HTTP Live Streaming (HLS)? Pros & Cons of HLS

· 14 min read
Marcello Violini
Founder at Teyuto

HTTP Live Streaming was born during a tumultuous time. The launch of the iPhone in 2007 set the scene for the smartphone wars. And, with it, came a tectonic shift in content consumption.

Mobile phones were already popular. Users were no longer tethered to their desktops; they could access the internet on the go! It was inevitable that smartphones would further accelerate this trend. So, content providers now had to deliver videos keeping up with this newfound mobility.

But there was a challenge – a sizable one at that! Adobe's Flash Player was the reigning champion of video delivery in those days. However, Flash wasn’t well-optimized for mobile devices. For one, it was a battery hog, which was a major concern for users who wanted to watch videos on the go. Flash also wasn’t fine-tuned for touchscreens, and certain mobile operating systems didn't support it at all.

Apple was quick to realize that a new standard was needed, one that could ensure RTMP-like streams on mobile devices, take advantage of the HTML5 specification, and be more efficient with bandwidth usage. So, in 2009, it proposed HTTP Live Streaming (HLS). The streaming protocol has since become the de facto standard for delivering video on all platforms and browsers.

In this article, we will take a closer look at what is HLS, how an HLS stream works, and when you should use it for your projects.

HTTP Live Streaming (HLS)

HTTP Live Streaming is an adaptive bitrate communication protocol created by Apple to deliver video and audio content over the internet. It uses one of the three key web standards – Hypertext Transfer Protocol (HTTP) – to transfer data between servers and clients.

HLS is designed for reliability and scalability on top of cross-device, cross-platform performance. These aspects make it ideal for a range of streaming applications including large-scale live events and video-on-demand (VOD).

When compared to other adaptive bitrate techniques such as MPEG-DASH, HLS is different in that it uses multiple streams with varying bitrates for a given resolution. DASH, conversely, uses a single stream for a bitrate on a certain resolution. So, while DASH provides better performance under fixed network conditions, in the real world, HLS has an edge!

HLS' ability to switch between different streams based on the network connection extends a superior streaming performance. This provides an unparalleled user experience in real-world conditions, as the internet speed always tends to vary, especially on cellular networks. Being an HTTP-based protocol, HLS is easily implemented across devices, making it an attractive option for content providers and platforms alike.

How does HTTP Live Streaming Work?

The working of HLS is fairly simple. A master playlist (also called a master manifest) – which contains information about resolutions, bitrate combinations (renditions), languages, codec, metadata, etc. – is sent to the player. Each of these renditions has a separate playlist (also called a child manifest) that lists out their names, sequence, and respective URLs (URIs).

The player then downloads these playlists and starts playback. While the video is playing, the client switches between renditions based on the network conditions of the device. All of this heavy lifting is done in the background and ensures uninterrupted playback.
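To make this concrete, here is a minimal sketch of a multivariant playlist; the bandwidths, codec strings, and URIs are illustrative rather than taken from a real deployment:

```
#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=800000,AVERAGE-BANDWIDTH=700000,CODECS="avc1.42e00a,mp4a.40.2",RESOLUTION=640x360,FRAME-RATE=30
360p/playlist.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=2800000,AVERAGE-BANDWIDTH=2500000,CODECS="avc1.64001f,mp4a.40.2",RESOLUTION=1280x720,FRAME-RATE=30
720p/playlist.m3u8
```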

A multivariant playlist begins with the #EXTM3U tag, which is compulsory and identifies an extended M3U file. The #EXT-X-STREAM-INF tag informs the player that the next URL (URI) is another playlist file, i.e. the child manifest.

The #EXT-X-STREAM-INF tag contains several parameters, including BANDWIDTH (upper bound bitrate in bits per second) and CODECS (RFC-6381-based format identifiers for audio and video separated by comma). The CODECS parameter is optional but highly recommended. It informs which encoder is used for audio and video streams. The same holds for RESOLUTION (display size in pixels), FRAME-RATE (maximum frame rate), and AVERAGE-BANDWIDTH (average bitrate). For HDCP protection, you can also use the HDCP-LEVEL parameter. All you have to do is use TYPE-0 (for HD resolution) and TYPE-1 (for resolutions greater than HD).

Here’s another example of a more advanced multivariant playlist.
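The sketch below is again illustrative (hypothetical URIs and bitrates), but follows the same tag syntax:

```
#EXTM3U
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio-lo",NAME="English",LANGUAGE="en",DEFAULT=YES,URI="audio/lo/en.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio-lo",NAME="Español",LANGUAGE="es",DEFAULT=NO,URI="audio/lo/es.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio-hi",NAME="English",LANGUAGE="en",DEFAULT=YES,URI="audio/hi/en.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio-hi",NAME="Español",LANGUAGE="es",DEFAULT=NO,URI="audio/hi/es.m3u8"
#EXT-X-STREAM-INF:BANDWIDTH=1200000,CODECS="avc1.42e00a,mp4a.40.2",AUDIO="audio-lo"
video/lo.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=4500000,CODECS="avc1.64001f,mp4a.40.2",AUDIO="audio-hi"
video/hi.m3u8
```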

In this example, we can see a playlist with two groups of additional audio renditions (represented by GROUP-IDs of audio-lo and audio-hi). Every media element (in this case, audio files for different languages) should be represented by the tag EXT-X-MEDIA and its TYPE (AUDIO, VIDEO, SUBTITLES, or CLOSED-CAPTIONS.)

Also, every media selection group should have its elements encoded with the same characteristics. For example, they should have the same codec, maximum bandwidth, etc.

What is the Architecture of HLS?

HTTP Live Streaming has a three-tiered architecture – Origin Server, Distribution Server (edge), and Client.

1. Origin Server

The origin server receives an AV input and converts it into a compressed file ready to be distributed. Typically, it comprises a media encoder and a stream segmenter. The media encoder encodes the media file into compatible formats. The stream segmenter then splits the encoded media into small segments and creates an index file. The index file contains the metadata of the media segments.

2. Distribution Server (Edge)

The distribution server, or a CDN, is responsible for delivering the content to the clients. It comprises an HTTP server and a media server. The HTTP server stores the index file and the media segments. A media server streams the media files to the clients.

3. Client

The client is responsible for requesting and receiving the content from the distribution server. It comprises an HTTP client or a media player. The HTTP client requests the index file from the HTTP server. The media player then uses the index file to request the media segments from the media server and plays them.

Features

HTTP Live Streaming comes with a plethora of features that make it the go-to choice for content providers. Let’s have a look at some of its key features:

1. Live, on-demand, and event video streaming

One of the best things about HLS is that it supports live, on-demand, and event playlists. This means you can use the same protocol to live stream an event as well as serve an existing video file stored on your server. However, since HLS prioritizes video quality over latency, end-to-end use of the protocol can lead to a delay of up to 45 seconds. This challenge is typically resolved by using a different protocol (such as RTMP) for ingestion, or by the LL-HLS extension, which brings the latency down to approximately 2 seconds.

2. Cross-platform compatibility

Another great deal about HLS is that it’s compatible with all major browsers and platforms. This includes Safari, Edge, Chrome, Firefox, Android, iOS, tvOS, Playstation 4, Xbox One, and more. So, whether you want to deliver content to users on desktop, mobile devices, or smart TVs, HLS is what you need!

3. Adaptive bitrate streaming

HTTP Live Streaming also supports multiple bitrates. This means that you can encode your videos into different bitrates for different devices and internet speeds. The client will then automatically switch between these renditions based on network conditions to ensure a seamless video delivery. This results in a better user experience with no sign of buffering or low-quality video streams.

4. Encryption and authentication

HTTP Live Streaming also supports encryption and authentication. This means that you can encrypt your video streams to protect them from unauthorized access. You can also authenticate users before they are able to access your content. This is especially useful if you want to deliver content behind a paywall or restrict access to certain countries or regions.
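For example, segment encryption is signaled in a media playlist with the EXT-X-KEY tag; the key URI and IV below are illustrative:

```
#EXT-X-KEY:METHOD=AES-128,URI="https://keys.example.com/k/42",IV=0x9F7E6D5C4B3A29181716151413121110
```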

5. Support for multiple languages

Another great thing about HLS is that it supports multiple languages. This means that you can create variant playlists for different languages and the client will automatically switch between them based on user preference settings. So, whether you want to deliver content in English, Spanish, French, or any other language, HLS has got you covered!

6. Closed captioning

Closed captioning is a feature that allows people who are deaf or hard of hearing to follow along with audio content. HLS supports closed captioning by allowing you to embed captions into your video streams. These captions are then displayed on the screen along with the video for easy accessibility.

7. Subtitles

Subtitles are similar to closed captions but are typically used for foreign-language films or TV shows where viewers may not be familiar with the spoken dialogue. Similar to closed captioning, HLS allows you to embed subtitles into your video streams so that they can be displayed on the screen along with the video itself.

8. Audio descriptions

Audio descriptions are a type of audio track that describes what is happening on the screen for people who are blind or have low vision. HLS supports audio description tracks, called Descriptive Video Service (DVS), by allowing you to embed them into your video streams. Your DVS must be marked with the attribute CHARACTERISTICS="public.accessibility.describes-video".
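Such a track might be declared in the multivariant playlist like this (the group id, name, and URI are illustrative):

```
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio",NAME="English (describes video)",LANGUAGE="en",CHARACTERISTICS="public.accessibility.describes-video",URI="audio/en-dvs.m3u8"
```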

Pros of Using the HLS Protocol

Now that we’ve looked at what is HLS and how it works, let’s take a look at some of its key advantages:

1. All-Device Delivery

As mentioned earlier, one of the best things about HLS is that it’s compatible with all major browsers and platforms. This means that you can use the same protocol to stream content to users on desktops, mobile devices, smart TVs, and more. So, whether you want to deliver content to users on iPhone or Android devices, HLS is your go-to streaming protocol!

2. Excellent Quality

When streaming digital content, quality is of the essence. Today, users are spoiled for choice. If they don’t like what they see on your site, they can easily move on to the next one. This is where HLS really shines. The streaming protocol uses adaptive bitrate streaming to automatically adjust video quality based on network conditions. This ensures that users always have a great viewing experience, regardless of their internet speed or connection quality!

3. Cost-Efficient

When it comes to streaming digital content, the cost is always a major concern. HTTP Live Streaming is an extremely cost-effective solution as it doesn’t require any additional hardware. All you need is a standard web server and you’re good to go!

4. Privacy and Security

Another great thing about HLS is that it supports encryption and authentication. This means that you can encrypt your video streams to protect them from unauthorized access. Using HLS, you can also create standard DRM solutions such as Microsoft PlayReady, Google Widevine, and Apple FairPlay. This also makes HLS an ideal choice for content that needs to be delivered behind a paywall or restricted to certain countries/regions.

Cons of Using the HTTP Live Streaming Protocol

While HTTP Live Streaming comes with a lot of advantages, it also has some drawbacks that you should be aware of before using it for your own projects. Let’s take a look at some of its key disadvantages:

1. Latency

One of the biggest challenges with HLS is latency – the time it takes for video data to travel from the server to the client. This can be problematic if you’re trying to stream live events where every second counts! There are various solutions available that can help reduce this latency (more on this later), but it’s something you should keep in mind if you’re planning on using HLS for your project.

2. Internet Speed

Another drawback of HLS is that it requires a minimum internet speed of 400 kbps for low-quality videos and up to 8 Mbps for HD quality. This can be a major issue in areas with poor internet coverage or for users with limited data plans.

Solutions to the Latency Problem

There are various solutions available that can help reduce latency when using HLS. Some of them include:

1. Reducing the Segment Size

While Apple recommends a segment length of 6 seconds, lowering the segment size (target duration) can decrease latency significantly. That's because the client waits for the server to publish a segment, and the server cannot publish it before the predetermined target duration has elapsed. So, before a player can even download a segment, it is already delayed by the target duration. On top of that, multiple segments are kept as a buffer to ensure seamless delivery even on patchy networks. Counting every segment's encoding, packaging, playlist listing, and downloading, this adds latency of roughly 4 to 7 times your segment size.
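As rough arithmetic on that rule of thumb (a sketch, using the 4-7x factor stated above):

```python
def hls_latency_range(segment_s: float, low: int = 4, high: int = 7):
    """Rule-of-thumb glass-to-glass latency: 4-7x the segment duration."""
    return (segment_s * low, segment_s * high)

print(hls_latency_range(6.0))  # Apple's recommended 6 s -> roughly 24-42 s
print(hls_latency_range(2.0))  # shorter 2 s segments  -> roughly 8-14 s
```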

2. Announcing Segments Beforehand

You can announce segments before they are actually available by using the #EXT-X-PRELOAD-HINT tag, which tells the player the expected URI of the next partial segment so it can request it in advance.
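
#EXT-X-PRELOAD-HINT is part of Low-Latency HLS, where each segment is split into partial segments that are published as soon as they are encoded. A playlist excerpt, with part durations and names as illustrative placeholders:

```
# Illustrative LL-HLS excerpt: part names and durations are placeholders
#EXT-X-PART-INF:PART-TARGET=1.000
#EXTINF:4.000,
segment10.mp4
#EXT-X-PART:DURATION=1.000,URI="segment11.part0.mp4"
#EXT-X-PART:DURATION=1.000,URI="segment11.part1.mp4"
#EXT-X-PRELOAD-HINT:TYPE=PART,URI="segment11.part2.mp4"
```

The player can issue the request for segment11.part2.mp4 immediately; the server holds the request open and responds the instant the part exists, saving a round trip for every part.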

3. Using a Different Ingest Protocol

Switching to a faster ingest protocol for contribution (RTMP and SRT are common choices) gets your feed to the encoder sooner and trims time from the delivery pipeline. You can also use Low-Latency HLS (LL-HLS), an extension of the HLS protocol that publishes partial segments and lets players make blocking playlist requests, bringing latency down to around 2-5 seconds.
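
On the playlist side, LL-HLS is advertised through the #EXT-X-SERVER-CONTROL tag, which tells players that blocking playlist reloads are available and how far behind live they should position themselves. A header sketch with typical but illustrative values:

```
#EXTM3U
# Illustrative LL-HLS playlist header: values are placeholders
#EXT-X-VERSION:9
#EXT-X-SERVER-CONTROL:CAN-BLOCK-RELOAD=YES,PART-HOLD-BACK=3.000
#EXT-X-TARGETDURATION:4
#EXT-X-PART-INF:PART-TARGET=1.000
```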

When to Use the HLS Protocol?

HTTP Live Streaming is an extremely versatile streaming protocol. It can be used for a wide range of applications including live events, on-demand content, and more. Here are some situations where you should use HLS:

1. If you want to delight your viewers

HLS was developed with the goal of providing an uninterrupted and great viewing experience to users. So, if you want to ensure that your viewers have the best possible experience when watching your content, HLS is the way to go!

2. If you’re streaming live events

HTTP Live Streaming is the perfect choice for streaming live events such as sports, concerts, conferences, etc. The protocol is designed for reliability and scalability so that you can stream large-scale live events without any hiccups! The only thing that you need to work around is latency, which has a couple of quick fixes.

3. If you want to reach a global audience

One of the best things about HLS is that it’s compatible with all major browsers and platforms. This means that you can use the same protocol to deliver content to users on different devices and across different geographical regions. So, if you want to reach a global audience with your content, HLS should be your go-to streaming protocol!

4. If you need a simple implementation

Another great thing about HLS is that it’s extremely easy to implement. All you need is a standard web server and you can start streaming content right away!

When Not to Use the HLS Protocol?

While HTTP Live Streaming comes with a lot of advantages, there are also some situations where you should avoid using it. Here are some situations where you shouldn’t use the HLS protocol:

1. If latency is a major concern for you

As mentioned earlier, one of the biggest challenges with HLS is latency. This can be problematic if you’re trying to stream interactive or time-critical content like web conferencing or live sports! There are various solutions available that can help reduce this latency, but it’s something you should keep in mind if you’re planning on using HLS for your project.

2. If you have limited bandwidth

Another drawback of HLS is that it requires a minimum internet speed of 400 kbps even for low-quality video. Anything less can cause problems not only with playback itself but also with keeping audio and video in sync. So, if you’re looking to deliver content to users in low-bandwidth areas or on limited data plans, HLS may not be the best choice for you!

To Wrap It Up...

HTTP Live Streaming is an extremely versatile streaming protocol. It comes with a lot of advantages, including all-device delivery, excellent quality, cost-efficiency, privacy and security, and support for multiple languages, closed captions, subtitles, and audio descriptions.

While it has some drawbacks such as latency and internet speed requirements, there are various solutions available that can help address these challenges. So, if you’re looking for a streaming protocol that is compatible with all major browsers and platforms, delivers great quality, and is easy to implement, HLS should be your go-to choice!