network Archives • BlogGeek.me
https://bloggeek.me/webrtctag/network/
The leading authority on WebRTC

Packetization
https://bloggeek.me/webrtcglossary/packetization/

Packetization in WebRTC is the process used to take audio and video frames and prepare them for sending over the network.

A media frame can be considerably smaller or larger than the MTU size, which means we will either be underutilizing the network or fragmenting these frames across multiple network packets.

To properly receive such frames, reconstruct them, and play them back, RTP packetizes the media frames, splitting them into multiple packets in a way that makes dealing with packet loss, reordering, and other network artifacts manageable.

Packetization is especially important for video codecs, where frames are usually larger than a single network packet. Different video codecs define different payload formats, which indicate how to packetize and depacketize that specific codec based on its capabilities and characteristics.
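To make this a bit more concrete, here is a minimal sketch (in Python, not actual WebRTC code) of what a naive packetizer does: split an encoded frame into payload-sized chunks, give every chunk its own sequence number, keep the frame's timestamp on all of them, and mark the last one. The 1,200-byte budget and the dictionary "packet" are illustrative assumptions; real payload formats add codec-specific headers on top of this.

```python
# A minimal, illustrative packetizer: split an encoded frame into payload-sized
# chunks and mark the last packet of the frame, the way RTP payload formats
# typically use the marker bit for video. MAX_PAYLOAD is an assumed budget.

MAX_PAYLOAD = 1200 - 12  # assumed packet budget minus a 12-byte RTP header

def packetize(frame: bytes, seq: int, timestamp: int):
    packets = []
    for offset in range(0, len(frame), MAX_PAYLOAD):
        chunk = frame[offset:offset + MAX_PAYLOAD]
        packets.append({
            "seq": seq,                      # monotonically increasing per packet
            "timestamp": timestamp,          # same timestamp for every packet of a frame
            "marker": offset + MAX_PAYLOAD >= len(frame),  # last packet of the frame
            "payload": chunk,
        })
        seq = (seq + 1) & 0xFFFF             # sequence numbers wrap at 16 bits
    return packets, seq
```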

MTU size
https://bloggeek.me/webrtcglossary/mtu-size/

MTU stands for Maximum Transmission Unit.

When data is sent over a network, it must adhere to the maximum transmission unit available. This means that if we are trying to send a chunk of data bigger than the MTU size, it will be fragmented into smaller chunks and sent in multiple packets over the network (or not sent at all).

To efficiently send media over the network in real time, our goal is to avoid having the packets we send fragmented by the network or dropped because the MTU size is too small, which is why we packetize the media frames before sending them.

Different routers and switches in the network may have different MTU sizes configured, so figuring out the effective MTU size of a given network path can be tricky.

In most networks, the MTU size is usually assumed to be around the 1,500 bytes mark. The approach WebRTC took here is to assume an MTU of around 1,200 bytes and use it for its packetization calculations (give or take a few bytes). Using this lower value makes sure WebRTC will work well on most network configurations, including scenarios where packets are further wrapped by VPN tunneling bytes or additional layers of encryption.
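As a rough illustration of why ~1,200 bytes is a comfortable target, here is some back-of-the-envelope arithmetic. The header sizes below are typical values and the TURN/VPN allowances are assumptions for the sake of the example, not guarantees for any specific path:

```python
# Back-of-the-envelope arithmetic only - exact overheads depend on the path
# (IPv4 vs IPv6, TURN relaying, VPN tunneling, and so on).

ETHERNET_MTU = 1500   # common default on most links
IP_HEADER = 40        # IPv6 header (IPv4 would be 20)
UDP_HEADER = 8
TURN_OVERHEAD = 4     # possible TURN channel-data framing (assumed)
VPN_OVERHEAD = 60     # rough allowance for tunneling / extra encryption (assumed)

WEBRTC_PACKET_BUDGET = 1200   # the conservative target WebRTC works with

headroom = ETHERNET_MTU - (WEBRTC_PACKET_BUDGET + IP_HEADER + UDP_HEADER
                           + TURN_OVERHEAD + VPN_OVERHEAD)
print(f"headroom left on a 1,500-byte link: {headroom} bytes")  # still positive
```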

BWA (Bandwidth Allocation)
https://bloggeek.me/webrtcglossary/bwa/

BWA stands for Bandwidth Allocation.

BWA is an important aspect when dealing with large group calls, where participants may receive more than a single incoming media stream. In such cases, the decision of what percentage of the estimated bandwidth to allocate per incoming media stream becomes important.

Since each participant is limited by the performance of the device, the display resolution, and the download speed, there is likely to be a need to curb the amount of data being sent. Reducing that amount needs to be done based on the priorities of the given scenario, which is what BWA is meant to achieve. Such priorities can be defined by dominant speaker identification, displayed resolution, phases of the moon, or any other heuristic.

BWA will usually get its upper bound of total bitrate from a BWE (bandwidth estimation) algorithm.
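Here is an illustrative (and deliberately simplistic) allocation pass in Python: split the BWE estimate across incoming streams by priority weights, capped by each stream's own maximum bitrate. The stream fields and numbers are made up for the example; real SFUs use far more elaborate policies.

```python
# Toy bandwidth-allocation pass: distribute the BWE estimate across incoming
# streams proportionally to priority, never exceeding a stream's own maximum.

def allocate(total_bps: int, streams: list[dict]) -> dict[str, int]:
    """streams: [{"id": str, "priority": float, "max_bps": int}, ...]"""
    weight_sum = sum(s["priority"] for s in streams) or 1.0
    allocation: dict[str, int] = {}
    leftover = 0
    for s in sorted(streams, key=lambda s: s["priority"], reverse=True):
        share = int(total_bps * s["priority"] / weight_sum) + leftover
        allocation[s["id"]] = min(share, s["max_bps"])
        leftover = share - allocation[s["id"]]  # unused bits roll over to the next stream
    return allocation

# e.g. a dominant speaker shown large plus two thumbnails,
# with 2.5 Mbps estimated by BWE for the whole connection
print(allocate(2_500_000, [
    {"id": "speaker", "priority": 4.0, "max_bps": 2_000_000},
    {"id": "thumb-1", "priority": 1.0, "max_bps": 300_000},
    {"id": "thumb-2", "priority": 1.0, "max_bps": 300_000},
]))
```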

Packet pacing
https://bloggeek.me/webrtcglossary/packet-pacing/

Transmitting many UDP packets that form a video frame in a burst at the same time increases the chances of packet loss and increases jitter. For that reason, a “pacer” is typically used to spread the sending out a bit (but over no longer than the frame duration).
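A toy pacer sketch in Python, illustrative only: instead of bursting all of a frame's packets at once, space them evenly over at most one frame duration. Real pacers schedule packets on a timer queue rather than blocking the sending thread.

```python
import time

# Illustrative pacer: spread the packets of one frame evenly across the frame
# interval instead of sending them all back to back.

def pace_frame(packets: list[bytes], send, frame_duration_s: float = 1 / 30):
    if not packets:
        return
    gap = frame_duration_s / len(packets)  # keep the whole frame within one frame interval
    for pkt in packets:
        send(pkt)          # send() is whatever transmits a single UDP packet
        time.sleep(gap)    # a real pacer would use a timer queue, not sleep()
```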

See also https://datatracker.ietf.org/doc/html/rfc8298#section-4.1.2.6 for more information.

QUIC stands for Quick UDP Internet Connections.

QUIC started out as an experimental protocol by Google. It is based on UDP and targeted at improving situations where you need multiple parallel sessions between two entities. This is the situation for virtually every web page on the internet, which usually requires more than a single resource file to be transmitted from the web server to the browser.

QUIC meets WebRTC in Google’s roadmap, where Google announced its intention to experiment with QUIC as a replacement for SCTP for the data channel transport.

Bandwidth
https://bloggeek.me/webrtcglossary/bandwidth/

Bandwidth is the capacity available to receive and send data over a certain network connection.

There are a few important concepts around bandwidth:

  • Available bandwidth fluctuates dynamically over time
  • Sending (outgoing) and receiving (incoming) bandwidth are asymmetric in nature
  • In VoIP and WebRTC, our purpose is to estimate the available bandwidth as accurately as possible: the better the estimate, the better the media quality we are able to provide
  • The estimate we make determines the maximum bitrate that we can send or receive

TWCC (Transport Wide Congestion Control)
https://bloggeek.me/webrtcglossary/transport-cc/

TWCC is also known as transport-cc.

TWCC stands for Transport Wide Congestion Control. It is used as a sender-side bandwidth estimation technique in WebRTC.

TWCC operates under the following concept:

  1. The receiver of the media calculates the inter-packet delays and reports them back to the sender of the media
  2. The sender then calculates the estimated bitrate based on that information

TWCC is considered by many to be the better algorithm for bandwidth estimation, and it is also the algorithm that Google is focusing on and investing more time and effort in. The main reason for this is that the actual estimation logic lives only in the sender, which in media servers such as an SFU means better control over the algorithm.

The draft for the TWCC implementation can be found in draft-holmer-rmcat-transport-wide-cc-extensions-01. Support for TWCC is negotiated in the Offer/Answer SDP exchange.

What is Congestion Control?

Before diving into TWCC, it’s crucial to understand the concept of congestion control. In a network, congestion occurs when the demand for resources exceeds the available capacity. This can lead to packet loss, increased latency, and jitter, all of which are detrimental to real-time communication. Congestion control algorithms aim to optimize network performance by regulating the flow of data packets.

What is TWCC?

Transport Wide Congestion Control (TWCC) is a Google-developed extension to RTP (the Real-time Transport Protocol) for WebRTC. It provides a feedback mechanism that allows the sender to adjust the rate of media packet transmission based on the network conditions experienced by the receiver. Unlike traditional congestion control algorithms that operate on a per-media-stream basis, TWCC works across all media streams in a single transport channel. This “transport-wide” approach offers a more holistic view of network conditions, enabling more accurate and efficient adjustments.

How Does TWCC Work?

Packet Marking

Each RTP packet sent by the sender is marked with a unique sequence number. This sequence number is transport-wide, meaning it is unique across all media streams sharing the same transport channel.

Feedback Reports

The receiver monitors the arrival time of each RTP packet and generates feedback reports. These reports contain the sequence numbers and the observed arrival times of the packets. The feedback is then sent back to the sender.

Rate Adjustment

The sender uses the feedback reports to calculate the network congestion level. If packets are arriving late or are lost, it’s an indication of network congestion. The sender then adjusts the rate of packet transmission accordingly.
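To illustrate the idea (and only the idea: this is not the actual Google Congestion Control code), here is a toy sender-side adjustment. If the one-way delay keeps growing between the first and last packets covered by a feedback report, a queue is probably building somewhere, so we back off; otherwise we probe for more. The thresholds and multipliers are arbitrary assumptions.

```python
# Toy sender-side adjustment driven by TWCC feedback: compare how fast packets
# were sent with how fast they arrived; a growing one-way delay suggests a
# queue is building, so back off, otherwise probe upward.

def update_rate(current_bps: int, feedback: list[tuple[float, float]]) -> int:
    """feedback: [(send_time_s, arrival_time_s), ...] for recent packets."""
    if len(feedback) < 2:
        return current_bps
    (s0, a0), (s1, a1) = feedback[0], feedback[-1]
    delay_gradient = (a1 - a0) - (s1 - s0)  # > 0 means queuing delay is growing
    if delay_gradient > 0.005:              # threshold is illustrative
        return int(current_bps * 0.85)      # overuse: reduce the target bitrate
    return int(current_bps * 1.05)          # underuse/normal: probe for more
```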

Advantages of TWCC

  1. Holistic View: By working across all media channels, TWCC provides a more comprehensive and more accurate understanding of network conditions
  2. Quick Adaptation: TWCC enables the sender to quickly adapt to changing network conditions, improving the quality of experience for end-users
  3. Efficiency: Traditional congestion control mechanisms require separate feedback for each media channel, consuming more bandwidth. TWCC’s transport-wide feedback is more bandwidth-efficient
  4. Flexibility: TWCC is codec-agnostic, meaning it can be used with any media codec, offering greater flexibility in application development

Implementing TWCC

Implementing TWCC in your WebRTC application involves both client-side and server-side changes. On the client side, you’ll need to enable the TWCC extension in your RTP parameters. On the server side, you’ll need to handle incoming TWCC feedback reports and adjust your sending rate accordingly. Many WebRTC libraries and SDKs already support TWCC, making it easier to implement.
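As a small example of what “negotiated in the SDP” looks like: when TWCC is agreed upon, the SDP carries an a=rtcp-fb:<pt> transport-cc attribute and an a=extmap line with the transport-wide sequence number extension URI from the draft. A trivial (illustrative) check might look like this:

```python
# Minimal check of whether TWCC was negotiated, by inspecting the SDP produced
# by the Offer/Answer exchange (attribute names per draft-holmer-rmcat-transport-wide-cc-extensions).

TWCC_EXT_URI = "http://www.ietf.org/id/draft-holmer-rmcat-transport-wide-cc-extensions-01"

def twcc_negotiated(sdp: str) -> bool:
    has_feedback = any(
        line.startswith("a=rtcp-fb:") and line.rstrip().endswith("transport-cc")
        for line in sdp.splitlines()
    )
    has_extension = TWCC_EXT_URI in sdp
    return has_feedback and has_extension
```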

REMB (Receiver Estimated Maximum Bitrate)
https://bloggeek.me/webrtcglossary/remb/

REMB stands for Receiver Estimated Maximum Bitrate. It is an RTCP message used to provide bandwidth estimation in order to avoid creating congestion in the network.

This RTCP message includes a field to convey the total estimated available bitrate on the path to the receiving side of this RTP session (in mantissa + exponent format). Although it is defined as the total available bitrate, the sender typically uses it to configure the maximum bitrate of the video encoding.
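The mantissa + exponent packing is easy to show in a few lines: the draft reserves 6 bits for the exponent and 18 bits for the mantissa, and the conveyed bitrate is mantissa × 2^exponent. Below is a sketch of that encoding (illustrative code, not a full RTCP packet builder):

```python
# The REMB bitrate field packs the estimate as an 18-bit mantissa and a 6-bit
# exponent: bitrate = mantissa * 2 ** exponent (per draft-alvestrand-rmcat-remb).

def encode_remb_bitrate(bps: int) -> tuple[int, int]:
    exponent = 0
    mantissa = bps
    while mantissa > 0x3FFFF:   # mantissa must fit in 18 bits
        mantissa >>= 1
        exponent += 1
    return exponent, mantissa

def decode_remb_bitrate(exponent: int, mantissa: int) -> int:
    return mantissa << exponent

exp, man = encode_remb_bitrate(2_500_000)
print(exp, man, decode_remb_bitrate(exp, man))  # 2.5 Mbps survives the round trip
```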

In addition to being used by an endpoint to report the available bandwidth in the network, it has also been used by media servers to limit the bitrate the sender is allowed to send.

To provide a better estimation, REMB is usually used in combination with the abs-send-time header extension, because accurate timing information is critical for the accuracy of the REMB value calculation.

This RTCP message, defined in draft-alvestrand-rmcat-remb-03, was never fully standardized but is supported by all the WebRTC browser implementations, although in the case of Chrome it is deprecated in favor of the newer sender-side bandwidth estimation based on RTCP transport feedback messages. Support for this message is negotiated in the Offer/Answer SDP exchange.

How Does REMB Work?

In a typical WebRTC video call, both the sender and receiver are constantly exchanging RTCP packets to monitor the quality of service. REMB adds an additional layer to this by including a specific RTCP packet that contains the receiver’s estimated maximum bitrate. This packet is sent from the receiver to the sender, allowing the sender to adjust its bitrate accordingly.

The REMB algorithm takes into account various factors such as packet loss, jitter, and network latency to calculate the maximum bitrate that the receiver can handle. Once calculated, this information is encapsulated in an RTCP REMB packet and sent back to the sender.

Why is REMB Important?

REMB plays a vital role in enhancing the user experience in WebRTC applications. By providing real-time feedback on network conditions, it allows for dynamic bitrate adaptation. This ensures that the video quality is as high as possible given the current network conditions, without overwhelming the network or causing excessive latency.

Moreover, REMB is particularly useful in scenarios where network conditions are highly variable, such as mobile networks or shared Wi-Fi environments. In such cases, the ability to adapt the bitrate on the fly is invaluable for maintaining a smooth and high-quality video stream.

REMB vs. TWCC

It’s worth noting that REMB is not the only mechanism for bitrate adaptation in WebRTC. TWCC is another mechanism that is considered to be more efficient and flexible, especially in complex network scenarios involving multiple streams and varying conditions.

DNS64/NAT64
https://bloggeek.me/webrtcglossary/dns64nat64/

NAT64 and DNS64 together provide a mechanism for connecting to IPv4 destinations from an IPv6-only network. It is based on a NAT converting addresses from IPv6 to IPv4, with the collaboration of a DNS server that generates synthetic IPv6 addresses.

It is supported by WebRTC endpoints, and it is important because iOS apps are required to be compatible with this mechanism.

Congestion
https://bloggeek.me/webrtcglossary/congestion/

When a network element (usually a router) cannot forward packets to the next network element fast enough, the packets get queued in that router’s internal buffer, increasing the latency (and the jitter) and potentially ending up as dropped packets.

When that happens, and latency and packet loss increase, we say that the network is congested.

WebRTC endpoints try to prevent congestion by estimating the available bandwidth and limiting the bitrate they send based on the feedback received from the receiver side.

Lip Synchronization
https://bloggeek.me/webrtcglossary/lip-synchronization/

Lip Synchronization is a process taking place on the receiver end in WebRTC (and other VoIP protocols), where the audio and the video tracks are synchronized.

When media gets captured in WebRTC, the originator of the media timestamps the raw media, which then gets encoded and sent over the network. During this process, audio and video are handled separately, as they go through different encoders that have different behavior and strategies. The intent of the sender is to get the media sent as soon as possible without harming network performance.

The receiving end collects the media packets, passes them through a jitter buffer and delays the video (or the audio) in order to get lip synchronization. This is done by matching the timestamps in the different media tracks.
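A simplified illustration of that timestamp matching: audio and video RTP timestamps run on different clocks (for example 48 kHz for Opus and 90 kHz for video), so the receiver uses the NTP-to-RTP mapping carried in RTCP Sender Reports to project both tracks onto one wall-clock timeline and then delays whichever track is ahead. The numbers below are made up for the example:

```python
# Simplified sync computation: RTCP Sender Reports map each track's RTP
# timestamps to a shared NTP wall clock; the receiver can then see how far
# apart the tracks are and add a matching playout delay to one of them.

def capture_time(rtp_ts: int, sr_rtp_ts: int, sr_ntp_s: float, clock_rate: int) -> float:
    """Project an RTP timestamp onto the sender's NTP timeline using the last SR."""
    return sr_ntp_s + (rtp_ts - sr_rtp_ts) / clock_rate

# Illustrative numbers: Opus audio uses a 48 kHz RTP clock, video uses 90 kHz.
audio_t = capture_time(rtp_ts=48_480, sr_rtp_ts=48_000, sr_ntp_s=100.0, clock_rate=48_000)  # 100.010 s
video_t = capture_time(rtp_ts=92_700, sr_rtp_ts=90_000, sr_ntp_s=100.0, clock_rate=90_000)  # 100.030 s
offset_ms = (video_t - audio_t) * 1000
print(f"tracks are {offset_ms:.0f} ms apart on the capture timeline; "
      f"the receiver delays the earlier-arriving track by that much")
```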
