TURN/STUN Servers

TURN (Traversal Using Relays around NAT) and STUN (Session Traversal Utilities for NAT) servers are used to help establish and maintain real-time communications, such as VoIP calls, video conferencing, and online gaming, between devices on different networks.

When two devices attempt to communicate with each other, they first need to exchange information about their IP addresses and network settings. However, when devices are located behind a NAT (Network Address Translation) router, this information may not be directly available, making it difficult to establish a direct connection between the devices.

This is where TURN and STUN servers come in.

STUN servers provide a way for devices to discover their public IP address and port number. When a device sends a request to a STUN server, the server responds with the device’s public IP address and port number as seen from outside its NAT. This information can then be used to attempt a direct connection between the devices.
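
For a quick way to see this in action, coturn (the TURN/STUN server we install below) ships with a small test client called turnutils_stunclient. The invocation below is a sketch: the public Google STUN server address is real, but the flags can vary between versions, so check the man page for your install:

turnutils_stunclient -p 19302 stun.l.google.com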

However, in some cases, a direct connection may not be possible due to firewalls or other network restrictions. This is where TURN servers come in. A TURN server acts as a relay, allowing devices to communicate with each other even if they are unable to establish a direct connection. When a direct connection is not possible, devices can send their data to the TURN server, which relays it to the other device.

In summary, STUN servers help devices discover their public IP addresses and port numbers, which can be used to establish a direct connection between devices when possible. TURN servers act as relays when a direct connection is not possible, allowing devices to communicate with each other even if they are located behind firewalls or other network restrictions.

Here is a basic configuration for a TURN server using the open-source software Coturn:

Install Coturn:

sudo apt-get install coturn

Configure Coturn by editing the turnserver.conf file:

sudo nano /etc/turnserver.conf

Set the listening IP address and port number for the TURN server (3478 is the standard STUN/TURN port):

listening-ip=<your-server-ip>
listening-port=3478

Configure authentication by enabling the long-term credential mechanism and setting a username and password:

lt-cred-mech
user=<username>:<password>

Set the realm for the TURN server, which is used to identify the domain:

realm=<your-domain>

Optionally restrict which transports the server will use for relaying. Coturn relays over both UDP and TCP by default, and either can be disabled:

# relay over UDP only
no-tcp-relay

Set the range of ports the server will use for relay endpoints:

min-port=49152
max-port=65535

Enable verbose logging for debugging purposes:

verbose
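
Putting it all together, a minimal turnserver.conf might look like this (the IP address, credentials, and domain below are illustrative placeholders):

listening-ip=203.0.113.10
listening-port=3478
lt-cred-mech
user=alice:changeme
realm=example.com
no-tcp-relay
min-port=49152
max-port=65535
verbose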

Save and close the turnserver.conf file, and then start the TURN server:

sudo systemctl start coturn
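
To have the server start at boot as well (on some Debian/Ubuntu releases you may also need to set TURNSERVER_ENABLED=1 in /etc/default/coturn):

sudo systemctl enable coturn

Once it is running, you can exercise the relay with turnutils_uclient, a test client bundled with coturn. The invocation below is a sketch using the credentials configured above; flags vary between versions, so check the man page:

# allocate a relay on the server and loop test traffic through it
turnutils_uclient -v -y -u alice -w changeme <your-server-ip>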

Using FFmpeg to convert file types

FFmpeg is a powerful command-line tool that can be used to convert media files from one format to another. Here’s how FFmpeg works to convert a picture to a different format:

  1. Input file: The first step is to specify the input file using the -i option, followed by the file path of the picture you want to convert.
  2. Codec: Next, you can specify the codec to use for the output file with the -c:v option. FFmpeg supports a wide range of codecs for different media formats. For example, to convert a PNG image to a JPEG image, you can use -c:v followed by the codec name “mjpeg”, FFmpeg’s built-in JPEG encoder. If you omit the codec, FFmpeg picks a default based on the output file’s extension.
  3. Output file: Finally, you specify the output file as the last argument on the command line: the file path of the new image you want to create. When the extension is ambiguous, you can force the output format explicitly with the -f option.

Here’s an example command to convert a PNG image to a JPEG image using FFmpeg:

ffmpeg -i input.png -c:v mjpeg output.jpg

In this example, FFmpeg reads the input file “input.png”, uses the “mjpeg” codec to encode the image as a JPEG, and saves the output file as “output.jpg”.
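
A couple of handy variations (the filenames are placeholders): you can let FFmpeg infer the encoder from the output extension, control JPEG quality with -q:v (2–31, lower is higher quality), or resize on the way through with a scale filter:

# let FFmpeg pick the JPEG encoder from the .jpg extension
ffmpeg -i input.png output.jpg

# high-quality JPEG, resized to 640 pixels wide (height keeps the aspect ratio)
ffmpeg -i input.png -q:v 2 -vf scale=640:-1 output.jpg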

The Fyro File Converter provides a handy web interface to do a lot of common file type conversions. But it doesn’t just do image file conversions – you can do video conversions as well!

In the example below, FFmpeg is used to convert an input video file “input.mp4” to an output video file “output.mkv” using the H.264 video codec (libx264) and the AAC audio codec, selected with the -c:v and -c:a options, respectively. The options -preset and -crf control the quality of the output video: “slow” is the encoding preset, and “22” is the Constant Rate Factor (CRF) value, which controls the trade-off between quality and file size.

ffmpeg -i input.mp4 -c:v libx264 -preset slow -crf 22 -c:a aac -b:a 128k output.mkv

Here’s a breakdown of the options used in this example:

  • -i input.mp4: Specifies the input video file.
  • -c:v libx264: Specifies the H.264 video codec for the output file.
  • -preset slow: Specifies the encoding preset to use for the output file.
  • -crf 22: Specifies the CRF value for the output file (lower values result in higher quality but larger file sizes).
  • -c:a aac: Specifies the AAC audio codec for the output file.
  • -b:a 128k: Specifies the audio bitrate for the output file.
  • output.mkv: Specifies the output file name and format.

Basically, this command would convert the input video file from MP4 format to Matroska (MKV) format using the specified video and audio codecs, with a slow encoding preset and a CRF value of 22 for the video.
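
If all you need is the container change, you can skip re-encoding entirely and stream-copy the existing video and audio into the new container, which is fast and lossless:

ffmpeg -i input.mp4 -c copy output.mkv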

Video streaming protocols

We’ve recently been developing applications for streaming live video, and experimenting with different streaming protocols. We started off with HLS, but since the use case required extremely low latency, we eventually moved on to other protocols. In our testing, we found that MJPEG was able to provide near-instantaneous results, but with the obvious drawback that it does not support audio; there is also no well-defined protocol for MJPEG streams, which means that applications support it in wildly different ways. Eventually we started using SRT, which has been working extremely well for our use case. Here we’ll go over some basics about the differences between these common streaming protocols.

HLS, MJPEG, SRT, and NDI are all different streaming protocols used for video transmission over the internet. Here’s a brief overview of each protocol:

  1. HLS (HTTP Live Streaming): HLS is a streaming protocol developed by Apple that breaks a video into small chunks and sends them over HTTP. The video is divided into multiple bitrate versions, allowing for adaptive streaming based on the viewer’s internet connection. HLS is widely supported on different devices and platforms, and is commonly used for live streaming (see the segmenting example after this list).
  2. MJPEG (Motion JPEG): MJPEG is a video compression format that compresses each frame of video as a separate JPEG image. It is not a streaming protocol in itself, but can be used with other protocols such as HTTP or RTSP to stream video. MJPEG is a simple format that is easy to implement and is often used in security cameras and webcams.
  3. SRT (Secure Reliable Transport): SRT is an open-source streaming protocol that provides low-latency video transmission with secure encryption and error correction. It is designed to handle poor network conditions and maintain the quality of the video stream.
  4. NDI (Network Device Interface): NDI is a protocol developed by NewTek that allows for low-latency video transmission over a local network. It is designed for use with professional video production equipment and software and provides high-quality, low-latency video with low CPU usage.
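
To make the HLS chunking from item 1 concrete, FFmpeg can encode a file and segment it into a playlist plus short .ts chunks; the filenames and segment length here are placeholders:

# encode, then split into 4-second segments listed in playlist.m3u8
ffmpeg -i input.mp4 -c:v libx264 -c:a aac -f hls -hls_time 4 -hls_playlist_type vod playlist.m3u8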

In summary, HLS is a widely supported streaming protocol commonly used for live streaming, MJPEG is a video compression format that can be used for streaming, SRT is an open-source streaming protocol designed for reliability, and NDI is a protocol designed for professional video production equipment and software. The choice of protocol depends on the specific use case and requirements of the video streaming application.
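
Since SRT is what we settled on, here is a minimal sketch of an SRT test setup using FFmpeg. This assumes your FFmpeg build includes libsrt, and the filenames, address, and port are placeholders:

# sender: encode for low latency and wait for an SRT caller on port 9000
ffmpeg -re -i input.mp4 -c:v libx264 -preset veryfast -tune zerolatency -c:a aac -f mpegts "srt://0.0.0.0:9000?mode=listener"

# receiver: connect to the sender and play the stream
ffplay "srt://<server-ip>:9000?mode=caller"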