The Cambridge Dictionary defines compression as "the act of pressing something into a smaller space or putting pressure on it from different sides until it gets smaller". We can easily understand this concept in the real world when we perform an action as mundane as, say, squeezing an orange for juice. However, the word compression is probably even more common in the digital world, where it appears across a variety of areas, usually describing some sort of utility process.
Audio production compression, for example, is completely different from audio delivery compression, though the concept of "squeezing" something into a smaller space is still relevant.
So how does compression relate to video, specifically?
If you have read our article on bitrate, you'll be aware that digital video files are made up of many small pieces of information, which are encoded and decoded by something called a codec. We'll get into how a codec is used a bit later, but for now just understand that it's the software that compresses (encodes) and decompresses (decodes) a video file, hence the name "co-dec", short for coder-decoder. One of the most commonly used video codecs is H.264, much like how MP3 is one of the most popular for audio.

MP3 is a bit unique in that it acts as both a codec and a container, or wrapper, while most codecs need to be placed into a wrapper. Essentially, a wrapper is a container that holds specific file types, such as a video stream, an audio stream, or perhaps a caption file. Some examples of wrappers are MP4, QuickTime's MOV, and AVI, and these will usually contain a video stream along with other files to generate the resulting multimedia file. Codecs and wrappers need to be compatible with different devices, so certain types have become more common in order to be more widely implemented; H.264, MP4, and MP3 are all examples of this. In reality, though, the video codecs that deal with lossy compression (which is most of them) all rely on the same bread-and-butter technology to compress: DCT.
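To make the codec-versus-wrapper distinction concrete, here is a toy sketch in Python. The names and codec choices are purely illustrative, not a real file-format parser; it simply models a wrapper as a container holding several differently-encoded tracks:

```python
from dataclasses import dataclass, field

# A toy model of the container/codec relationship: illustrative only,
# not how MP4 or MOV files are actually structured on disk.
@dataclass
class Track:
    kind: str    # "video", "audio", or "captions"
    codec: str   # how this track's data is encoded

@dataclass
class Container:
    wrapper: str                      # e.g. "MP4", "MOV", "AVI"
    tracks: list = field(default_factory=list)

movie = Container("MP4", [
    Track("video", "H.264"),
    Track("audio", "AAC"),
    Track("captions", "WebVTT"),
])

# The wrapper holds several differently-encoded streams side by side:
print([t.codec for t in movie.tracks])  # ['H.264', 'AAC', 'WebVTT']
```

The point of the model is that the wrapper (MP4 here) is just the box; the codecs describe how each stream inside the box was compressed.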
In 1974, DCT (Discrete Cosine Transform) coding was published, and it has since underpinned most compression standards for multimedia delivery. The process relies on complicated mathematics, but in layman's terms, a DCT converts pixels into frequencies, which can then be prioritized from most to least visually important. Compression removes a certain amount of the least important frequencies (how aggressively can be controlled) and then converts the remaining frequencies back into pixels. Because the discarded data contributes little to how the image looks, the result appears much the same, but there's less data in the file, thus compressing it into a smaller size. You can compress harder, but past a certain threshold you'll start to notice the quality decreasing. Compress a huge amount and you'll start to see compression artifacts, which make the video look blocky, glitchy, and very poor quality.
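To see the idea in miniature, here is a rough Python sketch of one-dimensional DCT compression on a single row of eight pixel values. Real codecs work on two-dimensional blocks of pixels and use carefully designed quantization tables rather than the crude threshold used here, so treat this as a sketch of the principle, not an implementation of H.264:

```python
import math

def dct(block):
    """Forward DCT-II: turn N samples into N frequency coefficients."""
    N = len(block)
    return [sum(x * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for n, x in enumerate(block))
            for k in range(N)]

def idct(coeffs):
    """Inverse transform (scaled DCT-III): frequencies back to samples."""
    N = len(coeffs)
    return [coeffs[0] / N + (2 / N) * sum(
                c * math.cos(math.pi * k * (2 * n + 1) / (2 * N))
                for k, c in enumerate(coeffs) if k > 0)
            for n in range(N)]

pixels = [52, 55, 61, 66, 70, 61, 64, 73]   # one row of brightness values
coeffs = dct(pixels)

# "Compress": throw away the weakest frequencies (a crude stand-in for
# the quantization step a real codec performs).
kept = [c if abs(c) > 20 else 0.0 for c in coeffs]

# Convert the surviving frequencies back into pixels. The restored row
# is close to the original, yet most coefficients are now zero and
# would cost almost nothing to store.
restored = idct(kept)
```

Keeping all coefficients reproduces the pixels exactly; the loss only appears once frequencies are discarded, which is why turning the threshold up too far produces visible artifacts.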
Why do we need to compress video in the first place?
The bits that make up a file need to be processed by software and hardware before anything actually happens, such as when you stream movies online.
Think of a container filled with one gallon of water and another, empty container with a funnel on top. You need to pour the water from one to the other through the funnel, but the funnel is very small. You can pour the gallon into the funnel, but it will only slowly fill the other container because of how little water can pass through the small opening at a time. If you increase the size of the funnel, you increase the speed at which the water flows into the container.
Returning to the streaming example, the funnel in the analogy is your internet bandwidth. Of course, various components factor into the data transfer, but internet speed would be the biggest bottleneck here. If you have very little bandwidth, the stream can only pass so much data through to your TV, laptop, or any other device. If the file is too large, you will need to let the content download, or buffer, before you can play it back. Compressing a video file reduces the amount of data that needs to be sent through this "funnel", which means there is less to transfer in order to see the same image, and so the image can be played back sooner. Other factors, such as the video's resolution and frame rate, also contribute to the amount of data that needs to be transferred, but minimizing those aspects alone wouldn't be enough for smooth playback if the footage weren't also compressed.
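A quick back-of-the-envelope calculation in Python shows how the funnel limits playback. The bitrates and bandwidths below are made up but plausible:

```python
def seconds_to_transfer(size_megabits, bandwidth_mbps):
    """How long the 'funnel' takes to pass the data through."""
    return size_megabits / bandwidth_mbps

video_seconds = 60
bitrate_mbps = 8                                # hypothetical HD stream
size_megabits = video_seconds * bitrate_mbps    # 480 Mb for one minute of video

fast_link = seconds_to_transfer(size_megabits, 25)  # 25 Mbps connection
slow_link = seconds_to_transfer(size_megabits, 4)   # 4 Mbps connection

print(fast_link)  # 19.2: the minute of video arrives in ~19 s, no buffering
print(slow_link)  # 120.0: twice the video's own length, constant buffering
```

The rule of thumb that falls out of this is simple: playback is smooth whenever the stream's bitrate stays below the connection's bandwidth, and compression is what pushes the bitrate down under that line.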
Because more compression means less data in an image, too much compression leads to poor-looking quality, so there's a balance between playback speed and quality that needs to be struck. Couple this with the fact that different people have different sized "funnels", and some viewers can enjoy smooth playback at a much higher bitrate and quality, while others require less data to play back smoothly. This is where adaptive bitrate comes into play. Essentially, the same file is prepared at several levels of resolution, bitrate, and compression, and the version that delivers the highest possible quality for the viewer's bandwidth is selected automatically. If you have ever seen the quality momentarily drop while streaming a movie at home, that is adaptive bitrate streaming at work, and it's how Viostream is able to deliver the smoothest possible viewing experience.
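The selection step can be sketched in a few lines of Python. The "bitrate ladder" below, the industry term for the set of renditions, uses illustrative figures, not Viostream's actual encoding settings:

```python
# Hypothetical bitrate ladder: the same video encoded at several
# resolution / bitrate combinations, ordered best-first.
LADDER = [
    {"resolution": "1080p", "bitrate_mbps": 8.0},
    {"resolution": "720p",  "bitrate_mbps": 5.0},
    {"resolution": "480p",  "bitrate_mbps": 2.5},
    {"resolution": "360p",  "bitrate_mbps": 1.0},
]

def pick_rendition(bandwidth_mbps, ladder=LADDER):
    """Choose the highest-quality rendition the connection can sustain."""
    for rendition in ladder:
        if rendition["bitrate_mbps"] <= bandwidth_mbps:
            return rendition
    return ladder[-1]   # connection is slower than everything: send the smallest

print(pick_rendition(20.0)["resolution"])  # 1080p
print(pick_rendition(3.0)["resolution"])   # 480p
```

A real player re-measures bandwidth continuously and re-runs this decision every few seconds of video, which is exactly why the picture sharpens or softens mid-stream as your connection fluctuates.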
Is that everything about compression that I need to know?
For streaming online video content, we have pretty much covered all the essentials of how compression plays its part. Of course, the "funnel" can be swapped out for other things, such as your CPU or hard drive speed when playing back a file from your computer. Modern RAW video files can be enormous, often impossible to play back smoothly on any consumer computer, so compression is always a necessary component of digital video.
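To see why, here is a simple calculation of the data rate of completely uncompressed video. It assumes plain 8-bit RGB at three bytes per pixel, which is a simplification of real RAW camera formats, but the order of magnitude is the point:

```python
def uncompressed_rate_mbps(width, height, bytes_per_pixel, fps):
    """Data rate of raw video with no compression at all, in megabits per second."""
    bytes_per_second = width * height * bytes_per_pixel * fps
    return bytes_per_second * 8 / 1_000_000

# Ordinary 1080p at 24 fps with 8-bit RGB (3 bytes per pixel):
rate = uncompressed_rate_mbps(1920, 1080, 3, 24)
print(round(rate))  # 1194: roughly 1.2 gigabits every second
```

Around 1.2 Gbps for plain 1080p, before even considering 4K or higher bit depths, is far beyond a typical home connection, and it is also more than many hard drives can read sustainedly, which is why even local playback depends on compression.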
Of course, compression is referenced in other ways within the multimedia world, but the simple idea of taking something and making it smaller or tighter can pretty much be applied to all of them.