What Is a Codec?
According to Wikipedia, the word codec is a portmanteau of coder-decoder (or compressor-decompressor), and it refers to a device or computer program capable of encoding or decoding a digital data stream. But let's put that in real-world terms that the average computer user can understand. First, what we typically mean by a digital data stream is audio or video content stored in a format that is easily read by computers or other electronic devices. In other words, digital files made up of zeroes and ones rather than analogue media like vinyl records.
Next, we need to disambiguate the term codec, because depending on whether we are referring to a hardware chip or a piece of software, it can have fairly different meanings. Let's start with codec hardware. When the term is used this way, it usually refers to audio gear rather than video. A hardware codec contains both a DAC (digital-to-analogue converter) and an ADC (analogue-to-digital converter) in one package, allowing it to convert sound into a digital file, and, in the other direction, to interpret digital files and turn them back into sound with as much fidelity as possible.
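The two conversion stages can be sketched in a few lines of Python. This is a toy model of quantization, not any real audio driver API; the 16-bit depth and the reference voltage are illustrative assumptions.

```python
# Toy model of a codec chip's two stages: the ADC quantizes an analogue
# voltage to an integer sample, the DAC maps it back to a voltage.
# A little precision is lost in the round trip.

def adc(voltage, bits=16, v_ref=1.0):
    """Quantize a voltage in [-v_ref, v_ref] to a signed integer sample."""
    levels = 2 ** (bits - 1) - 1
    clamped = max(-v_ref, min(v_ref, voltage))
    return round(clamped / v_ref * levels)

def dac(sample, bits=16, v_ref=1.0):
    """Convert the integer sample back to an approximate voltage."""
    levels = 2 ** (bits - 1) - 1
    return sample / levels * v_ref

original = 0.3333
restored = dac(adc(original))
# restored is close to, but not exactly, the original voltage
```

The gap between `original` and `restored` is the quantization error the next paragraph refers to.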
Some loss occurs during both of these conversions, which leads us nicely into the more widespread use of the term: codec software. These are computer programs that take source video or audio data and pack it into a specific format adhering to a documented standard, so that it can be easily interpreted by other devices or software capable of using the same codec. But why would you want to do that? You might ask: "I was trying to play a movie on my computer and it said I didn't have the codec installed. It was a pain in the patootie. Why can't everything be sent in its original form, or at the very least, why can't everything use the same codec?"
Great question. In a perfect world, we would never compress or convert anything, because, aside from the inconvenience (and I alluded to this before), most codecs are what is known as lossy, which means we lose some of the fidelity of the video or audio recording when we convert to them. Unfortunately, in the real world, the logistics of uncompressed media files are a nightmare. A 10-minute HD video that you download from a website might be a couple of hundred megabytes, whereas a 12-bit raw file of the same length and resolution can easily be over 60 gigabytes.
Try streaming that kind of data over your internet connection. Lossless codecs are one way around this degradation in quality, but compared to lossy codecs their file sizes are still very large, and/or they can be very processor-intensive to encode and decode. So the most common solution is to use a lossy codec at a high bitrate (that is, more data per second in the stream) if you want high-quality playback without files so large that you cannot store them or easily send them anywhere. But there is no one right answer. Some codecs are best for high quality, others maintain better playback on unreliable connections, and others still are designed to keep latency (delay) very low. That is why we need a wide variety of audio and video codecs optimised for different uses.
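Some back-of-the-envelope arithmetic makes the gap concrete. The resolution, frame rate, and streaming bitrate below are illustrative assumptions, not figures from the text:

```python
# Why uncompressed video is impractical to stream: raw size is just
# pixels * bits per pixel * frames per second * seconds.

def uncompressed_size_bytes(width, height, bits_per_pixel, fps, seconds):
    """Total bytes for raw, uncompressed video."""
    bits = width * height * bits_per_pixel * fps * seconds
    return bits // 8

# 10 minutes of 1080p at 30 fps with 8 bits per RGB channel (24 bpp):
raw = uncompressed_size_bytes(1920, 1080, 24, 30, 600)
print(raw / 1e9, "GB")  # roughly 112 GB

# The same clip at a typical lossy streaming bitrate of 10 Mbit/s:
compressed = 10_000_000 * 600 // 8
print(compressed / 1e9, "GB")  # 0.75 GB
```

Two orders of magnitude of difference is why lossy compression is the default everywhere bandwidth or storage matters.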
The last thing to touch on here is the container. MKV and AVI are examples: these are easily recognizable wrappers that hold several media streams, for example video, audio, a navigation menu, and subtitle files. Many people equate containers with codecs because there are file types like MP3 or JPEG that act as containers but can only hold a single stream type; that is where the confusion comes from. Regular containers, by contrast, can hold media streams that use a wide variety of different codecs.

If all of this is still pretty confusing and you just wanted an answer to "how do I play my media files?", don't worry, there's an easy fix. You can download VLC media player, which contains most of the codecs you will need day to day. Or, if you don't like VLC, you can install the CCCP codec pack, which installs Media Player Classic and a wide variety of codecs on your system, so you basically won't have to think about this anymore.
The Simplest Way to Understand Codecs
Codecs can seem like a very complex subject. Our goal here is to give you a basic understanding of how they work and how you can use them on your end. Capturing, working with, and playing back digital video asks quite a lot of any piece of technology. Shooting even the most basic video footage requires a camera sensor to capture the available light, turn that light into the highest-quality digital information it can, and save as much of that information as possible onto a storage device. Eventually you will need a computer or TV to reanalyze, reassemble, and play that information back at the highest quality possible, sometimes while it is streaming over the internet. A codec, a combination of the words compressor and decompressor, compresses the information of a video and audio signal into a more manageable file size or type when it is saved, then helps decompress that file in the best way possible for viewing, transferring, saving, and so on. Within a single project, you are likely to end up using several different codecs at different stages, the first of which is your capture codec.
If your final video is just scratch footage, or you have a limited amount of storage space, you may want a codec that is a little more compressed, at the cost of somewhat lower quality. Generally speaking, though, when you are recording you want the least compressed codec available, one that saves the most data and gives you the highest-quality image possible.
Codecs utilize many different methods of compression during every step of the process. Let's first focus on two things that are especially important for your capture codec. The first is the bit depth of each pixel. Each pixel in your video has a value assigned to its red, green, and blue channels, and those values dictate how the colours mix to recreate the colour the camera sees. Bit depth determines the number of different values that your camera's codec can recognize and record per channel. For example, a camera using a 12-bit codec means that each RGB channel uses twelve bits, for a total of 36 bits per pixel.
What this translates to is 4,096 different values available per channel to recreate your image. Saving that much colour information requires considerably more storage space on your media. 8-bit codecs are very common.
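The value counts above follow directly from the bit depth: 2 raised to the number of bits per channel, and that count cubed for the total palette across three channels. A quick sketch:

```python
# Bit depth arithmetic: values per RGB channel and total colours.

def values_per_channel(bits):
    """Distinct values one channel can hold at a given bit depth."""
    return 2 ** bits

def total_colours(bits_per_channel):
    """All R/G/B combinations at that depth."""
    return values_per_channel(bits_per_channel) ** 3

print(values_per_channel(12))  # 4096 values per channel at 12-bit
print(values_per_channel(8))   # 256 values per channel at 8-bit
print(total_colours(8))        # 16777216, the familiar "16.7 million colours"
```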
Each RGB channel uses 8 bits of data, which translates to 256 possible colour values per channel. You can still get great-looking images from 8-bit; it just takes a little more effort. Smaller bit depths result in degradation of the image and an increase in banding, because there are fewer colours available to shift between. Another way your capture codec can compress an image is through chroma subsampling. 4:4:4 chroma sampling is the best option you can have.
This means that in a group of four pixels, each pixel's individual colour is saved; no colour information is lost, so image quality is better at the cost of more information taking up more space on your media. But some codecs subsample colour. A 4:2:2 subsample rate means that in a group of four pixels, two of them disregard their own colour and take on the colour of the pixel next to them, losing information but saving space on your media. Another common subsample rate is 4:1:1: in the same group of four pixels, three disregard their colour and take on the colour of the remaining one.
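The neighbour-sharing scheme described above can be modelled in a toy sketch. Real codecs subsample separate luma/chroma planes rather than whole RGB pixels, so treat this purely as an illustration of neighbours borrowing colour:

```python
# Toy model of chroma subsampling: in each run of `keep_every` pixels,
# every pixel takes on the colour of the first pixel in its run.
# Each pixel is an (R, G, B) tuple.

def subsample(pixels, keep_every):
    """Simplified subsampling: runs of pixels share one saved colour."""
    return [pixels[i - (i % keep_every)] for i in range(len(pixels))]

row = [(255, 0, 0), (250, 5, 5), (0, 255, 0), (5, 250, 5)]
print(subsample(row, 2))  # pairs share a colour, like 4:2:2
print(subsample(row, 4))  # all four share one colour, like 4:1:1
```

Fewer distinct colours stored per group means less data on your media, which is exactly the trade-off the text describes.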
This loses more colour information but saves a lot of space in the end. Now that we know a little about capture codecs and how they decide what information to keep or dispose of, let's move on to edit codecs. When you are ready to begin your edit, another form of compression that codecs use can affect your editing process: the difference between intraframe compression and interframe compression. Intraframe compression, also known as spatial compression, gets its name because all compression is done within one single frame at a time. It does this by separating the individual frame into different sections, analyzing those sections for similarities it can compress down, and then saving the frame. Interframe compression, also known as temporal compression, gets its name from comparing the current frame with the frames around it and then only saving the differences in each frame after that. Because intraframe compression is self-contained, your computer only has to recall a single frame to play it back as you move around and manipulate your timeline; this also tends to be a better-looking compression style. Viewing frames of an interframe-compressed video file, however, means the computer must not only find and display that frame's saved information but also go back through every frame before it to find the remaining saved information until the entire frame is built, and that can slow down your computer considerably. Sometimes video files are so big that, whether intra- or interframe compressed, a computer still has trouble displaying all the information in a timely fashion.
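The difference is easy to see in a toy delta-encoding sketch, where a "frame" is just a list of pixel values. This illustrates the temporal-compression idea only, not how any real codec stores data:

```python
# Toy interframe (temporal) compression: store the first frame whole,
# then for each later frame store only the pixels that changed.
# Decoding a late frame requires replaying every delta before it,
# which is why seeking in interframe video is slower.

def encode(frames):
    deltas = [dict(enumerate(frames[0]))]  # keyframe: every pixel saved
    for prev, cur in zip(frames, frames[1:]):
        deltas.append({i: v for i, (p, v) in enumerate(zip(prev, cur)) if p != v})
    return deltas

def decode(deltas, target):
    """Rebuild frame `target` by applying deltas 0..target in order."""
    frame = {}
    for d in deltas[: target + 1]:
        frame.update(d)
    return [frame[i] for i in sorted(frame)]

frames = [[1, 1, 1, 1], [1, 2, 1, 1], [1, 2, 3, 1]]
print(encode(frames))             # keyframe, then tiny change sets
print(decode(encode(frames), 2))  # [1, 2, 3, 1]
```

Note how decoding frame 2 walks through every earlier delta; an intraframe codec would instead store each frame whole and skip straight to it.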
A common technique around this is to use a codec to compress the original captured footage into additional, smaller, more manageable files that we call proxies. Because they are smaller and lower quality, proxy files are easier to move around and manipulate, and once you have picture lock, you simply reconnect the original high-quality files for colour correction, exporting, and delivery.
The third stage is your delivery codec, used to create your final deliverable. The name of the game here is to export the highest-quality video possible without compromising its ability to play back smoothly. The codec you choose will inherently affect this outcome, but software like Final Cut Pro and Compressor, or Adobe Premiere and Media Encoder, gives you a few options to fine-tune a codec's settings to best fit your needs.
One thing you will have a choice over is your average bitrate. This is the average amount of information per second that your codec is allowed to use. The higher your average bitrate, the higher the quality, and the larger the size, of your final video. As long as you have a media player and a monitoring device capable of playing back that much information per second, a higher bitrate is your best option. But if you will be putting this video on the internet, larger files are more difficult to stream and play back.
So you will want to lower that bitrate until you find the best-quality image that is still easily playable over streaming. Here's a tip: if you are uploading to video-hosting websites like YouTube or Vimeo, you should compress at a higher bitrate, because they re-compress every video you upload anyway.
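Estimating the resulting file size from an average bitrate is simple arithmetic: bitrate times duration, divided by 8 to convert bits to bytes. The bitrates below are hypothetical examples, not recommendations for any particular platform:

```python
# Rough export-size estimate from average bitrate (illustrative numbers).

def export_size_mb(avg_bitrate_mbps, duration_seconds):
    """Megabits per second * seconds / 8 bits-per-byte = megabytes."""
    return avg_bitrate_mbps * duration_seconds / 8

# A 5-minute (300 s) video at three candidate average bitrates:
for mbps in (8, 16, 40):
    print(f"{mbps} Mbit/s -> {export_size_mb(mbps, 300):.0f} MB")
```

Running the loop shows the trade-off directly: five minutes at 8 Mbit/s is about 300 MB, while 40 Mbit/s balloons to about 1.5 GB.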