Published: 10 March 2019
By Ali Ismail
There is no shortage of topics to delve into when talking about digital images, vision and compositing tricks, but in this short article I am going to cover a few basics and techniques that I feel will help a compositor or digital artist better understand the medium.
I hope this understanding can help some readers manipulate and handle images at a more professional level.
For a more practical article on how to composite 3D renders, you can view my other blog: “Improving 3D Renders in Post Processing.”
Humans can see light wavelengths between roughly 380 and 740 nanometers, which correspond to how much energy a single photon carries: those are the rainbow colors from violet to red, so one photon can be red and another blue. But the eye property that matters most for our discussion of perception, images and tone mapping is dynamic range, which refers to the relative difference between the darkest and brightest values. If the visible wavelength spectrum describes how much energy a photon has, dynamic range describes how many photons you can see: the difference between the smallest number of photons you can detect and the largest number you can detect at the same time.
Depending on how you measure it, the human eye can detect more than 10,000 different gradations between the darkest and brightest values. In reality it is a lot more complicated than that, and the dynamic range the eye can resolve may be far greater; for more information, you can check Roger N. Clark's article "Notes on the resolution of the human eye."
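As a rough illustration, dynamic range is often quoted in photographic stops, where each stop is a doubling of light. A minimal sketch (the function name is my own, and "10,000 gradations" is treated loosely as a 10,000:1 contrast ratio purely for the example):

```python
import math

def dynamic_range_stops(darkest, brightest):
    """Express a contrast ratio in photographic stops (doublings of light)."""
    return math.log2(brightest / darkest)

# A 10,000:1 ratio between the dimmest and brightest detectable levels:
print(round(dynamic_range_stops(1, 10_000), 1))  # 13.3
```

So a 10,000:1 ratio works out to a little over 13 stops, which is why camera and monitor specs often talk in stops rather than raw ratios.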
But why are we talking about human vision, dynamic range and photon quantities? Because they are the basis of everything we do to manipulate images and to display them. As a CG artist, it is important to know the particulars of your ultimate universal client: the human eye.
We are also doing it to better understand tone mapping and the different types of images and outputs. Monitors cannot reproduce light with the dynamic range we see in nature (and that includes even the very best HDR monitor). In addition, the file formats used to store light information have their own limitations. This necessitates squeezing a large volume of image information into a smaller one, which is generally termed compressing dynamic range.
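To make "compressing dynamic range" concrete, here is a minimal sketch of one classic global tone-mapping operator, the simple Reinhard curve x / (1 + x). The choice of operator is just an illustration of the idea, not a claim about what any particular tool does:

```python
def reinhard_tonemap(value):
    """Compress an unbounded scene-referred luminance into the [0, 1)
    range a display expects, using the simple Reinhard curve."""
    return value / (1.0 + value)

# Huge differences in scene luminance are squeezed into a small display range:
for luminance in [0.05, 0.5, 5.0, 50.0]:
    print(f"{luminance:6.2f} -> {reinhard_tonemap(luminance):.3f}")
```

Notice how a 1000x spread of input values ends up occupying less than one unit of output: that squeeze is the dynamic range compression discussed above.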
Light and Information
Quite often you will encounter the terms light data or information, or even audio terms, in conjunction with images. You might hear someone talking about the shadow areas and saying there is no information there, or advising you not to clip the highlights, which to some people might sound odd; after all, an image denotes colors, shapes and scenery, hardly data, information or sound!
An oscilloscope, the kind of device used in electronics testing, engineering and laboratories, does not show an actual physical electrical signal, only a representation of its properties.
Similarly, a waveform display of a sound is just a plot of the signal over time, not the sound itself. Let's now take a look at something that can represent an image:
Histograms are just a way to represent what your image contains, and they are not restricted to pictures; they can represent anything expressed in numbers. For images, they generally show the number of pixels at each value, such as luminance.
To learn more about histograms in digital imaging you can check "How to read and understand image histograms in Photoshop" and "Camera histograms: tones and contrast."
We also need to remember that under the hood, images are encoded data, an array of pixels, each with a value.
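As a toy illustration of both points, an image as an array of pixel values and a histogram as counts of those values, here is a minimal sketch (the tiny 2x3 "image" and the four-bin layout are arbitrary choices for the example):

```python
# A tiny 2x3 grayscale "image": under the hood, just an array of pixel
# values in the 0-255 range.
image = [
    [0, 64, 64],
    [128, 255, 255],
]

def histogram(img, bins=4, max_value=256):
    """Count how many pixels fall into each value bin."""
    counts = [0] * bins
    bin_width = max_value / bins
    for row in img:
        for px in row:
            counts[min(int(px / bin_width), bins - 1)] += 1
    return counts

print(histogram(image))  # [1, 2, 1, 2]
```

The output is exactly what a histogram panel draws: a count of pixels per brightness bin, with no idea of where in the frame those pixels sit.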
File Formats and Compression
Why have I been hammering on all this, and what practical use does it have for 3D artists and graphic designers? We started with the intention of talking about images and ended up with data tables and graphs.
Well, as you can see, images stored on computers are nothing more than data encoded and decoded by various techniques and standards. Some image formats hold more data than others, like 32-bit/channel formats such as EXR or HDRI, while other formats, such as JPEG, favor small size and compression. Moreover, a monitor cannot display all the data stored in an EXR; it shows a limited dynamic range governed by standards of its own (SDR, HDR). To make things more complicated, we still have terms like sRGB and gamma correction, with roots going back to the era of CRT monitors.
The difference in dynamic range between an EXR and a JPEG is usually obscured when you look at both images side by side on a standard monitor: they look exactly the same. On the other hand, if you had a super monitor that could display all the small gradations of color in an EXR, you would see the difference instantly; the JPEG would look “washed out” in comparison.
Phew... so it can seemingly get complicated rather quickly, but in reality it is only about how to store the light data you capture through a render or a photo, and how to represent it on your output medium. For a CG compositing workflow, that usually means taking a rendered image stored in high dynamic range as an EXR and color grading it to look as “nice” as possible on a monitor with a lower dynamic range, so that someone looking at the image feels it looks “alright” and “just like the real thing,” even though the monitor does not reproduce nature with all its beautiful gradations and the subtle differences between the infinite shades of color out there.
There are different ways to save images; some are fancier than others and store many more “variations” inside them.
This means a 32-bit/channel image like an EXR, or a raw file from a camera, can save far more data than an 8-bit image like a JPEG. For example, instead of having only a handful of gradations in a shadow area, as a JPEG would, an EXR could have hundreds, which you can exploit when manipulating the image, or even display if you had a monitor able to show all those gradations.
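A quick sketch of why bit depth matters in shadow areas: quantizing many closely spaced float shades down to 8 bits collapses them onto a handful of representable levels. The value range below is an arbitrary "dark region" chosen for illustration:

```python
def quantize_8bit(value):
    """Snap a float in [0, 1] to the nearest of the 256 levels an
    8-bit channel can represent."""
    return round(value * 255) / 255

# 1000 evenly spaced float "shades" crammed into a dark sliver of the range:
shades = [i / 1000 * 0.02 for i in range(1000)]   # values in [0, 0.02)
levels_8bit = {quantize_8bit(s) for s in shades}
print(len(shades), "float shades ->", len(levels_8bit), "8-bit levels")
```

A float EXR keeps essentially all of those shadow gradations, while the 8-bit file has already thrown most of them away, which is exactly why heavy grading of a JPEG's shadows falls apart so quickly.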
Monitors have their limitations in reproducing light and dynamic range.
As CG artists, we try our best to represent the extensive data stored in an EXR or any image with lots of information on a limited monitor (or print) by manipulating the colors of an image through tone mapping and color grading.
To understand tone mapping or color grading you can see my other tutorial “Improving 3D Renders in Post Processing”.
If you have done some color grading or tone mapping, or if you have read “Improving 3D Renders in Post Processing,” you will be familiar with tone mapping filters and how they can automatically change the look of your image. But I would like to point out the function and importance of color curves.
Color curves do exactly the same job as color grading nodes and the sliders you manipulate to change gamma, gain and exposure. After all, your grade node modifies your image information, and the same can be done with a curve; both perform the same function, but color curves give you more control. When modifying EXRs in Nuke, I always use color curves instead of grade or color-correction nodes.
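To illustrate the point that a grade slider and a curve are the same kind of operation, a per-pixel transfer function, here is a minimal sketch in plain Python. The function names and the linear interpolation are my own simplifications, not Nuke's API:

```python
def apply_grade(value, gain=1.0, gamma=1.0):
    """A slider-style grade: a transfer function with a fixed shape."""
    return gain * (value ** (1.0 / gamma))

def apply_curve(value, points):
    """A color curve: an arbitrary transfer function described by
    (input, output) control points, linearly interpolated here."""
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= value <= x1:
            t = (value - x0) / (x1 - x0)
            return y0 + t * (y1 - y0)
    return points[-1][1]

# The same gain-style brightening expressed both ways:
print(apply_grade(0.5, gain=1.2))                  # 0.6
print(apply_curve(0.5, [(0.0, 0.0), (1.0, 1.2)]))  # 0.6
```

The sliders can only produce curves of a fixed family (a straight line for gain, a power curve for gamma), while a curve with enough control points can reproduce any of them and much more, which is where the extra control comes from.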
This video demonstrates this fact nicely: Curves Vs Sliders
I believe Martin Constable, who made the curves vs sliders video, has provided excellent content for understanding the very basics of image manipulation and compositing. You can check out other videos of his, such as Premult Define and Premult in Practice.
An important technique worth mentioning, which I believe you will use quite often when manipulating images, is edge detection, which can be used for sharpening. Below I show how you can create a high pass which you could overlay for a sharper image.
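Here is a minimal sketch of the idea on a single row of pixel values: subtracting a blurred copy from the original leaves only the edges. A 1-D box blur stands in for a proper 2-D blur node, purely to keep the example short:

```python
def box_blur(row, radius=1):
    """1-D box blur: average each pixel with its neighbors."""
    out = []
    for i in range(len(row)):
        window = row[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def high_pass(row):
    """High pass = original minus its blurred copy: flat regions
    cancel to zero and only the edges survive."""
    blurred = box_blur(row)
    return [a - b for a, b in zip(row, blurred)]

# A hard edge between a dark region and a bright region:
row = [0, 0, 0, 6, 6, 6]
print(high_pass(row))  # [0.0, 0.0, -2.0, 2.0, 0.0, 0.0]
```

The flat areas come out as zero and the transition shows up as a negative/positive pair, which is exactly the kind of layer you can overlay onto the original to accentuate edges.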
Another useful fact is that sharpening techniques such as the “unsharp mask” are founded on the same principle: blurring an image and taking the difference between the original and the blurred copy.
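A sketch of that same principle used for sharpening: add the difference back onto the original so that edge contrast is exaggerated. Again a 1-D box blur stands in for a real blur, and the function names are my own:

```python
def box_blur(row, radius=1):
    """1-D box blur: average each pixel with its neighbors."""
    out = []
    for i in range(len(row)):
        window = row[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def unsharp_mask(row, amount=1.0):
    """Sharpen: out = original + amount * (original - blurred)."""
    blurred = box_blur(row)
    return [a + amount * (a - b) for a, b in zip(row, blurred)]

row = [0, 0, 0, 6, 6, 6]
print(unsharp_mask(row))  # [0.0, 0.0, -2.0, 8.0, 6.0, 6.0]
```

Note how the values overshoot on both sides of the edge (dipping below 0 and rising above 6); those overshoots are the halos that give a sharpened image its extra "snap," and why too much sharpening looks ringy.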
These masks can be used not only in sharpening but also for local contrast or light wrap effects. There is so much you can do in compositing by combining simple effects together.