First published: 16 September, 2016
Revised and Updated: 8 March 2019
By Ali Ismail
This article will examine how you can improve your 3D renders in post-processing using photo editing or compositing software. It will categorize the procedures and effects you can use to make your renders more “realistic” and take away that distinctive CG look you get from a 3D render engine.
This is not a step-by-step tutorial on how to composite a 3D render in a particular piece of software. It is instead concerned with the fundamentals of how to improve your CG images by applying certain effects, and with making the workflow of a compositor who is already familiar with these techniques more concise and organized when handling 3D renders.
In this article we will classify the effects you can use to improve 3D renders into these categories: denoising, improving lighting and materials, tone mapping, sharpening and grain, and camera artifacts.
This article is closely related to this video:
Latin origin of "Compositing": to put together.
While the meaning of the word compositing still rings very true for visual effects work that integrates live footage, you might consider the title “Improving 3D Renders in Post-Processing” more fitting for this article than “CG compositing”, although it's a given that post-processing CG renders will include layering, adding and/or replacing elements inside the frame.
In writing this tutorial, I am trying to share some of the methods I use to improve my renders, laid out as simple principles and drawing on experience from compositing my own 3D renders. This article should also make it easy to understand why certain effects make an image look more realistic or aesthetically pleasing, and give you a somewhat standardized way to tackle 3D renders.
The first few techniques, such as denoising, improving lighting and tone mapping, are about getting the best out of your 3D renders so you have a strong foundation to work with; sharpening the image and adding camera artifacts should then take away some of its digital look and make it appear as if it came out of a camera. A realistic image in this sense means actually photographic. If you are simulating a virtual reality experience, you might instead have to add a different set of effects that bring your renders closer to human vision.
Please note that this article assumes you are using 32-bit-per-channel renders such as EXRs, and that you have an understanding of linear workflow and dynamic range.
I used the terms “improve”, “realistic” and “photographic”. These are subjective terms used to convey an idea; you may very well need these same techniques to give your renders an artistic style or simply to match sequences.
You might find it helpful to check my other blog: “Understanding Digital Images”
Why Post Process?
Now, what’s wrong with my renders? you might be asking. If I don't need any compositing layers to add or replace, and I get the lighting and materials right, couldn't I just use the renders as they are? Absolutely. If you are happy with your 3D render, or your client is happy with it, then there is no need to spend more time working on the image; in addition, your 3D application might provide the needed tone mapping and have built-in features that produce some of these post-processing effects.
But what follows will help you understand the settings you see inside your render engine, how to get that extra quality you see in other high-end renders and photographs, or even how to better replicate human vision if you are building something for virtual reality. In addition, applying these effects in post is a lot faster and gives you more control than doing them all “correctly” in the render settings. There is always a premium on doing things on the fly.
If you are working with a real-time engine like Unity or Unreal and you do not have all the post processing options in the same manner as the ones used in VFX, the following procedures can at least make you better understand what tools and effects you can add or if you need to develop a certain effect to make something render differently in real time.
Before getting involved with professional work, I had the preconceived notion that all the big VFX studios had some very sophisticated tools to generate their beautiful final footage. It turned out they had fancy infrastructure and software to handle the volume of work they do, but the results they were getting had a lot more to do with good craftsmanship than anything else. Color lookups, motion blur and film grain will take you a long way, I can assure you!
But please take note that you can’t start with a very poorly rendered 3D model that has a lot of artifacts and lacks detail and then make something stunning out of it! I have seen supervisors try to throw everything on the compositor at the end of the pipeline in the hope that it can all be magically fixed.
You will need your renders to have good lighting and color information, proper material settings, detailed models, and extra channels and masks to use; but the changes you can later make to those frames can be quite drastic and will completely transform how the render looks.
Where to Start?
Well, practically speaking you can work on your renders however you like. These are only some examples of what you can do, and they do not cover everything you could attempt, but to make it easier to grasp what I usually do to my 3D renders, I have classified the techniques into the following categories:
I - Denoising
Reducing noise or grain as much as possible is the first thing you should do if you have even a mildly noisy render. Many of the effects we are going to apply will intensify some of that noise and make it more apparent, so reducing noise to a minimum is your best move before adding any effect.
You should of course aim for noise-free renders, but as it happens, you can be time-strapped or miss something here and there. Denoise filters, or blurring out specific areas, can sometimes do the trick or at least reduce the noise.
There is a multitude of applications for denoising images, or you can use a filter in your compositing or photo editing software. One thing to note about denoising, though, is that it is usually a compromise between losing sharp detail and getting rid of the noise; it’s up to you to experiment and decide how to use it. Additionally, you could use masks or threshold settings to denoise specific areas, or even render new passes if that can be done in creative ways that reduce the total render time of noise-free frames.
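As a rough illustration of masked denoising, here is a minimal sketch in Python with NumPy and SciPy. The function name and the simple median-filter approach are my own assumptions for illustration, not any particular tool's API:

```python
import numpy as np
from scipy.ndimage import median_filter

def masked_denoise(img, mask, size=3):
    """Median-filter the image, but only where mask > 0,
    keeping sharp detail everywhere else.

    img  : float array (H, W, 3), linear values as from an EXR
    mask : float array (H, W), 1.0 in noisy regions, 0.0 elsewhere
    """
    filtered = np.stack(
        [median_filter(img[..., c], size=size) for c in range(img.shape[-1])],
        axis=-1)
    m = mask[..., None]
    # Blend: denoised result where the mask is on, original elsewhere
    return img * (1.0 - m) + filtered * m

# Tiny example: one "firefly" pixel inside the masked (left) half
img = np.full((8, 8, 3), 0.5, dtype=np.float32)
img[2, 2] = 5.0
mask = np.zeros((8, 8), dtype=np.float32)
mask[:, :4] = 1.0
out = masked_denoise(img, mask)
```

The same blend idea extends to threshold-based masks: build the mask from the pixels themselves (e.g. values far above their neighborhood average) instead of painting it by hand.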
II - Improving Lighting and Materials
Software and CG render engines have improved quite a lot from the early days of their inception, but they are nonetheless an approximation. Real world interactions of lights and surfaces with different properties are quite complex.
In the render coming out of your 3D software or engine, global illumination calculations are not perfect, shadows are not exactly how they should be, accurate caustics and translucency take too long to compute, and subtle optical effects like iridescence (thin-film interference), subsurface scattering or light passing through narrow slits are too difficult to account for realistically. Most of the time you will have time and processing-power limitations that result in artifacts or in lighting effects missing from the image.
You can improve lighting and materials by the following:
Adding global illumination or caustics passes that you have manually set up and which render a lot quicker than they would have using your render engine's built-in features.
Manipulating parts of your render to account for subtle effects like SSS.
Adding volume lights or atmospheric effects like haze, fog or rainbows.
Doing manual blurs for things like heat distortion.
Changing the look of your shadow pass.
Accentuating the look of your reflection pass.
And lots of other effects which would be more ideal to do in post processing.
Why not add these effects when rendering in the first place? Again,
Not all of them will be possible in your render engine.
It’s faster to sometimes do things in post.
You might simply have forgotten to account for something in your scene that affects the final look of the image.
Most importantly, being able to adjust things instantly is a huge plus.
In the first of the following two images, I added a GI pass that was necessitated by a background change; instead of re-rendering the whole sequence, I rendered only one quick pass. In the second image I modified the shadow a bit to make it look closer to what it should be in reality (or what I thought would make the image nicer).
After you have a noise-free render and you believe you’ve done your best to get good lighting, environmental effects, etc., you have a solid base to start from and are ready to adjust color grading and add artistic effects or camera artifacts to make your render more realistic, hyper-realistic or simply beautiful!
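To make the idea concrete, here is a minimal sketch of how such passes might be merged in linear space (Python/NumPy). The pass conventions, function names and tint values are assumptions for illustration, not how any particular package does it: GI is additive, while the shadow pass is used to darken and slightly tint the comp.

```python
import numpy as np

def add_gi_pass(beauty, gi, strength=1.0):
    # Indirect illumination is additive in linear space
    return beauty + gi * strength

def grade_shadow(comp, shadow, density=0.8, tint=(0.9, 0.95, 1.1)):
    """Darken the comp using a shadow pass, with adjustable density
    and a slight tint (assumed convention: shadow = 1.0 where fully
    shadowed, 0.0 where lit)."""
    occ = shadow[..., None] * density
    tint = np.asarray(tint, dtype=np.float32)
    # Fade towards a darkened, tinted version instead of plain black
    return comp * (1.0 - occ) + comp * tint * occ * 0.2

beauty = np.full((4, 4, 3), 0.5, dtype=np.float32)
gi = np.full((4, 4, 3), 0.1, dtype=np.float32)
shadow = np.zeros((4, 4), dtype=np.float32)
shadow[:2, :] = 1.0                     # top half fully in shadow
comp = grade_shadow(add_gi_pass(beauty, gi), shadow)
```

Because everything stays linear, the GI strength and the shadow density become simple sliders you can tweak instantly instead of re-rendering.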
III - Tone Mapping
In tone mapping and color grading, we try to maximize the “quality” of our image by controlling the global and local contrast and color levels while taking into account the limitations of dynamic range.
I am actually pretty impressed by the definition I just wrote :) , I think it sums up the whole process nicely! But it is probably not very clear; I hope the following will demystify it. To understand more about dynamic range, you can check my other article: “Understanding Digital Images”
In fine art paintings, a good artist knows how to direct the eye of the viewer and create well balanced regions of interest and high contrast points. When you look at a good painting, your eye just moves from one region to another, taking in the whole of the painting and not growing tired of it.
In a similar fashion, a photographer who takes multiple exposures or shoots in higher-dynamic-range formats, then edits those images to give them a “better” look and highlight certain parts, is doing something close to what the fine artist does.
It is the same with 3D renders: we are ultimately limited by the output device (the monitor), which necessitates compressing the dynamic range we have in our EXRs. And like the artist, we need to highlight certain aspects of the image and make it look “nice”.
For images, the term “nice” usually implies the following:
Making good use of the dynamic range. You should have details in both the shadow and highlight areas, with nothing clipped or washed out.
At the same time, you should keep a feeling of reality in your image. For example, if you have a scene with the sun in it, you can’t make an object in the scene brighter than the sun, or the ground brighter than a chrome ball reflecting the sun and sky, or have halos or other artifacts due to tone mapping. I’ve seen too many composited renders that look unnatural because tone mapping and color grading were overdone. Searching the web for “bad HDR photography” should give you an idea of what not to do.
You should have some hot spots for the viewer to look at so they move their eyes around the image, and these high-contrast areas should have a hierarchy to them: one really contrasty region, then a less contrasty one, and so on, so that the eye can hop from one to another while something remains the main focal point.
You should pay careful attention to your color saturations and hues especially around bright areas.
Finally, this is an artistic process, and what you are trying to achieve can differ from one case to another. Doing a product render is very different from doing renders for a VFX shot in a movie.
Please note that I might refer to tone mapping and color grading interchangeably. They are two different terms: tone mapping refers to mapping colors from HDR to LDR, while color grading is more about changing the look of your render in terms of colors and values. But you typically take a 3D render and apply a series of tone mapping filters, color lookups or color grading nodes to it, so you are usually doing both at once. And if you are using a real-time engine, you will most likely have an all-encompassing color grading/tone mapping setting to work with.
The process of tone mapping and color grading itself is very artistic and can use many tools and approaches. But I usually go about it like this:
I apply a single tone mapper and tweak it to have an image that closely resembles the final result I am looking for. There are many tone mapping filters you could use:
For still images, you can use Photoshop’s HDR Toning in adjustments, or Camera Raw Filter and try the different modes, there are also Photomatix, Arion FX, Picturenaut, Luminance HDR, EXR Tools and many more.
For animation and sequences, the color grading nodes should give you very good control, but for Nuke you can also try Arion FX, John Hable’s Filmic, Global Tonemap Operator Gizmo, Lazy Tonemap or any tone mapper that works for you.
There will also be some tools for HDR videos, an interesting study is this one: Temporally Coherent Local Tone Mapping of HDR Video.
I might do two or more different tone mappings of an image or sequence and then combine them with opacity or a mask.
I try some “artistic” blends to break up the CG look further, like duplicating the image and layering it in soft-light mode at a small percentage, or masking some parts and adding them as a screen layer.
I use the object and material masks I generated in my render to control the look of my materials and do local color grading per material or per object.
I might draw my own masks and use them in conjunction with my material or object masks to control smaller areas.
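As a sketch of the first step, here is John Hable's filmic curve (the well-known Uncharted 2 constants) applied to linear HDR values, plus one of the “artistic blends” mentioned above: a low-opacity soft-light self-blend, using the Pegtop soft-light variant. Treat this as an illustration of the idea, not the one correct formula:

```python
import numpy as np

# Constants from John Hable's "Uncharted 2" filmic tone mapping curve
A, B, C, D, E, F = 0.15, 0.50, 0.10, 0.20, 0.02, 0.30
W = 11.2  # linear white point

def hable(x):
    return ((x * (A * x + C * B) + D * E) /
            (x * (A * x + B) + D * F)) - E / F

def tonemap(hdr, exposure=2.0):
    """Map linear HDR values (as in an EXR) into a displayable 0-1 range."""
    mapped = hable(hdr * exposure) / hable(W)  # normalise so W maps to 1.0
    return np.clip(mapped, 0.0, 1.0)

def soft_light_self(img, opacity=0.15):
    """Low-opacity soft-light self-blend (Pegtop variant): darkens
    shadows and lifts highlights slightly, deepening contrast."""
    a = np.clip(img, 0.0, 1.0)
    blended = (1.0 - 2.0 * a) * a * a + 2.0 * a * a
    return a * (1.0 - opacity) + blended * opacity

ldr = soft_light_self(tonemap(np.array([0.05, 0.18, 1.0, 8.0])))
```

The shoulder of the filmic curve rolls highlights off gently instead of clipping them, which is a big part of why renders tone-mapped this way keep detail in bright areas.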
Now, this might seem like a standardized pipeline process, but it usually isn’t. Whatever works for you and gives you a good result should be enough; just try to get the bigger portions of the image right before moving on to the smaller parts.
As a quick example, if you bring in an image and apply high contrast values to it, it’s not going to look good. But if you consider which parts you want the viewer to focus on and which parts should show more color information, and then apply color lookups intelligently to those parts of the image, you will end up with something much better. Below is an example where I ended up color grading each material and object separately to get the best control possible.
I tried to match my reference using lighting techniques and then by color grading/tone mapping globally, but I was unable to replicate the reference completely.
So I went ahead and started tweaking my composite, color grading each material separately and then each patch until I got something close to the look I was after. You do not need to go as extreme as I did with the masks and tight control, but I was trying to see how far I could take it.
If you have good lighting and materials, the EXR render you have is all you need to make an excellent image; all it requires is the right tone mapping and some camera effects.
There are many different algorithms and approaches for tone mapping, and color grading your images is a very artistic process. You can very easily overdo it and end up with that overblown, oversaturated HDRI look; it is up to your artistic direction to decide how to tone map your render.
Even if monitors and digital image formats somehow improve to include all the visible dynamic range the human eye can see, we will always need to color grade our images to make them look more interesting and control where the viewer will focus on.
One last point on tone mapping before we move on: try as best you can to keep color temperature in mind when tone mapping, especially when you are doing product renders. This can be a subject in and of itself, but just remember that when an object is lit, not only the value of its color tends to change, but also its hue and saturation, as in the following image.
You can achieve this natural effect, inside your material and during tone mapping or color grading.
By the way, a good test of whether a render has good lighting information to work with is to play with the gamma as soon as you get the render and see how it behaves. This is something supervisors like to do because it can clearly expose a problematic render: for example, you might have an area which is much brighter than all the others; when you change the gamma, it can be quickly detected, and you can see the range of colors and lighting you have in the render.
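That gamma sweep is trivial to sketch: raise the linear render to 1/gamma and watch how the shadows and hot areas respond (Python/NumPy; `gamma_preview` is a made-up helper name):

```python
import numpy as np

def gamma_preview(linear_img, gamma):
    """View a linear render at a different gamma; sweeping gamma up
    reveals shadow detail, sweeping it down exposes hot areas."""
    return np.power(np.clip(linear_img, 0.0, None), 1.0 / gamma)

pixels = np.array([0.01, 0.18, 1.0, 4.0])   # dark, mid, white, over-bright
boosted = gamma_preview(pixels, 4.0)        # shadows pushed up for inspection
```

If the dark values collapse to mush when boosted, or one region stays blindingly brighter than everything else, the render probably lacks the lighting information you need.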
IV - Sharpening and Grain
Sharpening can be an interesting topic in itself; I am not referring to the simple act of adding a sharpening filter to your final render and calling it a day.
I am more interested in the different types of edge sharpening, anti-aliasing and noise you get from different sources: photography versus a 3D digital render versus human vision.
We reduced noise as much as possible in the beginning so that, ironically, we can later add our own type of noise and have more room to adjust sharpness, bringing it closer to the noise coming from lenses and light-capturing camera sensors. Not all noise, grain and sharpening techniques are equal!
For example, let’s have a look at zoomed portions of photographs.
And these are zoomed portions of digital images.
Now, the effects and differences I am asking you to observe are very subtle; this is equivalent to hearing the difference between an actual guitar being played and a digital audio recording of one. If you look at your own pictures and renders and start zooming in on them, you will see that edges and noise in photographs have something different to them than edges and noise in digital renders.
To be honest, the difference is best seen in RAW images from cameras. The RAW images cameras produce are processed, either manually or by the camera software, to enhance them. The result is different from an image produced by a 3D engine, which has its own digital noise and edge anti-aliasing techniques.
Of course, each camera lens and each piece of enhancement software is different, and I am not saying there is one specific way to make your 3D render's sharpness and noise look closer to a camera's, but there are a few things you can do to make it feel closer to what you get out of cameras.
And keep in mind that if you are making a video or an application for virtual reality, you would need to make something closer to human vision instead of reproducing effects from photography.
My workflow is usually the following:
Make sure I have as little digital noise as possible (first step explained in the beginning of this article).
Add my own sharpening filters (highpass, custom masks or edge detection).
Try to add some random artifacts by taking my render layer, applying some color grading and/or blurring effects to it, and then merging it just slightly with the original. The idea is to add as much variation as possible to make it look different from a typical 3D render.
Add the camera artifacts we will talk about next.
As a final step, I add grain or noise that comes from analog sources.
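Steps 2 and 5 of that workflow might be sketched like this (Python with NumPy/SciPy). Here an unsharp mask stands in for the sharpening filters, and simple value-weighted gaussian noise stands in for film grain, which is admittedly a crude approximation:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def unsharp(img, radius=1.5, amount=0.6):
    """Classic unsharp mask: add a scaled high-pass of the image back."""
    blurred = gaussian_filter(img, sigma=(radius, radius, 0))
    return img + (img - blurred) * amount

def add_grain(img, strength=0.02, seed=0):
    """Value-weighted gaussian noise as a rough stand-in for film grain
    (real grain responds to exposure in a more complex way)."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, strength, size=img.shape).astype(np.float32)
    return img + noise * (0.25 + img)   # slightly more grain where brighter

img = np.full((16, 16, 3), 0.5, dtype=np.float32)
img[8:, :] = 0.8                        # a horizontal edge to sharpen
out = add_grain(unsharp(img), strength=0.01)
```

Note the ordering: sharpen first, then grain, so the grain itself is not artificially crisped by the sharpening pass.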
This is how the zoomed-in portions of 3D renders above look after some post-processing.
Properly added noise and sharpening effects can really help improve the realism of your footage, although with modern digital photography/cinematography things look sharper and cleaner. Cameras can't be 100% accurate, so it helps to add some very subtle effects here and there for realism; it of course depends on the look you are after.
To make your 3D render look less CG, it helps to add the same artifacts produced by cameras, as long as you don’t overdo it like this guy and add a lens flare in every shot.
These camera artifacts include things like:
Image sharpening and grain (explained above).
Lightwrap, bloom, glare and glow effects.
Depth of field.
Bokeh effects. (This effect can also be caused when light goes through any narrow slit and is not necessarily only camera-related; lighting can on occasion produce this type of effect in the scene.)
There are just so many effects and things you can add; the secret is in being subtle and knowing the look you are after. Different cameras will produce different results under different conditions, and it's up to you to decide what works best with your footage.
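A basic bloom/glow of the kind listed above can be sketched as: isolate the over-bright pixels, blur them widely, and add the glow back in linear space (Python/NumPy/SciPy; the threshold and radius values are arbitrary choices for illustration):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def bloom(img, threshold=1.0, radius=8.0, intensity=0.4):
    """Screen-space glow: keep only over-bright pixels, blur them
    widely, and add the result back (additive, in linear space)."""
    hot = np.maximum(img - threshold, 0.0)
    glow = gaussian_filter(hot, sigma=(radius, radius, 0))
    return img + glow * intensity

img = np.full((32, 32, 3), 0.2, dtype=np.float32)
img[16, 16] = 20.0                      # one very bright highlight
out = bloom(img)
```

This is also why the 32-bit EXR workflow matters: the bloom only looks right if the highlight really is far brighter than 1.0 before tone mapping.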
You might also use bloom or glare effects to help the viewer focus on a certain spot; this can be very helpful, as the eye is always drawn to the brightest spot in an image.
It would take quite some time to explain what causes each of these effects, but I will keep that for a different occasion. For now, there are plenty of resources on each subject.
Again please note that if you are making a VR experience, you will need to try to replicate human vision and stay away from camera artifacts.
All of the effects above can be added or discarded, exaggerated or kept subtle, in accordance with the final art direction. Lighting can be changed, along with backgrounds and everything else in the image, depending on your taste.
We are making images that should look beautiful and appealing, and this should come first before all other considerations. It's very much expected to do all sorts of adjustments in post-processing to get the desired look.
Anyway, I hope this tutorial helps with your composites, although it is not very technical and lays out only principles and hints; I assume you can always research how to do any of these effects in your software. Tools and button pushing are always secondary to fundamentals.
Lastly, I want to share a tip for becoming a top compositor when it comes to improving 3D renders!
Here it is: pick images or videos you find interesting, or watch VFX movies, pause when you find an interesting effect, and take a screenshot. Then observe the color grading, play with the saturation and gamma, zoom in and out, use your eyedropper tool, and start seeing what is happening in the image. Done over a period of time, this is guaranteed to teach you all the small and subtle effects used to create beautiful and realistic images and sequences.