If you’re new to 3D, you might have wondered: what exactly is rendering? To casual fans and folks who are new to 3D production, the concept can initially seem as cryptic and unapproachable as hieroglyphics.
While the sophisticated math and science behind rendering is far beyond the scope of this article, the process plays a crucial role in the computer graphics development cycle. We won’t go into too much depth here, but no discussion of the CG pipeline would be complete without at least mentioning the tools and methods for rendering 3D images.
LIKE DEVELOPING FILM
Rendering is the most technically complex aspect of 3D production, but it can actually be understood quite easily in the context of an analogy: much like a film photographer must develop and print his photos before they can be displayed, computer graphics professionals are burdened with a similar necessity.
When an artist is working on a 3D scene, the models he manipulates are actually a mathematical representation of points and surfaces (more specifically, vertices and polygons) in three-dimensional space.
The term rendering refers to the calculations performed by a 3D software package’s render engine to translate the scene from a mathematical approximation to a finalized 2D image. During the process, the entire scene’s spatial, textural, and lighting information is combined to determine the color value of each pixel in the flattened image.
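To make “determining the color value of each pixel” a little more concrete, here is a minimal sketch of Lambert diffuse shading, one of the simplest ways a render engine combines surface and lighting information into a single pixel color. The colors and vectors below are invented for illustration, not output from any particular render engine:

```python
# Minimal sketch: computing one pixel's color with Lambert diffuse shading.
# All scene values (normals, light directions, colors) are illustrative.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def normalize(v):
    length = sum(x * x for x in v) ** 0.5
    return tuple(x / length for x in v)

def lambert_pixel(surface_color, normal, light_dir, light_color):
    """Diffuse brightness is proportional to the cosine of the angle
    between the surface normal and the direction toward the light."""
    n = normalize(normal)
    l = normalize(light_dir)
    intensity = max(0.0, dot(n, l))  # surfaces facing away receive no light
    return tuple(s * c * intensity for s, c in zip(surface_color, light_color))

# A red surface lit head-on by a white light renders at full brightness:
pixel = lambert_pixel((1.0, 0.0, 0.0), (0, 0, 1), (0, 0, 1), (1.0, 1.0, 1.0))
```

A real render engine repeats a far more sophisticated version of this calculation for every pixel in the frame, folding in textures, shadows, and multiple lights.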
TWO TYPES OF RENDERING
There are two major types of rendering, their chief difference being the speed at which images are computed and finalized.
1. Real-Time Rendering:
Real-Time Rendering is used most prominently in gaming and interactive graphics, where images must be computed from 3D information at an incredibly rapid pace.
Because it is impossible to predict exactly how a player will interact with the game environment, images must be rendered in “real-time” as the action unfolds.
In order for motion to appear fluid, a minimum of 18–20 frames per second must be rendered to the screen. Anything less and the action will appear choppy.
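Those frame rates translate directly into a time budget: everything the engine does for one frame must fit inside a fixed slice of each second. A quick back-of-the-envelope calculation:

```python
# At a given frame rate, the renderer has a fixed time budget per frame.
def frame_budget_ms(fps):
    """Milliseconds available to render one frame at the given frame rate."""
    return 1000.0 / fps

# At 20 fps each frame must finish in 50 ms; at 60 fps, under 17 ms.
budget_20 = frame_budget_ms(20)
budget_60 = frame_budget_ms(60)
```

This is why real-time renderers cut every corner they can: a 50 ms budget leaves no room for the hours-long calculations offline renderers enjoy.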
Real-time rendering is drastically improved by dedicated graphics hardware (GPUs), and by pre-compiling as much information as possible. A great deal of a game environment’s lighting information is pre-computed and “baked” directly into the environment’s texture files to improve render speed.
2. Offline or Pre-Rendering:
Offline rendering is used in situations where speed is less of an issue, with calculations typically performed using multi-core CPUs rather than dedicated graphics hardware.
Offline rendering is seen most frequently in animation and effects work, where visual complexity and photorealism are held to a much higher standard. Since there is no unpredictability as to what will appear in each frame, large studios have been known to dedicate up to 90 hours of render time to individual frames.
Because offline rendering occurs within an open-ended time frame, higher levels of photorealism can be achieved than with real-time rendering. Characters, environments, and their associated textures and lights are typically allowed higher polygon counts and 4K (or higher) resolution texture files.
THREE RENDERING TECHNIQUES
There are three major computational techniques used for most rendering. Each has its own set of advantages and disadvantages, making all three viable options in certain situations.
Scanline (or rasterization)
Scanline rendering is used when speed is a necessity, which makes it the technique of choice for real-time rendering and interactive graphics. Instead of rendering an image pixel by pixel, scanline renderers compute on a polygon-by-polygon basis. Scanline techniques used in conjunction with precomputed (baked) lighting can achieve speeds of 60 frames per second or better on a high-end graphics card.
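The polygon-by-polygon idea can be sketched in a few lines: rather than asking, for every pixel, "which polygon covers me?", the renderer walks each polygon one horizontal row (scanline) at a time and fills the pixels its edges span. This toy version, with made-up integer coordinates, shows the core loop only; real rasterizers add depth buffering, interpolation, and much more:

```python
# Minimal sketch of scanline rasterization for a single triangle.
# Vertices are (x, y) pairs in pixel coordinates; this toy version only
# reports which pixels are covered, with no shading or depth testing.

def rasterize_triangle(v0, v1, v2, width, height):
    """Return the set of (x, y) pixels covered, one scanline at a time."""
    covered = set()
    ys = sorted([v0[1], v1[1], v2[1]])
    edges = [(v0, v1), (v1, v2), (v2, v0)]
    for y in range(max(0, ys[0]), min(height, ys[2] + 1)):
        xs = []
        for (x0, y0), (x1, y1) in edges:
            if y0 == y1:
                continue  # horizontal edge: spanned by the other two edges
            if min(y0, y1) <= y < max(y0, y1):
                # Find where this scanline crosses the edge.
                t = (y - y0) / (y1 - y0)
                xs.append(x0 + t * (x1 - x0))
        if len(xs) >= 2:
            xs.sort()
            for x in range(int(xs[0]), int(xs[-1]) + 1):
                if 0 <= x < width:
                    covered.add((x, y))
    return covered

pixels = rasterize_triangle((0, 0), (4, 0), (0, 4), 10, 10)
```

Because each polygon touches only the scanlines it actually crosses, this approach avoids ever considering pixels a polygon cannot cover, which is a large part of why it is so fast.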
Raytracing
In raytracing, for every pixel in the scene, one (or more) ray(s) of light are traced from the camera to the nearest 3D object. The light ray is then passed through a set number of “bounces”, which can include reflection or refraction depending on the materials in the 3D scene. The color of each pixel is computed algorithmically based on the light ray’s interaction with objects in its traced path. Raytracing is capable of greater photorealism than scanline, but is exponentially slower.
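The very first step of that process, firing a ray from the camera and finding where it hits an object, comes down to solving a small equation per object. Here is a minimal sketch for the classic case of a ray hitting a sphere; the scene values are invented, and a real raytracer would go on to follow bounces from the hit point:

```python
# Minimal sketch of a raytracer's first step: ray-sphere intersection.
# Solves the quadratic |origin + t*direction - center|^2 = radius^2 for t,
# the distance along the ray to the hit point.

import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance to the nearest intersection in front of the
    ray's origin, or None if the ray misses the sphere."""
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                       # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearer of the two roots
    return t if t > 0 else None

# A ray fired down the z-axis hits a unit sphere centered 5 units away at t = 4:
t = ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0)
```

Doing this test against every object, for every pixel, for every bounce, is exactly where raytracing's cost (and its photorealism) comes from.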
Radiosity
Unlike raytracing, radiosity is calculated independent of the camera, and is surface oriented rather than pixel-by-pixel. The primary function of radiosity is to more accurately simulate surface color by accounting for indirect illumination (bounced diffuse light). Radiosity is typically characterized by soft graduated shadows and color bleeding, where light from brightly colored objects “bleeds” onto nearby surfaces.
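At its core, radiosity says each surface patch's brightness equals its own emitted light plus a fraction of the light arriving from every other patch, and solves that system of equations. The two-patch "scene" and form factors below are invented purely to illustrate the idea of color bleeding:

```python
# Minimal sketch of the radiosity equation B_i = E_i + rho_i * sum_j F_ij * B_j,
# solved by simple fixed-point iteration. Emission E, reflectance rho, and the
# form factors F (how much each patch "sees" of each other) are made-up values.

def solve_radiosity(emission, reflectance, form_factors, iterations=100):
    """Iterate the radiosity equation until patch brightness stabilizes."""
    n = len(emission)
    b = list(emission)
    for _ in range(iterations):
        b = [emission[i] + reflectance[i] * sum(form_factors[i][j] * b[j]
                                                for j in range(n))
             for i in range(n)]
    return b

# Patch 0 is a light source; patch 1 emits nothing, yet ends up lit because
# bounced light from patch 0 reaches it -- the mechanism behind color bleeding.
brightness = solve_radiosity(
    emission=[1.0, 0.0],
    reflectance=[0.5, 0.5],
    form_factors=[[0.0, 0.4],
                  [0.4, 0.0]],
)
```

Because the solution depends only on the surfaces and lights, not on the camera, the result can be reused from any viewpoint, which is exactly the camera-independence described above.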
In practice, radiosity and raytracing are often used in conjunction with one another, using the advantages of each system to achieve impressive levels of photorealism.
THE TWO MOST COMMON RENDER ENGINES
Mental Ray
Packaged with Autodesk Maya, Mental Ray is incredibly versatile, relatively fast, and probably the most competent renderer for character images that need subsurface scattering. Mental Ray uses a combination of raytracing and “global illumination” (radiosity).
V-Ray
You typically see V-Ray used in conjunction with 3DS Max; together the pair is absolutely unrivaled for architectural visualization and environment rendering. Chief advantages of V-Ray over its competitor are its lighting tools and extensive materials library for arch-viz.
This was just a brief overview of the basics of what it means to render an image. It’s a technical subject, but it can be quite interesting when you really start to take a deeper look at some of the common techniques. If you’re interested in learning more, there’s a lot of good reading around the web, and we’ll continue adding more. Be sure to subscribe to our newsletter to stay up to date!
Image by mikonrenderthat