Servage Magazine


Polygonal modeling and Three.js – Part 2

Thursday, April 3rd, 2014 by Servage



In order for these meshes to be rendered, cameras must be placed to tell the renderer how they should be viewed. Three.js has two types of cameras: orthographic and perspective. Orthographic projections eliminate perspective, displaying all objects at the same scale no matter how far they are from the camera. This is useful in engineering, where size differences caused by perspective could make it difficult to judge an object's true dimensions. You would recognize orthographic projections from the instructions for assembling furniture or model cars. The perspective camera includes properties for its location relative to the scene and, as its name implies, scales the apparent size of objects based on their distance from the camera.
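As a sketch (assuming Three.js is loaded as an ES module and an 800×600 viewport; the position and clipping values are illustrative), the two camera types can be set up like this:

```javascript
import * as THREE from 'three';

const width = 800, height = 600;

// Perspective camera: 45° vertical field of view, aspect ratio,
// and near/far clipping planes at 0.1 and 1000 world units.
const perspectiveCamera = new THREE.PerspectiveCamera(45, width / height, 0.1, 1000);
perspectiveCamera.position.set(0, 5, 10);
perspectiveCamera.lookAt(0, 0, 0);

// Orthographic camera: the viewing volume is a box, so objects keep
// the same on-screen size no matter how far away they are.
const orthoCamera = new THREE.OrthographicCamera(
  width / -2, width / 2,   // left, right
  height / 2, height / -2, // top, bottom
  0.1, 1000                // near, far
);
orthoCamera.position.set(0, 5, 10);
orthoCamera.lookAt(0, 0, 0);
```

Either camera can then be passed to the renderer's `render(scene, camera)` call.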

The cameras control the viewing frustum, or viewable area, of the scene. The viewing frustum can be pictured as a box-like volume defined by the camera's near and far planes (where the viewable area starts and stops), together with the aspect ratio and field of view that define its cross-section. Any object that exists outside of the viewing frustum is not visible in the final image, but by default it is still processed by the renderer.

As expected, such objects can needlessly take up system resources, so they should be culled. Culling involves using an algorithm to find the objects that lie outside of the planes that make up the frustum and removing them from the scene, often with the help of data structures such as an octree (which recursively divides the space into subdivisions) to speed up the search.
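The core of a culling test is simple: check each object's position (or bounding volume) against the six planes of the frustum. A minimal plain-JavaScript sketch, using a hypothetical `pointInFrustum` helper and a simple box in place of a real camera frustum:

```javascript
// Each plane is { n: [nx, ny, nz], d: number }; a point p lies on the
// inside of the plane when dot(n, p) + d >= 0.
function dot(a, b) {
  return a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
}

// A point survives culling only if it is inside all six planes.
function pointInFrustum(planes, p) {
  return planes.every(plane => dot(plane.n, p) + plane.d >= 0);
}

// Six planes of an axis-aligned box spanning -10..10 on each axis,
// standing in for an orthographic frustum.
const planes = [
  { n: [ 1, 0, 0], d: 10 }, { n: [-1, 0, 0], d: 10 },
  { n: [0,  1, 0], d: 10 }, { n: [0, -1, 0], d: 10 },
  { n: [0, 0,  1], d: 10 }, { n: [0, 0, -1], d: 10 },
];

const visible = [[0, 0, 0], [5, 5, -5], [25, 0, 0]]
  .filter(p => pointInFrustum(planes, p));
console.log(visible.length); // 2 — the point at x = 25 is culled
```

Three.js itself performs this kind of test internally using `THREE.Frustum`; an octree simply avoids running the test on every object individually.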


Now that a camera is capturing the way the object is being rendered, light sources need to be placed so that the object can be seen and the materials behave as expected. Light is used by 3-D artists in a lot of ways, but for the sake of efficiency, we’ll focus on the ones available in Three.js. Luckily for us, this library holds a lot of options for light sources:


Possibly the most commonly used, the point light works much like a bare light bulb, affecting all objects in the same way as long as they are within its predefined range. It can mimic the light cast by a ceiling bulb.
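A minimal sketch of a point light in Three.js (the color, intensity, and range values here are illustrative):

```javascript
import * as THREE from 'three';

const scene = new THREE.Scene();

// White light at full intensity; objects more than 50 units away
// from the light receive no illumination.
const pointLight = new THREE.PointLight(0xffffff, 1, 50);
pointLight.position.set(0, 10, 0); // roughly where a ceiling bulb would hang
scene.add(pointLight);
```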


The spot light is similar to the point light but focused, illuminating only the objects within its cone of light and within its range. Because it doesn't illuminate everything around it equally as the point light does, objects will cast a shadow and have a "dark" side.
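A comparable sketch for the spot light, with an explicit cone angle (all values illustrative):

```javascript
import * as THREE from 'three';

const scene = new THREE.Scene();

// Focused cone of light: 30° half-angle, 100-unit range.
const spotLight = new THREE.SpotLight(0xffffff, 1, 100, Math.PI / 6);
spotLight.position.set(0, 20, 0);
spotLight.castShadow = true; // objects inside the cone cast shadows
scene.add(spotLight);
```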



The ambient light affects all objects in the scene equally. Ambient lights, like diffuse daylight, are used as a general light source. This keeps objects in shadow viewable, because anything hidden from direct rays would otherwise be completely dark. Because of the general nature of ambient light, the source's position does not change how the light affects the scene.
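In Three.js this is a one-liner (the dim grey color is a common illustrative choice):

```javascript
import * as THREE from 'three';

const scene = new THREE.Scene();

// A dim grey ambient light. It has no meaningful position or
// direction: every object in the scene is brightened equally.
const ambientLight = new THREE.AmbientLight(0x404040);
scene.add(ambientLight);
```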


The hemisphere light works much like a pool-table light, in that it is positioned directly above the scene and the light disperses from that point only.
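Assuming this maps to Three.js's `HemisphereLight`, which blends a sky color from above with a ground color from below, a sketch might look like this (colors illustrative):

```javascript
import * as THREE from 'three';

const scene = new THREE.Scene();

// Hemisphere light: a pale sky color shining down, with a darker
// ground color reflected up from below.
const hemiLight = new THREE.HemisphereLight(0xbbddff, 0x444422, 1);
hemiLight.position.set(0, 50, 0); // directly above the scene
scene.add(hemiLight);
```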


The directional light is also fairly similar to the point and spot lights, except that its rays are parallel rather than spreading from a point. The big difference is that the directional light does not have a range: it can be placed far away from the objects because its light persists infinitely.
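A sketch of a directional light; note that only the direction from the light's position toward the scene matters, not the distance:

```javascript
import * as THREE from 'three';

const scene = new THREE.Scene();

// Parallel rays with no distance falloff, similar to sunlight.
const dirLight = new THREE.DirectionalLight(0xffffff, 1);
dirLight.position.set(100, 100, 50); // can be arbitrarily far away
scene.add(dirLight);
```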


Emanating directly from a surface in the scene with specific properties, the area light is extremely useful for mimicking fixtures like an overhanging fluorescent light or an LCD backlight. When creating an area light, you must declare its shape (usually rectangular or circular) and dimensions in order to determine the area that the light will cover.
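Current Three.js releases expose a rectangular area light as `THREE.RectAreaLight` (the older `THREE.AreaLight` required the deferred renderer described below). A sketch with the modern API, all values illustrative:

```javascript
import * as THREE from 'three';

const scene = new THREE.Scene();

// A 10 × 4 rectangular light panel, e.g. a fluorescent fixture.
// Note: RectAreaLight only affects MeshStandardMaterial and
// MeshPhysicalMaterial surfaces, and it does not cast shadows.
const areaLight = new THREE.RectAreaLight(0xffffff, 2, 10, 4);
areaLight.position.set(0, 8, 0);
areaLight.lookAt(0, 0, 0); // aim the panel at the scene origin
scene.add(areaLight);
```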

To take advantage of area lights in Three.js, you must use the deferred renderer. This renderer draws the scene using deferred shading, a technique that renders the scene in two passes instead of one. In the first pass, the objects themselves are rendered, including their positions, meshes and materials. The second pass computes the lighting and shading of all of the objects and adds them to the scene.
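At the time this article was written, the deferred renderer shipped as an extra script (`examples/js/renderers/WebGLDeferredRenderer.js`) rather than as part of the core library. Assuming that script is included alongside Three.js, the setup looked roughly like this:

```javascript
// Assumes three.js plus the extra WebGLDeferredRenderer script from
// examples/js/renderers/ have both been loaded.
const renderer = new THREE.WebGLDeferredRenderer({
  width: window.innerWidth,
  height: window.innerHeight
});
document.body.appendChild(renderer.domElement);

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(
  45, window.innerWidth / window.innerHeight, 0.1, 1000
);

// Geometry is drawn in the first pass; lighting and shading are
// computed over the whole frame in the second pass.
renderer.render(scene, camera);
```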

Because the objects are fully formed in the scene during this computation, the lighting pass can take into account all adjacent objects and light sources at once. This means that these computations need to be done only once per rendered frame, rather than once per object rendered.

For example, when rendering five objects and five light sources with a usual (forward) renderer, it will render the first object, then calculate lighting and shading; then render the second object and recalculate the lighting and shading to accommodate both objects. This continues for all five objects and light sources. If the deferred renderer is used, all five objects are rendered first, then the light sources and shading are calculated and added once, and that's it.
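The difference can be illustrated with a deliberately simplified toy cost model: assume the forward path performs one lighting computation per object-light pair, while the deferred path performs one geometry pass per object plus one lighting pass per light over the finished frame. (Real renderers are more complicated; this only sketches the scaling behavior.)

```javascript
// Forward rendering: each object is shaded against every light
// as it is drawn, so lighting work grows multiplicatively.
function forwardLightingCost(objects, lights) {
  return objects * lights;
}

// Deferred rendering: draw geometry once, then compute lighting once
// per light over the whole frame, so the work grows additively.
function deferredLightingCost(objects, lights) {
  return objects + lights;
}

console.log(forwardLightingCost(5, 5));  // 25 lighting computations
console.log(deferredLightingCost(5, 5)); // 10 passes in total
```

With 50 objects and 50 lights the gap widens to 2500 versus 100, which is why deferred shading pays off in light-heavy scenes.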

As you can see, this can dramatically reduce rendering times when many light sources and objects are used. However, several disadvantages would keep you from using the deferred renderer unless it's needed, including issues with using multiple materials, as well as the inability to apply anti-aliasing after the lighting is added (when it is really needed).

