Computer-Generated Imagery

As mentioned at the start of this chapter, the foundation of modern visual effects is the art of creating computer-generated imagery (CGI). CGI started as a revolutionary scientific breakthrough that eventually altered the entire motion-picture industry in a fundamental way. After all, CGI methods are used to make an entire genre of modern movies—CGI-animated films, which are among today’s biggest performers at the box office—and to routinely create portions of many live-action movies, portions that are then strategically combined with live-action material.

The ability to make, move, blend, and alter images in the computer was born out of a series of breakthroughs in recent decades in the creation and use of complex mathematical algorithms that, at their most basic level, instruct a computer what operations to perform—and how to perform them—in order to produce different types of shapes, surfaces, and images. There are many types of shapes and forms the computer can create, such as basic polygons (plane figures with three or more straight sides that, when joined together, form more sophisticated images). If you pursue computer science or go deeper into animation, you will learn more about the other forms, but for now, the mathematical underpinnings of CGI are not nearly as important to your training as the creative uses of the art form. At this stage, focus primarily on the basics of computer animation generally, character animation, and compositing. We will devote the rest of this chapter to these topics, starting with computer animation.

Think about the process of computer animation in the same way you would think about physically creating real models for animation purposes, regardless of whether you were creating animated characters or simply objects or elements. (A bit later in this chapter, you will learn techniques for putting performance and emotion into characters during the character-animation process.) In the real world, you might start with a lump of clay and build a flexible wire frame that sculptors call an armature. You would put clay on top of the frame; hone the surface, details, and colors until you achieve your desired creative result; and then put it on your real-world set, light it, move it, and film it. If you were creating a character, perhaps you would build a marionette. First you would create the puppet, then you would attach strings, and then a puppeteer would take the marionette and create a performance with it. (Animators are puppeteers in this sense, taking over after other technicians have built and configured the “puppet.”) Essentially, these are the same concepts behind the creation of basic digital elements or characters, except you do them in a computer. Following are the primary technical steps in the computer-animation process for movable objects (you will obviously skip the animation steps if you are dealing with a background or static image):

Modeling Based on your designs, you will use computer-animation software to build a digital model of the objects, background, creatures, or characters you want to create. The software creates a rough wire frame based on parameters you provide. (This is essentially a digital armature.) It then calculates the geometry necessary to produce your basic wire-frame model. Once the armature is set up, you can add rough surfaces to it; this embryonic model is what you will use to program basic characteristics, including animated movements if needed, in the next stages. The thing to remember is that, as a modeler, you are creating the underlying skeleton of the object.
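
If you are curious about what a model actually looks like inside the computer, here is a minimal sketch in Python of a polygonal mesh: a list of 3D points and a list of triangles that connect them. It is illustrative only; no real animation package stores its data this simply.

```python
# A minimal polygonal mesh: vertices are 3D points, faces are
# triples of vertex indices. Joining triangles builds up the
# "wire frame" that modeling software manages for you.

# Four corners of a square, lying in the z = 0 plane.
vertices = [
    (0.0, 0.0, 0.0),  # index 0
    (1.0, 0.0, 0.0),  # index 1
    (1.0, 1.0, 0.0),  # index 2
    (0.0, 1.0, 0.0),  # index 3
]

# Two triangles that share an edge, forming the square.
faces = [
    (0, 1, 2),
    (0, 2, 3),
]

# The wire frame is simply the set of unique edges.
edges = set()
for a, b, c in faces:
    for edge in ((a, b), (b, c), (c, a)):
        edges.add(tuple(sorted(edge)))

print(f"{len(vertices)} vertices, {len(faces)} faces, {len(edges)} edges")
```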

Rigging Riggers, also known as character TDs (technical directors) or setup artists, take rough CGI skeletons and decide how creatures or characters should move by building bones, joints, and muscles onto the model and programming them with controls (instructions) for how they will move when triggered by animators. To take our puppet analogy further, riggers are essentially putting strings on the object. Those instructions can be programmed by hand, although in some applications they consist of imported motion-capture data (see here). The process of rigging (setup), in other words, is all about how the character or creature will be capable of moving—in as realistic a way as possible or in an exaggerated way, depending on the filmmaker’s wishes. The rigger uses the software to program joints and movable areas of the body with mathematical instructions that tell the model how to respond to virtual controls the rigger has built into it—how to jump, turn, run, and so on. If you are rigging an animated character, the rigger and character animator will collaborate closely during this process, since the animator will eventually decide exactly how the character moves in the final product.
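
To make the idea of rigging controls concrete, here is a toy Python sketch of forward kinematics: how a chain of joints responds when a single control is turned. The two-bone "arm" and its names are hypothetical, not drawn from any real rigging tool.

```python
import math

# A toy 2-joint rig: shoulder -> elbow -> hand. Each bone has a
# length and a rotation control. A child's world position depends
# on every rotation above it in the chain (forward kinematics).

def forward_kinematics(bone_lengths, angles_deg):
    """Return the world-space (x, y) of each joint in the chain."""
    x, y, total_angle = 0.0, 0.0, 0.0
    positions = [(x, y)]
    for length, angle in zip(bone_lengths, angles_deg):
        total_angle += math.radians(angle)  # rotations accumulate down the chain
        x += length * math.cos(total_angle)
        y += length * math.sin(total_angle)
        positions.append((x, y))
    return positions

# Two bones of length 1.0; the "controls" are the two angles.
print(forward_kinematics([1.0, 1.0], [0, 0]))    # arm straight out
print(forward_kinematics([1.0, 1.0], [90, 0]))   # raising the shoulder moves the hand too
print(forward_kinematics([1.0, 1.0], [90, -45])) # bend the elbow
```

Real rigs wrap thousands of such relationships in friendly controls, which is why an animator can grab a hand and have the whole arm follow.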

Gertie the Dinosaur (left, 1914) was the first animated dinosaur in motion-picture history. Its descendant, Jurassic Park (right, 1993), ushered in an entirely new era in filmmaking.

Layout For environments and character positioning within those environments, your next phase will be layout, which is generally done in two stages—rough and final. During the rough layout phase, you will essentially use hand-drawn storyboards or digital previs material to guide the creation of preliminary 3D environments in the computer. You will use the software to decide on initial camera placement and motion, as well as staging and blocking for where the digital characters will be located within the environment. In a sense, the layout artist is blocking out how digital cinematography will work in computer-generated sequences. After you approve the rough layout, you will move into final layout, which means you will eventually replace rough characters and environments with final characters and other assets as they become available. Once final layout is done, character animators will finalize character performance choices.
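
Numerically, placing a virtual camera during layout means choosing a position and an aim point and deriving the camera's orientation from them. The "look-at" sketch below (Python with numpy; the function name is illustrative) shows the basic math most packages wrap in an aim constraint.

```python
import numpy as np

def look_at_basis(camera_pos, target, world_up=(0.0, 1.0, 0.0)):
    """Build an orthonormal camera basis (right, up, forward)
    so the camera at camera_pos points toward target."""
    camera_pos = np.asarray(camera_pos, dtype=float)
    target = np.asarray(target, dtype=float)

    forward = target - camera_pos
    forward /= np.linalg.norm(forward)

    right = np.cross(forward, world_up)
    right /= np.linalg.norm(right)

    up = np.cross(right, forward)  # already unit length
    return right, up, forward

# Stage a camera 5 units back and 2 units up, aimed at a point
# where a rough stand-in character might be placed during layout.
right, up, forward = look_at_basis((0.0, 2.0, 5.0), (0.0, 1.0, 0.0))
print("forward:", np.round(forward, 3))
```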

Animation After models are rigged and the environment and camera-movement approach are laid out, animators can make computer models move realistically through the frame. They do this by working closely with the director to determine what creative movements and paths are preferred, and then using hundreds or even thousands of digital controls built into the model to achieve those movements. If the model is a digital character, then, of course, the animator will be functioning as a character animator or actor, who will be directed by the film’s director just like any other cast member. (See here for details on the character-animation process specifically.)
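
Under the hood, most animation systems reduce to keyframes plus interpolation: the animator sets a control's value at a few frames, and the software fills in everything between. A bare-bones Python sketch, assuming simple linear interpolation (real packages offer far richer animation curves):

```python
# Keyframes map frame numbers to a control's value, e.g. an arm's
# rotation in degrees. The software interpolates between them.
keyframes = [(1, 0.0), (12, 45.0), (24, -10.0)]  # (frame, value)

def sample(keyframes, frame):
    """Linearly interpolate a control value at any frame."""
    if frame <= keyframes[0][0]:
        return keyframes[0][1]
    for (f0, v0), (f1, v1) in zip(keyframes, keyframes[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)  # 0..1 between the two keys
            return v0 + t * (v1 - v0)
    return keyframes[-1][1]

for frame in (1, 6, 12, 18, 24):
    print(frame, round(sample(keyframes, frame), 2))
```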

Surfacing In most animation-software packages, computer models start out dull gray, looking somewhat like plain clay. When surfacing, your job is to add texture and color to the mix. What you add depends solely on your script, the creative desires of your director, and logic—animals need fur, robots need a metallic sheen, and so on. Typically, surfacing artists work closely with lighting artists to tweak the final look of surfaces and textures. In fact, one of the exciting developments in computer animation in recent years has been a series of breakthroughs in texturing surfaces.
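
Texturing can be as simple as a function that computes a color for every point on a surface. The Python sketch below generates a procedural checkerboard, a classic test texture; it is illustrative only and not tied to any particular surfacing tool.

```python
import numpy as np

def checker_texture(width, height, squares=8):
    """Return an RGB image (height x width x 3) of a checkerboard."""
    ys, xs = np.mgrid[0:height, 0:width]
    mask = ((xs * squares // width) + (ys * squares // height)) % 2
    texture = np.zeros((height, width, 3), dtype=np.uint8)
    texture[mask == 1] = (200, 200, 200)   # light squares
    texture[mask == 0] = (60, 60, 60)      # dark squares
    return texture

tex = checker_texture(256, 256)
print(tex.shape, tex.dtype)  # (256, 256, 3) uint8
```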

Movies like Life of Pi utilize both practical and digital effects: the Oscar-winning Pi used practical effects to shoot water-bound sequences in a giant tank, while digital effects added to the illusion with new backgrounds and CG characters.

Lighting As discussed in Chapters 8 and 9, live-action filmmakers use light as a painting tool to emphasize surfaces or skin, bring out colors, illuminate action, and impact the audience emotionally. The goal is exactly the same with digital lighting. Using animation software, you will position digital lights in certain locations, at certain angles, and at certain intensities and hues. Most major software packages give artists a wide range of lighting tools, as well as tools to analyze and compare the positioning and use of lights in the live-action plate of a visual effects shot so that the animation team can emulate real-world lighting. It is critical that you, your visual effects supervisor, or another team member record the physical lighting setups used during production, so that you have enough data to replicate on-set light conditions in the computer.
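
Digitally, a light's effect on a surface point is pure arithmetic. Here is a minimal Python sketch of the classic Lambert (diffuse) model, one of the simplest lighting calculations a renderer performs; the names and values are illustrative.

```python
import numpy as np

def lambert_shade(surface_color, normal, light_pos, point, light_intensity=1.0):
    """Diffuse shading: brightness falls off with the angle between
    the surface normal and the direction toward the light."""
    to_light = np.asarray(light_pos, float) - np.asarray(point, float)
    to_light /= np.linalg.norm(to_light)
    n = np.asarray(normal, float)
    n /= np.linalg.norm(n)
    diffuse = max(np.dot(n, to_light), 0.0)  # facing away -> no light
    return np.asarray(surface_color, float) * diffuse * light_intensity

# A key light up and to the right of a point on an upward-facing surface.
print(lambert_shade((0.8, 0.6, 0.4), normal=(0, 1, 0),
                    light_pos=(2, 3, 0), point=(0, 0, 0)))
```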

Effects You will also need to add any effects your story calls for. An effect is anything that moves of its own accord in a scene without an actor’s or animator’s performance driving it. In some cases, you will be adding live or practical elements you have photographed on a stage or location; in other cases, you will be adding digitally created elements; and in still others, you will be adding both. If the effects are digital, they will be designed and animated in the same process we have described. If they are live action, you will scan the footage into the computer so that you can combine it with your scene using digital compositing tools (see below).
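
Digital effects elements are typically simulated rather than hand-keyed: the computer advances each element frame by frame according to simple physical rules. A toy Python particle sketch, assuming nothing but gravity:

```python
import random

# Each particle is a position and a velocity; gravity pulls it down
# a little every frame, with no animator touching it after launch.
GRAVITY = -9.8
FPS = 24

particles = [
    {"pos": [0.0, 0.0], "vel": [random.uniform(-1, 1), random.uniform(4, 6)]}
    for _ in range(5)
]

def step(particles, dt=1.0 / FPS):
    for p in particles:
        p["vel"][1] += GRAVITY * dt          # gravity changes velocity
        p["pos"][0] += p["vel"][0] * dt      # velocity changes position
        p["pos"][1] += p["vel"][1] * dt

for frame in range(24):  # simulate one second of flight
    step(particles)
print([tuple(round(v, 2) for v in p["pos"]) for p in particles])
```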

Tracking The tracking process is also called match moving or motion tracking. It is linked to the compositing process described next because it helps you combine live and digital imagery seamlessly—you will need some sort of tracking solution if any elements are, in fact, moving. The general idea is to track the movement of a real camera through a live-action shot so that an identical camera move can be created during the animation process. Tracking is a way to record the exact position, scale, and movement of elements in live-action footage and apply that data to the virtual camera. Historically, the process could be painstaking, because accommodations had to be made both during filming and in postproduction for data to be captured accurately. Sophisticated 3D camera trackers have since been developed, however, that can analyze and track every element in a piece of live-action photography once it is scanned into a computer—down to the pixel level in many cases—making the process far less laborious.
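
At the pixel level, 2D tracking boils down to finding where a small patch of one frame reappears in the next. The brute-force Python sketch below searches a window for the best match; it is illustrative only, and production trackers are vastly more sophisticated.

```python
import numpy as np

def track_patch(prev_frame, next_frame, top, left, size=15, search=10):
    """Find the (row, col) offset in next_frame that best matches a
    size x size patch taken from prev_frame at (top, left), by
    minimizing the sum of squared differences over a search window."""
    patch = prev_frame[top:top + size, left:left + size].astype(float)
    best_offset, best_score = (0, 0), np.inf
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r, c = top + dr, left + dc
            if r < 0 or c < 0 or r + size > next_frame.shape[0] or c + size > next_frame.shape[1]:
                continue
            candidate = next_frame[r:r + size, c:c + size].astype(float)
            score = np.sum((candidate - patch) ** 2)
            if score < best_score:
                best_score, best_offset = score, (dr, dc)
    return best_offset

# Synthetic test: shift a random frame 3 px down and 2 px right.
rng = np.random.default_rng(0)
frame_a = rng.random((100, 100))
frame_b = np.roll(frame_a, shift=(3, 2), axis=(0, 1))
print(track_patch(frame_a, frame_b, top=40, left=40))  # -> (3, 2)
```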

Compositing The general definition of compositing is the seamless combination of two or more images into a new, final image. Digital compositing is the computerized version of the original optical method of doing this task. Some composites involve dozens, even hundreds, of elements—or layers, as they are often called. There is a crucial production element to compositing—the acquisition of live-action plates that will eventually be combined with other live-action plates, digital plates, or both. Typically, a blue screen or green screen, often called a chroma-key, is used on-set to allow capture of only what you need for the composite: you photograph an element against a color background from which it can be digitally cut out as a matte, then stitched and tracked together with other elements similarly cut out for placement into a new background (see Action Steps: Plate Photography, below). A matte, then, is an isolated element that can be used as a puzzle piece to be strategically combined with other pieces. A specialized technique that is part of the larger compositing process is called rotoscoping, or roto. It evolved out of an animation technique used for early cartoons, by which animators traced over live footage by hand, frame by frame, and then lifted those elements, or rotoscopes, out of the frame and added other elements or drawings before putting the shot back together. Computers have taken over the chore, but digital rotoscoping remains an important process today.
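
The heart of a digital composite is the "over" operation: each foreground pixel is blended with the background according to a matte, or alpha. The Python sketch below pairs a deliberately crude green-screen matte with that blend; real keyers handle spill, soft edges, and motion blur far more gracefully.

```python
import numpy as np

def green_screen_matte(fg):
    """Crude matte: a pixel is 'background' where green strongly
    dominates red and blue. fg is float RGB in [0, 1]."""
    r, g, b = fg[..., 0], fg[..., 1], fg[..., 2]
    alpha = 1.0 - np.clip((g - np.maximum(r, b)) * 4.0, 0.0, 1.0)
    return alpha  # 1 = keep foreground, 0 = show background

def composite_over(fg, bg, alpha):
    """The classic 'over' blend, applied per pixel."""
    return fg * alpha[..., None] + bg * (1.0 - alpha[..., None])

# Tiny synthetic example: a red square on a green screen, over blue.
fg = np.zeros((4, 4, 3)); fg[...] = (0.0, 1.0, 0.0)   # green screen
fg[1:3, 1:3] = (1.0, 0.0, 0.0)                        # red "subject"
bg = np.zeros((4, 4, 3)); bg[...] = (0.0, 0.0, 1.0)   # blue background

out = composite_over(fg, bg, green_screen_matte(fg))
print(out[0, 0], out[1, 1])  # screen pixel -> blue; subject pixel -> red
```

Notice that the matte is computed once and then drives the blend; in production, that same alpha channel travels with the element through every later compositing step.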

USING BLACK SCREENS

Under certain conditions, you can use black screens to shoot elements, since black is neutral. If your photographic exposure is just right, you can sometimes use the element without having to extract it as a matte first. Frequently, filmmakers have used black backgrounds to shoot organic elements, such as smoke.
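
Because black contributes nothing to a blend, an element shot on black can often be combined without extracting a matte at all, using an additive or "screen" blend. A quick Python sketch of the idea, with illustrative values:

```python
import numpy as np

def screen_blend(element, bg):
    """'Screen' blend: black in the element leaves bg untouched,
    while bright areas (smoke, sparks, fire) lighten it. Values in [0, 1]."""
    return 1.0 - (1.0 - element) * (1.0 - bg)

# A faint gray wisp of "smoke" shot on pure black, over a dark blue bg.
element = np.zeros((2, 2, 3)); element[0, 0] = (0.4, 0.4, 0.4)
bg = np.zeros((2, 2, 3)); bg[...] = (0.0, 0.0, 0.3)

print(screen_blend(element, bg)[0, 0])  # lightened where the smoke is
print(screen_blend(element, bg)[1, 1])  # pure black leaves bg unchanged
```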

Rendering In computer graphics, rendering is the digital “baking” of a raw animated or wire-frame image so that it fully forms the textures or surfaces that are intended to be part of the final product. This is generally the final step for computer-generated images. Today, numerous software tools offer rendering techniques that range in complexity and methodology, as well as in the quality of the final image—you will learn about these different options later in your film education. For now, keep in mind that in all these cases, rendering is the one area of computer graphics that remains particularly hardware intensive, because large digital files require lots of computer processing power to render efficiently. Major facilities have render farms for this purpose—rooms filled with processors that run day and night, rendering shots as they are finalized. For smaller projects, rendering can sometimes be done in near real time for modest-sized files. Indeed, that capability is the key to breakthroughs in modern video-game platforms, which can now render images in real time while you play. But for major visual effects, rendering remains one of the most time-intensive parts of the process.
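
At its simplest, rendering means computing a color for every pixel from the scene's geometry and lighting, which is why it is so processing-hungry: the work grows with resolution and scene complexity. The deliberately tiny Python sketch below renders one sphere under one light as ASCII art; it is a toy, not a production technique.

```python
import numpy as np

WIDTH, HEIGHT = 64, 32
LIGHT_DIR = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0)  # light from upper right

# Orthographic rays looking down -z toward a unit sphere at the origin.
xs = np.linspace(-1.5, 1.5, WIDTH)
ys = np.linspace(-1.0, 1.0, HEIGHT)
X, Y = np.meshgrid(xs, ys)

hit = X**2 + Y**2 <= 1.0                          # which rays strike the sphere
Z = np.sqrt(np.maximum(1.0 - X**2 - Y**2, 0.0))   # front-surface depth
normals = np.dstack([X, Y, Z])                    # sphere normal at each hit

# Shade: brightness = normal . light direction (Lambert again).
brightness = np.clip(normals @ LIGHT_DIR, 0.0, 1.0) * hit

# Crude ASCII "render" so this runs anywhere.
levels = " .:-=+*#%@"
for row in brightness[::-1]:
    print("".join(levels[int(v * (len(levels) - 1))] for v in row))
```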

FIGURE 13.2 Left to right: a wireframe model, a pretextured computer model, and the final rendered version

ACTION STEPS

Plate Photography

The term plate refers to any isolated image, whether a still or a moving picture, created specifically to be merged with other elements to build a visual effects shot. At the basic level, you acquire a background plate on-set and one or more foreground elements to create a typical shot. Capturing visual effects plates on-set can be quite challenging. Important aspects include the following:

  1. Choosing your background. Today, filmmakers typically use green screens because modern digital cameras tend to be extremely sensitive to the color green due to the nature of their imaging chips. The exception is when you are shooting an element that has a lot of green in it. In that case, use a blue screen to make sure edges of the green element won’t be lost. In any case, the concept is to end up with a chroma-key that has nothing from the background attached to it. Professional screens can be expensive to rent or buy. You can paint a blank wall using relatively affordable chroma-key green paint, or you can make a hanging screen yourself from paper, muslin cloth, or foam-backed cloth. Keep in mind that your goal is a screen that scatters light evenly, and certain materials scatter light better than others.
  2. Choosing your camera. Photography of elements against your background needs to be of the highest quality possible because the image will be manipulated in a computer later. Therefore, the best available camera, with the highest resolution and pixel count, will help avoid an element that degrades as it is manipulated. Thus, even today, some filmmakers still shoot background plates on fine-grain film stocks, even if the rest of the show is being acquired digitally. Newer digital camera systems, however, have made this less of an issue because of technical improvements. With digital cameras, filmmakers try to capture plates as full-bandwidth 4K uncompressed images when possible. When that is not feasible, try to capture at least a compressed 4:2:2 image for a quality chroma-key (see the sketch after this list).
  3. Positioning. When shooting elements, think carefully about their positioning, as well as your camera’s position and shooting angle. Normally, you want to position your subject far enough away from the screen that no shadows are visible. Then, position the camera so that the subject appears in proper perspective, not distorted, in relation to other elements. When possible, match the height of the camera or virtual camera used to capture the other elements. Additionally, make sure you are shooting your background image from the same horizontal angle as you plan to show your foreground element. If background and foreground are not shot using the same camera system, be careful to use similar lenses so that the viewing angle matches for each element. Camera tilt is also important if you have characters moving in three-dimensional space, because the horizon—sometimes called the vanishing point—in the foreground needs to match the horizon in the background.
  4. Lighting. When shooting either foreground or background elements, take note of the lighting configuration if you are on a stage, or the sun’s position and angle if you are shooting outside, along with any other lighting data (such as color temperature) you can gather, so that you can match them when shooting other elements. Think about lighting two separate things well: the screen itself and the element you are shooting. Generally, it is best to shoot the background first and match foreground lighting to what you used for the background. Also, light carefully to avoid shadows and light bouncing from unintended sources. It’s good practice to light the screen before adding elements to your frame.
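
To make the 4:2:2 reference in step 2 concrete: chroma subsampling stores full brightness detail but only half the horizontal color detail, which is precisely what a keyer has to work with at a matte's edges. A simplified Python sketch (illustrative; real codecs operate on Y'CbCr signals and handle many more details):

```python
import numpy as np

def subsample_422(cb, cr):
    """4:2:2 idea: keep every luma sample, but average each horizontal
    pair of chroma samples, halving color resolution across the frame."""
    cb_422 = (cb[:, 0::2] + cb[:, 1::2]) / 2.0
    cr_422 = (cr[:, 0::2] + cr[:, 1::2]) / 2.0
    return cb_422, cr_422

rng = np.random.default_rng(1)
luma = rng.random((4, 8))          # full-resolution brightness (untouched)
cb, cr = rng.random((4, 8)), rng.random((4, 8))

cb_422, cr_422 = subsample_422(cb, cr)
print(luma.shape, cb_422.shape)    # (4, 8) luma kept, (4, 4) chroma halved
```

This is why a 4:2:2 plate keys acceptably while more aggressive schemes, which discard even more color detail, can leave stair-stepped matte edges.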