All material on this site is copyright. You have the right to view this page but you are not granted any other rights and the copyright owners reserve all other rights.
 
3dsMax in Motion Pictures
 
2003
Bulletproof Monk
 

Blur Studio completed visual effects for Bulletproof Monk. Blur’s work consisted of delivering both CG elements and final composites, contributing to more than 65 visual effects shots in all.

In collaboration with Boy Wonder VFX, Blur’s team of artists produced a variety of effects shots, including the creation of an all-CG environment that opens the film’s action-packed first ten minutes. Blur’s computer-generated backgrounds also included a spectacular mountaintop monastery near a steep Himalayan gorge, set in 1943 Tibet.

“From panoramas to close-ups, we created a dozen backgrounds that were used across 50-60 shots, giving the environment a realistic sense of scale and proportion,” explained Blur Studio visual effects producer Al Shier. “Some shots were designed to convey the vast expanse and depth of the scene, which established the overall tone and setting of the film, drawing the viewer into the story.”
Visually charged and packed with action, Bulletproof Monk is the story of a mysterious Tibetan monk (Chow Yun-Fat) who teams up with a street-smart thief named Kar (Seann William Scott) to protect an ancient scroll that holds the key to unlimited power and eternal youth.

Additionally, Blur created an effect in which the film’s villain, a Nazi named Strucker (Karel Roden), undergoes a dynamic transformation after taking possession of the scroll. He observes his face in a mirror as his features change until he looks 60 years younger.

“Using digital scans of the actor in make-up along with textures derived from plates shot on set, a detailed model of the older character was created,” explained Blur Studio technical supervisor David Stinnett. “The CG head was then animated and match-moved to the actor without make-up.” The final result, one of the film’s key visuals, was a seamless transition from the CG element to the actor’s real head.

According to Richard Bluff, visual effects supervisor at Blur Studio on "Bulletproof Monk," "Brazil r/s had been deployed throughout the Blur Studio pipeline on all of its workstation and renderfarm machines. The software was used to render approximately 90 percent of all the visual effects shots on the film.

"Having worked with Brazil since its inception on previous projects at Blur, I was confident that from a technical point-of-view, Brazil's production-proven functionality would allow us to tweak lighting as needed, in a stable, predictable and accurate manner. We have been especially impressed at Brazil's superior anti-aliasing, allowing us to avoid scintillation problems, such as with bump maps, which was critical on this particular project," said Bluff.

"SplutterFish is privileged to enjoy an ongoing, close and dynamic relationship with the team of talented creatives at Blur Studio," said Scott Kirvan, chief executive officer at SplutterFish. "Blur's aggressive and extremely fast-paced production environment, along with the breadth of its work, has been instrumental in making Brazil a reliable and truly flexible production tool. Anyone that has done feature film work knows that these types of jobs push all the tools -- hardware and software -- to their limits. `Bulletproof Monk' is an ideal demonstration, not only of Brazil's ability to cope, but its ability to shine by offering the control and toolkit that production professionals need."

Among the rendering challenges that Blur faced while creating the visual effects on "Bulletproof Monk" was the ability to maintain a consistent and credible lighting quality. "Brazil's lighting qualities took the variable out of the creative process and allowed us to judge the scale, model and textures of each scene accurately and easily. The `skylight' and `arealight' tools were especially useful in allowing us to create depth and detail on objects that were flat or overcast.

"In short, with Brazil, 3ds max users are afforded real-world lighting calculations that are stable and 'bulletproof' (pun intended). It is like having a lighting expert sitting next to you on the job," added Bluff.

Credits for Blur Studio:

David Stinnett - technical supervisor and cg artist
Richard Bluff - visual effects supervisor, cg artist and matte painter
Al Shier - visual effects producer
Irfan Celik, Jeremy Cook, Willi Hammes, Kirby Miller - cg artists
John Fraser and David Hudnut - matte painters
Seung Jae Lee - camera tracker
Feng Zhu - concept design
Amanda Powell - production assistant
John Sullivan was visual effects supervisor for Bulletproof Monk.

Sources: article on CG Focus, article on 3DLuVr & article on 3D Festival

 
X2

For X2, Frantic Films provided key pre-visualization design and replicated LIDAR (Light Detection and Ranging) technology, creating a photo-realistic holographic map for one of the VFX sequences. Frantic’s work on X2 extended beyond pre-viz, though, culminating in twenty-three post-effects shots.

They used the look of the above-mentioned LIDAR technology as inspiration for a hologram map pictured inside the X-Jet. The goal was to recreate the look and feel of LIDAR technology in CG, generating a strikingly three-dimensional, photo-realistic LIDAR image – in this case, of Stryker’s base – to illustrate a critical plot line involving the character of Wolverine.

In the real world, a LIDAR box works by scanning the environment using laser technology. Wherever the laser hits a surface, the LIDAR hardware creates a dot in 3D space at the location where the hit-point is detected. This effectively results in the virtual re-creation of the environment in small points, which, if created with a high-resolution sampling rate, achieves the effect of smooth and instantly recognizable surfaces.
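The scanning process described above can be sketched in code: each laser return (a pan angle, a tilt angle, and a measured range) becomes one dot in 3D space, and a dense sweep of returns builds the point cloud. A minimal Python illustration, with hypothetical sweep ranges and distances:

```python
import math

def lidar_return_to_point(pan_deg, tilt_deg, range_m):
    """Convert one laser return (pan/tilt angles and measured
    distance) into a 3D point relative to the scanner."""
    pan = math.radians(pan_deg)
    tilt = math.radians(tilt_deg)
    # Spherical-to-Cartesian: the hit point lies along the beam direction.
    x = range_m * math.cos(tilt) * math.cos(pan)
    y = range_m * math.cos(tilt) * math.sin(pan)
    z = range_m * math.sin(tilt)
    return (x, y, z)

# A full sweep of returns builds the point cloud; a higher angular
# sampling rate yields the smooth, recognizable surfaces described above.
cloud = [lidar_return_to_point(p, t, 10.0)
         for p in range(0, 360, 2)      # pan sweep, 2-degree steps
         for t in range(-30, 31, 2)]    # tilt sweep, 2-degree steps
```

Halving the angular step quadruples the point count, which is the resolution/smoothness trade-off the paragraph above alludes to.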

Beyond the creation of a LIDAR-like image, the Frantic crew had to solve various problems to get the CG elements to integrate seamlessly into the live-action plates.

Darren Wall further discusses one such problem using the last shot of the Hologram Map sequence as an example.

"The camera move had a tilt and pan both during a dolly in, with a rack focus. It was the rack focus that was really messing us up, because the blur was severe enough to obliterate any trackable landmarks. Our matchmover tried to 3D track it, our 3D lead tried to track it, and they got us into the ballpark, but it still wasn't perfect. With the DF tracker, we could get superior tracks of different points around the frame, in foreground, midground and deep background to stabilize the shot, even with the focus pull. Then we could stabilize our slippery 3D elements, combine them with the stabilized plate and then re-introduce the original move."

Fusion was also used throughout the Hologram map sequence for the seamless addition of mountainscape background plates to the windows of the X-Jet. Frantic also completed shots involving the re-creation of the eye-scanning system used by Patrick Stewart’s character, Professor Xavier, to enter Cerebro.

More about Frantic’s pre-visualization, from an interview with Randal Shore...

For X2, Frantic Films provided pre-visualization for a number of complex sequences for Mike Fink and Bryan Singer. As well as helping to provide a template for the action, the pre-vis was meant to get people excited about the sequences and was used to help 'sell' ideas like the Dogfight sequence. On top of pre-vis, Frantic completed a variety of shots for the feature, including the holographic map sequence on board the X-Jet.

Some sequences had very detailed boards or were key story points in the script and had to be followed closely. Other sequences, such as the Dog Fight, were barely described in the script – “Dog Fight ensues” – and we worked closely with the VFX supervisor and the director to flesh out the action and key storytelling in the sequence.

The pre-vis sequences Frantic developed primarily revolved around the effects sequences in the movie, although a significant amount of detail was added to further enhance the sequences and to give them a look closer to the final product. Such detail included enhanced rendering of certain elements, compositing effects and sound effects.

The best thing about pre-vis was that Frantic had the opportunity to participate in scene development for several critical sequences in X2, bringing to life both initial and postproduction approaches for the Drake House, Forest Camp, Plastic Prison, Generator Room and Dog Fight sequences. Furthermore, we enjoyed working with Mike Fink, Bryan Singer, Guy Dyas and the rest of the talented X2 production team.

Sources: articles on CG Channel & Eye Online

 
Final Destination 2



Digital Dimension's involvement in FDII began early on with a series of tests to help director David R. Ellis and VFX supervisor Joe Bauer decide on the approach to take for the complex log sequence. The earliest of these tests included simple dynamics and rendering tests using temporary textures and cylinder primitives for the CG logs.

The log sequence was originally planned to be shot practically, if possible, but with promising results from the preliminary CG tests, it was decided that test footage would be shot in Vancouver with the intention of comparing real logs to CG logs. Digital Dimension went on-site to oversee the test. It became apparent during the test that real-life logs would be nearly impossible to control and would not exhibit the liveliness required for the sequence. Now it was a matter of whether CG logs would be believable and offer a more dramatic performance.

It was clear from the beginning that the whole pipeline was going to have to be flexible enough to accommodate changes in timing and feel of the shots. This basically ruled out hand keyframing of the logs and particles.

Once the preliminary dynamics had been worked out, the team began experimenting with different types of layers that would contribute to the final composited test shot. The results of this experimentation would serve as the foundation for the pipeline used for the actual effects shots later in the production. The test shot was a complete success, and it was clear to everyone that using CG logs for the log sequence would provide the necessary control without sacrificing realism. So with Digital Dimension greenlit to create the digital logs, the Sr. Technical Director voyaged to Vancouver, BC, to serve as Digital Dimension's eyes and ears on set, and to acquire detailed location measurements and reference photography for the sequence.

The team then began working out modeling and textures for the final logs, ultimately creating 22 unique logs with custom textures built from reference photos. Close attention was paid to fine details such as scrapes and missing bark on the logs. In addition, custom displacement maps were built for each of the logs. For the final touch, a hair system was used to add frayed wood on some of the logs.

Meanwhile, working from the foundation built for the test shot, the particle debris from the test shots was being refined for the final effects. Using a rule-based particle system, an automated approach was developed which detected the impact of a log and emitted various amounts of bark, dirt, dust and debris based on that impact. This meant that all the particle animation was generated in real time in response to the dynamic motion of the logs. The final approach included details to help convey the realism of the shot, such as making particle debris at rest lie flat on the road without penetrating it.
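The rule-based emission idea can be sketched as a pure function of impact strength: below a threshold nothing is spawned, and above it the particle counts scale with the hit. A minimal Python sketch; the thresholds and scaling factors are illustrative, not production values:

```python
def emit_debris(impact_speed, threshold=1.0):
    """Rule-based emission: a log impact above the threshold spawns
    bark, dirt and dust particle counts scaled by impact strength.
    The factors 5/12/30 are illustrative, not production values."""
    if impact_speed < threshold:
        return {"bark": 0, "dirt": 0, "dust": 0}
    strength = impact_speed - threshold
    return {
        "bark": int(5 * strength),
        "dirt": int(12 * strength),
        "dust": int(30 * strength),
    }

def settle(particle_y, road_y=0.0):
    """Clamp resting debris so it lies flat on the road
    without penetrating the surface."""
    return max(particle_y, road_y)
```

Because emission is driven entirely by the simulated log motion, retiming a shot regenerates consistent debris automatically, with no hand keyframing.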

Visual effects are typically generated from a buildup of several layers to give the compositors the fine control needed to achieve seamless integration with the plate. FDII was no exception, with the complexity of a given shot often determining the number of layers needed for the CG elements.

For example, the CG logs had layers for the logs themselves, large debris such as slabs of bark, small debris such as bark particulate, dust, and even earth kicked up by the logs. Additional layers included direct shadows, contact shadows, and a variety of masks such as log highlights, log cap masks, and height maps from the road for reflection layers. The CG logs also needed to change from dry to wet while interacting with the road, and they typically had great depth - traveling from near to far or vice versa. For this reason, all log-related layers typically included Z information used to apply depth of field and ambient density in the composite. Since the road was wet in many shots, reflection layers were added, nearly doubling the number of layers needed for the log elements. Some layers, such as water spray, were created directly in the composite with 2D particles. Finally, details such as focal changes and reflection quality were carefully matched for maximum integration.
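The Z information mentioned above feeds two per-pixel effects in the composite: depth of field (blur grows with distance from the focal plane) and ambient density (haze grows with distance from camera). A simplified circle-of-confusion-style sketch, with illustrative parameter values:

```python
def blur_radius(z, focus_z, aperture=0.05):
    """Depth of field in the composite: the blur applied to a pixel
    grows with its Z distance from the focal plane. A simplified
    model; 'aperture' is an illustrative knob, not a physical value."""
    return aperture * abs(z - focus_z)

def fog_density(z, density_per_unit=0.01):
    """Ambient (atmospheric) density also reads straight off the
    rendered Z layer: farther pixels get more haze mixed in,
    clamped to fully fogged at 1.0."""
    return min(1.0, z * density_per_unit)
```

Applying these per pixel from the rendered Z layer is what lets logs traveling from near to far rack naturally through focus and haze without re-rendering the 3D scene.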

Digital Dimension Credits:

Facility VFX Supervisor Benoit Girard
Facility VFX Producer Jerome Morin
Facility VFX Designer Edmund Kozin
Sr. Technical Director James Coulter
CG Supervisor Jason Crosby
CG Animators Brandon Davis
Justin Mitchell
Andy Roberts
Marion Spates
Sung-Wook Su
Lead Compositor Leandro Visconti
Compositors Jeremy Appelbaum
Miguel Bautista
Jim Cabonetti
Dan Walker
Adam Zepeda

Source: Digital Dimension

 
Cradle 2 the Grave

"Those that are familiar might be interested to note that on Feb 28, Warner Bros will release Cradle 2 the Grave, starring Jet Li and DMX. One of our highly regarded list members, Ben Girard, (and his fellow players at Digital Dimension) has/have created some very complex and interesting shots for the film. […] They had a VERY difficult set of images to play with and more than a few bits that weren't particularly optimal, but they worked very hard in a very short period of time and did a very fine job indeed. Public 'hats off' to them. Their work is primarily seen in the final act where the bad guy meets his doom.

"Thanks again to all."

Boyd Shermis
VFX Supervisor - Cradle 2 the Grave

Source: Digital Dimension gallery

 
The Last Samurai

Digital Dimension worked on 48 effects shots for The Last Samurai. Those shots included the integration of 3D swords and lances into live-action footage, crowd generation, and cosmetic effects such as a bleeding Tom Cruise. Discreet 3ds max was used extensively by Digital Dimension to complete the assignment.

Swordplay
Three major scenes in the movie featured the addition of a photorealistic 3D sword and lance into live-action footage. Two of the sequences involved Tom Cruise battling an army of warriors using a sword as his weapon, and one sequence involved a Samurai warrior throwing a lance. In all three sequences, the weapons had to be computer-generated, which involved texturing, lighting and matching the weapons to Tom's hands. Because Tom's fighting choreography in those scenes is very fast and close to the camera, there was little chance a practical cheat would have worked.

“Tom was shot with a retracting prop sword that had to be painted and replaced with our CG sword”, explains Girard. “Tracking markers were placed on the soldiers to be stabbed so that we would have a reference in tracking the cg swords and spears. First we would track the 3D camera to match the movement of the live-action camera. There were so many soldiers moving through the frame that most of the camera tracking was done by hand. 3D tracking solutions have difficulty in scenes where markers or reference points are constantly being obscured. We would then matchmove the torso of the soldier to be stabbed. This enabled us to do animated linkages, driving the motion of the sword with the torso of the victim after he had been stabbed. This also facilitated accurate shadows and reflections. Finally we would matchmove Tom’s sword, replacing the stunt sword with a CG replica of the prop sword that was used by Tom in scenes that did not involve stabbing. One of the problems we faced in this process was that the sword handle would rotate unrealistically in the footage. It is very difficult for an actor to simulate the action (or inaction) of a sword being stuck in another actor’s body and reacting to that actor’s movement. This meant that sometimes we deviated from the live action sword to make the shot work. In the final shots, we occasionally replaced the whole sword and in some cases it was only the blade.”

Lighting
Lighting was accomplished through a combination of image-based lighting and traditional 3D lighting. “In general we find that it works best to use image-based lighting for ambient light”, explains Girard, “and virtual lights to control key lights, backlight etc. The images used for ambient light were usually procedural gradients, representing the transition from the ground to the sky (because the fight scenes were outdoors). The colors in those gradients were based on values in the plate.”
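The procedural ground-to-sky gradient Girard describes can be sketched as a lookup driven by the vertical component of each surface normal: faces pointing up receive sky-ish ambient color, faces pointing down receive ground-ish color, with a linear blend between. The RGB values below stand in for colors sampled from the plate and are hypothetical:

```python
def ambient_from_gradient(normal_y, ground_color, sky_color):
    """Procedural ground-to-sky gradient used as ambient light:
    a normal pointing straight up (normal_y = 1) gets the sky color,
    straight down (normal_y = -1) gets the ground color, with a
    linear blend in between. Colors are RGB tuples."""
    t = (normal_y + 1.0) / 2.0   # map [-1, 1] -> [0, 1]
    return tuple(g + t * (s - g) for g, s in zip(ground_color, sky_color))

# Hypothetical values standing in for colors sampled from the plate:
ground = (0.25, 0.20, 0.15)   # dusty earth tones
sky = (0.55, 0.65, 0.80)      # overcast sky tones
```

Keying the gradient endpoints to values sampled from the plate is what ties the ambient term to the photography, while separate virtual lights remain free for key and backlight control.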
Working in a real environment
Producing a realistic sword in battle complete with accurate reflections required several techniques, including shaders, angle-based glossiness and motion blur matching. “Shaders were applied to the 3D sword model to match reference of the real sword. As swords tend to be very reflective, one of the keys to achieving a realistic effect was creating naturalistic reflections. Environments were created by stitching together multiple plate sequences and objects close to the sword that we had modeled and mapped. Another focus in getting good reflections was glossiness based on the normal angle. Basically this meant creating a shader where reflections were sharper as the normal angle became more parallel with the camera perspective and more blurry as the blade turned perpendicular. Glossiness is typically render-intensive so we use it selectively. However, it is a fundamental part of simulating the illumination model and reflectivity of all objects, so as computers get faster we will be using it more and more. While it was quick rendering such relatively simple scenes, great attention was paid to matching the motion blur with the live-action footage.”
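The angle-based glossiness Girard describes boils down to driving the gloss value with the dot product of the surface normal and the view direction. A minimal sketch; the gloss range endpoints are illustrative, not the production shader's values:

```python
def reflection_glossiness(normal_dot_view, sharp=0.95, blurry=0.4):
    """Angle-based glossiness: reflections are sharpest where the
    surface normal is parallel to the view direction (|dot| near 1)
    and blur out as the blade turns perpendicular (|dot| near 0).
    The sharp/blurry endpoints are illustrative values."""
    facing = abs(normal_dot_view)   # 1 = facing camera, 0 = edge-on
    return blurry + facing * (sharp - blurry)
```

Because blurry (glossy) reflections need many more samples per pixel than mirror ones, ramping gloss down only where the blade turns edge-on concentrates the render cost where the softening is actually visible.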

Compositing
Rendered CG sequences were brought into compositing for final integration. “Those sequences were layered over the background plate”, explains Girard. “Color correction, blur and grain were added to finalize the rendered 3D elements. Objects and actors that should pass between the camera and the sword were rotoscoped to facilitate proper occlusion. Cineon files were then rendered out and printed to film.”

Team and tools
A total of 12 people were working on The Last Samurai at all times, though Digital Dimension had four other major productions running concurrently. In addition to ‘The Last Samurai’, Digital Dimension used 3ds max for all of its major projects, including ‘Elf’, ‘A Cinderella Story’ and ‘Scooby-Doo 2: Monsters Unleashed’. “We have scripts and tools that allow us to tailor the software to our unique production needs. 3ds max proves to be very flexible in those situations. The fact that we can access many types of renderers within the same package is a great advantage.”

Source: Article at CG Networks

Further Interview with Justin Mitchell from Digital Dimension on Render Node

"Tom Cruise and Billy Connolly walk out from an alley and down this street (matte painting) that was painted historically correct according to late 19th century photos. I used 3D Studio Max Ver5. All models built, textured, lit and rendered in max. I've been very pleased with the Max renderer, especially in the past few years as they've been working on getting newer features in it and keeping competitive. As always, textures were created in Photoshop.

The 20 or 22 buildings closest to camera (with 2 exceptions) were mapped traditionally (UV - cubic mapping). This was necessary as early tests revealed that camera mapping all the buildings wouldn't work. If we cam mapped all the buildings, the perspective would always be off in one part of the shot because the camera moves so dramatically both sideways and vertically. I always try to cam map 3D shots so I have the most painting freedom possible, but it simply wasn't that easy with this camera move. Click the movie link to see the final shot in motion.

The items camera mapped are the sky, water, the upper right foreground building's second story, the left most building in the shot, the dirt streets and very far buildings.

I rendered out many individual building mattes and sometimes their parts with adjacent buildings as matte objects so I would have mattes to control whatever I wanted to tweak in my precomp. My precomp made up the entire CG scene and dictated the final lighting and color that was then sent to the compositor who added the extracted people.

The reference for this period was real San Francisco photos from the 1876-ish period. Some were sent to us from the production and some we found on the web. Nonetheless, the Director wanted the scene to be historically accurate, so the buildings that existed on California Street in San Francisco look like what you see here, and even the cable cars (just invented) are accurate in their design to the period (front car is open... no glass windows)."

You can see the shot here on Chris Stoski's website

Source: Chris Stoski quoted from mattepainting.org forum thread

“First off, I want to thank you for your most excellent work for me on THE LAST SAMURAI. You have done both yourselves and me proud! When I originally told the director Ed Zwick, “If you see an effects shot then I have ruined the movie…” I had high hopes and loads of ambition. But you all have actually pulled that off! And I believe that you will see proof of this in the reviews and the critiques of your friends, family and industry co-workers.”


Jeffrey A. Okun
VFX Supervisor - The Last Samurai

Source: Digital Dimension

 
