Final Pass -> Raw Footage -> Clay Render -> Clay Render + Raw Footage Roto -> Final Pass
Problems encountered
Camera tracking errors
Unable to open project file after a software update (19/11/2024)
Over-budget during production
Concept/story changed mid-way through post-production
Whatever this is, freezing up my entire PC at the most convenient time
Self-Reflection
Looking back on this project, I feel a deep sense of pride in what I’ve accomplished, but I also recognize significant areas where I can improve. While the outcomes were satisfying, there’s room to enhance the fidelity of my animations and environment designs. I want to push beyond relying on old camcorder filters to mask imperfections and instead fully showcase the photorealistic potential of my work.
One area I’ve noticed that needs attention is documenting my process and research. As a naturally spontaneous worker, I often dive headfirst into projects without pausing to document what I’m doing. This approach might work in a solo setting but could be a challenge in a collaborative environment, where clear communication and organized documentation are vital. Developing this habit will help me grow as a team player and professional.
I also aim to broaden my expertise with industry-standard software like Maya and Nuke. While I currently lean heavily on tools like Unreal Engine and Blender, I realize many studios still depend on the traditional workflow these programs provide. Expanding my skills will make me more versatile and aligned with industry expectations.
Another skill I want to improve is camera tracking. A lack of proficiency in this area became a stumbling block for me during the “Content A” project. Despite the significant investment of time, energy, and money in shooting the footage, I couldn’t use it effectively due to poor tracking. This experience highlighted how critical this skill is for achieving high-quality results.
Finally, I need to refine my time management skills, particularly in post-production planning. I’ve noticed that some stages take much longer than anticipated, which cuts into the time I have for other tasks. By developing a more structured workflow and adhering to timelines, I’ll be able to balance each phase of production more effectively.
This reflection has given me a clear roadmap for improvement, and I’m excited to tackle these challenges as I continue to grow as a VFX artist.
Alternate version (Original before current change)
Original version 12/10/2024
I have clarified the reasoning behind replacing this original version with an entirely new version in the “Final Major Project – Post-production” blog post below.
For the post-compositing process, I decided to use DaVinci Resolve, a powerful industry-standard colour-grading tool that also offers a complete editing and compositing suite in the same package. This greatly simplifies my finishing workflow, as I do not need to waste time exporting and re-importing my footage into another piece of software.
Color grading in Resolve
The “Camcorder” Effect
This effect is done using a node in Resolve called “Analogue Damage”, along with soften and sharpen nodes and other grading effects, to match real-life footage with CG-rendered imagery. Here is the before and after:
I think the camcorder filter sells the effect a lot more than the raw composited footage, as it “dirties” the clean image generated by a computer. I also manually brought up the highlights to create the “overexposed” look you would get from shooting on a camera with low dynamic range. This concept was introduced to me in Gonzalo’s class, where he covered camera theory and dynamic range.
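As a rough illustration of the idea (not my actual Resolve node tree), here is a minimal Python/NumPy sketch of how boosting and then hard-clipping the highlights approximates a low-dynamic-range camcorder exposure; the gain value is a placeholder assumption.

```python
import numpy as np

def camcorder_exposure(image: np.ndarray, gain: float = 1.6) -> np.ndarray:
    """Approximate a low-dynamic-range 'camcorder' look.

    image: float RGB array in 0..1.
    gain:  placeholder highlight boost; the real grade was tuned by eye in Resolve.
    """
    boosted = image * gain                 # push exposure up so highlights bloom
    clipped = np.clip(boosted, 0.0, 1.0)   # hard clip, mimicking a cheap sensor's limited range
    return clipped

# A mid-grey pixel stays usable, a bright pixel blows out to pure white.
print(camcorder_exposure(np.array([0.4, 0.4, 0.4])))   # -> [0.64 0.64 0.64]
print(camcorder_exposure(np.array([0.8, 0.9, 1.0])))   # -> [1. 1. 1.]
```

In Resolve itself the same idea is achieved with the gain/highlight controls and the Analogue Damage node rather than code.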
Sound Design
I edited and designed the sound effects inside Resolve, as it was convenient to work where all of my footage was already imported. I layered and distorted many sounds to suit the camcorder effect, removing the low end with an equaliser, which sells the impression of sound recorded on a camcorder. Most of the sound effects used here are downloaded and licensed from Epidemic Sound; some of the sounds I recorded myself, and I arranged an ADR session with the actress.
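To show the EQ move in isolation (the real filtering was done with Resolve's built-in EQ, not code), here is a small SciPy sketch of a high-pass filter that thins out the low end; the 300 Hz cutoff and the file names are assumptions for the example.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

# Hypothetical source clip; any 16-bit mono or stereo WAV works.
rate, audio = wavfile.read("explosion_raw.wav")
audio = audio.astype(np.float64)

# 4th-order Butterworth high-pass around 300 Hz (placeholder cutoff)
# to strip the low end, similar to the EQ move made in Resolve.
sos = butter(4, 300, btype="highpass", fs=rate, output="sos")
filtered = sosfilt(sos, audio, axis=0)

wavfile.write("explosion_camcorder.wav", rate, filtered.astype(np.int16))
```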
After multiple iterations of the original version, I found myself dissatisfied with the story’s impact and the spectacle’s believability. This prompted me to explore alternate versions that could elevate the narrative. Through inspiration and references discovered online, I came across images that sparked new ideas. It intrigued me enough to begin reconstructing the story around this concept.
This alternate version will reuse the footage shot on set from the original version but will feature an entirely new background and storyline. It also replaces “Content A,” a previously scrapped project that was abandoned due to untrackable camera motion in the video.
The new version aims to surpass the original by introducing richer backstories, thrilling plot points, and elevated spectacles. These additions will create more tension and a stronger overall narrative, ensuring a more impressive and believable result.
Process of designing environment
After gathering references, I started by designing the public park where the story unfolds. The park is based on Chatuchak Park in Bangkok, Thailand. I sculpted the landscape to closely replicate the real-life park, then randomly scattered trees and plants from Quixel Megascans to fill the spaces.
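In practice the scattering was done with Unreal's foliage tools, but as a rough sketch of the same random-placement idea via Unreal's editor Python API (the asset path, tree count and area are placeholder assumptions):

```python
import random
import unreal

# Hypothetical Megascans tree asset path; swap in any imported StaticMesh.
TREE_ASSET = "/Game/Megascans/3D_Plants/Tree_01/Tree_01"
AREA = 20000.0      # placeholder: roughly +/-200 m footprint, in Unreal units (cm)
COUNT = 150         # placeholder tree count

mesh = unreal.EditorAssetLibrary.load_asset(TREE_ASSET)

for _ in range(COUNT):
    location = unreal.Vector(random.uniform(-AREA, AREA),
                             random.uniform(-AREA, AREA),
                             0.0)                               # assumes flat ground at z = 0
    rotation = unreal.Rotator(yaw=random.uniform(0.0, 360.0))   # random facing for variety
    unreal.EditorLevelLibrary.spawn_actor_from_object(mesh, location, rotation)
```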
Then I moved on to the main “hero” building of the story, where the action unfolds into chaos. This building is also a real-life replica, based on the currently under-construction “Mochit Complex”. I chose this building as the main facade of the story because its halted construction has left it frozen in place, unfinished. It reminds me of a similar incident in 1997, when another skyscraper's construction was halted amid the Thai economic crisis. That particular building had a reputation for being filled with homeless people, gangs and drug traffickers, so I took that as inspiration and applied it here.
Layout Design
The premise of this story is that it is styled like a “found-footage” video, so all the action in the storyline is shot from ground level by a group of influencers who happen to be there as it unfolds. I overlaid the first frame of the plate against the environment to get a sense of what the shot would look like with the actors in frame, without having to wait until compositing.
Camera tracking / export
This part was quite tricky for me. I tried multiple pieces of software to track the on-set footage I had shot, including After Effects, Blender, Nuke and Fusion, but to no avail: the camera movement was too extreme and unpredictable for automatic tracking. I finally settled on tracking it manually in PFtrack, an old-fashioned industry-standard tool, alongside trying 3DEqualizer, and chose PFtrack for its more user-friendly interface. After masking out the actor's movement in the plate and finishing the track and solve, I ran into another problem: exporting to FBX and importing into Unreal did not work properly. I spent countless days and hours trying to figure it out, to the point where I almost gave up. Eventually I found a tutorial online covering this exact issue, which involves using Maya as an intermediary between the two programs (PFtrack and Unreal). Since Unreal does not understand the ASCII FBX exported directly from PFtrack, I needed to use Maya to bake the camera path and re-export it as an FBX 2020 file that Unreal can read (a rough sketch of this step follows below).
Tracking in PFtrack
A tutorial I found that covers how to bake animation in Maya.
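Following the same approach as that tutorial, a minimal Maya Python sketch of the bake-and-re-export step might look like this; the camera name, frame range and output path are assumptions, and the FBX settings are driven through Maya's FBX plug-in MEL commands.

```python
import maya.cmds as cmds
import maya.mel as mel

# Placeholder names/paths: adjust to match the camera imported from PFtrack.
CAMERA = "trackedCamera1"
START, END = 1001, 1250
OUT_FBX = "C:/project/tracking/shot010_camera_baked.fbx"

# Bake the camera's animation so every frame has an explicit key,
# instead of relying on the solved curves coming out of PFtrack.
cmds.bakeResults(CAMERA, time=(START, END), simulation=True,
                 attribute=["tx", "ty", "tz", "rx", "ry", "rz"])

# Export only the camera as a binary FBX 2020 file that Unreal can read.
cmds.select(CAMERA, replace=True)
mel.eval('FBXResetExport')
mel.eval('FBXExportFileVersion -v FBX202000')   # FBX 2020
mel.eval('FBXExportInAscii -v false')           # binary, not ASCII
mel.eval('FBXExportBakeComplexAnimation -v true')
mel.eval('FBXExport -f "{}" -s'.format(OUT_FBX))
```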
After the tracking data was successfully imported, I could use the real footage as an overlay on top of the sequencer as a reference when positioning the camera.
Building Design / Modelling
For modelling the buildings, I used a mix of methods depending on the building and how close it is to camera. Hero buildings (the ones closer to the camera that need fidelity) I modelled myself, and for complex elements like cranes or detailed construction equipment I used KitBash3D assets to blend in extra realism. I used a variety of software for different kinds of models. For distant buildings, I often used SketchUp, as it is great at extracting models from satellite imagery and can look convincingly real from far away. However, SketchUp-exported models cannot be used directly in Unreal, so I had to use intermediary software like 3ds Max to import and re-export the meshes in a format Unreal supports.
For more complex models and signage, I used Blender, as it supports SVG import (for signs and logos).
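For reference, a small Blender Python sketch of the SVG-to-mesh step (in practice I did this through the UI; the file path and extrusion depth are placeholder assumptions):

```python
import bpy

# Hypothetical sign artwork; Blender imports SVG files as curve objects.
bpy.ops.import_curve.svg(filepath="//signs/shop_logo.svg")

# Give every curve object a little depth and convert it to a mesh so it can
# travel to Unreal with the rest of the geometry. (In a real scene you would
# filter to just the freshly imported collection.)
for obj in [o for o in bpy.context.scene.objects if o.type == 'CURVE']:
    obj.data.extrude = 0.01                      # placeholder depth in metres
    bpy.ops.object.select_all(action='DESELECT')
    obj.select_set(True)
    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.convert(target='MESH')
```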
I acquired the “Spec Ops” asset from BigMediumSmall, as the model is very highly detailed and comes with some useful preset animations. However, most of the animations I ended up using in the final sequence were from Mixamo.
https://www.bigmediumsmall.com/specialops
Crowds Simulation
For background characters, I opted to use Anima for its crowd-simulation ability. The reason I chose Anima over other solutions is the fidelity and realism of its crowd models and animation. A bonus is that it can be integrated directly into the Unreal scene itself without having to export a separate pass. It is a paid subscription service, but the cost is justified by how much it streamlined my workflow.
Once I finished blocking and planning out the scene in Anima, I could export the crowd as 3D meshes and reposition them, or adjust their sizing, animation or settings, directly inside Unreal. However, I could not preview the animation inside Unreal, so I had to render the scene at low quality to check whether the walking people overlapped with any objects or with each other, which is inconvenient when I want to make an adjustment and have to render the whole thing just to check the animation.
FX/Simulation
I used EmberGen as an alternative to Houdini, as it offers real-time visualization of the simulation. I needed the smoke in VDB format so that I could position it in 3D space inside Unreal Engine.
Compositing
After rendering all the footage from Unreal, which took around three days at 2048 samples across 3000+ frames, I moved on to compositing. The reason it takes so much longer than a traditional Unreal render is that I opted for path tracing, which uses the same ray-tracing techniques as offline renderers (Blender, Maya, Cinema 4D). I used Fusion to composite the Unreal plate and the real photographed plate together.
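(For context, three days is roughly 259,000 seconds, so 3,000+ frames works out to a little under 90 seconds per frame on average.)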
Final Node Tree
Sound Design and Additional ADR Session
After all the compositing and color grading was finalised, I arranged an ADR session with the original actress in the film to record new lines, as the story had been significantly altered from the original version.
I also have to mention the effort I put into building the entire environment from scratch for the original, unused version. The following pictures are a tribute to the version that was replaced.
Original Version – 14/10/2024:
Environment Design Final
Top: Referenced Image from Google Maps / Bottom: Recreated in Unreal Engine
I decided to create a virtual environment before going into principal photography, in order to understand and see the picture more clearly. This helped me think more creatively, since I could actually see the environment before production.
Test Render of early environment
Production Planning
Before heading into production, my team and I dedicated considerable time to pre-planning to ensure a smooth shoot despite the unique challenges we faced. To keep costs low, we opted for a makeshift green screen holder rather than building an entire green screen set, knowing this would require more hands-on adjustments during filming. We began by scouting the location, taking precise measurements to understand how best to fit our setup within the space. Next, we conducted practice runs with our cameras, actors, and team members responsible for holding the green screen in place behind the actors. This allowed us to fine-tune positioning and adapt to any movement on set. We thoroughly tested both camera and sound equipment, and did a preliminary shoot to troubleshoot our approach to camera tracking for post-production. We also wanted to see how well the green screen would perform in this less-than-ideal setup. Finally, we rehearsed lines with the actors to make sure they felt confident and prepared for the real shoot. These efforts provided us with a solid foundation and minimized issues during the actual production.
In today’s social media landscape, influencers often push boundaries, chasing views and attention by performing stunts that can be reckless, even dangerous. For this project, I wanted to create a piece of short viral content for TikTok that serves as a social commentary on this trend. By showing influencers willing to risk their safety for the sake of spectacle, my goal is to highlight the lengths people go to for a moment of online fame, often without considering the potential consequences.
Shooting this concept in real life would be nearly impossible due to the danger involved, so I decided to set the entire scene in a virtual world. By using Unreal Engine, I’m able to replicate a realistic environment where these risky behaviors can be safely portrayed. The footage will be composited in Fusion and polished in DaVinci Resolve, giving the final piece a heightened sense of realism. This project isn’t just about VFX; it’s a narrative designed to prompt reflection on the impact of social media and the blurred line between reality and performance for today’s digital influencers.
Theme referenced from my favourite movies:
Examples of influencers doing stunts to gain views/likes on social media:
Storyboarding:
The story goes: an influencer attempts to boost her TikTok views by risking her life, walking across the busy Ratchaprasong intersection filled with traffic. The stunt ends tragically when she gets hit by a bus, but that is not the end. A miracle occurs: the TikToker somehow manages to stop the bus with her mind. Astonished, she starts experimenting with her newfound power, playfully moving the bus while turning to the camera with a look of surprise. However, she loses focus, accidentally causing the bus to crash into a nearby building, which explodes. Panicked, she stops recording and runs away.
Final Thesis Project
Main Concept
Recreate one of the busiest streets in Bangkok as a 3D environment that is as photorealistic as possible
Formatted as a TikTok-style video featuring an internet influencer attempting to perform a dance on the busy streets of Ratchaprasong Road while avoiding passing cars and motorcycles
Serves as a mockumentary/parody of today’s social media landscape, where influencers try to outdo one another with ever more outrageous content or dangerous stunts to garner attention
Post the video on TikTok and observe its performance on key factors: virality (how widely the video spreads based on engagement), believability (whether people believe the stunt was performed for real) and debatability (how people form opinions or take sides in response to the video)
Green screen setup in broad daylight + tracking marker for camera motion tracking
Why?
Saves time on rotoscoping and produces better fine detail in the matte, especially around the upper body
Daylight provides the best lighting match, as the virtual environment is also set in daylight
Cons
Cost of buying materials/setting up green screen may be an issue
Green spill from the materials may cause issues during post-production while compositing
New Idea 09/05/2024
Instead of just one video, I conceptualized creating a social media influencer persona built around mocking today’s social media landscape.
New Idea 08/07/2024
Sticking with the first concept, but producing two versions with entirely different concepts/videos based on the same location
Concept 1 – (“LISA – Rockstar” TikTok)
A currently popular TikTok trend is based on K-pop artist LISA’s “Rockstar” music video, where people stand in the middle of Yaowarat Road. This concept takes a different approach by having the talent dance in the middle of Ratchaprasong Road instead.