Final Thesis – Research Sources

https://www.respeecher.com/blog/de-aging-technology-changing-hollywood-future-film-making

De-Aging in Film

De-aging technology has become a game-changer in the film industry, allowing actors to portray younger versions of themselves with remarkable realism. This 3D effect technique involves editing digital images or applying computer-generated imagery (CGI) to make actors appear younger on screen. Notable examples include Kurt Russell in “Guardians of the Galaxy Vol. 2” and Robert De Niro in “The Irishman,” where advanced VFX techniques were used to convincingly portray the actors at various stages of their lives. The process often combines makeup, CGI, and sometimes the use of younger body doubles to achieve a seamless transformation, pushing the boundaries of visual storytelling and enabling filmmakers to explore characters’ histories without recasting roles.

Body Enhancement Techniques

Body enhancement techniques in VFX have become increasingly sophisticated, allowing filmmakers to alter actors’ physiques dramatically. A prime example is the work done on Dwayne Johnson’s appearance in “Black Adam,” where Weta Digital used a high-fidelity digital asset to enhance and sometimes modify his muscular build for different scenes. These techniques go beyond simple touch-ups, often involving the creation of entirely digital body doubles that can be manipulated to achieve the desired look. VFX artists can sculpt muscles, adjust proportions, and even create fantastical body transformations, pushing the boundaries of what’s possible in visual storytelling while raising questions about the authenticity of on-screen performances.

https://www.marieclaire.com/culture/a26709/how-movie-visual-effects-make-actors-look-younger/

https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2016.00334/full

Body size misperception is common amongst the general public and is a core component of eating disorders and related conditions. While perennial media exposure to the “thin ideal” has been blamed for this misperception, relatively little research has examined visual adaptation as a potential mechanism. We examined the extent to which the bodies of “self” and “other” are processed by common or separate mechanisms in young women. Using a contingent adaptation paradigm, experiment 1 gave participants prolonged exposure to images both of the self and of another female that had been distorted in opposite directions (e.g., expanded other/contracted self), and assessed the aftereffects using test images both of the self and other. The directions of the resulting perceptual biases were contingent on the test stimulus, establishing at least some separation between the mechanisms encoding these body types. Experiment 2 used a cross adaptation paradigm to further investigate the extent to which these mechanisms are independent. Participants were adapted either to expanded or to contracted images of their own body or that of another female. While adaptation effects were largest when adapting and testing with the same body type, confirming the separation of mechanisms reported in experiment 1, substantial misperceptions were also demonstrated for cross adaptation conditions, demonstrating a degree of overlap in the encoding of self and other. In addition, the evidence of misperception of one’s own body following exposure to “thin” and to “fat” others demonstrates the viability of visual adaptation as a model of body image disturbance both for those who underestimate and those who overestimate their own size.

Disney’s research on software to “re-age” actors

Photorealistic digital re-aging of faces in video is becoming increasingly common in entertainment and advertising. But the predominant 2D painting workflow often requires frame-by-frame manual work that can take days to accomplish, even by skilled artists. Although research on facial image re-aging has attempted to automate and solve this problem, current techniques are of little practical use as they typically suffer from facial identity loss, poor resolution, and unstable results across subsequent video frames. In this paper, we present the first practical, fully-automatic and production-ready method for re-aging faces in video images. Our first key insight is in addressing the problem of collecting longitudinal training data for learning to re-age faces over extended periods of time, a task that is nearly impossible to accomplish for a large number of real people. We show how such a longitudinal dataset can be constructed by leveraging the current state-of-the-art in facial re-aging that, although failing on real images, does provide photoreal re-aging results on synthetic faces. Our second key insight is then to leverage such synthetic data and formulate facial re-aging as a practical image-to-image translation task that can be performed by training a well-understood U-Net architecture, without the need for more complex network designs. We demonstrate how the simple U-Net, surprisingly, allows us to advance the state of the art for re-aging real faces on video, with unprecedented temporal stability and preservation of facial identity across variable expressions, viewpoints, and lighting conditions. Finally, our new face re-aging network (FRAN) incorporates simple and intuitive mechanisms that provides artists with localized control and creative freedom to direct and fine-tune the re-aging effect, a feature that is largely important in real production pipelines and often overlooked in related research work.
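Since the paper frames re-aging as a straightforward image-to-image translation task handled by a standard U-Net, here is a minimal PyTorch sketch of that kind of setup. This is my own illustrative reading of the abstract, not Disney’s actual FRAN code; the channel layout (RGB frame plus source/target age maps, predicting an RGB delta) and all sizes are assumptions.

```python
# Minimal image-to-image U-Net sketch (assumed layout, not the FRAN implementation).
import torch
import torch.nn as nn

def block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class ReAgeUNet(nn.Module):
    def __init__(self):
        super().__init__()
        # input: RGB frame + 2 channels encoding source and target age (assumption)
        self.enc1, self.enc2 = block(5, 64), block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = block(128, 256)
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = block(256, 128)
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = block(128, 64)
        self.out = nn.Conv2d(64, 3, 1)   # predicts an RGB delta to add to the frame

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        b = self.bottleneck(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.out(d1)

# Usage sketch: a frame plus broadcast age maps (35 -> 20 years, normalised).
frame = torch.rand(1, 3, 256, 256)
ages = torch.zeros(1, 2, 256, 256)
ages[:, 0], ages[:, 1] = 35 / 100, 20 / 100
delta = ReAgeUNet()(torch.cat([frame, ages], dim=1))
reaged = (frame + delta).clamp(0, 1)
```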

Final Major Project – Final Video / Breakdown / Reflection

Final Video – subtitled

VFX Breakdown

Final Pass -> Raw Footage -> Clay Render -> Clay Render + Raw Footage Roto -> Final Pass

Problems encountered

  • Camera tracking errors
  • Unable to open project file after a software update (19/11/2024)
  • Over-budget during production
  • Concept/story changed mid-way through post-production
  • Whatever this is: freezing up my entire PC at the most convenient time


Self-Reflection

Looking back on this project, I feel a deep sense of pride in what I’ve accomplished, but I also recognize significant areas where I can improve. While the outcomes were satisfying, there’s room to enhance the fidelity of my animations and environment designs. I want to push beyond relying on old camcorder filters to mask imperfections and instead fully showcase the photorealistic potential of my work.

One area I’ve noticed that needs attention is documenting my process and research. As a naturally spontaneous worker, I often dive headfirst into projects without pausing to document what I’m doing. This approach might work in a solo setting but could be a challenge in a collaborative environment, where clear communication and organized documentation are vital. Developing this habit will help me grow as a team player and professional.

I also aim to broaden my expertise with industry-standard software like Maya and Nuke. While I currently lean heavily on tools like Unreal Engine and Blender, I realize many studios still depend on the traditional workflow these programs provide. Expanding my skills will make me more versatile and aligned with industry expectations.

Another skill I want to improve is camera tracking. A lack of proficiency in this area became a stumbling block for me during the “Content A” project. Despite the significant investment of time, energy, and money in shooting the footage, I couldn’t use it effectively due to poor tracking. This experience highlighted how critical this skill is for achieving high-quality results.

Finally, I need to refine my time management skills, particularly in post-production planning. I’ve noticed that some stages take much longer than anticipated, which cuts into the time I have for other tasks. By developing a more structured workflow and adhering to timelines, I’ll be able to balance each phase of production more effectively.

This reflection has given me a clear roadmap for improvement, and I’m excited to tackle these challenges as I continue to grow as a VFX artist.

Alternate version (Original before current change)

Original version 12/10/2024

I have clarified the reasoning behind replacing this original version with an entirely new one in the “Final Major Project – Post-production” blog post below.

Final Major Project – Finishing/Sound Design

For the post-compositing process, I decided to use DaVinci Resolve, a powerful industry-standard tool for color grading that also offers a complete editing and compositing suite in the same package. This greatly simplifies my finishing workflow, as I do not need to waste time exporting and re-importing my footage into another piece of software.

Color grading in Resolve

The “Camcorder” Effect

This effect is achieved using a node in Resolve called “Analogue Damage”, combined with other soften and sharpen nodes and further grading effects to match the real-life footage with the CG-rendered imagery. Here is the before and after:

I think the camcorder filter sells the effect far more than the raw composited footage, as it “dirties” the clean, computer-generated image. I also manually brought up the highlights to create the “overexposed” look you would get when shooting on a camera with low dynamic range. This concept came up in Gonzalo’s class, where he teaches theory about cameras and dynamic range.
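As a very rough illustration of that highlight idea (not the actual Resolve node math, and the gain/knee values are guesses), the “low dynamic range” look is essentially pushing exposure up and clipping what the sensor can no longer hold:

```python
import numpy as np

def overexposed_look(img, gain=1.6, knee=0.85):
    """Rough approximation of the camcorder highlight clip on a float RGB
    image in [0, 1]. The gain and knee are illustrative, not the grade's values."""
    pushed = img * gain                       # raise overall exposure
    rolled = np.where(pushed > knee,          # soft roll-off above the knee...
                      knee + (pushed - knee) * 0.4,
                      pushed)
    return np.clip(rolled, 0.0, 1.0)          # ...then hard-clip the blown highlights
```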

Sound Design

I edited and designed the sound effects inside Resolve, since it was convenient and all of my footage was already imported into the software. I layered and distorted many sounds to suit the camcorder effect, removing the low end with an equaliser, which sells the impression of sound recorded on a camcorder. Most of the sound effects used here were downloaded and licensed from Epidemic Sound; some I recorded myself, and I arranged an ADR session with the actress.
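For anyone curious what “removing the low end” amounts to outside Resolve, here is a minimal sketch using an assumed Butterworth high-pass in SciPy; the 200 Hz cutoff and 16-bit WAV assumption are mine, not the settings used in the actual mix:

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfilt

def camcorder_eq(in_path, out_path, cutoff_hz=200):
    """Strip the low end from a 16-bit PCM WAV to mimic a camcorder's thin mic.
    cutoff_hz is an assumed value, not the one dialled in inside Resolve."""
    rate, data = wavfile.read(in_path)
    data = data.astype(np.float32)
    sos = butter(4, cutoff_hz, btype="highpass", fs=rate, output="sos")
    filtered = sosfilt(sos, data, axis=0)            # works for mono or stereo
    filtered = np.clip(filtered, -32768, 32767)      # stay in 16-bit range
    wavfile.write(out_path, rate, filtered.astype(np.int16))
```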

Final Major Project – Post-production

UPDATE 01/11/2024:

After multiple iterations of the original version, I found myself dissatisfied with the story’s impact and the believability of its spectacle. This prompted me to explore alternate versions that could elevate the narrative. While looking for inspiration and references online, I came across images that sparked new ideas and intrigued me enough to begin reconstructing the story around this concept.

This alternate version will reuse the footage shot on set from the original version but will feature an entirely new background and storyline. It also replaces “Content A,” a previously scrapped project that was abandoned due to untrackable camera motion in the video.

The new version aims to surpass the original by introducing richer backstories, thrilling plot points, and elevated spectacles. These additions will create more tension and a stronger overall narrative, ensuring a more impressive and believable result.

Process of designing environment

After gathering references, I started by designing the public park where the story unfolds. The park is based on Chatuchak Park in Bangkok, Thailand. I sculpted the landscape to closely replicate the real-life park, then scattered trees and plants from Quixel Megascans to fill the spaces.

Then I moved on to the main “hero” building of the story, where the action descends into chaos. This building is also a real-life replica, based on the currently under-construction “Mochit Complex”. I chose it as the story’s main facade because of its halted construction, left frozen in place and unfinished. It reminds me of a similar incident in 1997, when another skyscraper’s construction was halted amid the Thai economic crisis; that building gained a reputation for being filled with homeless people, gangs and drug traffickers. I took that as inspiration and applied it here.

Layout Design

The premise of this story is that it is styled like “found-footage” video, so all of the action in the storyline is shot from ground level by a group of influencers who happen to be there as events unfold. I overlaid the first frame of the plate against the environment to get a sense of what the shot would look like with the actors in frame, without having to wait until compositing.

Camera tracking / export

This part was quite tricky for me. I tried multiple software packages to track the on-set footage I had shot, including After Effects, Blender, Nuke and Fusion, but to no avail: the camera movement is too extreme and unpredictable for automatic tracking. I finally settled on tracking it manually in PFTrack, an old-fashioned industry-standard tool, alongside 3DEqualizer, and chose PFTrack for its more user-friendly interface. After masking out the actor’s movement in the plate and finishing the track and solve, I ran into another problem: exporting to FBX and importing into Unreal did not work properly. I spent countless days and hours trying to figure it out, to the point where I almost gave up. However, I found a tutorial online on this particular issue, which involves using Maya as an intermediary between the two programs (PFTrack and Unreal). Since Unreal does not understand the ASCII FBX exported directly from PFTrack, I needed to use Maya to bake the camera path and re-export it as an FBX 2020 file that Unreal can read.
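Since this Maya detour was the key to unblocking the shot, here is a rough sketch of what that baking/re-export step might look like with maya.cmds. The camera name, frame range and output path are placeholders, and the exact FBX flags can vary by Maya version:

```python
# Run inside Maya's Script Editor (Python). Names and paths are placeholders:
# adjust "trackedCamera1" and the frame range to match the PFTrack export.
import maya.cmds as cmds
import maya.mel as mel

cam = "trackedCamera1"          # camera transform imported from the PFTrack FBX
start, end = 1001, 1250         # shot frame range (assumed)

# Bake the camera animation so every frame has an explicit keyframe,
# instead of relying on curves or constraints Unreal may not interpret.
cmds.bakeResults(cam, time=(start, end), simulation=True,
                 attribute=["tx", "ty", "tz", "rx", "ry", "rz"])

# Load the FBX plug-in and export a binary FBX 2020 file for Unreal.
cmds.loadPlugin("fbxmaya", quiet=True)
cmds.select(cam, replace=True)
mel.eval('FBXExportFileVersion -v FBX202000')   # "FBX 2020" version
mel.eval('FBXExportInAscii -v false')           # binary, not ASCII
mel.eval('FBXExportBakeComplexAnimation -v true')
mel.eval(r'FBXExport -f "D:/project/tracked_camera_baked.fbx" -s')
```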

Tracking in PFtrack

A tutorial mentioned on how to bake animation in Maya.

Once the tracking data was successfully imported, I could overlay the real footage on top of the sequencer as a reference when positioning the camera.

Building Design / Modelling

For modelling the buildings, I used a mix of methods depending on the building and how close it sits to camera. Hero buildings (the ones closer to the camera that need the most fidelity) I modelled myself, and for complex elements like cranes or detailed construction equipment I used KitBash3D assets to blend in extra realism. I used a variety of software for different kinds of models. For distant buildings I often used SketchUp, as it is great at extracting models from satellite imagery and can look convincingly real from far away. However, SketchUp-exported models cannot be used directly in Unreal, so I had to use intermediary software such as 3ds Max to import and re-export the mesh in a format Unreal supports.

For more complex models and signage, I used Blender, as it supports SVG import (for signs and logos).
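For reference, a minimal sketch of that SVG-to-mesh step in Blender’s Python console; the file path, collection name and extrude depth are placeholders for illustration, not my actual project files:

```python
# Run inside Blender's Python console / Text Editor (SVG import add-on enabled).
import bpy

svg_path = "D:/project/signs/shop_logo.svg"     # placeholder path
bpy.ops.import_curve.svg(filepath=svg_path)

# The importer typically groups the curves into a collection named after the file.
coll = bpy.data.collections.get("shop_logo.svg")
for obj in (list(coll.objects) if coll else []):
    if obj.type != "CURVE":
        continue
    obj.data.extrude = 0.005                    # give the flat logo some depth (assumed value)
    bpy.ops.object.select_all(action="DESELECT")
    obj.select_set(True)
    bpy.context.view_layer.objects.active = obj
    bpy.ops.object.convert(target="MESH")       # meshes export cleanly to FBX for Unreal
```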

This model I’ve purchased from Evermotion

https://evermotion.org/shop/show_product/28-skyscraper-/13832

Characters

I acquired the “Spec Ops” asset from BigMediumSmall, as the model is highly detailed and comes with some useful preset animations. However, most of the animations I ended up using in the final sequence were from Mixamo.

https://www.bigmediumsmall.com/specialops

Crowds Simulation

For background characters, I opted to use Anima for its crowd simulation capability. The reason I chose Anima over other solutions is the fidelity and realism of its crowd models and animation. A bonus is that it integrates directly into the Unreal scene without having to export a separate pass. It is a paid subscription service, but the cost is justified by how much it streamlined my workflow.

When I finished blocking and planning out the scene in Anima, I could export the characters as 3D meshes and reposition them, or adjust their sizing, animation and settings, directly inside Unreal. However, I could not preview the animation inside Unreal itself, so I had to render the scene at low quality to check whether the walking people overlapped with any objects or each other. This is a bit inconvenient: whenever I want to make an adjustment, I have to render the whole thing again just to check the animation.

FX/Simulation

I used EmberGen as an alternative to Houdini because it offers real-time visualisation of the simulation. I needed the smoke in VDB format so that I could position it in 3D space inside Unreal Engine.

Compositing

Rendering all the footage from Unreal took around three days, at 2,048 samples across 3,000+ frames. It takes far longer than a traditional Unreal render because I opted for path tracing, which uses the same ray-traced lighting techniques as offline renderers (Blender, Maya, Cinema 4D). I then used Fusion to composite the Unreal plate and the real photo plate together.
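As a hypothetical aside, the same sample count can also be pushed from the editor’s Python console before queuing a render. The console variable names below are real UE5 path-tracer settings, but the bounce value and the bare None world context are assumptions; this is just a sketch of an alternative to setting the values through the render settings UI:

```python
# Rough sketch for the Unreal Editor Python console (values are illustrative).
import unreal

for cmd in (
    "r.PathTracing.SamplesPerPixel 2048",   # sample count used for the final render
    "r.PathTracing.MaxBounces 8",           # assumed bounce count
):
    unreal.SystemLibrary.execute_console_command(None, cmd)
```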

Final Node Tree

Sound Design and Additional ADR Session

After all the compositing and color grading was finalised, I arranged an ADR session with the original actress in the film to record new lines, as the story had been significantly altered from the original version.

I also have to mention the effort I put into building the entire environment from scratch for the original, unused version. The following pictures are a tribute to the version that was replaced.

Original Version – 14/10/2024:

Environment Design Final

Top: Referenced Image from Google Maps / Bottom: Recreated in Unreal Engine

Final Major Project – Pre-production

Early environment lookdev

I decided to create a virtual environment before going into principal photography, in order to understand and see the picture more clearly. This has helped me think more creatively, since I can actually see the environment before production.

Test Render of early environment

Production Planning

Before heading into production, my team and I dedicated considerable time to pre-planning to ensure a smooth shoot despite the unique challenges we faced. To keep costs low, we opted for a makeshift green screen holder rather than building an entire green screen set, knowing this would require more hands-on adjustments during filming. We began by scouting the location, taking precise measurements to understand how best to fit our setup within the space. Next, we conducted practice runs with our cameras, actors, and team members responsible for holding the green screen in place behind the actors. This allowed us to fine-tune positioning and adapt to any movement on set. We thoroughly tested both camera and sound equipment, and did a preliminary shoot to troubleshoot our approach to camera tracking for post-production. We also wanted to see how well the green screen would perform in this less-than-ideal setup. Finally, we rehearsed lines with the actors to make sure they felt confident and prepared for the real shoot. These efforts provided us with a solid foundation and minimized issues during the actual production.

Final Major Project – Ideas/Concepting

In today’s social media landscape, influencers often push boundaries, chasing views and attention by performing stunts that can be reckless, even dangerous. For this project, I wanted to create a piece of short viral content for TikTok that serves as a social commentary on this trend. By showing influencers willing to risk their safety for the sake of spectacle, my goal is to highlight the lengths people go to for a moment of online fame, often without considering the potential consequences.

Shooting this concept in real life would be nearly impossible due to the danger involved, so I decided to set the entire scene in a virtual world. By using Unreal Engine, I’m able to replicate a realistic environment where these risky behaviors can be safely portrayed. The footage will be composited in Fusion and polished in DaVinci Resolve, giving the final piece a heightened sense of realism. This project isn’t just about VFX; it’s a narrative designed to prompt reflection on the impact of social media and the blurred line between reality and performance for today’s digital influencers.

Theme referenced from my favourite movies:

Examples of influencers doing stunts to gain views/likes on social media:

Storyboarding:

The story goes: an influencer attempts to boost her TikTok views by risking her life, walking across the busy Ratchaprasong intersection filled with traffic. The stunt ends tragically when she gets hit by a bus, but that’s not the end. A miracle occurs: the TikToker somehow manages to stop the bus with her mind. Astonished, she starts experimenting with her newfound power, playfully moving the bus while turning to the camera with a look of surprise. However, she loses focus, accidentally causing the bus to crash into a nearby building, which explodes. Panicked, she stops recording and runs away.

Final Thesis Project

Main Concept

  • Recreate one of the busiest streets in Bangkok as a 3D environment that is as photorealistic as possible
  • Format it as a TikTok-style video featuring an internet influencer attempting to perform a dance on the busy streets of Ratchaprasong Road while avoiding passing cars and motorcycles
  • Serve as a mockumentary/parody of today’s social media landscape, where influencers try to outdo one another with ever more outrageous content or dangerous stunts to garner attention
  • Post the finished video on TikTok and observe its performance on key factors: virality (how widely the video spreads based on engagement), believability (whether people believe the stunt was performed for real) and debatability (how people form opinions or take sides in response to the video)

References

https://mgronline.com/travel/detail/9670000006371
@newsonetiktok

Heavy online backlash: a celebrity “soldier’s daughter” films a clip of herself walking down the middle of the road while cars drive past, and even comments fearlessly, “Just keep walking, the cars will stop for us,” sparking a heated debate among netizens about whether this was appropriate. #News1 #Newsstory #ลูกสาวทหาร #เดินถ่ายคลิปกลางถนน #เซเลปลูกสาวทหาร #เดินๆไปเดี๋ยวรถเค้าหยุดให้เราเอง

♬ original sound – news1 – news1

Idea Board

Location – Ratchadamri Road (CentralwOrld Front Entrance)

https://www.google.com/maps/@13.7458579,100.5402838,3a,53.9y,29.5h,84.26t/data=!3m6!1e1!3m4!1sbMWK0IkoEcFusQxYhFc9pA!2e0!7i16384!8i8192?entry=ttu

Production Methodology 

Green screen setup in broad daylight + tracking marker for camera motion tracking

Why?

  • Saves time on rotoscoping and produces finer detail in the matte, especially around the upper body
  • Daylight provides the best lighting match, as the virtual environment is also set in daylight

Cons

  • Cost of buying materials/setting up green screen may be an issue
  • Green spill from the materials may cause issues during post-production compositing

New Idea 09/05/2024

Instead of just one video, I conceptualised creating a social media influencer persona built around mocking today’s social media landscape.

New Idea 08/07/2024

Sticking with the first concept, but producing two versions with entirely different concepts/videos based on the same location.

Concept 1 – (“LISA – Rockstar” TikTok)

The currently popular TikTok trend is based on K-pop artist LISA’s “Rockstar” music video, in which people stand in the middle of Yaowarat Road. This concept takes a different approach by having the talent dance in the middle of Ratchaprasong Road instead.

Exploratory Practice – Project 2

Plot:

A machine lands in a desert environment; the device opens and releases a substance that helps pollinate the surroundings. Plants start to grow and spread from the initial insertion point, culminating in a big “plant-plosion”. The camera zooms out to reveal the deer, now covered in plants.

Project Members:

Hu Yang Hao

Xiang Tian Hang

Cholpat Saralamba

Beacher Chen

References:

Storyboard:

Post-Production

My responsibility in this group project was compositing, as well as colour grading, editing and sound design.

UI Design for Scene 2A

Final Output

Scene 3A-C in one composition

Scene 4B Breakdown

Final Output:

Breakdown

Exploratory Practice – Project 1 Journal

In this project, I decided to create a short about a giant monster attacking Elephant Park.

Concept: Hand-held found-footage style shot film about a man discovering a monster roaming through a city at night.

Method: film real footage of walking through a park; when the camera whip-pans, seamlessly cut to full-CG footage of the city and monster, then repeat the process as the camera whips back to real footage of the man running away.

Storyline: a man takes a walk in the park, filming something for his homework (Elephant Park) while trying to get home. Suddenly, he hears a loud thumping noise in the distance, coming from behind him. He turns his camera to record what it is and realises it is a huge monster towering over the skyscrapers, roaring as it takes down buildings. Seeing this, the man panics, then turns the camera back to the monster, but now it is looking directly at him, and it lunges towards him.

First Version

In the first version, I chose a daytime setting. However, after careful consideration, I decided it was not scary enough, and nighttime also hides the more noticeably obvious CG render details.

Current Version

VFX Breakdown

https://youtu.be/6fILfNBu9Xo

Critical Practice – Deepfakes

(Draft 1 – as of 29/05/2024)

How can Artificial Intelligence or Computer-Generated Imagery spread misinformation on TikTok and how to combat the spread?

(Research Title has since then been updated to “HOW DEEPFAKES CAN SPREAD MISINFORMATION ON TIKTOK AND HOW TO COMBAT THE SPREAD“)

Abstract

The rise of social media platforms like TikTok has revolutionised communication and information sharing. However, the widespread rise of artificial intelligence (AI) and easier access to computer-generated imagery (CGI) have led to concerns about misuse. CGI enables the manipulation of reality, making it difficult to distinguish between genuine and fabricated content, and AI systems such as OpenAI’s recently revealed “Sora” can generate somewhat convincing video from nothing but text prompts. These tools can be used by bad actors to disseminate misinformation, and the algorithm-driven nature of TikTok amplifies the reach of misleading videos, potentially influencing public opinion and inciting harmful actions. Combating the spread of CGI-altered misinformation requires a multi-pronged approach encompassing digital literacy education, regulatory measures, and technological advancements to detect such content. By addressing this issue, we can preserve the integrity of online information sharing and promote a more informed digital landscape.

References

CGI used on TikTok to trick viewers with light-hearted but fabricated content.

https://www.nicematin.com/faits-de-societe/non-uber-ne-propose-pas-des-courses-en-jet-prive-depuis-laeroport-de-cannes-856143

Darker side of “Deepfakes”

https://edition.cnn.com/videos/business/2021/03/02/tom-cruise-tiktok-deepfake-orig.cnn-business

@socialamour

The rise of CGI has caused a few brands to creatively redefine their advertising and we think the results are pretty incredible. Here are some of the best CGI ads we’ve seen so far! If we HAD to pick a favourite it would probably be @Jacquemus . They are absolutely killing it at implementing these deep-fake ads into their marketing strategy! Let us know which CGI ad is your favourite? #cgi #deepfake #FOOH #fauxOOH #viralvideos

♬ original sound – Ian Asher

Deepfake presidents used in Russia-Ukraine war

https://www.bbc.co.uk/news/technology-60780142

Solution

https://www.cogitatiopress.com/mediaandcommunication/article/view/3494