Final Thesis – Research Sources

https://www.respeecher.com/blog/de-aging-technology-changing-hollywood-future-film-making

De-Aging in Film

De-aging technology has become a game-changer in the film industry, allowing actors to portray younger versions of themselves with remarkable realism. The technique involves editing digital images or applying computer-generated imagery (CGI) to make actors appear younger on screen[1]. Notable examples include Kurt Russell in “Guardians of the Galaxy Vol. 2” and Robert De Niro in “The Irishman,” where advanced VFX techniques were used to convincingly portray the actors at various stages of their lives[2]. The process often combines makeup, CGI, and sometimes younger body doubles to achieve a seamless transformation, pushing the boundaries of visual storytelling and enabling filmmakers to explore characters’ histories without recasting roles[3].

Body Enhancement Techniques

Body enhancement techniques in VFX have become increasingly sophisticated, allowing filmmakers to alter actors’ physiques dramatically. A prime example is the work done on Dwayne Johnson’s appearance in “Black Adam,” where Weta Digital used a high-fidelity digital asset to enhance and at times modify his muscular build for different scenes[1]. These techniques go beyond simple touch-ups, often involving the creation of entirely digital body doubles that can be manipulated to achieve the desired look. VFX artists can sculpt muscles, adjust proportions, and even create fantastical body transformations, pushing the boundaries of what’s possible in visual storytelling while raising questions about the authenticity of on-screen performances.

https://www.marieclaire.com/culture/a26709/how-movie-visual-effects-make-actors-look-younger/

https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2016.00334/full

Body Size Misperception and Visual Adaptation (abstract)

Body size misperception is common amongst the general public and is a core component of eating disorders and related conditions. While perennial media exposure to the “thin ideal” has been blamed for this misperception, relatively little research has examined visual adaptation as a potential mechanism. We examined the extent to which the bodies of “self” and “other” are processed by common or separate mechanisms in young women. Using a contingent adaptation paradigm, experiment 1 gave participants prolonged exposure to images both of the self and of another female that had been distorted in opposite directions (e.g., expanded other/contracted self), and assessed the aftereffects using test images both of the self and other. The directions of the resulting perceptual biases were contingent on the test stimulus, establishing at least some separation between the mechanisms encoding these body types. Experiment 2 used a cross adaptation paradigm to further investigate the extent to which these mechanisms are independent. Participants were adapted either to expanded or to contracted images of their own body or that of another female. While adaptation effects were largest when adapting and testing with the same body type, confirming the separation of mechanisms reported in experiment 1, substantial misperceptions were also demonstrated for cross adaptation conditions, demonstrating a degree of overlap in the encoding of self and other. In addition, the evidence of misperception of one’s own body following exposure to “thin” and to “fat” others demonstrates the viability of visual adaptation as a model of body image disturbance both for those who underestimate and those who overestimate their own size.

Disney’s Research on Software to “Re-Age” Actors

Photorealistic digital re-aging of faces in video is becoming increasingly common in entertainment and advertising. But the predominant 2D painting workflow often requires frame-by-frame manual work that can take days to accomplish, even by skilled artists. Although research on facial image re-aging has attempted to automate and solve this problem, current techniques are of little practical use as they typically suffer from facial identity loss, poor resolution, and unstable results across subsequent video frames. In this paper, we present the first practical, fully-automatic and production-ready method for re-aging faces in video images. Our first key insight is in addressing the problem of collecting longitudinal training data for learning to re-age faces over extended periods of time, a task that is nearly impossible to accomplish for a large number of real people. We show how such a longitudinal dataset can be constructed by leveraging the current state-of-the-art in facial re-aging that, although failing on real images, does provide photoreal re-aging results on synthetic faces. Our second key insight is then to leverage such synthetic data and formulate facial re-aging as a practical image-to-image translation task that can be performed by training a well-understood U-Net architecture, without the need for more complex network designs. We demonstrate how the simple U-Net, surprisingly, allows us to advance the state of the art for re-aging real faces on video, with unprecedented temporal stability and preservation of facial identity across variable expressions, viewpoints, and lighting conditions. Finally, our new face re-aging network (FRAN) incorporates simple and intuitive mechanisms that provide artists with localized control and creative freedom to direct and fine-tune the re-aging effect, a feature that is largely important in real production pipelines and often overlooked in related research work.
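To make the "U-Net as image-to-image translation" idea from the abstract concrete, here is a minimal, illustrative sketch in NumPy. It is NOT Disney's FRAN implementation: the single downsample/upsample stage, the random stand-in for learned weights, and the way age is supplied as constant extra channels are all simplifications I am assuming for illustration. It only shows the overall data flow: the network receives the frame plus age conditioning, an encoder path shrinks it, a decoder path restores resolution, a skip connection reinjects full-resolution detail, and the output is a per-pixel correction applied to the input frame.

```python
import numpy as np

def down(x):
    """Encoder step: 2x2 average-pool downsample of an (H, W, C) array."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def up(x):
    """Decoder step: 2x nearest-neighbor upsample."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def toy_reage(frame, src_age, dst_age):
    """Toy U-Net-style forward pass (hypothetical simplification):
    concatenate age conditioning, encode, decode, merge a skip
    connection, and predict an RGB delta added back to the frame."""
    h, w, _ = frame.shape
    age_maps = np.stack([np.full((h, w), src_age / 100.0),
                         np.full((h, w), dst_age / 100.0)], axis=-1)
    x = np.concatenate([frame, age_maps], axis=-1)      # (H, W, 5)

    skip = x                      # skip connection kept at full resolution
    bottleneck = down(x)          # encoder halves spatial size
    decoded = up(bottleneck)      # decoder restores spatial size
    merged = np.concatenate([decoded, skip], axis=-1)   # U-Net concat

    # Stand-in for a learned 1x1 convolution mapping features -> RGB delta;
    # in a trained network these weights would come from optimization.
    rng = np.random.default_rng(0)
    w_out = rng.normal(0.0, 0.01, size=(merged.shape[-1], 3))
    delta = merged @ w_out
    return np.clip(frame + delta, 0.0, 1.0)

frame = np.random.default_rng(1).random((8, 8, 3))      # fake 8x8 RGB frame
out = toy_reage(frame, src_age=35, dst_age=65)
assert out.shape == frame.shape
```

The skip connection is the detail that matters for the thesis argument: it is what lets a simple U-Net preserve facial identity and fine texture while the bottleneck path carries the coarse age transformation, which is the property the abstract highlights over more complex network designs.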