Publication Type : Conference Paper
Publisher : Springer India
Source : Advances in Intelligent Systems and Computing
Url : https://doi.org/10.1007/978-81-322-2656-7_11
Campus : Coimbatore
School : School of Computing
Department : Computer Science and Engineering
Year : 2016
Abstract :
This paper explores a methodology that merges the capabilities of ChildNet with a collaborative diffusion model to enhance facial image synthesis. ChildNet, known for its adeptness at generating age-specific facial images from parental inputs, uses genetic data to predict age progression. The collaborative diffusion model complements this by refining the synthesized images, improving detail and realism through advanced image segmentation techniques. This integration enables the creation of accurate, photo-realistic facial predictions, which are crucial for applications requiring precise visual representations of age progression. The approach benefits sectors such as digital entertainment, where lifelike character development is essential, as well as forensic analysis and genealogical research, where accurate age progression can provide substantial insights. By combining these models, we introduce a robust toolset for producing detailed and realistic age-progressed facial images, setting new benchmarks in the accuracy and quality of digital face-aging technology.
Cite this Research Publication : G. Abirami and S. Padmavathi, "Differential Illumination Enhancement Technique for a Nighttime Video," Advances in Intelligent Systems and Computing, Springer India, 2016, https://doi.org/10.1007/978-81-322-2656-7_11