Abstract
We present Face Swapping GAN (FSGAN) for face swapping and reenactment. Unlike previous work, we offer a subject-agnostic swapping scheme that can be applied to pairs of faces without requiring training on those faces. We derive a novel iterative deep learning based approach for face reenactment which adjusts for significant pose and expression variations and can be applied to a single image or a video sequence. For video sequences, we introduce continuous interpolation of the face views based on reenactment, Delaunay triangulation, and barycentric coordinates. Occluded face regions are handled by a face completion network. Finally, we use a face blending network for seamless blending of the two faces while preserving the target skin color and lighting conditions. This network uses a novel Poisson blending loss, combining Poisson optimization with a perceptual loss. We compare our approach to existing state-of-the-art systems and show our results to be both qualitatively and quantitatively superior. This work describes extensions of the FSGAN method proposed in an earlier conference version of our work [1], as well as additional experiments and results.
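For the video case, the abstract describes interpolating among reenacted face views using a Delaunay triangulation and barycentric coordinates. Below is a minimal sketch of that interpolation step, assuming each available source view is indexed by a 2D head pose (e.g., yaw and pitch) and that the query pose falls inside the convex hull of the source poses; the function and variable names are illustrative only and are not taken from the paper or its released code.

```python
import numpy as np
from scipy.spatial import Delaunay

def interpolate_view(poses, views, query_pose):
    """Blend the three source views whose poses enclose `query_pose`.

    poses      : (N, 2) array of (yaw, pitch) angles for the available face views
    views      : (N, H, W, 3) array of the corresponding (reenacted) face images
    query_pose : length-2 target (yaw, pitch)
    """
    query_pose = np.asarray(query_pose, dtype=float)

    tri = Delaunay(poses)                      # triangulate the 2D pose space
    simplex = tri.find_simplex(query_pose)
    if simplex == -1:
        raise ValueError("query pose lies outside the convex hull of source poses")

    # Barycentric coordinates of the query inside its containing triangle
    T = tri.transform[simplex]                 # affine map: pose -> barycentric coords
    b = T[:2] @ (query_pose - T[2])
    weights = np.append(b, 1.0 - b.sum())      # third weight completes the sum to 1

    idx = tri.simplices[simplex]               # indices of the three enclosing views
    blended = np.tensordot(weights, views[idx].astype(np.float32), axes=1)
    return blended
```

In the full pipeline the interpolated view would then pass through the face completion and blending networks described in the abstract; the sketch above only illustrates the pose-space interpolation itself.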
| Original language | English |
| --- | --- |
| Pages (from-to) | 560-575 |
| Number of pages | 16 |
| Journal | IEEE Transactions on Pattern Analysis and Machine Intelligence |
| Volume | 45 |
| Issue number | 1 |
| Early online date | 26 Apr 2022 |
| DOIs | |
| State | Published - 2023 |
Bibliographical note
Publisher Copyright: IEEE
Keywords
- Deep Learning
- Face Reenactment
- Face Swapping
- Faces
- Generators
- Image segmentation
- Rendering (computer graphics)
- Three-dimensional displays
- Training
- Videos