TY - JOUR
T1 - FSGANv2: Improved Subject Agnostic Face Swapping and Reenactment
T2 - IEEE Transactions on Pattern Analysis and Machine Intelligence
AU - Nirkin, Yuval
AU - Keller, Yosi
AU - Hassner, Tal
N1 - Publisher Copyright: IEEE
PY - 2023/1
Y1 - 2023/1
N2 - We present Face Swapping GAN (FSGAN) for face swapping and reenactment. Unlike previous work, we offer a subject agnostic swapping scheme that can be applied to pairs of faces without requiring training on those faces. We derive a novel iterative deep learning-based approach for face reenactment that adjusts for significant pose and expression variations and can be applied to a single image or a video sequence. For video sequences, we introduce a continuous interpolation of the face views based on reenactment, Delaunay triangulation, and barycentric coordinates. Occluded face regions are handled by a face completion network. Finally, we use a face blending network for seamless blending of the two faces while preserving the target skin color and lighting conditions. This network uses a novel Poisson blending loss combining Poisson optimization with a perceptual loss. We compare our approach to existing state-of-the-art systems and show our results to be both qualitatively and quantitatively superior. This work describes extensions of the FSGAN method, proposed in an earlier conference version of our work (Nirkin et al. 2019), as well as additional experiments and results.
KW - Face swapping
KW - deep learning
KW - face reenactment
UR - http://www.scopus.com/inward/record.url?scp=85129663310&partnerID=8YFLogxK
DO - 10.1109/TPAMI.2022.3155571
M3 - Article
C2 - 35471874
AN - SCOPUS:85129663310
SN - 0162-8828
VL - 45
SP - 560
EP - 575
JO - IEEE Transactions on Pattern Analysis and Machine Intelligence
JF - IEEE Transactions on Pattern Analysis and Machine Intelligence
IS - 1
ER -