We describe a deep learning based method for estimating 3D facial expression coefficients. Unlike previous work, our process does not rely on facial landmark detection methods as a proxy step. Recent methods have shown that a CNN can be trained to regress accurate and discriminative 3D morphable model (3DMM) representations directly from image intensities. By forgoing landmark detection, these methods were able to estimate shapes for occluded faces appearing in unprecedented viewing conditions. We build on those methods by showing that facial expressions can also be estimated by a robust, deep, landmark-free approach. Our ExpNet CNN is applied directly to the intensities of a face image and regresses a 29D vector of 3D expression coefficients. We propose a unique method for collecting data to train our network, leveraging the robustness of deep networks to training label noise. We further offer a novel means of evaluating the accuracy of estimated expression coefficients: by measuring how well they capture facial emotions on the CK+ and EmotiW-17 emotion recognition benchmarks. We show that our ExpNet produces expression coefficients which better discriminate between facial emotions than those obtained using state-of-the-art facial landmark detectors. Moreover, this advantage grows as image scales drop, demonstrating that our ExpNet is more robust to scale changes than landmark detectors. Finally, our ExpNet is orders of magnitude faster than its alternatives.
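The landmark-free interface the abstract describes is simple: a face image goes in, and a 29D expression-coefficient vector comes out, with no landmark-detection proxy step. The following NumPy sketch illustrates only that interface with a toy, randomly initialized convolution-plus-linear head; the architecture, filter sizes, and weights are placeholders for illustration and are not the authors' ExpNet.

```python
import numpy as np

def conv2d(x, w):
    # Valid 2D convolution of a grayscale image x (H, W) with one filter w (k, k).
    k = w.shape[0]
    H, W = x.shape
    out = np.zeros((H - k + 1, W - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + k, j:j + k] * w)
    return out

def toy_expression_regressor(image, rng):
    # Toy stand-in for a landmark-free regressor: conv filters + ReLU +
    # global average pooling, then a linear head to 29 coefficients.
    # All weights are random placeholders (hypothetical, untrained).
    filters = rng.standard_normal((8, 3, 3)) * 0.1
    feats = np.array([np.maximum(conv2d(image, f), 0.0).mean() for f in filters])
    head = rng.standard_normal((29, 8)) * 0.1  # 29D, matching the abstract's output size
    return head @ feats

rng = np.random.default_rng(0)
face = rng.random((32, 32))            # stand-in for a cropped face image in [0, 1]
coeffs = toy_expression_regressor(face, rng)
print(coeffs.shape)                    # (29,)
```

The point of the sketch is the shape of the problem, not the model: no landmark coordinates appear anywhere between the raw intensities and the 29D output.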
|Host publication title||Proceedings - 13th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2018|
|Publisher||Institute of Electrical and Electronics Engineers Inc.|
|Digital Object Identifiers (DOIs)|
|Publication status||Published - 5 Jun 2018|
|Event||13th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2018 - Xi'an, China|
Duration: 15 May 2018 → 19 May 2018
|Name||Proceedings - 13th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2018|
|Conference||13th IEEE International Conference on Automatic Face and Gesture Recognition, FG 2018|
|Period||15/05/18 → 19/05/18|
|Bibliographical note||Publisher Copyright: © 2018 IEEE.|