Reverse Engineering of Generative Models: Inferring Model Hyperparameters From Generated Images

Vishal Asnani, Xi Yin, Tal Hassner, Xiaoming Liu

Research output: Contribution to journal › Article › Peer-review

Abstract

State-of-the-art (SOTA) Generative Models (GMs) can synthesize photo-realistic images that are hard for humans to distinguish from genuine photos. Identifying and understanding manipulated media are crucial to mitigating the societal concerns about the potential misuse of GMs. We propose to perform reverse engineering of GMs to infer model hyperparameters from the images generated by these models. We define a novel problem, 'model parsing', as estimating GM network architectures and training loss functions by examining their generated images - a task seemingly impossible for human beings. To tackle this problem, we propose a framework with two components: a Fingerprint Estimation Network (FEN), which estimates a GM fingerprint from a generated image by training with four constraints that encourage the fingerprint to have desired properties, and a Parsing Network (PN), which predicts network architecture and loss functions from the estimated fingerprint. To evaluate our approach, we collect a fake image dataset with 100K images generated by 116 different GMs. Extensive experiments show encouraging results in parsing the hyperparameters of unseen models. Finally, our fingerprint estimation can be leveraged for deepfake detection and image attribution, as we show by reporting SOTA results on both the deepfake detection (Celeb-DF) and image attribution benchmarks.
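The two-component framework described in the abstract can be illustrated with a minimal, hypothetical PyTorch sketch: a fingerprint estimator that maps a generated image to an image-sized fingerprint signal, followed by a parsing network with separate heads for architecture hyperparameters and loss-function types. All class names, layer choices, and output dimensions (e.g. n_arch_params=15, n_loss_types=10) are illustrative assumptions, not the paper's actual implementation, and the four fingerprint constraints used for training are not shown.

```python
import torch
import torch.nn as nn

class FingerprintEstimationNetwork(nn.Module):
    """Hypothetical FEN: maps a generated image to an image-sized fingerprint map."""
    def __init__(self, channels=3):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, channels, 3, padding=1),
        )

    def forward(self, image):
        # The fingerprint is modeled here as an image-sized residual signal.
        return self.body(image)

class ParsingNetwork(nn.Module):
    """Hypothetical PN: predicts architecture hyperparameters and loss-function
    types from an estimated fingerprint."""
    def __init__(self, channels=3, n_arch_params=15, n_loss_types=10):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(channels, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.arch_head = nn.Linear(128, n_arch_params)  # architecture hyperparameter targets
        self.loss_head = nn.Linear(128, n_loss_types)   # multi-label loss-function prediction

    def forward(self, fingerprint):
        feat = self.encoder(fingerprint)
        return self.arch_head(feat), self.loss_head(feat)

# Usage sketch: estimate a fingerprint, then parse hyperparameters from it.
fen, pn = FingerprintEstimationNetwork(), ParsingNetwork()
image = torch.randn(1, 3, 128, 128)            # stand-in for a generated image
fingerprint = fen(image)
arch_pred, loss_pred = pn(fingerprint)
loss_probs = torch.sigmoid(loss_pred)          # multi-label probabilities over loss types
```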

Original language: English
Pages (from-to): 15477-15493
Number of pages: 17
Journal: IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume: 45
Issue number: 12
Digital Object Identifiers (DOIs)
Publication status: Published - 1 Dec 2023

Bibliographical note

Publisher Copyright:
© 1979-2012 IEEE.
