
References
[1] Rameen Abdal, Yipeng Qin, and Peter Wonka. Image2StyleGAN: How to embed images into the StyleGAN latent space? In ICCV, 2019.
[2] Michael Albright and Scott McCloskey. Source generator attribution via inversion. In CVPR Workshops, 2019.
[3] Carl Bergstrom and Jevin West. Which face is real? http://www.whichfaceisreal.com/learn.html, Accessed November 15, 2019.
[4] Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer, 2006.
[5] Andrew Brock, Jeff Donahue, and Karen Simonyan. Large scale GAN training for high fidelity natural image synthesis. CoRR, abs/1809.11096, 2018.
[6] Yann N. Dauphin, Harm de Vries, and Yoshua Bengio. Equilibrated adaptive learning rates for non-convex optimization. CoRR, abs/1502.04390, 2015.
[7] Emily L. Denton, Soumith Chintala, Arthur Szlam, and Robert Fergus. Deep generative image models using a Laplacian pyramid of adversarial networks. CoRR, abs/1506.05751, 2015.
[8] Vincent Dumoulin, Ethan Perez, Nathan Schucher, Florian Strub, Harm de Vries, Aaron Courville, and Yoshua Bengio. Feature-wise transformations. Distill, 2018. https://distill.pub/2018/feature-wise-transformations.
[9] Vincent Dumoulin, Jonathon Shlens, and Manjunath Kudlur. A learned representation for artistic style. CoRR, abs/1610.07629, 2016.
[10] Aviv Gabbay and Yedid Hoshen. Style generator inversion for image enhancement and animation. CoRR, abs/1906.11880, 2019.
[11] R. Ge, X. Feng, H. Pyla, K. Cameron, and W. Feng. Power measurement tutorial for the Green500 list. https://www.top500.org/green500/resources/tutorials/, Accessed March 1, 2020.
[12] Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, and Wieland Brendel. ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. CoRR, abs/1811.12231, 2018.
[13] Golnaz Ghiasi, Honglak Lee, Manjunath Kudlur, Vincent Dumoulin, and Jonathon Shlens. Exploring the structure of a real-time, arbitrary neural artistic stylization network. CoRR, abs/1705.06830, 2017.
[14] Xavier Glorot and Yoshua Bengio. Understanding the difficulty of training deep feedforward neural networks. In Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics, pages 249–256, 2010.
[15] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins Studies in the Mathematical Sciences. Johns Hopkins University Press, 2013.
[16] Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. Generative adversarial networks. In NIPS, 2014.
[17] Ishaan Gulrajani, Faruk Ahmed, Martín Arjovsky, Vincent Dumoulin, and Aaron C. Courville. Improved training of Wasserstein GANs. CoRR, abs/1704.00028, 2017.
[18] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. CoRR, abs/1512.03385, 2015.
[19] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Delving deep into rectifiers: Surpassing human-level performance on ImageNet classification. CoRR, abs/1502.01852, 2015.
[20] Martin Heusel, Hubert Ramsauer, Thomas Unterthiner, Bernhard Nessler, and Sepp Hochreiter. GANs trained by a two time-scale update rule converge to a local Nash equilibrium. In Proc. NIPS, pages 6626–6637, 2017.
[21] Xun Huang and Serge J. Belongie. Arbitrary style transfer in real-time with adaptive instance normalization. CoRR, abs/1703.06868, 2017.
[22] Animesh Karnewar and Oliver Wang. MSG-GAN: Multi-scale gradients for generative adversarial networks. In Proc. CVPR, 2020.
[23] Tero Karras, Timo Aila, Samuli Laine, and Jaakko Lehtinen. Progressive growing of GANs for improved quality, stability, and variation. CoRR, abs/1710.10196, 2017.
[24] Tero Karras, Samuli Laine, and Timo Aila. A style-based generator architecture for generative adversarial networks. In Proc. CVPR, 2019.
[25] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization. In ICLR, 2015.
[26] Alex Krizhevsky, Ilya Sutskever, and Geoffrey E. Hinton. ImageNet classification with deep convolutional neural networks. In NIPS, pages 1097–1105, 2012.
[27] Tuomas Kynkäänniemi, Tero Karras, Samuli Laine, Jaakko Lehtinen, and Timo Aila. Improved precision and recall metric for assessing generative models. In Proc. NeurIPS, 2019.
[28] Barbara Landau, Linda B. Smith, and Susan S. Jones. The importance of shape in early lexical learning. Cognitive Development, 3(3), 1988.
[29] Haodong Li, Han Chen, Bin Li, and Shunquan Tan. Can forensic detectors identify GAN generated images? In Proc. Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 2018.
[30] Lars Mescheder, Andreas Geiger, and Sebastian Nowozin. Which training methods for GANs do actually converge? CoRR, abs/1801.04406, 2018.
[31] Takeru Miyato, Toshiki Kataoka, Masanori Koyama, and Yuichi Yoshida. Spectral normalization for generative adversarial networks. CoRR, abs/1802.05957, 2018.
[32] Dmitry Nikitko. StyleGAN – Encoder for official TensorFlow implementation. https://github.com/Puzer/stylegan-encoder/, 2019.
[33] Augustus Odena, Jacob Buckman, Catherine Olsson, Tom B. Brown, Christopher Olah, Colin Raffel, and Ian Goodfellow. Is generator conditioning causally related to GAN performance? CoRR, abs/1802.08768, 2018.