1. Antoniou, A., Storkey, A., & Edwards, H. (2017). Data augmentation generative adversarial networks. arXiv:1711.04340.
2. Azadi, S., Fisher, M., Kim, V., Wang, Z., & Shechtman, E. (2017). Multi-content GAN for few-shot font style transfer. arXiv:1712.00516.
3. Chang, A. X., Funkhouser, T., Guibas, L., Hanrahan, P., Huang, Q., Li, Z., et al. (2015). ShapeNet: An information-rich 3D model repository. Tech. Rep., Stanford University—Princeton University—Toyota Technological Institute at Chicago. arXiv:1512.03012 [cs.GR].
4. Chen, X., Duan, Y., Houthooft, R., Schulman, J., Sutskever, I., & Abbeel, P. (2016). InfoGAN: Interpretable representation learning by information maximizing generative adversarial nets. In NIPS.
5. Cordts, M., Omran, M., Ramos, S., Rehfeld, T., Enzweiler, M., Benenson, R., et al. (2016). The Cityscapes dataset for semantic urban scene understanding. In CVPR.