Guilin Liu
Research Scientist at NVIDIA
Santa Clara, CA

Email: guilinl at nvidia.com
CV
LinkedIn
Google Scholar
I am a Research Scientist in the Applied Deep Learning Research group at NVIDIA, where we work on deep learning research. I received my Ph.D. in Computer Science from George Mason University in the summer of 2017. In 2012, I received a Bachelor's degree in Spatial Informatics & Digitalized Technology (Software Engineering and Geographic Information Systems) with a minor in Finance from Wuhan University. I was a research intern at TTI-Chicago in the summer of 2015 and at Adobe Research in the summer of 2016.

I am looking for research interns to work on research topics in deep learning for vision and graphics. Please send me an email with your CV if you are interested.

News:

Nov. 2018: The code and paper for Partial Convolution based Padding (which outperforms existing padding schemes in our experiments) are released at Code.
Sep. 2018: Our online inpainting demo is now available at https://www.nvidia.com/research/inpainting/ (Note: the natural image model is consistent with the model described in the ECCV paper; the face image model has been improved since ECCV.)
Sep. 2018: Video-to-Video Synthesis was accepted to NIPS 2018.
July 2018: Two papers were accepted to ECCV 2018.
May 2018: Showed the image inpainting demo during NVIDIA CEO Jensen Huang's keynote at GTC Taiwan.
May 2018: We released a new paper, Image Inpainting for Irregular Holes Using Partial Convolutions (project page with FAQ). The YouTube video, which has been viewed over 1,000,000 times, can be found here. The project was also featured in many press outlets, including Fortune and Forbes.

Publications:

   Partial Convolution based Padding
Guilin Liu, Kevin J. Shih, Ting-Chun Wang, Fitsum A. Reda, Karan Sapra, Zhiding Yu, Andrew Tao, Bryan Catanzaro
arXiv preprint
Paper   Code
   Image Inpainting for Irregular Holes Using Partial Convolutions
Guilin Liu, Fitsum A. Reda, Kevin J. Shih, Ting-Chun Wang, Andrew Tao, Bryan Catanzaro
ECCV 2018
Paper   Project   Video   Fortune   Forbes
   Video-to-Video Synthesis
Ting-Chun Wang, Ming-Yu Liu, Jun-Yan Zhu, Guilin Liu, Andrew Tao, Jan Kautz, Bryan Catanzaro
NIPS 2018
Paper   Project   Video   arXiv   Code
   SDC-Net: Video prediction using spatially-displaced convolution
Fitsum A. Reda, Guilin Liu, Kevin J. Shih, Robert Kirby, Jon Barker, David Tarjan, Andrew Tao, Bryan Catanzaro
ECCV 2018
Paper  
   Material Editing Using a Physically Based Rendering Network
Guilin Liu, Duygu Ceylan, Ersin Yumer, Jimei Yang, Jyh-Ming Lien
ICCV 2017
Paper   Project   Data
   Symmetry-aware Depth Estimation using Deep Neural Networks
Guilin Liu, Chao Yang, Zimo Li, Duygu Ceylan, Qixing Huang
arXiv 2016
arXiv
   Nearly Convex Segmentation of Polyhedra Through Convex Ridge Separation
Guilin Liu, Zhonghua Xi, Jyh-Ming Lien
SPM 2016, also published in the journal Computer-Aided Design
Paper   Project   Video
   Continuous Visibility Feature
Guilin Liu, Zhonghua Xi, Jyh-Ming Lien
CVPR 2015
Paper   Project   Code
   Fast Medial Axis Approximation via Max-Margin Pushing
Guilin Liu, Jyh-Ming Lien
IROS 2015
Paper   Project   Video
   Dual-Space Decomposition of 2D Complex Shapes
Guilin Liu, Zhonghua Xi, Jyh-Ming Lien
CVPR 2014
Paper   Project   Code

Media Coverage:

Fortune, Forbes, Fast Company, Engadget, SlashGear, Digital Trends, TNW, eTeknix, Game Debate, Alphr, Gizbot, Fossbytes, TechRadar, Beeborn, bit-tech, Hexus, HotHardware, BleepingComputer, HardOCP, Boing Boing, PetaPixel, Sohu, Sina, QbitAI (Zhihu)