3D-SSGAN: Lifting 2D Semantics for 3D-Aware Compositional Portrait Synthesis
Authors: Ruiqi Liu, Peng Zheng, Ye Wang, Rui Ma
Summary: Existing 3D-aware portrait synthesis methods can generate impressive high-quality images while preserving strong 3D consistency. However, most of them cannot support fine-grained part-level control over the synthesized images. Conversely, some GAN-based 2D portrait synthesis methods can achieve clear disentanglement of facial regions, but they cannot preserve view consistency due to a lack of 3D modeling ability. To address these issues, we propose 3D-SSGAN, a novel framework for 3D-aware compositional portrait image synthesis. First, a simple yet effective depth-guided 2D-to-3D lifting module maps the generated 2D part features and semantics to 3D. Then, a volume renderer with a novel 3D-aware semantic mask renderer is employed to produce the composed face features and corresponding masks. The whole framework is trained end-to-end by discriminating between real and synthesized 2D images and their semantic masks. Quantitative and qualitative evaluations demonstrate the superiority of 3D-SSGAN in controllable part-level synthesis while preserving 3D view consistency.
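The abstract does not specify how the depth-guided 2D-to-3D lifting is implemented. A minimal NumPy sketch of the general idea — scattering each pixel's 2D feature into a coarse 3D feature volume at the depth plane its depth value indicates — might look like the following (the function name, the discretization into `num_planes` planes, and the tensor layout are all assumptions for illustration, not the paper's actual module):

```python
import numpy as np

def lift_2d_to_3d(features, depth, num_planes=8):
    """Hypothetical sketch of depth-guided 2D-to-3D lifting.

    features: (H, W, C) 2D part features
    depth:    (H, W) depth map with values in [0, 1)
    Returns a (num_planes, H, W, C) feature volume where each pixel's
    feature is placed at the plane selected by its quantized depth.
    """
    H, W, C = features.shape
    volume = np.zeros((num_planes, H, W, C), dtype=features.dtype)
    # Quantize continuous depth into discrete plane indices.
    plane_idx = np.clip((depth * num_planes).astype(int), 0, num_planes - 1)
    # Scatter each (y, x) feature into its depth plane.
    ys, xs = np.meshgrid(np.arange(H), np.arange(W), indexing="ij")
    volume[plane_idx, ys, xs] = features
    return volume
```

In the full method, one such lifted volume per facial part would then be composed and passed through the volume renderer; a real implementation would use differentiable (soft) depth weighting rather than hard quantization so gradients can flow back to the 2D generator.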