Developing comprehensive explicit world models is crucial for understanding and simulating real-world scenarios. Recently, Procedural Controllable Generation (PCG) has gained significant attention in large-scale scene generation by enabling the creation of scalable, high-quality assets. However, PCG faces challenges such as limited modular diversity, high expertise requirements, and difficulty in managing the diverse elements and structures of complex scenes. In this paper, we introduce SceneX, a large-scale scene generation framework that automatically produces high-quality procedural models from designers' textual descriptions. Specifically, the proposed method comprises two components: PCGHub and PCGPlanner. The former encompasses an extensive collection of accessible procedural assets and thousands of hand-crafted API documents that serve as a standard protocol for the PCG controller. The latter generates executable actions for Blender to produce controllable and precise 3D assets guided by the user's instructions. Extensive experiments demonstrate the capability of our method in controllable large-scale scene generation, including natural scenes and unbounded cities, as well as scene editing such as asset placement and season translation.
The generated models are characterized by delicate geometric structures, realistic material textures, and natural lighting, allowing for seamless deployment in industrial pipelines.
The SceneX framework converts user text input into diverse 3D scenes through four stages: scene decomposition, terrain generation, object generation \& retrieval, and asset placement.
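The four stages above can be sketched as a pipeline that turns a text prompt into a sequence of executable actions. The following is a minimal toy illustration; every name (`ScenePlan`, the `pcg.*` action strings, the keyword matcher) is a hypothetical placeholder for demonstration, not the paper's actual PCGPlanner implementation, which drives Blender through the PCGHub API documents.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical stage outputs; all names here are illustrative assumptions.
@dataclass
class ScenePlan:
    terrain_type: str                      # e.g. "nature" or "city"
    objects: List[str] = field(default_factory=list)

def decompose_scene(prompt: str) -> ScenePlan:
    """Stage 1: break the text prompt into a coarse scene plan (toy keyword matcher)."""
    text = prompt.lower()
    terrain = "city" if "city" in text else "nature"
    objects = [w for w in ("tree", "rock", "river") if w in text]
    return ScenePlan(terrain_type=terrain, objects=objects)

def generate_terrain(plan: ScenePlan) -> List[str]:
    """Stage 2: emit a terrain-generation action (placeholder API call string)."""
    return [f"pcg.generate_terrain(type={plan.terrain_type!r})"]

def generate_or_retrieve_objects(plan: ScenePlan) -> List[str]:
    """Stage 3: emit one asset generation/retrieval action per requested object."""
    return [f"pcg.retrieve_asset({name!r})" for name in plan.objects]

def place_assets(plan: ScenePlan) -> List[str]:
    """Stage 4: emit a placement action for every retrieved object."""
    return [f"pcg.place_asset({name!r}, on='terrain')" for name in plan.objects]

def scenex_pipeline(prompt: str) -> List[str]:
    """Run all four stages and collect the resulting action sequence."""
    plan = decompose_scene(prompt)
    return (generate_terrain(plan)
            + generate_or_retrieve_objects(plan)
            + place_assets(plan))

actions = scenex_pipeline("A nature scene with a tree beside a river")
```

In the real system, each emitted action would correspond to a call documented in PCGHub and be executed inside Blender rather than returned as a string.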
@article{zhou2024scenex,
author = {Mengqi Zhou and Yuxi Wang and Jun Hou and Shougao Zhang and Yiwei Li and Chuanchen Luo and Junran Peng and Zhaoxiang Zhang},
title = {SceneX: Procedural Controllable Large-scale Scene Generation},
journal = {arXiv preprint arXiv:2403.15698},
year = {2024},
}