
ShapeGPT: 3D Shape Generation with
A Unified Multi-modal Language Model

1 Fudan University
2 Tencent PCG
3 ShanghaiTech University
4 Zhejiang Lab

Paper
Abstract

The advent of large language models, which enable flexibility through instruction-driven approaches, has revolutionized many traditional generative tasks, but large models for 3D data, particularly those that comprehensively handle 3D shapes together with other modalities, remain under-explored. By achieving instruction-based shape generation, versatile multi-modal generative shape models can significantly benefit fields such as 3D virtual construction and network-aided design. In this work, we present ShapeGPT, a shape-included multi-modal framework that leverages strong pre-trained language models to address multiple shape-relevant tasks. Specifically, ShapeGPT employs a word-sentence-paragraph framework: it discretizes continuous shapes into shape words, assembles these words into shape sentences, and integrates shapes with instructional text into multi-modal paragraphs. To learn this shape-language model, we use a three-stage training scheme, consisting of shape representation, multi-modal alignment, and instruction-based generation, to align shape-language codebooks and learn the intricate correlations among these modalities. Extensive experiments demonstrate that ShapeGPT achieves comparable performance across shape-relevant tasks, including text-to-shape, shape-to-text, shape completion, and shape editing.
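To make the word-sentence-paragraph idea concrete, below is a minimal Python sketch (not the authors' code) of how continuous shape latents could be quantized into discrete "shape words", joined into a "shape sentence", and wrapped with instruction text into a multi-modal "paragraph". The codebook size, special-token format, and instruction template are illustrative assumptions, not the paper's exact design.

```python
# Illustrative sketch of the word-sentence-paragraph framework.
# Assumptions (not from the paper): codebook of 512 entries, 64-d latents,
# "<shape_i>" token format, and a simple instruction template.
import numpy as np

rng = np.random.default_rng(0)
CODEBOOK_SIZE, LATENT_DIM = 512, 64
codebook = rng.normal(size=(CODEBOOK_SIZE, LATENT_DIM))  # stands in for a trained VQ codebook

def shape_to_words(latents: np.ndarray) -> list[int]:
    """Quantize per-patch shape latents (N, LATENT_DIM) to nearest-codebook indices ("shape words")."""
    dists = np.linalg.norm(latents[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1).tolist()

def words_to_sentence(indices: list[int]) -> str:
    """Render shape words as special tokens a language model can consume ("shape sentence")."""
    return "".join(f"<shape_{i}>" for i in indices)

def build_paragraph(instruction: str, shape_sentence: str) -> str:
    """Interleave instruction text with the shape sentence for instruction-based generation."""
    return f"Instruction: {instruction}\nShape: <soshape>{shape_sentence}<eoshape>"

# Usage: encode a toy shape (8 latent patches) and assemble a multi-modal prompt.
fake_latents = rng.normal(size=(8, LATENT_DIM))
sentence = words_to_sentence(shape_to_words(fake_latents))
print(build_paragraph("Complete the missing half of this chair.", sentence))
```

In the actual system, the codebook would come from the shape-representation stage of training, and the assembled paragraphs would be fed to the pre-trained language model during the multi-modal alignment and instruction-based generation stages.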

