Last Friday, researchers from Nvidia presented another milestone in AI development: the new Magic3D tool can generate complete 3D models from a simple text description. In the accompanying paper, Nvidia describes a two-step process that takes a rough model created at low resolution and optimises it to a higher resolution.
According to the paper's authors, the resulting Magic3D method generates 3D objects twice as fast as Google's comparable DreamFusion project. Given an input such as "A silver tray stacked with fruit", Magic3D produces a 3D mesh model with a coloured texture in about 40 minutes on eight NVIDIA A100 GPUs.
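The coarse-to-fine idea behind this two-step process can be sketched in a few lines of Python. The following is a minimal, hypothetical outline only: Nvidia has not released any Magic3D code, so `Model3D`, `generate_coarse_model` and `refine_model` are stand-ins invented for illustration, not a real API.

```python
"""Minimal, hypothetical sketch of Magic3D's two-step coarse-to-fine
pipeline. Nvidia has released no code, so every name below is a
stand-in invented for illustration, not a real API."""

from dataclasses import dataclass


@dataclass
class Model3D:
    prompt: str
    resolution: int
    textured: bool = False


def generate_coarse_model(prompt: str, resolution: int = 64) -> Model3D:
    # Step 1: optimise a rough 3D representation against a
    # low-resolution text-to-image diffusion prior (fast but blurry).
    return Model3D(prompt, resolution)


def refine_model(coarse: Model3D, resolution: int = 512) -> Model3D:
    # Step 2: extract a mesh and optimise its geometry and texture
    # against a high-resolution prior (slower, adds fine detail).
    return Model3D(coarse.prompt, resolution, textured=True)


coarse = generate_coarse_model("A silver tray stacked with fruit")
final = refine_model(coarse)
print(final)  # textured mesh at the higher resolution
```

The point of the split is economy: most of the optimisation happens cheaply at low resolution, and only the final refinement pays the full high-resolution cost.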

Nvidia Magic3D creates a textured 3D model from a single sentence.
Magic3D can also rework existing 3D models. This works best on the (relatively quickly generated) low-resolution 3D model, by tweaking the base prompt before the high-resolution refinement. In addition, the authors of Magic3D demonstrate how to maintain the same motif across several generations (a property often referred to as coherence) and how to apply the style of a 2D image (e.g. a cubist painting) to a 3D model.
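Prompt-based editing of this kind could look roughly as follows. Again, this is a hedged sketch built on invented stand-ins (`Model3D`, `refine`); it only illustrates the idea of reusing the cheap low-resolution stage with a tweaked prompt.

```python
"""Hypothetical sketch of prompt-based editing: the cheap
low-resolution step is reused while only the prompt changes,
then the expensive refinement runs again. All names are
invented stand-ins; no Magic3D code has been released."""

from dataclasses import dataclass, replace


@dataclass
class Model3D:
    prompt: str
    resolution: int


def refine(model: Model3D, resolution: int = 512) -> Model3D:
    # Stand-in for the high-resolution refinement step.
    return Model3D(model.prompt, resolution)


# A coarse model that was already optimised once (the fast step)...
coarse = Model3D("a silver tray stacked with fruit", resolution=64)

# ...is edited by tweaking the base prompt and refining again,
# which keeps the overall motif while changing selected details.
edited = replace(coarse, prompt="a golden tray stacked with fruit")
print(refine(edited))
```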
However, you cannot try it out yourself yet: no code has been published alongside the paper so far. And since Magic3D currently requires a great deal of GPU power to produce a model in an acceptable amount of time, a version for home computers would be of limited use anyway.
Nvidia did announce, however, that the models can be integrated directly into its own Omniverse platform and used there. It is therefore safe to assume that Nvidia will offer an online version of the tool in order to quickly provide arbitrary objects as 3D assets in digital-twin worlds. Meanwhile, alternative developments from the Stable Diffusion community are already under way...