[10:17 Fri, 21 October 2022 by Thomas Richter]
In the field of AI, the step from study to startup happens fast. The new company LumaLabsAI is demonstrating its Luma AI app, which can generate 3D views (and also models) of objects on a smartphone from just a handful of photos. The project is still in closed beta (you can apply for access), but there are already some impressive examples of 3D models generated by Luma AI. For now, Luma AI is only available for iOS, but an Android version is planned.
We have previously reported on Neural Radiance Fields (NeRFs), the AI technology underlying Luma AI - for example Nvidia's Instant NeRF or Google's NeRF in the Wild, which synthesize virtual camera movements from photos. To our knowledge, however, Luma AI is the first project that will soon make NeRFs generally available to a broad circle of users in the form of a smartphone app.
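For readers curious how a NeRF turns photos into new viewpoints: the core idea is that a neural network predicts a density and color for every point in space, and an image pixel is rendered by accumulating these samples along a camera ray (classic volume rendering). The sketch below shows only that accumulation step with NumPy, using made-up sample values in place of a trained network; the function name and inputs are illustrative, not Luma AI's actual code.

```python
import numpy as np

def render_ray(densities, colors, deltas):
    """Composite color samples along one camera ray (NeRF-style volume rendering).

    densities: (N,) volume density sigma_i at each sample point
    colors:    (N, 3) RGB color c_i at each sample point
    deltas:    (N,) distance between consecutive samples
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alpha = 1.0 - np.exp(-densities * deltas)
    # Transmittance T_i: probability the ray reaches sample i unoccluded
    transmittance = np.cumprod(np.concatenate(([1.0], 1.0 - alpha[:-1])))
    # Each sample contributes its color weighted by T_i * alpha_i
    weights = transmittance * alpha
    return (weights[:, None] * colors).sum(axis=0)

# Toy example: three samples along a ray; the first is effectively opaque red,
# so it should dominate the rendered pixel color.
densities = np.array([1e9, 1.0, 1.0])
colors = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0]])
deltas = np.array([0.1, 0.1, 0.1])
rgb = render_ray(densities, colors, deltas)
```

In a real NeRF, `densities` and `colors` come from a network queried at sampled 3D points, and this compositing is repeated for every pixel of the synthesized view, which is why rendering and training are computationally heavy.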
Unlike previous NeRF methods, Luma AI does not require a large set of individual photos as input, but rather a video tour around the desired object. After capture and processing, the object can be viewed interactively from all sides, both in its environment and in isolation - even zooming into the object is possible (to a limited extent), as is synthesizing photos from any perspective. Likewise, the app can generate a 360° view of a room, which can then be explored with a virtual 360° camera pan. However, according to the developers, generating a 3D view of a room (which is done in the cloud) still takes about 30 minutes.
More info at lumalabs.ai