Generative AI is cool and all, but procedural 3D modeling just hits different. Check out this Houdini setup by Pepe Buendia.
— Bilawal Sidhu (@bilawalsidhu) November 11, 2024
Why this is cool: Instead of manually placing every building and car, this system generates an NYC-style city that builds itself, automatically spawning buildings with unique variations, plus a traffic system where cars actually obey traffic lights and avoid crashing into each other (all orchestrated through Houdini VEX code).
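To get a feel for how that kind of automatic variation works, here's a minimal VEX sketch of the building-spawning side: a point wrangle that stamps per-lot height, scale, and facade-variant attributes before a Copy to Points SOP instances the building geometry. The attribute names, the seed slider, and the zoning falloff are my assumptions, not Pepe Buendia's actual network.

```vex
// Point wrangle (sketch), run over lot points before a Copy to Points SOP.
// Attribute names (height, pscale, variant) and the falloff rule are assumptions.
float seed = chf("seed");                       // one slider drives the whole layout

float r = rand(@ptnum + seed);                  // deterministic per-point randomness

// Taller towers near the origin, lower blocks further out (stand-in zoning rule).
float core_dist = length(@P);
f@height  = fit01(r, 20, 150) * fit(core_dist, 0, 500, 1.0, 0.3);
f@pscale  = fit01(rand(@ptnum + seed + 17), 0.8, 1.2);

// Pick one of four facade variants for the instancer to read as a variant index.
i@variant = int(rand(@ptnum + seed + 31) * 4);
```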
Each car knows exactly which road it’s on and makes real-time decisions about navigation. The whole thing runs on custom algorithms that handle everything from road directions to building placement.
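And here's roughly what that per-car decision making could look like as a point wrangle inside a Solver SOP: each frame, every car point checks the nearest traffic light and the nearest car ahead of it on the same road before advancing. Again, a sketch with assumed attribute names (state, road_id, speed) and conventions, not the actual implementation.

```vex
// Point wrangle (sketch) inside a Solver SOP, run over car points each frame.
// Input 0: the cars; input 1: traffic-light points with an i@state attribute
// (assumed convention: 1 = green, 0 = red). Attribute names are assumptions.
float look_ahead = chf("look_ahead");           // how far a car scans for hazards
vector dir = normalize(v@v);                    // current direction of travel

int go = 1;

// Stop for the nearest red light within look-ahead range.
int light = nearpoint(1, @P, look_ahead);
if (light >= 0 && point(1, "state", light) == 0)
    go = 0;

// Stop if another car on the same road is roughly in front of us.
int others[] = nearpoints(0, @P, look_ahead);
foreach (int pt; others) {
    if (pt == @ptnum) continue;
    if (point(0, "road_id", pt) != i@road_id) continue;
    vector other_pos = point(0, "P", pt);
    if (dot(normalize(other_pos - @P), dir) > 0.9)
        go = 0;
}

// Advance along the road only when the way is clear.
if (go)
    @P += dir * f@speed * @TimeInc;
```

The appeal of this style of setup is that the "intelligence" is just attribute lookups and a couple of dot products per car, so the behavior stays fully inspectable and tweakable.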
If you did this with generative AI, even with approaches to conditioning and taming the chaos, you'd still get a ton of variance, which might be an issue for visual and spatial consistency.
To me this is procedural modeling at its finest – precise control meets infinite scalability. Immensely useful for media & entertainment, but also for generating synthetic training data for robotics and augmented reality.
I of course can't wait for multimodal LLMs to have the spatial understanding to write these kinds of controllable 3D worlds for us. Claude counting 2D pixels on a screen is a step in this direction. Soon they'll be able to manipulate 2D ortho viewports, and eventually spawn procedural 3D worlds.