Realtime Rendering With Unreal Studio
For most of the projects I design, the biggest part of the process is explaining the environment through renderings, sketches, and diagrams. Nearly all of the work I share on this site is images from that process, though for every one posted here there are several, sometimes dozens, of other renderings showing details, alternative angles, comparisons, and context. For something like an island exhibit, that might mean five or six different angles. For a larger event or experience, it could be twenty. For spaces with a lot of small interactions, or sprawling floorplans with a lot of walls, a handful of renderings isn't always enough to explain how it all works. Shortcuts like cutaway views and diagrams can help make a complex scene understandable, but nothing beats seeing a proposed design the way you would with your own eyes.
I’ve been working on a new way to visualize environments using Unreal Studio, a rendering engine built on the same technology as many modern, popular video games and virtual reality applications. The traditional way of making 3D renderings for architectural visualization involves making a 3D model, then creating and applying materials to the model, setting up virtual lights and cameras, and letting the computer turn that information into a photorealistic image. The process is a lot like a simulation of photography, creating an exposure by tracing the paths of light as it bounces from its source, around the 3D model, and into the camera. By now, I’ve rendered so many scenes that I have templates that make setup quick and easy, but I still need to wait for the computer to work its magic. For a complex scene, this can take anywhere from 40 minutes to 2 hours per image. Rendering images of every nook and cranny of a large event venue isn’t possible with the sort of tight deadlines we’re often working under.
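To make the "simulation of photography" idea concrete, here's a toy sketch of what offline rendering does for every pixel: cast a ray from the camera into the scene, find what it hits, and shade it based on the light. This is my own minimal illustration, not anything from Unreal or my actual pipeline; the scene, names, and single-bounce shading are all simplified stand-ins. Real renderers trace many bounced rays per pixel, which is why a complex frame can take hours.

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Return the distance along the ray to the sphere, or None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c  # direction is unit length, so the 'a' term is 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

def render(width, height):
    """Trace one camera ray per pixel into a one-sphere scene."""
    sphere_center, sphere_radius = (0.0, 0.0, -3.0), 1.0
    light_dir = (0.577, 0.577, 0.577)  # unit vector toward the light
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # Map the pixel to a point on the virtual image plane.
            u = (x + 0.5) / width * 2 - 1
            v = 1 - (y + 0.5) / height * 2
            d = (u, v, -1.0)
            norm = math.sqrt(sum(c * c for c in d))
            d = tuple(c / norm for c in d)
            t = ray_sphere((0, 0, 0), d, sphere_center, sphere_radius)
            if t is None:
                row.append(0.0)  # ray missed everything: background
            else:
                hit = tuple(t * c for c in d)
                n = tuple((h - s) / sphere_radius
                          for h, s in zip(hit, sphere_center))
                # Lambertian shading: brightness = cosine of angle to light.
                row.append(max(0.0, sum(a * b for a, b in zip(n, light_dir))))
        image.append(row)
    return image

pixels = render(16, 16)
```

Even this stripped-down version does a ray-geometry intersection test per pixel; a production renderer repeats that for every bounce of every light path, which is where the minutes-to-hours render times come from.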
Unreal Engine, like its competitor Unity, works in a different way. The geometry of the 3D model is still created the same way, but instead of raytracing images one frame at a time, the software builds an application that renders the 3D environment using shaders computed by the machine’s GPU. Simply put, and I’m far from being a computer scientist so I may not have this totally right, instead of doing a large amount of complex math to create a photorealistic scene one time, it breaks the job up into thousands of simpler pieces that it can calculate faster but less precisely. The end result is that it renders 30 frames per second instead of 1 frame every ten minutes, though the images aren’t quite as realistic. The major practical advantage of realtime rendering is that the camera is fully interactive. Like any of the video games based on the same technology, you can move around and explore the environment. Communicating the design this way is less of a presentation and more of a conversation. Any detail or space can be seen and revisited from another angle, and the dimension of motion gives better context to the design.
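The structural difference can be sketched in a few lines. This is my simplified reading of how realtime engines work, not Unreal's actual code: every frame, the engine reads the controller input, moves the camera, and runs a cheap, local shading function for each pixel, all within a fixed time budget (about 33 ms at 30 fps). The GPU evaluates that per-pixel function for millions of pixels in parallel, which is where the speed comes from.

```python
import math

FRAME_BUDGET_MS = 1000 / 30  # ~33 ms to read input, update, and draw a frame

def shade_pixel(surface_normal, light_dir):
    """A 'fragment shader' in miniature: cheap local math, no traced bounces."""
    return max(0.0, sum(n * l for n, l in zip(surface_normal, light_dir)))

def run_frames(num_frames):
    """Simulate the interactive loop: input -> camera update -> redraw."""
    camera_yaw = 0.0
    frames = []
    for _ in range(num_frames):
        camera_yaw += 0.02  # stand-in for thumbstick input moving the view
        light = (math.sin(camera_yaw), 0.0, math.cos(camera_yaw))
        # Shade a tiny 4x4 'screen'; a GPU does this same independent
        # per-pixel work for millions of pixels simultaneously.
        frame = [[shade_pixel((0.0, 0.0, 1.0), light) for _ in range(4)]
                 for _ in range(4)]
        frames.append(frame)
    return frames

frames = run_frames(3)
```

The key trade is visible in the shading function: it only looks at local inputs (the surface normal and light direction), so it's fast enough to rerun the whole image every time the camera moves, but it can't capture the bounced light that an offline raytrace would.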
I recently designed this simple exhibit space for a client’s RFP response and translated it into Unreal Studio as a quick demo of the technique. In this video, I’m moving the camera with an Xbox controller.
Controlling it live and rendering it in real time can be done from my laptop in person, or from my desk streamed over a video call. I can also record a walkthrough and share a video like the one above. If you’d like to try using realtime rendering for your next project, drop me a message!