A Generalizable Light Transport 3D Embedding for Global Illumination
This paper introduces a transformer-based 3D embedding that efficiently approximates global illumination in computer-generated scenes, enabling generalizable, view-independent rendering without traditional ray tracing. The model encodes scene geometry, materials, and lighting into latent codes that are then decoded to predict indirect illumination; preliminary results suggest the approach extends to more complex tasks such as glossy reflections and path guiding.
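The encode-then-decode pattern described above can be illustrated with a toy, untrained cross-attention model. Everything here is an assumption for illustration: the feature layout, the dimensions, and the Perceiver-style latent bottleneck are not the paper's actual architecture, just a minimal sketch of "scene features in, latent codes out, radiance queries decoded against those codes."

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(q, k, v):
    # Scaled dot-product attention: queries gather information from key/value pairs.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

class ToySceneEncoder:
    """Hypothetical encoder: compress per-point scene features (e.g. position,
    normal, albedo, emission) into a fixed set of latent codes via cross-attention
    from learned latent tokens. Weights are random stand-ins for trained ones."""
    def __init__(self, feat_dim, latent_dim, n_latents, rng):
        self.latents = rng.standard_normal((n_latents, latent_dim))
        self.Wk = rng.standard_normal((feat_dim, latent_dim)) * 0.1
        self.Wv = rng.standard_normal((feat_dim, latent_dim)) * 0.1

    def __call__(self, scene_feats):
        return attend(self.latents, scene_feats @ self.Wk, scene_feats @ self.Wv)

class ToyRadianceDecoder:
    """Hypothetical decoder: a shading query (e.g. position + view direction)
    attends to the scene's latent codes and predicts an RGB indirect-light value."""
    def __init__(self, query_dim, latent_dim, rng):
        self.Wq = rng.standard_normal((query_dim, latent_dim)) * 0.1
        self.head = rng.standard_normal((latent_dim, 3)) * 0.1  # latent -> RGB

    def __call__(self, queries, latents):
        return attend(queries @ self.Wq, latents, latents) @ self.head

rng = np.random.default_rng(0)
encoder = ToySceneEncoder(feat_dim=9, latent_dim=16, n_latents=8, rng=rng)
decoder = ToyRadianceDecoder(query_dim=6, latent_dim=16, rng=rng)

scene = rng.standard_normal((128, 9))  # 128 surface samples with 9 features each
latents = encoder(scene)               # view-independent scene embedding, shape (8, 16)
queries = rng.standard_normal((4, 6))  # 4 shading points to evaluate
rgb = decoder(queries, latents)        # predicted indirect lighting, shape (4, 3)
print(rgb.shape)
```

The key property mirrored here is that the latent codes are computed once per scene and are view-independent; any number of shading queries can then be decoded against them without re-tracing rays.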