Neural Microfacet Fields for Inverse Rendering


ICCV 2023

Alexander Mai, Dor Verbin, Falko Kuester, and Sara Fridovich-Keil


Paper

Code

Abstract

We present Neural Microfacet Fields, a method for recovering materials, geometry (volumetric density), and environmental illumination from a collection of images of a scene. Our method applies a microfacet reflectance model within a volumetric setting by treating each sample along the ray as a surface, rather than an emitter. Using surface-based Monte Carlo rendering in a volumetric setting enables our method to perform inverse rendering efficiently while benefiting from recent advances in volume rendering. Our approach matches state-of-the-art methods for novel view synthesis and outperforms prior work in inverse rendering, capturing high-fidelity geometry and high-frequency illumination details.

Method

Our method takes as input a collection of images (100 in our experiments) with known cameras, and outputs the volumetric density and normals, materials (BRDFs), and far-field illumination (environment map) of the scene. We assume that all light sources are infinitely far away from the scene, though light may interact locally with multiple bounces through the scene.

The key to our method is a novel combination of the volume rendering and surface rendering paradigms: we model a density field as in volume rendering, and we model outgoing radiance at every point in space using surface-based light transport (approximated using Monte Carlo ray sampling). Volume rendering with a density field lends itself well to optimization: initializing geometry as a semi-transparent cloud creates useful gradients, and allows for changes in geometry and topology. Using surface-based rendering allows modeling the interaction of light and materials, and enables recovering these materials.
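For concreteness, the density-field side of this combination uses the standard volume rendering quadrature; the notation below (densities, segment lengths, per-sample radiance) is the usual NeRF-style convention rather than the paper's exact symbols.

```latex
% Per-ray quadrature: \sigma_i is the density at sample i, \delta_i the segment length,
% and c_i the outgoing radiance produced by surface-based shading at that sample.
\hat{C}(\mathbf{r}) \;=\; \sum_{i=1}^{N} w_i \, c_i,
\qquad
w_i \;=\; T_i \bigl(1 - e^{-\sigma_i \delta_i}\bigr),
\qquad
T_i \;=\; \exp\Bigl(-\sum_{j<i} \sigma_j \delta_j\Bigr).
```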

We combine these paradigms by modeling a microfacet field, in which each point in space is endowed with a volume density and a local micro-surface. Light accumulates along rays according to the volume rendering integral, but the outgoing radiance of each 3D point is determined by the surface rendering integral, evaluated using rays sampled according to its local micro-surface. This combination of volume-based and surface-based representation and rendering enables us to optimize through a severely underconstrained inverse problem, recovering geometry, materials, and illumination simultaneously.
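The per-point shading referred to above is the surface rendering integral, approximated by Monte Carlo with importance-sampled directions; the symbols below are standard notation (ours, not necessarily the paper's), with f_r the microfacet BRDF and p the sampling density.

```latex
% Outgoing radiance at point x with micro-surface normal n, estimated with S
% directions \omega_s drawn from a sampling density p over the hemisphere \Omega.
L_o(\mathbf{x}, \boldsymbol{\omega}_o)
  = \int_{\Omega} f_r(\mathbf{x}, \boldsymbol{\omega}_i, \boldsymbol{\omega}_o)\,
    L_i(\mathbf{x}, \boldsymbol{\omega}_i)\,
    (\mathbf{n} \cdot \boldsymbol{\omega}_i)\, d\boldsymbol{\omega}_i
  \;\approx\;
  \frac{1}{S} \sum_{s=1}^{S}
    \frac{f_r(\mathbf{x}, \boldsymbol{\omega}_s, \boldsymbol{\omega}_o)\,
          L_i(\mathbf{x}, \boldsymbol{\omega}_s)\,
          (\mathbf{n} \cdot \boldsymbol{\omega}_s)}{p(\boldsymbol{\omega}_s)}.
```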

Figure 2: To render the color of a ray cast through the scene, we (a) evaluate density at each sample and compute each sample's volume rendering quadrature weight $w_i$, then (b) query the material properties and surface normal (flipped if it does not face the camera) at each sample point, which are used to (c) compute the color of each sample using Monte Carlo integration of the surface rendering integral, where the number of samples used is proportional to the quadrature weight $w_i$. This sample color is then accumulated along the ray using the quadrature weight to get the final ray color.
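Below is a minimal NumPy sketch of the per-ray procedure in the caption above. All field queries (query_density, query_normal_and_material, query_environment) are toy, hypothetical stand-ins rather than the released implementation, and a Lambertian BRDF with cosine-weighted sampling stands in for the microfacet BRDF; only the overall structure of steps (a)-(c) follows the caption.

```python
# Toy sketch of: (a) quadrature weights from density, (b) normals/materials per sample,
# (c) Monte Carlo surface shading per sample, then accumulation along the ray.
import numpy as np

def query_density(x):
    """Toy density field: a soft sphere of radius 0.5 at the origin (placeholder)."""
    return 20.0 * np.exp(-8.0 * np.maximum(np.linalg.norm(x) - 0.5, 0.0))

def query_normal_and_material(x, view_dir):
    """Toy geometry/material: sphere normal (flipped toward the camera) and a fixed albedo."""
    n = x / (np.linalg.norm(x) + 1e-8)
    if np.dot(n, view_dir) < 0.0:  # flip the normal if it does not face the camera
        n = -n
    albedo = np.array([0.8, 0.3, 0.2])
    return n, albedo

def query_environment(direction):
    """Toy far-field illumination: brighter toward +z (stand-in for the environment map)."""
    return np.array([1.0, 1.0, 1.0]) * (0.5 + 0.5 * max(direction[2], 0.0))

def shade_sample(n, albedo, n_secondary, rng):
    """Monte Carlo estimate of the surface rendering integral at one sample point,
    using cosine-weighted hemisphere sampling of a Lambertian BRDF for simplicity."""
    color = np.zeros(3)
    for _ in range(n_secondary):
        # Cosine-weighted direction in the local hemisphere.
        u1, u2 = rng.random(), rng.random()
        r, phi = np.sqrt(u1), 2.0 * np.pi * u2
        local = np.array([r * np.cos(phi), r * np.sin(phi), np.sqrt(1.0 - u1)])
        # Build an orthonormal frame (t, b, n) and rotate the local sample into it.
        t = np.cross(n, [0.0, 1.0, 0.0] if abs(n[0]) > 0.9 else [1.0, 0.0, 0.0])
        t /= np.linalg.norm(t)
        b = np.cross(n, t)
        w_in = local[0] * t + local[1] * b + local[2] * n
        # With cosine-weighted sampling of a Lambertian BRDF, the estimator reduces to albedo * L_in.
        color += albedo * query_environment(w_in)
    return color / max(n_secondary, 1)

def render_ray(origin, direction, near=0.0, far=2.0, n_samples=64, total_secondary=256, seed=0):
    rng = np.random.default_rng(seed)
    ts = np.linspace(near, far, n_samples)
    delta = ts[1] - ts[0]
    # (a) Densities -> volume rendering quadrature weights w_i.
    sigmas = np.array([query_density(origin + t * direction) for t in ts])
    alphas = 1.0 - np.exp(-sigmas * delta)
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alphas)[:-1]])
    weights = trans * alphas
    # (b) + (c) Shade each sample, spending secondary rays in proportion to w_i.
    color = np.zeros(3)
    for t, w in zip(ts, weights):
        n_secondary = int(round(total_secondary * w))
        if n_secondary == 0:
            continue
        x = origin + t * direction
        n, albedo = query_normal_and_material(x, view_dir=-direction)
        color += w * shade_sample(n, albedo, n_secondary, rng)
    return color

print(render_ray(np.array([0.0, 0.0, -1.5]), np.array([0.0, 0.0, 1.0])))
```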

Decomposition elements

Top left: roughness only. Top right: normals. Bottom left: diffuse lobe color. Bottom right: BRDF integrated against an all-white background.

Experiments

Here, we reconstruct the shiny musclecar, then render it while spinning the background.

Here, we reconstruct the musclecar and toaster scenes (with a randomized background to remove floaters) using the same diffuse and BRDF layers, pretrained on the materials scene, which allows us to combine the two scenes. Note the interreflections of the car on the toaster and the changing environment map reflections on the car. On the left, we show an equivalent decomposition using NVDiffRecMC.

Here we reconstruct the materials and helmet scenes, then replace the environment map of the materials scene using the reconstructed environment map from the helmet scene. Note how the materials accurately change appearance.

Bibtex