Source code of the paper "Specular Manifold Sampling for Rendering High-Frequency Caustics and Glints" by Tizian Zeltner, Iliyan Georgiev, and Wenzel Jakob from SIGGRAPH 2020.
The implementation is based on the Mitsuba 2 renderer; see the lower part of this README for more information about it.
The normal compilation instructions for Mitsuba 2 apply. See the "Getting started" sections in the documentation. For this project, only the scalar_{rgb,spectral} variants are tested. The paper shows results generated with scalar_rgb.
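Once compiled (and after sourcing setpath.sh, as described below), a quick way to verify which variants are available is through the Python bindings. This is a minimal sketch, not part of the original build instructions:

```python
# Minimal sketch (assumes the build succeeded and setpath.sh has been sourced
# so that the Mitsuba 2 Python bindings are on the PYTHONPATH).
import mitsuba

print(mitsuba.variants())          # lists the variants enabled in mitsuba.conf
mitsuba.set_variant('scalar_rgb')  # the variant used for the results in the paper
```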
Various versions of the SMS technique are implemented:
Single-scattering caustic SMS
Sampling technique for diffuse-specular-light connections with a single reflection or refraction event.
include/mitsuba/render/manifold_ss.h and src/librender/manifold_ss.cpp
src/integrators/path_sms_ss.cpp
src/integrators/path_filtered_ss.cpp
Multi-scattering caustic SMS
Sampling technique for diffuse-specular*-light connections with a fixed number of reflection or refraction events.
include/mitsuba/render/manifold_ms.h and src/librender/manifold_ms.cpp
src/integrators/path_sms_ms.cpp
src/integrators/path_filtered_ms.cpp
Glint SMS
Sampling technique for glints from specular (normal-mapped) microstructures.
include/mitsuba/render/manifold_glints.h
src/librender/manifold_glints.cpp
src/integrators/path_sms_glints.cpp
Vectorized Glint SMS
Since the submission, we have also implemented a version of the glint technique that makes use of SIMD vectorization.
include/mitsuba/render/manifold_glints_vectorized.h
src/librender/manifold_glints_vectorized.cpp
Combined caustic and glint integrators
We also combined the single- and multi-scattering caustic methods above and the glint method into a single integrator, which was used for the teaser image.
src/integrators/path_sms_teaser.cpp
src/integrators/path_filtered_teaser.cpp
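To give a rough idea of how these integrator plugins are selected, the sketch below instantiates one of them through the Python bindings. The plugin name path_sms_ss is assumed to match the source file name, and all of its parameters are assumed to have defaults; the scenes and render.py scripts under results/ show the configurations actually used for the paper.

```python
# Hedged sketch: instantiate the single-scattering SMS integrator from Python.
# The plugin name "path_sms_ss" and the availability of parameter defaults are
# assumptions; see src/integrators/path_sms_ss.cpp and the scenes in results/
# for the exact names and settings.
import mitsuba
mitsuba.set_variant('scalar_rgb')

from mitsuba.core.xml import load_string

integrator = load_string('<integrator version="2.0.0" type="path_sms_ss"/>')
print(integrator)
```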
The directory results contains a set of folders for the different figures in the paper, e.g. results/Figure_8_Constraints. They contain Python scripts that generate plots or render the included Mitsuba 2 scenes. Before running them, set up the environment with source setpath.sh; see the "Running Mitsuba" section in the documentation.
Here is a list of available results:
results/Figure_4_5_RingSolutions/
Run mitsuba ring.xml to render Figure 4a.
Run generate_plots_simple.py to create the two plots in Figures 4b and 4d.
Run mitsuba ring_paths.xml to render Figure 4c.
Run generate_fractal.py to create Figure 5a.
Run generate_plots_normalmapped.py to create Figure 5b.
results/Figure_6_Sequence/
Run render.py, which renders the scene with Mitsuba after setting the right method parameters.
Run render_references.py to render references with path tracing and SMS. This will take a long time!
results/Figure_8_Constraints/
Run render.py, which renders the scene using the two approaches.
results/Figure_9_Twostage/
Run render.py, which renders the two scenes with both approaches.
results/Figure_10_TwostageSolutions/
Run generate_plots.py to create the two subplots.
results/Figure_11_GlintsZoom/
Run generate_plots.py to create plots of the footprint and the convergence basins inside it.
results/Figure_12_GlintsMIS/
Run render.py, which renders the three images with Mitsuba after setting the right method parameters.
results/Figure_14_15_MainComparison/
Run render_{plane,sphere,pool}.py to create renderings for all methods at equal time.
Run render_references_{plane,sphere,pool}.py to render references with path tracing and SMS. This will take a long time!
results/Figure_16_Displacement/
Run render.py to render both versions of the scene.
results/Figure_17_Roughness/
Run render.py to render the scenes with varying roughness using both approaches.
results/Figure_18_DoubleRefraction/
Run render.py to create renderings for all methods at equal time.
Run render_references.py to render references with path tracing and SMS. This will take a long time!
results/Figure_19_GlintsComparison/
First run generate_normalmaps.py, which will create the high-resolution normal maps used in the two scenes.
For the comparison with Yan et al. 2016, convert the normal maps to the .flakes format used by their method by running these two commands:
./dist/normalmap_to_flakes textures/normalmap_gaussian_yan.exr gaussian.flakes 4
./dist/normalmap_to_flakes textures/normalmap_brushed_yan.exr brushed.flakes 2
Render the scenes with the render_{shoes,kettle}.py scripts. These run for a long time! Specify the method to use by providing one of the following command-line arguments:
pt for the path tracer reference
sms_ub for unbiased SMS
sms_b for biased SMS
sms_bv for biased + vectorized SMS
yan for the method of Yan et al. 2016
Reference images can be rendered with render_references.py.
The final comparison figure is created with generate_plots.py.
Mitsuba 2 is a research-oriented rendering system written in portable C++17. It consists of a small set of core libraries and a wide variety of plugins that implement functionality ranging from materials and light sources to complete rendering algorithms. Mitsuba 2 strives to retain scene compatibility with its predecessor Mitsuba 0.6. However, in most other respects, it is a completely new system following a different set of goals.
The most significant change of Mitsuba 2 is that it is a retargetable renderer: this means that the underlying implementations and data structures are specified in a generic fashion that can be transformed to accomplish a number of different tasks. For example:
In the simplest case, Mitsuba 2 is an ordinary CPU-based RGB renderer that processes one ray at a time, similar to its predecessor Mitsuba 0.6.
Alternatively, Mitsuba 2 can be transformed into a differentiable renderer that runs on NVIDIA RTX GPUs. A differentiable rendering algorithm is able to compute derivatives of the entire simulation with respect to input parameters such as camera pose, geometry, BSDFs, textures, and volumes. In conjunction with gradient-based optimization, this opens the door to challenging inverse problems, including computational material design and scene reconstruction.
Another type of transformation turns Mitsuba 2 into a vectorized CPU renderer that leverages Single Instruction/Multiple Data (SIMD) instruction sets such as AVX512 on modern CPUs to efficiently sample many light paths in parallel.
Yet another type of transformation rewrites physical aspects of the simulation: Mitsuba can be used as a monochromatic renderer, RGB-based renderer, or spectral renderer. Each variant can optionally account for the effects of polarization if desired.
In addition to the above transformations, there are several other noteworthy changes:
Mitsuba 2 provides very fine-grained Python bindings to essentially every function using pybind11. This makes it possible to import the renderer into a Jupyter notebook and develop new algorithms interactively while visualizing their behavior using plots.
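As an illustration, the sketch below loads and renders a scene entirely from Python. The scene path is a placeholder; the scenes shipped in the results/ folders of this repository can be used in the same way.

```python
# Minimal sketch of the Mitsuba 2 Python workflow. "path/to/scene.xml" is a
# placeholder; substitute one of the scenes from the results/ folders.
import mitsuba
mitsuba.set_variant('scalar_rgb')

from mitsuba.core import Thread
from mitsuba.core.xml import load_file

# Make resources referenced by the scene (meshes, textures) resolvable.
Thread.thread().file_resolver().append('path/to')

scene = load_file('path/to/scene.xml')
sensor = scene.sensors()[0]

# Render with the integrator specified in the scene file and write the result.
scene.integrator().render(scene, sensor)
sensor.film().set_destination_file('output.exr')
sensor.film().develop()
```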
The renderer includes a large automated test suite written in Python, and its development relies on several continuous integration servers that compile and test new commits on different operating systems using various compilation settings (e.g. debug/release builds, single/double precision, etc.). Manually checking that external contributions don't break existing functionality had become a severe bottleneck in the previous Mitsuba 0.6 codebase; the goal of this infrastructure is to avoid such manual checks and to streamline interactions with the community (pull requests, etc.) in the future.
An all-new cross-platform user interface is currently being developed using the NanoGUI library. Note that this is not yet complete.
Please see the documentation for details on how to compile, use, and extend Mitsuba 2.
This project was created by Wenzel Jakob. Significant features and/or improvements to the code were contributed by Merlin Nimier-David, Guillaume Loubet, Sébastien Speierer, Delio Vicini, and Tizian Zeltner.