Posting a paper from a couple of years ago that doesn’t seem to have made it into Google Scholar.
We propose a new volumetric integration method that combines guided placement of candidate points with importance resampling. We refer to this as the virtual density segment (VDS) method. In particular, we show that this control can be achieved by treating invertible PDFs as virtual density sources, which in turn steer a tracking algorithm to generate point distributions that conform to the same, arbitrary PDFs. We combine this virtual density process with importance resampling to pick samples from the set of candidates according to the full product of the volume rendering equation (VRE). The resampling step is especially beneficial for non-invertible terms, such as complex light shapers like projected texture maps. By bridging tracking methods and inversion-based importance sampling, we arrive at a method for steering sampling that can incorporate any number of PDFs, thereby providing a general framework for combining arbitrary importance sampling schemes with tracking. Finally, having employed importance resampling for direct lighting, we also introduce a related method for sampling indirect volumetric illumination in highly anisotropic media.
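As a rough illustration of the resampling half of the method, here is a minimal sketch of generic resampled importance sampling (RIS) in a 1-D setting: candidates are drawn from an easy-to-sample source PDF, then one is picked in proportion to a target function that may include non-invertible terms. This is the textbook RIS estimator, not the paper's exact VDS algorithm, and the function names are illustrative.

```python
import random

def resampled_importance_sample(candidates, source_pdf, target):
    """Pick one candidate with probability proportional to
    target(x) / source_pdf(x), returning (sample, RIS weight).

    The RIS weight is (1/M) * sum(w_i) / target(x), which makes the
    chosen sample effectively distributed according to the
    (unnormalized) target function."""
    weights = [target(x) / source_pdf(x) for x in candidates]
    total = sum(weights)
    if total == 0.0:
        return None, 0.0
    # Draw a candidate proportionally to its resampling weight.
    u = random.random() * total
    acc = 0.0
    for x, w in zip(candidates, weights):
        acc += w
        if acc >= u:
            return x, (total / len(candidates)) / target(x)
    # Guard against floating-point accumulation falling just short.
    x = candidates[-1]
    return x, (total / len(candidates)) / target(x)
```

In the paper's setting, the source distribution would come from the virtual-density tracking step, while the target would be the full VRE product, including terms like projected texture maps that cannot be inverted analytically.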
http://magnuswrenninge.com/wrenninge-productimportancesampling
https://graphics.pixar.com/library/CandidateSampling/index.html
We are presenting three talks related to The Good Dinosaur at SIGGRAPH 2016 in Anaheim.
Abstracts can be found here:
We just published a new paper on volumetric modeling and volumetric motion blur in the Journal of Computer Graphics Techniques (JCGT).
Temporal volumes are a new volume representation that captures continuous time, enabling fast motion blur in path tracers. Reves (a play on Reyes, on which it is based) is a volume modeling algorithm that makes it possible to use a much broader set of primitives when modeling procedural volumes.
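To give a sense of the core idea, here is a minimal sketch of a voxel that stores a piecewise-linear function of time over the shutter interval instead of a single value, so a path tracer can evaluate density at any ray time without holding multiple copies of the volume. This is only an illustration of the concept; the paper's actual temporal encoding differs.

```python
import bisect

class TemporalVoxel:
    """Illustrative voxel storing value samples at increasing times
    over the shutter interval, evaluated by linear interpolation."""

    def __init__(self, times, values):
        # times must be sorted and paired one-to-one with values.
        assert len(times) == len(values) >= 2
        self.times = times
        self.values = values

    def eval(self, t):
        # Clamp queries outside the stored interval.
        if t <= self.times[0]:
            return self.values[0]
        if t >= self.times[-1]:
            return self.values[-1]
        # Locate the bracketing pair of samples and lerp.
        i = bisect.bisect_right(self.times, t)
        t0, t1 = self.times[i - 1], self.times[i]
        a = (t - t0) / (t1 - t0)
        return (1.0 - a) * self.values[i - 1] + a * self.values[i]
```

A renderer would sample one shutter time per ray and evaluate the volume at that time, which is what makes motion blur cheap compared to rendering many temporal snapshots.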
A while back, I was interested in comparing the performance of Field3D with other voxel storage libraries, most notably OpenVDB. In particular, I wanted to understand the memory and performance implications of different data structures: when Field3D was first developed, our use of multiple data structures stemmed from the assumption that different uses and applications require different data structures for optimal performance. It was therefore interesting to see how each data structure performed across a range of tasks, so that we could make educated decisions about when to use each one.
The result is a suite of tests located in the Field3D repository on GitHub.
I wrote the initial test code, and as the various tests took shape, I implemented the Field3D parts while members of the OpenVDB team implemented the VDB parts.
Hopefully, the tests can serve as a basis for determining which data structure is best suited to a given context: dense data, sparse data, narrow-band level sets, coherent lookups, incoherent lookups, memory overhead, etc.
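To show the general shape of one such test, here is a toy sketch of a coherent-versus-incoherent lookup benchmark: the same set of voxel coordinates is queried once in scanline order and once in shuffled order. This is not the actual test suite (which is C++ against the real Field3D and OpenVDB APIs), and a Python dict will not reproduce the cache behavior of either library's tree; it only illustrates how such a test is structured.

```python
import random
import time

def time_lookups(grid, coords):
    """Time a batch of voxel lookups against a dict-backed sparse grid,
    returning (elapsed seconds, checksum of the values read)."""
    start = time.perf_counter()
    total = 0.0
    for c in coords:
        total += grid.get(c, 0.0)
    return time.perf_counter() - start, total

side = 64
# Toy sparse grid: only a thin slab of the domain holds data.
grid = {(x, y, z): 1.0
        for x in range(side) for y in range(side) for z in range(8)}

# Coherent access: scanline order, so consecutive queries are neighbors.
coherent = [(x, y, z)
            for z in range(8) for y in range(side) for x in range(side)]
# Incoherent access: the same coordinates in random order.
incoherent = list(coherent)
random.Random(0).shuffle(incoherent)

t_coh, sum_coh = time_lookups(grid, coherent)
t_inc, sum_inc = time_lookups(grid, incoherent)
```

In the real tests the lookup loop would go through each library's accessor API, where coherent access patterns can reuse cached tree nodes and incoherent ones cannot, which is exactly the difference the benchmark is meant to expose.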
A summary of the test results is available in PDF form here:
Field3D performance tests