Miloš Hašan

milos dot hasan at gmail dot com

I am currently a Senior Research Scientist at Adobe Research in San Jose. Before joining Adobe, I was an engineer and researcher at Autodesk in San Francisco (2012-2018), a postdoc at UC Berkeley (2010-2012, advised by Ravi Ramamoorthi), and a postdoc at Harvard University (2009-2010, advised by Hanspeter Pfister). I received my Ph.D. in Computer Science from Cornell in August 2009, under the supervision of Kavita Bala. I am interested in computer graphics and many related topics.

Selected publications

MATch: Differentiable Material Graphs for Procedural Material Capture
Liang Shi, Beichen Li, Miloš Hašan, Kalyan Sunkavalli, Tamy Boubekeur, Radomír Měch, Wojciech Matusik
SIGGRAPH Asia 2020 [project] [PDF] [video]

We present MATch, a method to automatically convert photographs of material samples into production-grade procedural material models. At the core of MATch is a new library DiffMat that provides differentiable building blocks for constructing procedural materials, which can be used to automatically translate large-scale procedural models, with hundreds to thousands of node parameters, into differentiable node graphs. Combining these translated node graphs with a rendering layer yields an end-to-end differentiable pipeline that maps node graph parameters to rendered images. This facilitates the use of gradient-based optimization to estimate the parameters such that the resulting material, when rendered, matches the target image appearance, as quantified by a style transfer loss. In addition, we propose a deep neural feature-based graph selection and parameter initialization method that efficiently scales to a large number of procedural graphs. We evaluate our method on both rendered synthetic materials and real materials captured as flash photographs. We demonstrate that MATch can reconstruct more accurate, general, and complex procedural materials compared to the state-of-the-art. Moreover, by producing a procedural output, we unlock capabilities such as constructing arbitrary-resolution material maps and parametrically editing the material appearance.
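
To make the optimization loop concrete, here is a minimal sketch in PyTorch. It is not the DiffMat API: toy_graph is a hypothetical stand-in for a translated differentiable node graph, and a Gram matrix over raw pixels stands in for the deep-feature style transfer loss.

    import torch

    def toy_graph(params, size=64):
        """Stand-in for a differentiable procedural node graph: maps a
        few node parameters to an RGB image with values in [0, 1]."""
        y, x = torch.meshgrid(torch.linspace(0, 1, size),
                              torch.linspace(0, 1, size), indexing="ij")
        freq, phase, tint = params
        stripes = 0.5 + 0.5 * torch.sin(freq * 20.0 * x + phase)
        return torch.stack([stripes * tint, stripes, 1.0 - stripes], dim=0)

    def gram(img):
        """Gram matrix of per-pixel features; a crude proxy for the
        deep-feature style loss used in the paper."""
        f = img.reshape(img.shape[0], -1)
        return f @ f.t() / f.shape[1]

    target = toy_graph(torch.tensor([1.5, 0.3, 0.8]))      # "photo" to match
    params = torch.tensor([0.5, 0.0, 0.5], requires_grad=True)
    opt = torch.optim.Adam([params], lr=0.05)

    for step in range(200):
        opt.zero_grad()
        loss = torch.nn.functional.mse_loss(gram(toy_graph(params)),
                                            gram(target))
        loss.backward()             # gradients flow through the whole graph
        opt.step()

    print("recovered parameters:", params.detach())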

MaterialGAN: Reflectance Capture using a Generative SVBRDF Model
Yu Guo, Cameron Smith, Miloš Hašan, Kalyan Sunkavalli, and Shuang Zhao
SIGGRAPH Asia 2020 [project] [PDF] [video]

We address the problem of reconstructing spatially-varying BRDFs from a small set of image measurements. This is a fundamentally under-constrained problem, and previous work has relied on using various regularization priors or on capturing many images to produce plausible results. In this work, we present MaterialGAN, a deep generative convolutional network based on StyleGAN2, trained to synthesize realistic SVBRDF parameter maps. We show that MaterialGAN can be used as a powerful material prior in an inverse rendering framework: we optimize in its latent representation to generate material maps that match the appearance of the captured images when rendered. We demonstrate this framework on the task of reconstructing SVBRDFs from images captured under flash illumination using a hand-held mobile phone. Our method succeeds in producing plausible material maps that accurately reproduce the target images, and outperforms previous state-of-the-art material capture methods in evaluations on both synthetic and real data. Furthermore, our GAN-based latent space allows for high-level semantic material editing operations such as generating material variations and material morphing.
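
A rough sketch of the latent-space inverse rendering, with hypothetical stand-ins throughout: TinyGenerator replaces the pretrained StyleGAN2-based generator and render replaces the differentiable rendering layer. The point is the structure: the generator stays frozen and only the latent code is optimized.

    import torch

    class TinyGenerator(torch.nn.Module):
        """Stand-in for a pretrained SVBRDF generator (MaterialGAN uses
        StyleGAN2); maps a latent code to flat albedo/roughness maps."""
        def __init__(self, dim=16, size=32):
            super().__init__()
            self.fc = torch.nn.Linear(dim, 4 * size * size)
            self.size = size
        def forward(self, w):
            maps = torch.sigmoid(self.fc(w)).reshape(4, self.size, self.size)
            return maps[:3], maps[3]        # albedo (3 ch), roughness (1 ch)

    def render(albedo, roughness):
        """Toy collocated-flash 'renderer': diffuse term plus a
        roughness-controlled highlight. Purely illustrative."""
        return albedo + (1.0 - roughness) * 0.2

    gen = TinyGenerator()
    for p in gen.parameters():              # the generator stays frozen
        p.requires_grad_(False)

    with torch.no_grad():
        target = render(*gen(torch.randn(16)))   # pretend captured photo

    w = torch.zeros(16, requires_grad=True)      # optimize the latent only
    opt = torch.optim.Adam([w], lr=0.05)
    for step in range(300):
        opt.zero_grad()
        loss = torch.nn.functional.l1_loss(render(*gen(w)), target)
        loss.backward()
        opt.step()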

A Bayesian Inference Framework for Procedural Material Parameter Estimation
Yu Guo, Miloš Hašan, Ling-Qi Yan, Shuang Zhao
Pacific Graphics 2020 (Computer Graphics Forum) [project] [PDF] [supplement]

Procedural material models have been gaining traction in many applications thanks to their flexibility, compactness, and easy editability. We explore the inverse rendering problem of procedural material parameter estimation from photographs, presenting a unified view of the problem in a Bayesian framework. In addition to computing point estimates of the parameters by optimization, our framework uses a Markov Chain Monte Carlo approach to sample the space of plausible material parameters, providing a collection of plausible matches that a user can choose from, and efficiently handling both discrete and continuous model parameters. To demonstrate the effectiveness of our framework, we fit procedural models of a range of materials — wall plaster, leather, wood, anisotropic brushed metals and layered metallic paints — to both synthetic and real target images.
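
The sampling side of the framework can be illustrated with a small Metropolis-Hastings loop; render is a hypothetical stand-in for a procedural material model, and a Gaussian likelihood compares its output to a noisy target, yielding a set of plausible parameter samples rather than a single point estimate.

    import numpy as np

    def render(params):
        """Stand-in for a procedural material renderer; here just a
        smooth nonlinear map from parameters to 'pixel' values."""
        rough, tint = params
        return np.array([tint * (1 - rough)**2, tint * rough, 0.5 * tint])

    rng = np.random.default_rng(0)
    target = render([0.3, 0.8]) + rng.normal(0, 0.01, 3)   # noisy observation

    def log_posterior(params):
        if not all(0.0 <= p <= 1.0 for p in params):       # uniform prior box
            return -np.inf
        resid = render(params) - target
        return -0.5 * np.sum(resid**2) / 0.01**2           # Gaussian likelihood

    # Metropolis-Hastings: random-walk proposals, accept by posterior ratio.
    cur = np.array([0.5, 0.5]); cur_lp = log_posterior(cur)
    samples = []
    for it in range(5000):
        prop = cur + rng.normal(0, 0.05, 2)
        lp = log_posterior(prop)
        if np.log(rng.uniform()) < lp - cur_lp:
            cur, cur_lp = prop, lp
        samples.append(cur.copy())

    print("posterior mean:", np.mean(samples[1000:], axis=0))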

Path Cuts: Efficient Rendering of Pure Specular Light Transport
Beibei Wang, Miloš Hašan, Ling-Qi Yan
SIGGRAPH Asia 2020 [PDF] [video]

In scenes lit with sharp point-like light sources, light can bounce several times on specular materials before reaching our eyes, forming purely specular light paths. However, to our knowledge, rendering such multi-bounce pure specular paths has not been handled in previous work: while many light transport methods have been devised to sample various kinds of light paths, none of them are able to find multi-bounce pure specular light paths from a point light to a pinhole camera. In this paper, we present path cuts to efficiently render such light paths. We use a path space hierarchy combined with interval arithmetic bounds to prune non-contributing regions of path space, and to slice the path space into regions small enough to empirically contain at most one solution. Next, we use an automatic differentiation tool and a Newton-based solver to find an admissible specular path within a given path space region. We demonstrate results on several complex specular configurations, including RR, TT, TRT and TTTT paths.
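
The prune-then-solve structure has a one-dimensional analogy, sketched below: interval arithmetic bounds a function over a region of the domain, regions whose bounds exclude zero are cut away, and each small surviving region is handed to Newton's method. Here the "solutions" are roots of a cubic rather than admissible specular paths.

    class Interval:
        """Minimal interval arithmetic, enough to bound a polynomial."""
        def __init__(self, lo, hi): self.lo, self.hi = lo, hi
        def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
        def __mul__(self, o):
            ps = [self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi]
            return Interval(min(ps), max(ps))
        def contains_zero(self): return self.lo <= 0.0 <= self.hi

    def f(x):  return (x - 1.0) * (x - 3.0) * (x - 7.0)   # roots: 1, 3, 7
    def df(x): return 3 * x**2 - 22 * x + 31              # f's derivative
    def F(iv):                  # the same cubic, evaluated over an interval
        return ((iv + Interval(-1, -1)) * (iv + Interval(-3, -3))
                * (iv + Interval(-7, -7)))

    def find_roots(lo, hi, out):
        if not F(Interval(lo, hi)).contains_zero():
            return                          # bound excludes zero: prune
        if hi - lo < 1e-3:                  # region small: polish with Newton
            x = 0.5 * (lo + hi)
            for _ in range(8):
                x -= f(x) / df(x)
            out.append(x)
            return
        mid = 0.5 * (lo + hi)               # otherwise cut the region in two
        find_roots(lo, mid, out); find_roots(mid, hi, out)

    roots = []
    find_roots(0.0, 10.0, roots)
    print(sorted(set(round(r, 4) for r in roots)))        # [1.0, 3.0, 7.0]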

Deep Reflectance Volumes: Relightable Reconstructions from Multi-View Photometric Images
Sai Bi, Zexiang Xu, Kalyan Sunkavalli, Miloš Hašan, Yannick Hold-Geoffroy, David Kriegman, Ravi Ramamoorthi
ECCV 2020 [arXiv] [PDF]

We present a deep learning approach to reconstruct scene appearance from unstructured images captured under collocated point lighting. At the heart of Deep Reflectance Volumes is a novel volumetric scene representation consisting of opacity, surface normal and reflectance voxel grids. We present a novel physically-based differentiable volume ray marching framework to render these scene volumes under arbitrary viewpoint and lighting. This allows us to optimize the scene volumes to minimize the error between their rendered images and the captured images. Our method is able to reconstruct real scenes with challenging non-Lambertian reflectance and complex geometry with occlusions and shadowing. Moreover, it accurately generalizes to novel viewpoints and lighting, including non-collocated lighting, rendering photorealistic images that are significantly better than state-of-the-art mesh-based methods. We also show that our learned reflectance volumes are editable, allowing for modifying the materials of the captured scenes.
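
A minimal, non-differentiable sketch of the central ray marching operation, with toy opacity and monochrome reflectance grids; the actual system also stores normals, shades each sample physically, and implements the march differentiably so the voxel grids themselves can be optimized.

    import numpy as np

    # Toy voxel grids: opacity and (monochrome) reflectance on a 32^3 lattice.
    N = 32
    opacity = np.zeros((N, N, N)); reflectance = np.zeros((N, N, N))
    c = np.arange(N)
    sphere = ((c[:, None, None] - 16)**2 + (c[None, :, None] - 16)**2
              + (c[None, None, :] - 16)**2) < 8**2
    opacity[sphere] = 0.2; reflectance[sphere] = 0.9

    def march(origin, direction, n_steps=128, step=0.5):
        """Front-to-back compositing along one ray: each sample's
        reflectance is weighted by its alpha and the surviving
        transmittance."""
        transmittance, color = 1.0, 0.0
        p = np.asarray(origin, float)
        for _ in range(n_steps):
            i, j, k = np.clip(p.astype(int), 0, N - 1)
            alpha = 1.0 - np.exp(-opacity[i, j, k] * step)
            color += transmittance * alpha * reflectance[i, j, k]
            transmittance *= 1.0 - alpha
            p += step * np.asarray(direction)
        return color

    print(march(origin=[16, 16, -10], direction=[0, 0, 1]))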

Neural Reflectance Fields for Appearance Acquisition
Sai Bi, Zexiang Xu, Pratul Srinivasan, Ben Mildenhall, Kalyan Sunkavalli, Miloš Hašan, Yannick Hold-Geoffroy, David Kriegman, Ravi Ramamoorthi
[arXiv] [PDF]

We present Neural Reflectance Fields, a novel deep scene representation that encodes volume density, normal and reflectance properties at any 3D point in a scene using a fully-connected neural network. We combine this representation with a physically-based differentiable ray marching framework that can render images from a neural reflectance field under any viewpoint and light. We demonstrate that neural reflectance fields can be estimated from images captured with a simple collocated camera-light setup, and accurately model the appearance of real-world scenes with complex geometry and reflectance. Once estimated, they can be used to render photo-realistic images under novel viewpoint and (non-collocated) lighting conditions and accurately reproduce challenging effects like specularities, shadows and occlusions. This allows us to perform high-quality view synthesis and relighting that is significantly better than previous methods. We also demonstrate that we can compose the estimated neural reflectance field of a real scene with traditional scene models and render them using standard Monte Carlo rendering engines. Our work thus enables a complete pipeline from high-quality and practical appearance acquisition to 3D scene composition and rendering.

OpenRooms: An End-to-End Open Framework for Photorealistic Indoor Scene Datasets
Zhengqin Li, Ting-Wei Yu, Shen Sang, Sarah Wang, Sai Bi, Zexiang Xu, Hong-Xing Yu, Kalyan Sunkavalli, Miloš Hašan, Ravi Ramamoorthi, Manmohan Chandraker
[arXiv] [PDF]

Large-scale photorealistic datasets of indoor scenes, with ground truth geometry, materials and lighting, are important for deep learning applications in scene reconstruction and augmented reality. The associated shape, material and lighting assets can be scanned or artist-created, both of which are expensive; the resulting data is usually proprietary. We aim to make the dataset creation process for indoor scenes widely accessible, allowing researchers to transform casually acquired scans to large-scale datasets with high-quality ground truth. We achieve this by estimating consistent furniture and scene layout, ascribing high quality materials to all surfaces and rendering images with spatially-varying lighting consisting of area lights and environment maps. We demonstrate an instantiation of our approach on the publicly available ScanNet dataset. Deep networks trained on our proposed dataset achieve competitive performance for shape, material and lighting estimation on real images and can be used for photorealistic augmented reality applications, such as object insertion and material editing. Importantly, all code, models, data, and the tools to create such datasets from scans will be publicly released on our project page, enabling others in the community to easily build large-scale datasets of their own.

Example-Based Microstructure Rendering with Constant Storage
Beibei Wang, Miloš Hašan, Nicolas Holzschuch, Ling-Qi Yan
ACM Transactions on Graphics, 2020 [PDF] [video]

Rendering glinty details from specular microstructure enhances the level of realism, but previous methods require heavy storage for the high-resolution height field or normal map and associated acceleration structures. In this paper, we aim to dynamically generate theoretically infinite microstructure, preventing obvious tiling artifacts while achieving constant storage cost. Unlike traditional texture synthesis, our method supports arbitrary point and range queries, essentially generating the microstructure implicitly. Our method fits the widely used microfacet rendering framework with multiple importance sampling (MIS), replacing commonly used microfacet normal distribution functions (NDFs) such as GGX with a detailed local solution, at a small runtime performance overhead.

Learning Generative Models for Rendering Specular Microgeometry
Alexandr Kuznetsov, Miloš Hašan, Zexiang Xu, Ling-Qi Yan, Bruce Walter, Nima Khademi Kalantari, Steve Marschner, Ravi Ramamoorthi
SIGGRAPH Asia 2019 [PDF] [video]

Rendering specular material appearance is a core problem of computer graphics. While smooth analytical material models are widely used, the high-frequency structure of real specular highlights requires considering discrete, finite microgeometry. Instead of explicit modeling and simulation of the surface microstructure (which was explored in previous work), we propose a novel direction: learning the high-frequency directional patterns from synthetic or measured examples, by training a generative adversarial network (GAN). A key challenge in applying GAN synthesis to spatially varying BRDFs is evaluating the reflectance for a single location and direction without the cost of evaluating the whole hemisphere. We resolve this using a novel method for partial evaluation of the generator network. We are also able to control large-scale spatial texture using a conditional GAN approach. The benefits of our approach include the ability to synthesize spatially large results without repetition, support for learning from measured data, and evaluation performance independent of the complexity of the dataset synthesis or measurement.

Position-Free Monte Carlo Simulation for Arbitrary Layered BSDFs
Yu Guo, Miloš Hašan, Shuang Zhao
SIGGRAPH Asia 2018 [project] [PDF] [video] [code]

Real-world materials are often layered: metallic paints, biological tissues, and many more. Variation in the interface and volumetric scattering properties of the layers leads to a rich diversity of material appearances from anisotropic highlights to complex textures and relief patterns. However, simulating light-layer interactions is a challenging problem. Past analytical or numerical solutions either introduce several approximations and limitations, or rely on expensive operations on discretized BSDFs, preventing the ability to freely vary the layer properties spatially. We introduce a new unbiased layered BSDF model based on Monte Carlo simulation, whose only assumption is the layer structure itself. Our novel position-free path formulation is fundamentally more powerful at constructing light transport paths than generic light transport algorithms applied to the special case of flat layers: because it is based on products of solid-angle measures rather than area measures, it does not contain the high-variance geometry terms of the standard formulation. We introduce two techniques for sampling the position-free path integral: a forward path tracer with next-event estimation and a full bidirectional estimator. We show a number of examples, featuring multiple layers with surface and volumetric scattering, surface and phase function anisotropy, and spatial variation in all parameters.
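
The position-free idea can be illustrated with a random walk through a homogeneous slab in which only depth and an up/down direction are tracked, never a lateral position. The real estimators handle full directions, interface BSDFs, anisotropic phase functions, and next-event estimation, but the shape of the walk is the same.

    import numpy as np

    def slab_walk(tau=1.0, albedo=0.8, n_photons=50_000, seed=0):
        """Position-free random walk in a homogeneous slab of optical
        depth tau: only depth and an up/down direction are tracked.
        Returns Monte Carlo estimates of (reflectance, transmittance)."""
        rng = np.random.default_rng(seed)
        R = T = 0.0
        for _ in range(n_photons):
            depth, down, weight = 0.0, True, 1.0
            while weight > 1e-4:            # kill tiny weights for brevity
                step = rng.exponential(1.0)        # free flight, unit sigma_t
                depth += step if down else -step
                if depth >= tau: T += weight; break  # escaped out the bottom
                if depth <= 0.0: R += weight; break  # escaped back out the top
                weight *= albedo                   # absorb at each scattering
                down = rng.uniform() < 0.5         # isotropic: flip a coin
        return R / n_photons, T / n_photons

    print(slab_walk())

(A real tracer would use Russian roulette rather than simply killing low-weight paths.)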

Rendering Specular Microgeometry with Wave Optics
Ling-Qi Yan, Miloš Hašan, Bruce Walter, Steve Marschner, Ravi Ramamoorthi
SIGGRAPH 2018 [PDF] [video] [supplementary]

Simulation of light reflection from specular surfaces is a core problem of computer graphics. Most existing solutions either make the approximation of providing only a large-area average solution in terms of a fixed BRDF (ignoring spatial detail), or are based only on geometric optics (which is an approximation to more accurate wave optics), or both. We design the first rendering algorithm based on a wave optics model, but also able to compute spatially-varying specular highlights with high-resolution detail. We compute a wave optics reflection integral over the coherence area; our solution is based on approximating the phase-delay grating representation of a micron-resolution surface heightfield using Gabor kernels. Our results show both single-wavelength and spectral solutions to reflection from common everyday objects, such as brushed, scratched and bumpy metals.

Simulating the structure and texture of solid wood
Albert Liu, Zhao Dong, Miloš Hašan, Steve Marschner
SIGGRAPH Asia 2016 [PDF] [video] [supplementary]

Wood is an important decorative material prized for its unique appearance. It is commonly rendered using artistically authored 2D color and bump textures, which reproduces color patterns on flat surfaces well. But the dramatic anisotropic specular figure caused by wood fibers, common in curly maple and other species, is harder to achieve. While suitable BRDF models exist, the texture parameter maps for these wood BRDFs are difficult to author: good results have been shown with elaborate measurements for small flat samples, but these models are not much used in practice. Furthermore, mapping 2D image textures onto 3D objects leads to distortion and inconsistencies. Procedural volumetric textures solve these geometric problems, but existing methods produce much lower quality than image textures. This paper aims to bring the best of all these techniques together: we present a comprehensive volumetric simulation of wood appearance, including growth rings, color variation, pores, rays, and growth distortions. The fiber directions required for anisotropic specular figure follow naturally from the distortions. Our results rival the quality of textures based on photographs, but with the consistency and convenience of a volumetric model. Our model is modular, with components that are intuitive to control, fast to compute, and require minimal storage.
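
As a small taste of the procedural volumetric approach (vastly simpler than the paper's model, which also simulates pores, rays, and the growth distortions that drive fiber direction), growth-ring color can be evaluated at any 3D point as a periodic function of a noise-distorted distance to the tree's axis:

    import numpy as np

    def wood_color(p, ring_width=0.5, distortion=0.3, seed=3):
        """Evaluate a toy solid wood texture at 3D point(s) p: color is a
        periodic function of the distorted distance to the tree axis."""
        rng = np.random.default_rng(seed)
        amp, freq, phase = distortion, 2.0, rng.uniform(0, 2 * np.pi, 3)
        x, y, z = p[..., 0], p[..., 1], p[..., 2]
        r = np.sqrt(x**2 + y**2)                    # distance to the axis
        r = r + amp * np.sin(freq * z + phase[0]) * np.sin(freq * x + phase[1])
        t = 0.5 + 0.5 * np.cos(2 * np.pi * r / ring_width)  # ring phase
        early = np.array([0.55, 0.35, 0.20])        # earlywood / latewood
        late = np.array([0.35, 0.20, 0.10])
        return early * t[..., None] + late * (1 - t[..., None])

    pts = np.random.default_rng(0).uniform(-1, 1, (4, 3))
    print(wood_color(pts))      # one RGB triple per query point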

Position-Normal Distributions for Efficient Rendering of Specular Microstructure
Ling-Qi Yan, Miloš Hašan, Steve Marschner, Ravi Ramamoorthi
SIGGRAPH 2016 [PDF] [video]

Specular BRDF rendering traditionally approximates surface microstructure using a smooth normal distribution, but this ignores glinty effects, easily observable in the real world. While modeling the actual surface microstructure is possible, the resulting rendering problem is prohibitively expensive. Recently, Yan et al. [2014] and Jakob et al. [2014] made progress on this problem, but their approaches are still expensive and lack full generality in their material and illumination support. We introduce an efficient and general method that can be easily integrated in a standard rendering system. We treat a specular surface as a four-dimensional position-normal distribution, and fit this distribution using millions of 4D Gaussians, which we call elements. This leads to closed-form solutions to the required BRDF evaluation and sampling queries, enabling the first practical solution to rendering specular microstructure.
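
A simplified sketch of the element evaluation, assuming axis-aligned Gaussians so every element factors into 1D terms; the paper's elements carry full 4D covariances coupling position and normal, plus a hierarchy that avoids touching every element, and the normalization here is schematic.

    import numpy as np

    def gauss(d, var):
        """Normalized 1D Gaussian density at offset d with variance var."""
        return np.exp(-0.5 * d**2 / var) / np.sqrt(2 * np.pi * var)

    # Elements: 2D position mean, 2D normal mean (slope space), shared vars.
    rng = np.random.default_rng(1)
    pos_mu = rng.uniform(0, 1, (10_000, 2))     # element centers on surface
    nrm_mu = rng.normal(0, 0.3, (10_000, 2))    # element normals
    pos_var, nrm_var = 1e-4, 1e-4               # intrinsic element extents

    def pndf(footprint_mu, footprint_var, n):
        """P-NDF at query normal n under a Gaussian pixel footprint: sum
        over elements of (footprint/position overlap) x (normal lobe).
        The integral of two normalized Gaussians is, in closed form, a
        Gaussian of their mean difference with summed variances."""
        w  = gauss(pos_mu[:, 0] - footprint_mu[0], pos_var + footprint_var)
        w *= gauss(pos_mu[:, 1] - footprint_mu[1], pos_var + footprint_var)
        lobe  = gauss(nrm_mu[:, 0] - n[0], nrm_var)
        lobe *= gauss(nrm_mu[:, 1] - n[1], nrm_var)
        return np.sum(w * lobe) / len(pos_mu)

    print(pndf(footprint_mu=[0.5, 0.5], footprint_var=1e-3, n=[0.0, 0.0]))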

Rendering Glints on High-Resolution Normal-Mapped Specular Surfaces
Ling-Qi Yan*, Miloš Hašan*, Wenzel Jakob, Jason Lawrence, Steve Marschner, Ravi Ramamoorthi
(* dual first authors)
SIGGRAPH 2014 [PDF] [video] [supplementary]

Complex specular surfaces under sharp point lighting show a fascinating glinty appearance, but rendering it is an unsolved problem. Using Monte Carlo pixel sampling for this purpose is impractical: the energy is concentrated in tiny highlights that take up a minuscule fraction of the pixel. We instead compute an accurate solution using a completely different deterministic approach. Our method considers the true distribution of normals on a surface patch seen through a single pixel, which can be highly complex. We show how to evaluate this distribution efficiently, assuming a Gaussian pixel footprint and Gaussian intrinsic roughness. We also take advantage of hierarchical pruning of position-normal space to rapidly find texels that might contribute to a given normal distribution evaluation. Our results show complex, temporally varying glints from materials such as bumpy plastics, brushed and scratched metals, metallic paint and ocean waves.

Discrete Stochastic Microfacet Models
Wenzel Jakob, Miloš Hašan, Ling-Qi Yan, Jason Lawrence, Ravi Ramamoorthi, Steve Marschner
SIGGRAPH 2014 [PDF] [video] [project page]

This paper investigates the rendering of glittery surfaces: those that exhibit shifting random patterns of glints as the surface or viewer moves. It applies both to dramatically glittery surfaces that contain mirror-like flakes and to rough surfaces that exhibit more subtle small-scale glitter, without which most glossy surfaces appear too smooth in close-up. These phenomena can in principle be simulated by high-resolution normal maps, but maps with tiny features create severe aliasing problems under narrow-angle illumination. In this paper we present a stochastic model for the effects of random subpixel structures that generates glitter and spatial noise that behave correctly under different illumination conditions and viewing distances, while also being temporally coherent so that they look right in motion. The model is based on microfacet theory, but it replaces the usual continuous microfacet distribution with a discrete distribution of scattering particles on the surface. A novel stochastic hierarchy allows efficient evaluation in the presence of large numbers of random particles, without ever having to consider the particles individually. This leads to a multiscale procedural BRDF that is readily implemented in standard rendering systems, and which converges back to the smooth case in the limit.

Progressive Light Transport Simulation on the GPU: Survey and Improvements
Tomáš Davidovič, Jaroslav Křivánek, Miloš Hašan, Philipp Slusallek
ACM Transactions on Graphics, 2014 [PDF] [supplementary] [project page]

Graphics Processing Units (GPUs) recently became general enough to enable implementation of a variety of light transport algorithms. However, the efficiency of these GPU implementations has received relatively little attention in the research literature and no systematic study on the topic exists to date. The goal of our work is to fill this gap. Our main contribution is a comprehensive and in-depth investigation of the efficiency of the GPU implementation of a number of classic as well as more recent progressive light transport simulation algorithms. We present several improvements over the state-of-the-art. In particular, our Light Vertex Cache, a new approach to mapping connections of sub-path vertices in Bidirectional Path Tracing on the GPU, outperforms the existing implementations by 30-60%. We also describe a first GPU implementation of the recently introduced Vertex Connection and Merging algorithm [Georgiev et al. 2012], showing that even relatively complex light transport algorithms can be efficiently mapped on the GPU. With the implementation of many of the state-of-the-art algorithms within a single system at our disposal, we present a unique direct comparison and analysis of their relative performance.

Modular Flux Transfer: Efficient Rendering of High-Resolution Volumes with Repeated Structures
Shuang Zhao, Miloš Hašan, Ravi Ramamoorthi, Kavita Bala
SIGGRAPH 2013 [PDF] [video] [project page]

The highest fidelity images to date of complex materials like cloth use extremely high-resolution volumetric models. However, rendering such complex volumetric media is expensive, with brute-force path tracing often the only viable solution. Fortunately, common volumetric materials (fabrics, finished wood, synthesized solid textures) are structured, with repeated patterns approximated by tiling a small number of exemplar blocks. In this paper, we introduce a precomputation-based rendering approach for such volumetric media with repeated structures based on a modular transfer formulation. We model each exemplar block as a voxel grid and precompute voxel-to-voxel, patch-to-patch, and patch-to-voxel flux transfer matrices. At render time, when blocks are tiled to produce a high-resolution volume, we accurately compute low-order scattering, with modular flux transfer used to approximate higher-order scattering. We achieve speedups of up to 12X over path tracing on extremely complex volumes, with minimal loss of quality. In addition, we demonstrate that our approach outperforms photon mapping on these materials.

Interactive Albedo Editing in Path-Traced Volumetric Materials
Miloš Hašan, Ravi Ramamoorthi
ACM Transactions on Graphics, 2013 [PDF] [video]

Materials such as clothing or carpets, or complex assemblies of small leaves, flower petals or mosses, do not fit well into either BRDF or BSSRDF models. Their appearance is a complex combination of reflection, transmission, scattering, shadowing and inter-reflection. This complexity can be handled by simulating the full volumetric light transport within these materials by Monte Carlo algorithms, but there is no easy way to construct the necessary distributions of local material properties that would lead to the desired global appearance. In this paper, we consider one way to alleviate the problem: an editing algorithm that enables a material designer to set the local (single-scattering) albedo coefficients interactively, and see an immediate update of the emergent appearance in the image. This is a difficult problem, since the function from materials to pixel values is neither linear nor a low-order polynomial. We combine the following two ideas to achieve high-dimensional heterogeneous edits: precomputing the homogeneous mapping of albedo to intensity, and a large Jacobian matrix, which encodes the derivatives of each image pixel with respect to each albedo coefficient. Combining these two datasets leads to an interactive editing algorithm with a very good visual match to a fully path-traced ground truth.
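
The Jacobian half of the method admits a compact sketch: with a reference image and a derivative matrix precomputed offline (random stand-ins below), each interactive edit reduces to one matrix-vector product. The paper combines this with the precomputed nonlinear homogeneous albedo-to-intensity mapping to stay accurate far from the reference albedo.

    import numpy as np

    rng = np.random.default_rng(0)
    n_pixels, n_albedos = 1000, 50

    # Offline: path-trace the reference image at albedo a0 and estimate
    # the Jacobian dI/da; random stand-ins take their place here.
    a0 = np.full(n_albedos, 0.6)
    image0 = rng.uniform(0, 1, n_pixels)                  # reference image
    J = rng.uniform(0, 0.02, (n_pixels, n_albedos))       # derivatives

    def preview(a):
        """Interactive edit: first-order prediction of all pixels from
        the precomputed reference and Jacobian; one mat-vec per edit."""
        return image0 + J @ (a - a0)

    a_edit = a0.copy(); a_edit[:10] = 0.9   # brighten ten albedo coefficients
    print(np.max(np.abs(preview(a_edit) - image0)))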

Combining Global and Local Virtual Lights for Detailed Glossy Illumination
Tomáš Davidovič, Jaroslav Křivánek, Miloš Hašan, Philipp Slusallek, Kavita Bala
SIGGRAPH Asia 2010 [PDF] [Supplemental PDF] [PPTX]

Accurately rendering glossy materials in design applications, where previewing and interactivity are important, remains a major challenge. While many fast global illumination solutions have been proposed, all of them work under limiting assumptions on the materials and lighting in the scene. In the presence of many glossy (directionally scattering) materials, fast solutions either fail or degenerate to inefficient, brute-force simulations of the underlying light transport. In particular, many-light algorithms are able to provide fast approximations by clamping elements of the light transport matrix, but they eliminate the part of the transport that contributes to accurate glossy appearance. In this paper we introduce a solution that separately solves for the global (low-rank, dense) and local (high-rank, sparse) illumination components. For the low-rank component we introduce visibility clustering and approximation, while for the high-rank component we introduce a local light technique to correct for the missing illumination. Compared to competing techniques we achieve superior gloss rendering in minutes, making our technique suitable for applications such as industrial design and architecture, where material appearance is critical.

Physical Reproduction of Materials with Specified Subsurface Scattering
Miloš Hašan, Martin Fuchs, Wojciech Matusik, Hanspeter Pfister, Szymon Rusinkiewicz
SIGGRAPH 2010 [PDF] [slides]

We investigate a complete pipeline for measuring, modeling, and fabricating objects with specified subsurface scattering behaviors. The process starts with measuring the scattering properties of a given set of base materials, determining their radial reflection and transmission profiles. We describe a mathematical model that predicts the profiles of different stackings of base materials, at arbitrary thicknesses. In an inverse process, we can then specify a desired reflection profile and compute a layered composite material that best approximates it. Our algorithm efficiently searches the space of possible combinations of base materials, pruning states that violate physical constraints. We validate our process by producing both homogeneous and heterogeneous composites fabricated using a multi-material 3D printer. We demonstrate reproductions that have scattering properties approximating complex materials.
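
Collapsing the radial profiles to single scalars gives a compact illustration of the composition model: the classic adding equations combine the reflectance and transmittance of two layers (assuming each layer behaves identically from above and below), and a brute-force search over stackings of made-up base materials approximates a target reflectance.

    import itertools
    from functools import reduce

    def stack(a, b):
        """Adding equations for two layers given (reflectance R,
        transmittance T). The 1/(1 - Ra*Rb) factor sums the geometric
        series of inter-reflections between the layers."""
        (Ra, Ta), (Rb, Tb) = a, b
        d = 1.0 - Ra * Rb
        return (Ra + Ta * Ta * Rb / d, Ta * Tb / d)

    # Scalar stand-ins for the measured profiles of the base materials.
    base = {"white": (0.60, 0.30), "gray": (0.30, 0.50), "clear": (0.05, 0.90)}
    target_R = 0.42

    best, best_err = None, float("inf")
    for names in itertools.product(base, repeat=3):   # all 3-layer stackings
        R, T = reduce(stack, (base[n] for n in names))
        if abs(R - target_R) < best_err:
            best, best_err = names, abs(R - target_R)
    print(best, best_err)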

Virtual Spherical Lights for Many-Light Rendering of Glossy Scenes
Miloš Hašan, Jaroslav Křivánek, Bruce Walter, Kavita Bala
SIGGRAPH Asia 2009 [PDF] [PPTX] [HLSL shader]

In this paper, we aim to lift the accuracy limitations of many-light algorithms by introducing a new light type, the virtual spherical light (VSL). The illumination contribution of a VSL is computed over a non-zero solid angle, thus eliminating the illumination spikes that virtual point lights used in traditional many-light methods are notorious for. The VSL enables application of many-light approaches in scenes with glossy materials and complex illumination that could previously be rendered only by much slower algorithms. By combining VSLs with the matrix row-column sampling algorithm, we achieve high-quality images in one to four minutes, even in scenes where path tracing or photon mapping take hours to converge.

Automatic Bounding of Programmable Shaders for Efficient Global Illumination
Edgar Velázquez-Armendáriz, Shuang Zhao, Miloš Hašan, Bruce Walter, Kavita Bala
SIGGRAPH Asia 2009 [PDF]

This paper describes a technique to automatically adapt programmable shaders for use in physically-based rendering algorithms. Programmable shading provides great flexibility and power for creating rich local material detail, but only allows the material to be queried in one limited way: point sampling. Physically-based rendering algorithms simulate the complex global flow of light through an environment but rely on higher level information about the material properties, such as importance sampling and bounding, to intelligently solve high dimensional rendering integrals. We propose using a compiler to automatically generate interval versions of programmable shaders that can be used to provide the higher level query functions needed by physically-based rendering without the need for user intervention or expertise. We demonstrate the use of programmable shaders in two such algorithms, multidimensional lightcuts and photon mapping, for a wide range of scenes including complex geometry, materials and lighting.
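
The core idea, hand-written here rather than compiler-generated: an Interval type (as in the path cuts sketch above, extended with float mixing and a sin bound) mirrors the floating-point operations a shader uses, so the same shader source can answer both the usual point queries and conservative bound queries.

    import math

    class Interval:
        """Interval arithmetic on [lo, hi], mirroring float shader ops."""
        def __init__(self, lo, hi): self.lo, self.hi = lo, hi
        def __add__(self, o):
            o = o if isinstance(o, Interval) else Interval(o, o)
            return Interval(self.lo + o.lo, self.hi + o.hi)
        __radd__ = __add__
        def __mul__(self, o):
            o = o if isinstance(o, Interval) else Interval(o, o)
            ps = [self.lo*o.lo, self.lo*o.hi, self.hi*o.lo, self.hi*o.hi]
            return Interval(min(ps), max(ps))
        __rmul__ = __mul__

    def interval_sin(iv):
        """Tight sin bound: endpoints plus interior critical points."""
        vals = [math.sin(iv.lo), math.sin(iv.hi)]
        k = math.ceil(iv.lo / (math.pi / 2))
        while k * math.pi / 2 <= iv.hi:
            vals.append(math.sin(k * math.pi / 2)); k += 1
        return Interval(min(vals), max(vals))

    def shader(u, v, sin=math.sin):
        """A 'programmable shader': same source runs on floats or Intervals."""
        return 0.5 + 0.4 * sin(8.0 * u) * sin(8.0 * v)

    print(shader(0.3, 0.7))                                 # point query
    b = shader(Interval(0.25, 0.30), Interval(0.70, 0.75), interval_sin)
    print(b.lo, b.hi)                                       # bound query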

Tensor Clustering for Rendering Many-Light Animations
Miloš Hašan, Edgar Velázquez-Armendáriz, Fabio Pellacini, Kavita Bala
EGSR 2008 [PDF] [PPT] [video] [comparison video] [diff video]

Rendering animations of scenes with deformable objects, camera motion, and complex illumination, including indirect lighting and arbitrary shading, is a long-standing challenge. Prior work has shown that complex lighting can be accurately approximated by a large collection of point lights. In this formulation, rendering of animation sequences becomes the problem of efficiently shading many surface samples from many lights across several frames. This paper presents a tensor formulation of the animated many-light problem, where each element of the tensor expresses the contribution of one light to one pixel in one frame. We sparsely sample rows and columns of the tensor, and introduce a clustering algorithm to select a small number of representative lights to efficiently approximate the animation. Our algorithm achieves efficiency by reusing representatives across frames, while minimizing temporal flicker. We demonstrate our algorithm in a variety of scenes that include deformable objects, complex illumination and arbitrary shading and show that a surprisingly small number of representative lights is sufficient for high quality rendering. We believe our algorithm will find practical use in applications that require fast previews of complex animations.

Matrix Row-Column Sampling for the Many-Light Problem
Miloš Hašan, Fabio Pellacini, Kavita Bala
SIGGRAPH 2007 [PDF] [PPT]

Rendering complex scenes with indirect illumination, high dynamic range environment lighting, and many direct light sources remains a challenging problem. Prior work has shown that all these effects can be approximated by many point lights. This paper presents a scalable solution to the many-light problem suitable for a GPU implementation. We view the problem as a large matrix of sample-light interactions; the ideal final image is the sum of the matrix columns. We propose an algorithm for approximating this sum by sampling entire rows and columns of the matrix on the GPU using shadow mapping. The key observation is that the inherent structure of the transfer matrix can be revealed by sampling just a small number of rows and columns. Our prototype implementation can compute the light transfer within a few seconds for scenes with indirect and environment illumination, area lights, complex geometry and arbitrary shaders. We believe this approach can be very useful for rapid previewing in applications like cinematic and architectural lighting design.
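
In spirit (with a synthetic random matrix standing in for rows and columns rendered on demand with shadow maps), the algorithm samples a few rows, clusters the reduced columns, renders one representative column per cluster, and scales it to stand for the whole cluster:

    import numpy as np

    rng = np.random.default_rng(0)
    n_pixels, n_lights, n_rows, n_clusters = 4000, 1000, 30, 40

    # Synthetic transfer matrix: A[i, j] = contribution of light j to
    # pixel i. (The real system never stores A; rows and columns are
    # rendered on demand with shadow mapping.)
    U = rng.uniform(0, 1, (n_pixels, 5)); V = rng.uniform(0, 1, (5, n_lights))
    A = U @ V + 0.01 * rng.uniform(0, 1, (n_pixels, n_lights))

    reduced = A[rng.choice(n_pixels, n_rows, replace=False), :]  # "rows"
    norms = np.linalg.norm(reduced, axis=0)
    feats = reduced / np.maximum(norms, 1e-12)           # directions only

    centers = feats[:, rng.choice(n_lights, n_clusters, replace=False)].copy()
    for _ in range(10):                 # tiny k-means over reduced columns
        d = ((feats[:, :, None] - centers[:, None, :]) ** 2).sum(axis=0)
        label = np.argmin(d, axis=1)
        for c in range(n_clusters):
            if np.any(label == c):
                centers[:, c] = feats[:, label == c].mean(axis=1)

    image = np.zeros(n_pixels)
    for c in range(n_clusters):
        members = np.flatnonzero(label == c)
        if members.size == 0: continue
        p = norms[members] / norms[members].sum()
        rep = rng.choice(members, p=p)   # pick representative by norm
        image += (norms[members].sum() / norms[rep]) * A[:, rep]  # "column"

    print("relative error:",
          np.linalg.norm(image - A.sum(axis=1)) / np.linalg.norm(A.sum(axis=1)))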

Direct-to-Indirect Transfer for Cinematic Relighting
Miloš Hašan, Fabio Pellacini, Kavita Bala
SIGGRAPH 2006 [PDF] [PPT] [video]

This paper presents an interactive GPU-based system for cinematic relighting with multiple-bounce indirect illumination from a fixed viewpoint. We use a deep frame-buffer containing a set of view samples, whose indirect illumination is recomputed from the direct illumination on a large set of gather samples, distributed around the scene. This direct-to-indirect transfer is a linear transform that is particularly large given the size of the view and gather sets, making it difficult to precompute, store, and multiply with. We address this problem by representing the transform as a set of sparse matrices encoded in wavelet space. A hierarchical construction is used to impose a wavelet basis on the unstructured gather cloud, and an image-based approach is used to map the sparse matrix computations to the GPU. We precompute the transfer matrices using a hierarchical algorithm and a variation of photon mapping in less than three hours on one processor. We achieve high-quality indirect illumination at 10-20 frames per second for complex scenes with over 2 million polygons, with diffuse and glossy materials, and arbitrary direct lighting models (expressed using shaders). We compute per-pixel indirect illumination without the need of irradiance caching or other subsampling techniques.

Ph.D. Thesis

Matrix Sampling for Global Illumination
Miloš Hašan (advised by Kavita Bala)
Cornell, August 2009

Global illumination is the problem of rendering images by simulating the full light transport in a scene, including the inter-reflection of light between surfaces. One general approach that has gained popularity over the last decade is the many-light formulation, which approximates global illumination using many automatically generated virtual point lights. In this thesis, we address two fundamental issues that arise with the many-light formulation: scalability and generality. We present a new view of the many-light approach, by treating it as a large matrix of light-surface contributions. Our insight is that there is usually a significant amount of structure and redundancy in the matrix; this suggests that only a tiny subset of the elements might be needed for accurate reconstruction. First, we present a scalable rendering algorithm that exploits this insight by sampling a small subset of matrix rows and columns to reconstruct the image. This algorithm is very flexible in terms of the material and light types it can handle, and achieves high-quality rendering of complex scenes in several seconds on consumer-level graphics hardware. Furthermore, we extend this approach to render whole animations, by considering a 3D tensor of light-surface contributions over time. This allows us to further decrease the necessary number of samples by exploiting temporal coherence. We also address a long-standing limitation of all previous many-light approaches that leads to fundamentally incorrect results in scenes with glossy materials, by introducing a new virtual light type that does not have this limitation. Finally, we describe an algorithm that computes a wavelet-compressed approximation to the lighting matrix, which allows for interactive light placement in a scene with global illumination.

[link]