FOR IMMEDIATE RELEASE
11 June 2019

Media Contact:
Emily Drake
Media Relations Manager
+1.312.673.4758
emily_drake@SIGGRAPH.org

Novel Denoising Method Generates Sharper Photorealistic Images Faster

Researchers to present work on this post-processing technique at SIGGRAPH 2019

CHICAGO—Monte Carlo rendering methods are behind many of the realistic images in games and movies. They simulate the complex physics of light and cameras, averaging many randomly placed samples per pixel to generate high-quality renderings of diverse scenes and image features. But Monte Carlo rendering is slow, often taking hours or even days to produce a single image, and the results are frequently still grainy, or “noisy.”

An international team of computer scientists from MIT, Adobe, and Aalto University has developed a method for producing higher-quality images and scene designs in much less time, using a deep-learning approach that considerably cuts the noise in rendered images. Their method yields sharper images that capture intricate detail from the raw samples, including effects such as shadows, indirect lighting, motion blur, and depth of field.

The researchers are set to present their work at SIGGRAPH 2019, held 28 July–1 August in Los Angeles. This annual gathering showcases the world’s leading professionals, academics, and creative minds at the forefront of computer graphics and interactive techniques.

“Our algorithm can produce clean images from noisy input images with very few samples, and could be useful for producing quick rendered previews while iterating on scene design,” says study lead author Michaël Gharbi, a research scientist at Adobe. Gharbi began the research as a PhD student at MIT in the lab of Frédo Durand, who is also a coauthor.

The team’s work focuses on so-called “denoising,” a post-processing technique that reduces image noise in Monte Carlo renderings: it removes the grain while retaining the detail and sharpness of the image. In previous work, computer scientists developed methods that smooth the noise out by averaging each pixel of the rendered image with its neighboring pixels.
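To make that idea concrete, the sketch below (written for this explanation, not the researchers’ code) shows the traditional pixel-space approach in its simplest form: each output pixel becomes a Gaussian-weighted average of the noisy pixels around it. Production denoisers also weight neighbors using auxiliary features such as depth and surface normals, but the underlying averaging is the same.

    import numpy as np

    def gather_denoise(noisy, radius=2, sigma=1.0):
        """Toy pixel-space denoiser: each output pixel is a Gaussian-weighted
        average of its (2*radius+1)^2 neighborhood in the noisy input."""
        h, w, c = noisy.shape
        # Precompute Gaussian weights for every neighborhood offset.
        offsets = np.arange(-radius, radius + 1)
        gy, gx = np.meshgrid(offsets, offsets, indexing="ij")
        weights = np.exp(-(gx**2 + gy**2) / (2.0 * sigma**2))
        weights /= weights.sum()

        padded = np.pad(noisy, ((radius, radius), (radius, radius), (0, 0)), mode="reflect")
        out = np.zeros_like(noisy)
        for dy in offsets:
            for dx in offsets:
                # Shifted copy of the image, weighted by the kernel entry.
                out += weights[dy + radius, dx + radius] * \
                       padded[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
        return out

    # Example: smooth a random stand-in for a noisy 64x64 RGB rendering.
    noisy_image = np.random.rand(64, 64, 3).astype(np.float32)
    clean_estimate = gather_denoise(noisy_image, radius=2, sigma=1.0)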

“This works reasonably well, and several movies have actually used this in production,” notes coauthor Tzu-Mao Li, a recent PhD graduate from MIT who also studied under Durand. “However, if the images are too noisy, oftentimes the post-processing methods are not able to recover clean and sharp images. Usually users still need hundreds of samples per pixel on average for an image with reasonable quality — a tedious, time-consuming process.”

The situation is somewhat comparable to editing a photo in a graphics program: a user who works from a compressed copy rather than the original raw file will likely never get a final image as clear, sharp, and high-resolution as one edited from the raw data. Image denoising poses a similar, though more complex, problem.

To this end, the researchers’ new method works with the Monte Carlo samples directly, rather than with averaged, noisy pixel values in which most of the per-sample information has already been lost. Unlike typical deep-learning methods that operate on images or videos, the researchers demonstrate a new type of convolutional network that learns to denoise renderings directly from the raw set of Monte Carlo samples rather than from the reduced, pixel-based representation.
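As a purely illustrative picture (the feature list and array layout below are assumptions for this example, not the paper’s specification), the input to a sample-based denoiser can be thought of as a four-dimensional array that keeps every sample for every pixel, in contrast to the flat image a pixel-based method sees:

    import numpy as np

    # Illustrative layout only: S raw Monte Carlo samples are kept per pixel,
    # each carrying its own feature vector instead of being averaged away.
    H, W, S = 64, 64, 8                     # image height, width, samples per pixel
    FEATURES = ["r", "g", "b",              # sample radiance
                "depth",                    # camera-space depth
                "nx", "ny", "nz"]           # shading normal

    samples = np.zeros((H, W, S, len(FEATURES)), dtype=np.float32)

    # A pixel-based denoiser only ever sees the per-pixel average ...
    pixel_image = samples[..., :3].mean(axis=2)     # shape (H, W, 3)
    # ... whereas a sample-based network consumes `samples` directly, so
    # per-sample variation (e.g., one bright outlier among eight samples)
    # is still available to the model.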

A key part of their work is a novel kernel-predicting framework that “splats” individual samples, with their colors and other features, onto nearby pixels to build up the final image. In traditional image processing, a kernel is a small grid of weights used for operations such as blurring or sharpening. Splatting spreads each sample’s contribution across all the pixels it affects, which is a natural fit for motion blur and depth of field, where a single sample influences many pixels, and makes it easier to smooth out a noisy region.

In this work, the team’s splatting algorithm generates a 2D kernel for each sample, and “splats” the sample onto the image. “We argue that this is a more natural way to do the post-processing,” says Li. The team trained their network using a random scene generator and extensively tested their method on a variety of realistic scenes, including various lighting scenarios such as indirect and direct illumination.
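The snippet below is a minimal, hypothetical sketch of just that splatting and normalization step, using randomly generated stand-ins for the per-sample kernels a trained network would predict; the actual network architecture and sample features are described in the paper.

    import numpy as np

    def splat_samples(sample_colors, sample_kernels, k=5):
        """Toy splatting step: each sample spreads ("splats") its color onto a
        k x k pixel neighborhood using its own predicted 2-D kernel.

        sample_colors:  (H, W, S, 3)    per-sample RGB values
        sample_kernels: (H, W, S, k, k) per-sample kernel weights
        """
        H, W, S, _ = sample_colors.shape
        r = k // 2
        image = np.zeros((H + 2 * r, W + 2 * r, 3), dtype=np.float32)
        weight = np.zeros((H + 2 * r, W + 2 * r, 1), dtype=np.float32)

        for y in range(H):
            for x in range(W):
                for s in range(S):
                    kern = sample_kernels[y, x, s]            # (k, k)
                    color = sample_colors[y, x, s]            # (3,)
                    # Accumulate this sample's weighted color into nearby pixels.
                    image[y:y + k, x:x + k] += kern[..., None] * color
                    weight[y:y + k, x:x + k] += kern[..., None]

        # Crop the padding and normalize by the accumulated weights.
        image = image[r:r + H, r:r + W]
        weight = weight[r:r + H, r:r + W]
        return image / np.maximum(weight, 1e-8)

    # Example with random stand-ins for network-predicted kernels:
    H, W, S, K = 32, 32, 4, 5
    colors = np.random.rand(H, W, S, 3).astype(np.float32)
    kernels = np.random.rand(H, W, S, K, K).astype(np.float32)
    denoised = splat_samples(colors, kernels, k=K)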

“Our method gives cleaner outputs at very low sample counts, where previous methods typically struggle,” adds Gharbi.

In future work, the researchers intend to improve the scalability of their method, extend it to additional sample features, and explore techniques for enforcing frame-to-frame smoothness in denoised image sequences.

The paper, “Sample-based Monte Carlo Denoising Using a Kernel-Splatting Network,” is also coauthored by Miika Aittala at MIT and Jaakko Lehtinen at Aalto University and Nvidia. For more details and a video, visit the team’s project page.

###

About ACM, ACM SIGGRAPH, and SIGGRAPH 2019

ACM, the Association for Computing Machinery, is the world’s largest educational and scientific computing society, uniting educators, researchers, and professionals to inspire dialogue, share resources, and address the field’s challenges. ACM SIGGRAPH is a special interest group within ACM that serves as an interdisciplinary community where researchers, artists, and technologists come together to advance computer graphics and interactive techniques. The SIGGRAPH conference is the world’s leading annual interdisciplinary educational experience, inspiring transformative advancements across the disciplines of computer graphics and interactive techniques. SIGGRAPH 2019, the 46th annual conference hosted by ACM SIGGRAPH, will take place 28 July–1 August at the Los Angeles Convention Center.

To register for SIGGRAPH 2019 and hear from the authors themselves, visit s2019.siggraph.org/register.