A Denoiser Can Do Much More than Just Clean Noise
Nearly all image processing tasks require access to some “approximate” notion of the probability density function of natural images. Estimating this density directly is generally intractable, especially in the high dimensions involved. Rather than approximating the distribution head-on, the image processing community has consequently built algorithms that—either explicitly or implicitly—incorporate key features of the unknown distribution of natural images. In particular, researchers have proposed very efficient denoising algorithms (i.e., algorithms that remove noise from images, which is the simplest inverse problem) and embedded valuable characteristics of natural images in them. The driving question is thus as follows: How can we systematically leverage these algorithms and deploy their implicit information about the distribution in more general tasks?
Consider a noisy image observation $y = x + e$, where $x$ is the unknown clean image and $e$ is a contaminating noise vector, commonly assumed to be zero-mean white Gaussian noise. A denoising machine $D(\cdot)$ receives $y$ and produces an estimate $\hat{x} = D(y)$ of the underlying clean image.
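To make the setup concrete, here is a minimal sketch of the observation model and a denoising machine; the toy image, the noise level, and the use of a median filter as $D(\cdot)$ are illustrative assumptions rather than choices prescribed by the text.

```python
# Minimal sketch of the observation model y = x + e, with a median
# filter from scipy standing in for the denoising machine D(.).
import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)

x = np.zeros((64, 64))     # toy "clean" image: a bright square
x[16:48, 16:48] = 1.0
sigma = 0.1                # assumed noise standard deviation
y = x + sigma * rng.standard_normal(x.shape)   # y = x + e

def D(img):
    """A simple denoiser: 3x3 median filtering (any denoiser would do)."""
    return median_filter(img, size=3)

x_hat = D(y)
print("noisy MSE:   ", np.mean((y - x) ** 2))      # about sigma^2
print("denoised MSE:", np.mean((x_hat - x) ** 2))  # noticeably smaller
```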
The recent development of sophisticated and well-performing denoising algorithms has led researchers to believe that current methods have reached the ceiling in terms of noise reduction performance. This belief comes from the observation that substantially different algorithms lead to nearly the same denoising performance; it has been corroborated by theoretical studies that aimed to derive denoising performance bounds. These insights led researchers to conclude that improving image denoising algorithms may be a task with diminishing returns, or to put it more bluntly: a dead end.
Surprisingly, a consequence of this realization is the emergence of a new and exciting area of research: the leveraging of denoising engines to solve other, far more challenging inverse problems. Examples of such problems include image deblurring, super-resolution imaging, inpainting, demosaicing, and tomographic reconstruction. The basis for achieving this goal resides in the formulation of an inverse problem as a general optimization task that seeks to solve

$$\hat{x} = \arg\min_x \; \ell(y, x) + \lambda \rho(x).$$
The term $\ell(y, x)$ measures the fidelity of a candidate image $x$ to the observation $y$ and plays the role of a log-likelihood, the regularizer $\rho(x)$ encodes the prior by assigning low values to plausible natural images and high values to improbable ones, and the parameter $\lambda > 0$ balances the two forces.
For the denoising problem, the choice of $\ell(y, x) = \frac{1}{2}\|y - x\|_2^2$ is natural. More generally, a linear inverse problem with a known measurement operator $H$ (a blur in deblurring, blur followed by subsampling in super-resolution, a pixel mask in inpainting) calls for $\ell(y, x) = \frac{1}{2}\|y - Hx\|_2^2$.
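As a small illustration of these fidelity choices, the sketch below implements both terms; modeling $H$ as a Gaussian blur is an assumption made purely for demonstration.

```python
# Hedged sketch of the two fidelity terms; H is modeled as a Gaussian
# blur only for illustration, and any linear operator could replace it.
import numpy as np
from scipy.ndimage import gaussian_filter

def ell_denoising(y, x):
    """(1/2)||y - x||^2: fidelity for the denoising problem (H = I)."""
    return 0.5 * np.sum((y - x) ** 2)

def H(img):
    """Assumed measurement operator for the deblurring example."""
    return gaussian_filter(img, sigma=1.5)

def ell_deblurring(y, x):
    """(1/2)||y - Hx||^2: fidelity for the deblurring problem."""
    return 0.5 * np.sum((y - H(x)) ** 2)
```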
How can we leverage a given powerful denoising machine $D(\cdot)$ to solve general inverse problems of this form? The Plug-and-Play Priors (PPP) framework [2] offers an elegant answer. Introducing an auxiliary variable $\bar{x}$, we consider the simplified¹ objective

$$\min_{x, \bar{x}} \; \ell(y, x) + \frac{\mu}{2}\|\bar{x} - x\|_2^2 + \lambda \rho(\bar{x}),$$

in which the regularization acts on $\bar{x}$ while the quadratic penalty, weighted by $\mu > 0$, ties $\bar{x}$ to $x$.
We can minimize the above objective with alternating optimization techniques. For example, consider a deblurring problem with $\ell(y, x) = \frac{1}{2}\|y - Hx\|_2^2$, where $H$ is a known blur operator. With $\bar{x}$ fixed, the update of $x$ is a simple least-squares problem. With $x$ fixed, the update of $\bar{x}$ amounts to

$$\bar{x} = \arg\min_{\bar{x}} \; \frac{\mu}{2}\|\bar{x} - x\|_2^2 + \lambda \rho(\bar{x}),$$

which is nothing but a denoising task: the removal of white Gaussian noise from the “noisy” image $x$. The key idea of PPP is to replace this step with a direct application of the given denoiser, $\bar{x} = D(x)$, thereby exploiting the prior that is implicitly buried in $D(\cdot)$ without ever specifying $\rho$.
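The sketch below spells out this alternation for the deblurring example. It is a minimal illustration, assuming a symmetric Gaussian blur for $H$ (so that $H^T = H$), a median filter as the plugged-in denoiser, and a few inner gradient steps for the least-squares update; none of these choices is dictated by the PPP framework itself.

```python
# Hedged sketch of a Plug-and-Play iteration for deblurring: alternate
# an approximate least-squares x-update with a denoising x_bar-update.
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def H(img):
    """Assumed blur operator; a Gaussian kernel is symmetric, so H^T = H."""
    return gaussian_filter(img, sigma=1.5)

def D(img):
    """The plugged-in denoiser; here, an assumed 3x3 median filter."""
    return median_filter(img, size=3)

def ppp_deblur(y, mu=0.5, outer_iters=30, inner_iters=10, step=0.2):
    x, x_bar = y.copy(), y.copy()
    for _ in range(outer_iters):
        # x-update: gradient steps on (1/2)||y - Hx||^2 + (mu/2)||x - x_bar||^2
        for _ in range(inner_iters):
            grad = H(H(x) - y) + mu * (x - x_bar)
            x = x - step * grad
        # x_bar-update: the denoising step, handled by the plug-in denoiser
        x_bar = D(x)
    return x
```

Each outer iteration thus calls the denoiser exactly once, and the prior $\rho$ never appears explicitly anywhere in the code.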
Inspired by the PPP rationale, the framework of Regularization by Denoising (RED) [1] takes a different route and defines an explicit regularizer that is built directly from the denoiser:

$$\rho_{\mathrm{RED}}(x) = \frac{1}{2}\, x^T \big( x - D(x) \big).$$
Put simply, the value of the above penalty function is low if the cross-correlation between the candidate image $x$ and its denoising residual $x - D(x)$ is small, or if the residual itself is negligible; in either case, $x$ is an image that the denoiser perceives as essentially clean.
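Evaluating this penalty takes a single call to the denoiser, as the sketch below shows; the median filter and the toy piecewise-constant image are assumptions made for illustration only.

```python
# Hedged sketch: evaluating rho_RED(x) = (1/2) x^T (x - D(x)), with a
# median filter standing in for the denoiser D(.).
import numpy as np
from scipy.ndimage import median_filter

def D(img):
    return median_filter(img, size=3)

def rho_red(x):
    residual = x - D(x)                # the denoising residual
    return 0.5 * np.sum(x * residual)  # cross-correlation of x with it

rng = np.random.default_rng(0)
x = np.zeros((64, 64))
x[16:48, 16:48] = 1.0                  # an essentially "clean" image
print(rho_red(x))                                       # small penalty
print(rho_red(x + 0.2 * rng.standard_normal(x.shape)))  # much larger
```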
What are the mathematical properties of the RED prior? Can we hope to compute its derivative? Recall that scientists often formulate state-of-the-art denoising functions as optimization problems; therefore, computing the derivative of $D(x)$, which the gradient of $\rho_{\mathrm{RED}}$ seemingly requires, may appear hopeless. Remarkably, for denoisers that satisfy the conditions mentioned below, the gradient admits the strikingly simple form

$$\nabla \rho_{\mathrm{RED}}(x) = x - D(x),$$

with no differentiation of the denoiser whatsoever. Moreover, when the denoiser is also passive (the spectral radius of its Jacobian does not exceed one), the RED prior is convex; if the fidelity term $\ell(y, x)$ is convex too, the overall objective is convex as well, thus guaranteeing global convergence to the optimum. One can flexibly treat this task with a wide variety of first-order optimization procedures, as the gradient is simple to obtain and necessitates only a single activation of the denoiser. In its formal form, RED requires the chosen denoiser to meet some strict conditions, including local homogeneity, differentiability, and Jacobian symmetry. From an empirical standpoint, however, RED-based recovery algorithms seem to be highly stable and capable of incorporating any denoising algorithm as a regularizer—from the simplest median filtering to state-of-the-art deep learning methods—and treating general inverse problems very effectively.
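A first-order RED solver is then only a few lines long, as the following sketch illustrates for deblurring. The gradient expression follows the formula above; the Gaussian blur, the median-filter denoiser, and the step-size and parameter values are assumptions for the sake of the example.

```python
# Hedged sketch of a steepest-descent RED solver for deblurring:
#   minimize (1/2)||y - Hx||^2 + lam * rho_RED(x),
# using the gradient formula grad rho_RED(x) = x - D(x).
import numpy as np
from scipy.ndimage import gaussian_filter, median_filter

def H(img):
    """Assumed blur operator; a Gaussian kernel is symmetric, so H^T = H."""
    return gaussian_filter(img, sigma=1.5)

def D(img):
    """Any denoiser can serve as the regularization engine."""
    return median_filter(img, size=3)

def red_deblur(y, lam=0.2, iters=100, step=0.5):
    x = y.copy()
    for _ in range(iters):
        data_grad = H(H(x) - y)     # H^T (Hx - y), using H^T = H
        prior_grad = x - D(x)       # one denoiser activation per step
        x = x - step * (data_grad + lam * prior_grad)
    return x

# Usage on a toy blurred-and-noisy observation:
rng = np.random.default_rng(0)
x_true = np.zeros((64, 64))
x_true[16:48, 16:48] = 1.0
y = H(x_true) + 0.05 * rng.standard_normal(x_true.shape)
x_hat = red_deblur(y)
print("observed MSE:", np.mean((y - x_true) ** 2))
print("restored MSE:", np.mean((x_hat - x_true) ** 2))
```

Swapping the median filter for a stronger denoiser changes nothing else in the solver, which is precisely the appeal of the RED construction.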
The PPP and RED frameworks pose new and exciting research questions. The gap between theory and practice has inspired the development of a series of new variations of RED’s prior, as well as novel numerical algorithms. Provable convergence guarantees further support these new methods, broadening the family of denoising machines that one can use to solve general inverse problems. Another exciting line of research seeks a rigorous connection between RED and PPP, with the hope that such an understanding will lead to improved regularization schemes and optimizers. On the machine learning front, RED solvers suggest novel deep learning architectures that replace the traditional nonlinear activation functions—like rectified linear units or sigmoid functions—with well-performing denoising algorithms. This approach offers new ways for researchers to train data-driven solvers for the RED functional, with the hope of ultimately achieving superior recovery in fewer iterations than the analytic approach.
This article is based on Yaniv Romano’s SIAM Activity Group on Imaging Science Early Career Prize Lecture at the 2020 SIAM Conference on Imaging Science, which took place virtually last year. Romano’s presentation is available on SIAM’s YouTube Channel.
1 Here we present a simplified version of the original PPP objective by replacing the hard constraint $\bar{x} = x$ with a quadratic penalty; the original formulation [2] enforces the constraint via the alternating direction method of multipliers (ADMM).
References
[1] Romano, Y., Elad, M., & Milanfar, P. (2017). The little engine that could: Regularization by denoising (RED). SIAM J. Imaging Sci., 10(4), 1804-1844.
[2] Venkatakrishnan, S.V., Bouman, C.A., & Wohlberg, B. (2013). Plug-and-play priors for model based reconstruction. In 2013 IEEE Global Conference on Signal and Information Processing (pp. 945-948). Austin, TX: IEEE.
About the Authors
Yaniv Romano
Assistant Professor, Technion – Israel Institute of Technology
Yaniv Romano is an assistant professor in the Departments of Electrical Engineering and Computer Science at the Technion – Israel Institute of Technology. He was a postdoctoral researcher in the Department of Statistics at Stanford University and completed his Ph.D. in electrical engineering at the Technion. Romano’s research focuses on selective inference, predictive inference, machine learning, and computational imaging.
Michael Elad
Professor, Technion – Israel Institute of Technology
Michael Elad is a professor in the Computer Science Department at the Technion – Israel Institute of Technology. His research focuses on inverse problems, sparse representations, and machine learning in the wider context of signal and image processing applications. Elad has served as editor-in-chief of the SIAM Journal on Imaging Sciences since 2016 and was elected as a SIAM Fellow in 2018.
Peyman Milanfar
Principal Scientist/Director, Google Research
Peyman Milanfar is a principal scientist/director at Google Research, where he leads the Computational Imaging team. He was a professor of electrical engineering at the University of California, Santa Cruz from 1999 to 2014 and Associate Dean for Research at the Baskin School of Engineering from 2010 to 2012. He is a Distinguished Lecturer of the IEEE Signal Processing Society and a Fellow of the IEEE.