
Efficient shift-variant image restoration using deformable filtering (Part II): PSF field estimation

Abstract

We present a two-step technique for estimating the point spread function (PSF) field from a single star field image affected by shift-variant (SV) blur. The first step estimates the best-fitting PSF for each block of an overlapping block grid. We propose a local image model consisting of a pattern (the PSF) being replicated at arbitrary locations and with arbitrary weights. We follow an efficient alternate marginal optimization approach for estimating (1) the most likely pattern, and (2) the locations where it appears in the block, with sub-pixel accuracy. The second step uses linear dimensionality reduction and nonlinear spatial filtering for estimating the entire PSF field from the grid of local PSF estimates. We simulate SV blur on realistic synthetic star fields to assess the accuracy of the method for this kind of image, for different blurs, star densities, and Poisson counts. The results indicate a moderately low error and very robust behavior against noise and artifacts. We also apply our method to real astronomical images, and demonstrate that the method provides relevant information about the underlying structure of the actual telescope and atmosphere PSF fields. We use a variant of the method proposed in Part I to compensate for the observed blur.

Introduction

Shift-variant (SV) image restoration requires knowledge of the point spread function (PSF) at each image location. If we have access to the imaging device, and the capture conditions are known, it may be possible to obtain the PSF field by pre-calibration. In many practical situations, though, we do not have access to this information. Moreover, the PSF field may change due to factors that are beyond our control (e.g., atmospheric turbulence, device temperature, vibrations, fog, relative movement of camera and objects). The PSF field must then be estimated solely from the observed image(s) we want to restore.

The most common approach for estimating the blurring kernel from a single image consists of formulating a joint optimization problem, often based on statistical models of the image and the degradation, which is used to estimate both the uncorrupted image and the kernel. Typically, a starting guess is refined by alternating between estimating the image (assuming the kernel is known) and the PSF kernel (assuming the image is known). Apart from other potential problems like convergence and stability, adding such an outer loop to the estimation makes it especially heavy in computational terms, as each of these marginal estimations is, by itself, computationally expensive. To the best of the authors’ knowledge, the intrinsic complexity of these approaches has prevented their application to the general case of SV blur kernels (although special cases have been treated, like considering foreground and background layers, e.g.,[1]).

Furthermore, as blind restoration problems are intrinsically highly ill-posed, stable solutions have mostly been obtained using some prior information about the kernel, adapting the problem solution to particular situations. Many methods benefit computationally from using a restricted (e.g., parametric) PSF model: prior knowledge is applied in camera motion[2, 3], parametric models are used in defocus and lens aberration correction[4, 5], sparse characteristics are exploited for frosted glass[6], etc. In the specific case of PSF field estimation in astronomical images, several authors have studied orthogonal representations to characterize observed PSFs from stars[7, 8]. The underlying motivation is to provide a tool that is robust against noise, flexible, and possibly adaptive, without imposing a narrow structure on the PSF field. Such representations may also facilitate PSF field interpolation (see, e.g.,[9, 10]). In the data-adaptive case, linear dimensionality reduction (by singular value decomposition (SVD) or principal component analysis) deserves special mention, as it connects directly to the idea of deformable filters[11] approached in the companion paper, Part I. The resulting function bases are optimal in a least-squares sense, and they are ranked in terms of their energy contribution to the PSF field description, so they can be selected to filter measurement noise in the estimation in a way that can easily be automated with techniques like Generalized Cross Validation[12]. Not surprisingly, such dimensionality reduction techniques have previously been used to estimate the PSF field[13]. However, they have typically been used for special cases, such as for modeling particular devices (for instance, the Large Synoptic Survey Telescope[14] or the Advanced Camera for Surveys on the Hubble Space Telescope[15]), adapted to particular stars as the ideal PSF reference, or for modeling gravitational lenses[16, 17]. Hence, these approaches lack generality in the sense that they cannot be used as a general, “knowledge blind” tool for astronomical PSF field estimation. Only a few of the cited methods (like[18]) intend to apply their PSF estimation results to image deblurring, and none of them draw the clear connection between image restoration and PSF field estimation.

To alleviate the previously mentioned computational bottleneck, many techniques analyze the image in search of local features which provide information about the PSFs across the image, such as edges and corners[4]. These groups of pixels give clues which are used to compute point and line spread functions without user intervention. The idea of extracting local information about the PSF to build local PSF estimates is especially relevant for the SV blur case. Considering the lack of generality of the most common approaches outlined above, and taking the benefits of using local estimates into account, we design here a novel approach to astronomical PSF field estimation. It is based on two simple observations relating to the fundamental characteristics of astronomical imagery, as opposed to typical photographic images.

First, astronomical images typically contain an abundance of stars, which can be modeled well as ideal point light sources. While only a narrow set of typical photographic images present enough repeated patterns, such as bright distant lights, to be used directly for characterizing the PSF field, the presence of stars in astronomical images allows for a particularly simple way to locally estimate the PSF, by weighted averaging, after sub-pixel localization. Note that, even under the smoothness assumption, obtaining a more or less dense set of local PSF estimates is not enough to fully characterize the blur, and a certain type of regularization and interpolation of that information to the rest of the image locations is necessary.

This is where our second observation comes in: we take advantage of the fact that the PSF field in astronomical images is usually simpler than in typical photographic images, as there is no foreground–background structure. To a first approximation, PSFs result from the composition of the telescope and atmosphere PSF fields (for Earth-based observations), and they usually vary smoothly across the image field. Typical photographic images, on the other hand, may present a very rich variability in PSF field structures, due to differences in focus and/or relative speed of objects in the scene. Only in particular cases (e.g., camera movement and/or defocus, still long-distance objects) may one expect to find a smooth PSF field. Furthermore, only in an even narrower set of cases do typical photographic images present enough repeated patterns (e.g., bright distant lights) to be used directly for characterizing the PSF field, similarly to the star field case.

Hence, our approach is twofold, and sequential. The first step consists of estimating the most likely local PSF for each block of a set of overlapping image blocks covering the whole image, according to a simple local image model. The second step extends and refines these local PSF estimates through linear dimensionality reduction, nonlinear filtering of outliers, and spatial interpolation. Whereas the first part is completely model-based, the second part is presented here, to a large extent, in an ad hoc way. This does not diminish its strong connection with the ideas presented in the companion paper, Part I, about using the concept of deformable kernels to deal with smooth PSF fields.a

There is a third fundamental characteristic of astronomical images that needs to be mentioned. Astronomical images have a huge dynamic range compared to typical photographic images, and very often, the search for relevant information requires processing low-contrast details in a contrast-adaptive fashion. Therefore, traditional error measures, such as quadratic error, fail to describe the quality of a restoration method for star fields. Evaluating the quality of a restoration method on this kind of image is, thus, challenging. Quantitatively assessing the quality of the deblurring results has therefore not been attempted here.

The rest of the article is organized as follows. Section 2 presents a simple image model for SV blurred star fields to be used as a local approximation for image blocks. Then, Section 3 describes an alternate optimization algorithm for obtaining, through maximum likelihood, an estimate of the local PSFs. Section 4 describes a generic approach for estimating a smooth PSF field from a regular grid of local PSF estimates. A broad set of experiments, and their corresponding results, are described and discussed in Section 5. First (Section 5.1), objective measurements of the PSF field estimation quality are made by means of simulated star fields using two synthetic PSF fields. Then (Section 5.2), real astronomical images are analyzed to estimate their PSF field, and these estimates are used to restore the real images. Section 6 concludes the article.

A smooth pattern field model for images

In this section, we describe a simple yet effective image model for star fields subject to smoothly varying SV blur, e.g., due to a combination of atmospheric blur and non-ideal telescope optics. The model is conceptually much wider and potentially applicable to very different situations, like when observing repeated patterns in typical photographic images, possibly subject to certain variations depending on their image location. Although future instantiations of this model may attack the problem of directly characterizing the smooth pattern field for the whole image, we follow a simpler approach here, consisting of first characterizing local regions of the image by using a single reference pattern. Despite this simplification, the local model does account for the error made by modeling the actual pattern within a block as constant. Figure 1 illustrates the idea of obtaining local PSF estimates on a regular spatial grid, using overlapping blocks.

Figure 1

General strategy applied for estimating the PSF field. The first step is to estimate the local PSF on a grid of locations, using overlapping blocks. From these local estimates, the PSF field can be estimated, assuming it varies smoothly across the image. The method is presented here in its most general way (grouping local features).

Image model

We define a pattern field $p_d(\mathbf{x}; \mathbf{d}(\mathbf{x}_0))$ as a special type of four-dimensional function which associates a certain spatial pattern $p_d(\mathbf{x})$ to each image location $\mathbf{x}_0$. The spatial pattern $p_d(\mathbf{x})$ depends on a vector of hidden parameters $\mathbf{d}(\mathbf{x}_0)$ (usually unknown a priori) which, in turn, depends on the image location $\mathbf{x}_0$. If the pattern depends smoothly (i.e., continuously and with continuous first derivative) on the parameter vector $\mathbf{d}$, and each component of $\mathbf{d}$ is a smooth function of $\mathbf{x}_0$, then $p_d(\mathbf{x}; \mathbf{d}(\mathbf{x}_0))$ can be viewed as a smooth pattern field. The reader familiar with SV blur will already have noted the applicability of such smoothly varying pattern fields to characterize the PSF at any given image location, in some favorable cases (as discussed in the introduction). The star field, then, can be modeled as the result of blurring a set of ideal point sources (Dirac delta functions) with a shift-variant PSF field:

$y(\mathbf{x}) = y_B(\mathbf{x}) + \sum_{i=1}^{M} a_i\, p_d(\mathbf{x} - \mathbf{x}_i;\, \mathbf{d}(\mathbf{x}_i))$
(1)

This image model $y(\mathbf{x})$ is composed of a number of instances of the pattern field, each representing a local PSF, in this case located at a position $\mathbf{x}_i$ and scaled by a factor $a_i$ corresponding to the strength of the ideal point source. In order to make the model robust to changes of the background level, it is highly convenient to include a very smooth additional background term $y_B(\mathbf{x})$.

In this application of the pattern field model, we have taken the approach of representing the patches by finite-dimensional vectors, corresponding to finite and discrete sampled patches in the image. A proper pixel size for the patch can easily be assigned by inspection of the most blurred areas of the image. Also, for algorithmic convenience, we have imposed that the patches do not overlap (in other applications of the model this may respond to an actual physical constraint). Although this is not a realistic assumption for stars, we have found that it does not significantly harm the estimation.

Note that, seen as a generative model, Equation 1 is fairly simple: a highly sparse input distorted by an SV blur, which is even linear for a given pattern field. However, from an analysis point of view, fitting an observation to this model involves a highly nonlinear procedure, mainly because of the sparse character of its input. Although powerful techniques for sparse estimation exist (e.g.,[19]), they usually assume that the input is sparse, but not necessarily that it is a set of (isolated) deltas, and much less that the “basis functions” are non-overlapping. Our approach, on the other hand, takes advantage of this particular structure of the problem.

Pattern observation model

The pattern field $p$ generally is a nonlinear function for which there is no prior information except that it is smooth with respect to the image coordinates. If it is smooth enough, the Nyquist theorem limits the amount of information that is lost when we spatially sample the parameter vector $\mathbf{d}$. This takes us to a block-based approach, in which we model the PSF at a certain block location as a component along the PSF at the block center, plus a certain perturbation increasing with the distance to the center: $p_d(\mathbf{x}; \mathbf{d}(\mathbf{x}_0)) = b\, p_d(\mathbf{x}; \mathbf{d}(\mathbf{x}_C)) + \boldsymbol{\varepsilon}$, where $\mathbf{x}_C$ is the central location of the block $B(\mathbf{x}_C)$, and $\mathbf{x}_0 \in B(\mathbf{x}_C)$. Consequently, a given pattern observation, numbered $i$, at some position $\mathbf{x}_i$ in a given block, is modeled as:

$y(\mathbf{x}_i) = y_B(\mathbf{x}_i) + a_i b_i \mathbf{p} + \boldsymbol{\varepsilon}_i + \mathbf{n}_i$
(2)

Here, $\mathbf{p}$ is the vectorized pattern at $\mathbf{x}_C$, $b_i$ represents the fading of the component along the central pattern towards the boundaries of the block, $\boldsymbol{\varepsilon}_i \sim \mathcal{N}(\mathbf{0}, \sigma_{\varepsilon i}^2 \mathbf{I})$ represents the deviation of the pattern with respect to the $\mathbf{p}$ direction, and $\mathbf{n}_i \sim \mathcal{N}(\mathbf{0}, \sigma_n^2 \mathbf{I})$ is a noise term. Note that $b_i$ will be 1 (or less, in practice, if our knowledge of $\mathbf{p}$ is approximate) at the block center, and will decrease as we move away from it, while $\sigma_{\varepsilon i}$ is modeled as an ever increasing but upper-bounded function of $\|\mathbf{x} - \mathbf{x}_C\|$, thus behaving in a complementary way to $b_i$.

Despite the apparent intricacy of this model, it can be reformulated into a much simpler one. First, observe that we can decompose $\mathbf{p}$ into

$\mathbf{p} = \mathbf{p}_0 + \mu_p \mathbf{1}$
(3)

where $\mu_p \mathbf{1} = [\mu_p, \ldots, \mu_p]^T$ represents the sample pixel mean of the reference pattern, and $\mathbf{p}_0$, consequently, its zero-mean version.b Second, as the background $y_B(\mathbf{x})$ is smooth, it can locally be approximated by a constant:

$y_B(\mathbf{x}_i) \approx \mu_{B,i}\,\mathbf{1}$
(4)

Since we are unconcerned about estimating the image background or the ground level of the zero-mean pattern, we may encapsulate these components of the model, thus simplifying the model as well as the computational procedure:

$y(\mathbf{x}_i) \approx \underbrace{\left(\mu_{B,i} + a_i b_i \mu_p\right)}_{=\,\mu_i}\mathbf{1} + \underbrace{a_i b_i}_{=\,c_i}\,\mathbf{p}_0 + \underbrace{a_i \boldsymbol{\varepsilon}_i + \mathbf{n}_i}_{\text{random}}$
(5)

The simplified patch model can be restated as follows:

$y(\mathbf{x}_i) - \mu_i \mathbf{1} = c_i \mathbf{p}_0 + \mathbf{w}_i$
(6)

where $\mathbf{w}_i \sim \mathcal{N}\!\left(\mathbf{0}, \left(a_i^2 \sigma_{\varepsilon i}^2 + \sigma_n^2\right)\mathbf{I}\right)$ now is a joint “error” term representing the uncertainty of the observation.

Local PSF estimation by alternating marginal likelihood optimization

Since the observed star patterns are independent, the likelihood of observing a number M of patterns in the block is

$\log L \propto -\sum_{i=1}^{M}\left[ K\log\!\left(a_i^2\sigma_{\varepsilon i}^2 + \sigma_n^2\right) + \frac{\left\|y(\mathbf{x}_i) - \mu_i\mathbf{1} - c_i\mathbf{p}_0\right\|^2}{a_i^2\sigma_{\varepsilon i}^2 + \sigma_n^2}\right] - (N - MK)\log\sigma_n^2 - \sum_{\mathbf{x}\in R}\frac{\left(y(\mathbf{x}) - y_B(\mathbf{x})\right)^2}{\sigma_n^2} + \mathrm{const},$
(7)

where K is the dimensionality of the vectorized neighborhoods, N is the number of pixels in the block, and R is the set of pixels that are not included in any pattern. Optimizing the likelihood directly for all the unknowns is a difficult problem. However, it can effectively be attacked by alternating between optimizing the marginal likelihood as a function of p0 and the set of pattern locations, {x i ,i = 1…M}, respectively.

The algorithm we propose essentially comprises three stages:

  1. taking an initial guess at the pattern;

  2. finding approximate (full-pixel precision) locations of the pattern, as well as the number of locations, with respect to that guess; and

  3. alternating between re-estimating the pattern and refining the locations with sub-pixel precision.

Likelihood maximization with respect to each of $\mu_i$, $c_i$ is a simple least-squares fitting, and can be done independently for each image location $\mathbf{x}_i$:

$\hat{\mu}_i = \frac{y^T(\mathbf{x}_i)\,\mathbf{1}}{K}$
(8)
$\hat{c}_i = \frac{\left(y(\mathbf{x}_i) - \mu_i\mathbf{1}\right)^T \mathbf{p}_0}{\|\mathbf{p}_0\|^2} = \frac{\left\|y(\mathbf{x}_i) - \mu_i\mathbf{1}\right\|}{\|\mathbf{p}_0\|}\,\cos\!\left\langle \mathbf{p}_0,\, y(\mathbf{x}_i) - \mu_i\mathbf{1}\right\rangle.$
(9)

For estimating $a_i$ we observe in (5) that:

$E\left\{\left\|y(\mathbf{x}_i) - \mu_i\mathbf{1}\right\|^2\right\} = a_i^2\, E\left\{\left\|b_i\mathbf{p}_0 + \boldsymbol{\varepsilon}_i\right\|^2\right\} + K\sigma_n^2$
(10)

Assuming $E\left\{\left\|b_i\mathbf{p}_0 + \boldsymbol{\varepsilon}_i\right\|^2\right\} = 1$, we simply choose the estimate

$\hat{a}_i^2 = \left[\,\left\|y(\mathbf{x}_i) - \mu_i\mathbf{1}\right\|^2 - K\sigma_n^2\,\right]_{+}$
(11)

where the brackets indicate non-negative clipping. For those selected patches with norms well above the background noise level ($K\sigma_n^2$), the effect of $a_i$ on the variance is to normalize the local energy of the patch, thus effectively giving a stronger weight to candidates with low relative error (high SNR). The low-energy patches, on the other hand, are effectively damped by the larger influence of $\sigma_n^2$ in their associated variance.
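
As an illustration, the per-patch estimates (8), (9), and (11) amount to a few lines of NumPy; the following sketch uses our own (hypothetical) function and variable names, not those of the original implementation:

```python
import numpy as np

def patch_estimates(y_patch, p0, sigma_n2):
    """Per-patch ML estimates for the simplified model (6).

    y_patch  : 1-D array, vectorized K-pixel patch y(x_i)
    p0       : 1-D array, current zero-mean, unit-norm pattern guess
    sigma_n2 : scalar, background noise variance
    Returns (mu_hat, c_hat, a2_hat).
    """
    K = y_patch.size
    mu_hat = y_patch.mean()                      # Eq. (8): sample pixel mean
    r = y_patch - mu_hat                         # mean-removed patch
    c_hat = r @ p0 / (p0 @ p0)                   # Eq. (9): projection onto p0
    a2_hat = max(r @ r - K * sigma_n2, 0.0)      # Eq. (11): clipped energy estimate
    return mu_hat, c_hat, a2_hat
```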

Also, for a given set of scaled and noisy pattern candidates, $\{y(\mathbf{x}_i) - \hat{\mu}_i\mathbf{1}\}$, the solution for the most likely $\mathbf{p}_0$ is the standard least-squares one:

$\hat{\mathbf{p}}_0 \propto \sum_{i=1}^{M} \frac{\left\|y(\mathbf{x}_i) - \hat{\mu}_i\mathbf{1}\right\|\,\cos\!\left\langle \mathbf{p}_0,\, y(\mathbf{x}_i) - \hat{\mu}_i\mathbf{1}\right\rangle}{a_i^2\sigma_{\varepsilon i}^2 + \sigma_n^2}\,\left(y(\mathbf{x}_i) - \hat{\mu}_i\mathbf{1}\right)$
(12)

Note that $\hat{c}_i\,\mathbf{p}_0$ is invariant with respect to the scaling of $\mathbf{p}_0$. Hence, we only need to find the direction of $\hat{\mathbf{p}}_0$ in (12). Consequently, our method is invariant with respect to the normalization of the PSF. In order to avoid numerical error, it is still useful to constrain $\mathbf{p}_0$. As it appears most natural in this context, we choose $\|\mathbf{p}_0\| = 1$. Recall that, in spite of this non-standard choice when dealing with PSF patterns, it is always possible to go back to the standard PSF form (non-negative, normalized volume) by estimating $\mu_p$ and re-normalizing.
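
The weighted re-estimation of the pattern (12), followed by the unit-norm constraint, could be sketched as follows (again with illustrative names; we assume $\mathbf{p}_0$ is kept unit-norm so that the covariance factor is a plain cosine):

```python
import numpy as np

def reestimate_pattern(patches, mus, a2s, sigma_eps2s, sigma_n2, p0):
    """Weighted re-estimation of the zero-mean pattern p0, cf. Eq. (12).

    patches     : (M, K) array of vectorized candidate patches y(x_i)
    mus         : (M,) sample means, Eq. (8)
    a2s         : (M,) squared scale factors, Eq. (11)
    sigma_eps2s : (M,) pattern-deviation variances, Eq. (17)
    sigma_n2    : scalar noise variance
    p0          : (K,) current unit-norm pattern guess
    """
    R = patches - mus[:, None]                      # mean-removed candidates
    norms = np.linalg.norm(R, axis=1)
    cosines = (R @ p0) / np.maximum(norms, 1e-12)   # covariance factors with the guess
    w = norms * cosines / (a2s * sigma_eps2s + sigma_n2)
    p_new = (w[:, None] * R).sum(axis=0)            # weighted sum of candidates
    return p_new / np.linalg.norm(p_new)            # keep ||p0|| = 1
```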

The optimization for the locations $\{\mathbf{x}_i\}$ takes a different form for the initial (full-pixel precision) and the subsequent (sub-pixel) estimates.

Estimation of the approximate locations of star patterns

When the $\{\mathbf{x}_i\}$ are estimated for the first time, we can make use of the observation that the last two terms in (7) (the “non-pattern” terms) are constant if extended to all pixels of the block. Then, we can remove that constant from the optimization functional to obtain

$\log L \propto \sum_{i=1}^{M}\left[ -K\log\!\left(a_i^2\sigma_{\varepsilon i}^2 + \sigma_n^2\right) - \frac{\left\|y(\mathbf{x}_i) - \mu_i\mathbf{1} - c_i\mathbf{p}_0\right\|^2}{a_i^2\sigma_{\varepsilon i}^2 + \sigma_n^2} + K\log\sigma_n^2 + \frac{\left\|y(\mathbf{x}_i) - \mu_i\mathbf{1}\right\|^2}{\sigma_n^2}\right] + \mathrm{const},$
(13)

where we have applied the approximation (4) again.c Consider the expression inside the brackets, evaluated as a function of a generic image location $\mathbf{x}$ and holding $\mathbf{p}_0$ fixed:

$A(\mathbf{x}) = -K\log\!\left(\frac{a^2(\mathbf{x})\,\sigma_\varepsilon^2(\mathbf{x})}{\sigma_n^2} + 1\right) + \left\|y(\mathbf{x}) - \mu(\mathbf{x})\mathbf{1}\right\|^2 \times \left(\frac{1}{\sigma_n^2} - \frac{1 - \cos^2\!\left\langle\mathbf{p}_0,\, y(\mathbf{x}) - \mu(\mathbf{x})\mathbf{1}\right\rangle}{a^2(\mathbf{x})\,\sigma_\varepsilon^2(\mathbf{x}) + \sigma_n^2}\right),$
(14)

where we have applied

$\left\|y(\mathbf{x}) - \mu(\mathbf{x})\mathbf{1} - c(\mathbf{x})\mathbf{p}_0\right\|^2 = \left\|y(\mathbf{x}) - \mu(\mathbf{x})\mathbf{1}\right\|^2\left(1 - \cos^2\!\left\langle\mathbf{p}_0,\, y(\mathbf{x}) - \mu(\mathbf{x})\mathbf{1}\right\rangle\right),$
(15)

and where $\mu(\mathbf{x}) = y^T(\mathbf{x})\mathbf{1}/K$ is the sample pixel average of the patch. This function represents the log-likelihood ratio between two hypotheses: the patch centered on $\mathbf{x}$ corresponding to an instance of the pattern versus corresponding to the background. When the patch centered at $\mathbf{x}$, $y(\mathbf{x})$, contains significantly more energy than that of the background noise, we have a pattern candidate. In that case, all but one of the terms in Equation (14) in the vicinity of $\mathbf{x}$ are smooth functions of $\mathbf{x}$, because they correspond to energy measurements which are insensitive to the exact location of the pattern, whenever it remains entirely within the patch. The exception is the covariance factor term, which is very selective to the exact pattern location, even under noise. Furthermore, in this vicinity $A(\mathbf{x})$ must yield high positive values (because $1/\sigma_n^2 \gg 1/(a^2(\mathbf{x})\sigma_\varepsilon^2(\mathbf{x}) + \sigma_n^2)$), peaking in an isolated maximum where the covariance factor gets closest to 1, i.e., when the pattern candidate is aligned with the pattern guess $\mathbf{p}_0$. Therefore, a greedy algorithm, under the assumption of no pattern overlapping in the image, is optimal for finding the most likely locations for the current pattern guess. To estimate the set of $M$ locations $\{\mathbf{x}_i\}$, we first pick the global maximum of $A(\mathbf{x})$ in the block as $\mathbf{x}_1$ and set $M$ to 1. Next, we rule out the neighborhood around that maximum as a potential location for subsequent picks (to enforce the non-overlapping constraint in the analysis) and find the next-best maximum $\mathbf{x}_2$, increasing $M$ by 1; and so on, until no further locations can be selected, or the next maximum attains a close-to-zero value, indicating that nothing is gained by considering a star pattern at that location.

In practice, this algorithm fails to choose the correct locations only when the initial assumption of non-overlapping patterns does not hold, i.e., in our case, when two or more stars of similar brightness are too close to each other (if one star clearly dominates, then the ML choice corresponds to centering that dominant star). However, these wrong choices can easily be detected by thresholding the covariance factor $\cos\langle\mathbf{p}_0, y(\mathbf{x}) - \mu(\mathbf{x})\mathbf{1}\rangle$ between the current guess of the pattern and the image, thus preventing these locations from contributing to the pattern estimate (in our experiments we used a threshold $c_{\min} = 0.2$).
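
A minimal sketch of this greedy, non-overlapping selection (essentially the selection loop of Algorithm 1 below), assuming the map $A(\mathbf{x})$ and the covariance factor map have already been computed for every pixel of the block; the exclusion-radius choice and the names are ours:

```python
import numpy as np

def greedy_locations(A, cos_map, patch_size, T=0.0, c_min=0.2):
    """Greedy, non-overlapping selection of pattern locations.

    A          : 2-D array, log-likelihood-ratio map A(x), Eq. (14)
    cos_map    : 2-D array, covariance factor cos<p0, y(x) - mu(x)1>
    patch_size : side of the (square) pattern support; used as exclusion window
    T          : stopping threshold on A (close to zero)
    c_min      : minimum covariance factor for a location to contribute
    """
    A = A.copy()
    half = patch_size // 2
    locations = []
    while True:
        idx = np.unravel_index(np.argmax(A), A.shape)
        if not np.isfinite(A[idx]) or A[idx] < T:
            break                                     # nothing to gain anymore
        if cos_map[idx] > c_min:                      # reject overlapping-star artifacts
            locations.append(idx)
        r0, c0 = idx                                  # exclude a neighborhood around the pick
        A[max(r0 - half, 0):r0 + half + 1, max(c0 - half, 0):c0 + half + 1] = -np.inf
    return locations
```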

Sub-pixel refinement

When estimating $\mathbf{x}_i$ in the subsequent iterations, we do not need to adjust $M$, but we would like to improve the locations $\mathbf{x}_i$ to sub-pixel precision. Observing that $\sigma_{\varepsilon i}$ and $\|y(\mathbf{x}_i) - \mu_i\mathbf{1}\|$ are approximately constant for such small changes of $\mathbf{x}_i$ (as the pattern is assumed to be approximately centered in the patch, with some background space around it), the optimization functional can be simplified to obtain the following estimate:

$\hat{\mathbf{x}}_i = \arg\max_{\mathbf{x}_i}\, \cos\!\left\langle\mathbf{p}_0,\, y(\mathbf{x}_i) - \mu_i\mathbf{1}\right\rangle$
(16)

As this functional is very simple to compute, we can afford to perform a local search around the previous location estimates simply by computing a standard, bicubic sub-pixel interpolation of the patch and evaluating the functional.d We choose to do this in a hierarchical fashion, that is, performing a local search in [−1, 1] first with a step size of $\Delta x = 1/2$ in each coordinate, updating, then performing a local search for the relative shift in [−1/2, 1/2] with $\Delta x = 1/4$, and so on, down to a precision of one eighth of a pixel.
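
A possible sketch of this hierarchical sub-pixel refinement, here using cubic interpolation via scipy.ndimage.map_coordinates on the original full-pixel samples (a simplified three-candidate-per-axis search; names are illustrative):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def subpixel_refine(image, row0, col0, p0, patch_size, depth=3):
    """Refine one pattern location to sub-pixel precision, cf. Eq. (16).

    Hierarchical local search with steps 1/2, 1/4 and 1/8 of a pixel;
    the interpolation is always computed on the original samples.
    """
    half = patch_size // 2
    rr, cc = np.mgrid[-half:half + 1, -half:half + 1]    # patch sampling grid

    def score(r, c):
        # cubic interpolation of the patch centered at the candidate location
        patch = map_coordinates(image, [rr + r, cc + c], order=3, mode='nearest')
        v = patch.ravel() - patch.mean()
        return v @ p0 / max(np.linalg.norm(v), 1e-12)    # cos<p0, y - mu*1>

    best = (float(row0), float(col0))
    step = 0.5
    for _ in range(depth):                               # 1/2, 1/4, 1/8 pixel
        r0, c0 = best
        candidates = [(r0 + dr, c0 + dc)
                      for dr in (-step, 0.0, step) for dc in (-step, 0.0, step)]
        best = max(candidates, key=lambda rc: score(*rc))
        step /= 2.0
    return best
```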

Variance estimates

The overall variance of the pixels of a patch including a pattern depends on the additive background noise term ($\sigma_n^2$) and on the multiplicative error term ($a_i^2\sigma_{\varepsilon i}^2$) due to the error on the pattern itself. The latter depends solely on the distance to the block center in relative terms; in absolute terms, it also depends on the scale factor $a_i$ of the instance of the pattern. For estimating $\sigma_n^2$, we use the Median Absolute Deviation method with a simple difference filter along each dimension[20].
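
For reference, a minimal sketch of such a MAD-based estimate using first differences (our own formulation; the constants are the standard Gaussian conversion factors):

```python
import numpy as np

def estimate_noise_variance(image):
    """MAD-based estimate of the Gaussian noise variance sigma_n^2.

    Uses horizontal and vertical first differences; the factor sqrt(2)
    compensates for the variance doubling introduced by differencing,
    and 0.6745 converts the MAD to a standard deviation for Gaussian noise.
    """
    dx = np.diff(image, axis=1).ravel()
    dy = np.diff(image, axis=0).ravel()
    d = np.concatenate([dx, dy])
    sigma = np.median(np.abs(d - np.median(d))) / 0.6745 / np.sqrt(2.0)
    return sigma ** 2
```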

To model the variance of the error ε i , we assume that the pattern random vector field has an isotropic covariance structure similar to an AR(1) process. In particular,

$\sigma_{\varepsilon i}^2 = \frac{1 - \rho^{\,2\|\mathbf{x}_i - \mathbf{x}_C\|/(L/f_r)}}{K}$
(17)

and $E\left\{\left\|b_i\mathbf{p}_0 + \boldsymbol{\varepsilon}_i\right\|^2\right\} = 1$, such that the energy is stationary across the pattern field. $\rho$ is the parameter that determines the smoothness of the field, and $L/f_r$ is the distance between neighboring block centers ($L$ being the side length of the block, and $f_r$ the overlapping-block redundancy factor). Thus, $E\{\|\boldsymbol{\varepsilon}_i\|^2\} = 0$ at the block center itself and $E\{\|\boldsymbol{\varepsilon}_i\|^2\} = 1 - \rho^2$ at the neighboring block center. In our experiments, we chose $\rho = 0.98$ as a reasonable smoothness value for adjacent PSFs in the grid formed by the overlapping blocks’ centers. In the estimation procedure (12), this structure of $\sigma_{\varepsilon i}^2$ leads to the desirable effect of giving a stronger weighting to the detected patterns that are near the block center.

In addition to the uncertainty due to noise and spatial variation of the pattern, there is a methodical uncertainty with respect to the current estimate of $\mathbf{p}_0$. Particularly the first guess may be somewhat inaccurate; and since the weights in (12) are based on covariance measurements with the (guessed) pattern, subsequent applications of the estimator may weight useful instances of the star pattern lower than necessary, and vice versa. To make the method adapt to the quality of the current pattern estimate, when first performing the estimation of the full-pixel pattern locations we add to the r.h.s. of (17) the term $\sigma_g^2$, defined as

$\sigma_g^2 = \frac{1 - \max_{\mathbf{x}} \cos^2\!\left\langle\mathbf{p}_0,\, y(\mathbf{x}) - \mu(\mathbf{x})\mathbf{1}\right\rangle}{K}.$
(18)

This value corresponds exactly to the quadratic error in our guess if (1) there is an instance of the exact pattern in the image, and (2) that instance is the closest one in the image, in a covariance factor sense, to our guess. The use of this extra term permits a less shape-selective weighting of candidate patterns when the quality of the estimated pattern is still relatively low.

Initial and subsequent pattern guesses

In our experiments, for the first block we used as a guess a Gaussian PSF, with a spatial dispersion σ=2, using a 19×19 patch support. For subsequent blocks, processed in a horizontal raster scan order, we use the average of the estimated PSFs for previously estimated blocks above and on the left (or only one of them, for the first row or the first column).

Summary of the local PSF estimation method

In Figure 2 (left) we have depicted a scheme summarizing our local PSF estimation method. We also provide an algorithmic description below:

Figure 2

Left: Local PSF estimation for a block; Right: PSF field estimation from a grid of local PSF estimates.

Algorithm 1. Local PSF estimation in star fields

1:  Obtain an initial guess $\hat{\mathbf{p}}_0$
2:  for all blocks in the image do
3:      Estimate $\sigma_n^2$ and $\sigma_g^2$
4:      for all $\mathbf{x}$ in the block do
5:          Compute $\sigma_\varepsilon^2$, $a$, $\mu$, and $c$
6:          Compute the covariance factor between patch and pattern, $\cos\langle\mathbf{p}_0, y(\mathbf{x}) - \mu(\mathbf{x})\mathbf{1}\rangle$
7:          Compute $A(\mathbf{x})$ using (14)
8:      end for
9:      $X \leftarrow \{\}$
10:     loop
11:         $\hat{\mathbf{x}}_i \leftarrow \arg\max_{\mathbf{x}} A(\mathbf{x})$  s.t.  $\mathbf{x} \notin N(\mathbf{x}_j)\ \forall\, \mathbf{x}_j \in X$
12:         if $A(\hat{\mathbf{x}}_i) < T$ or $\hat{\mathbf{x}}_i$ has no solution then
13:             $M \leftarrow |X|$
14:             break
15:         else
16:             if $\cos\langle\mathbf{p}_0, y(\hat{\mathbf{x}}_i) - \mu_i\mathbf{1}\rangle > c_{\min}$ then
17:                 $X \leftarrow X \cup \{\hat{\mathbf{x}}_i\}$
18:             end if
19:         end if
20:     end loop
21:     repeat
22:         Re-estimate $\hat{\mathbf{p}}_0$ using (12)
23:         for all $\hat{\mathbf{x}}_i \in X$ do
24:             Improve $\hat{\mathbf{x}}_i$ by local search using (16)
25:         end for
26:     until convergence
27:     Update the guess $\hat{\mathbf{p}}_0$ for the next block
28: end for

Estimating smooth PSF fields from local estimates using SVD

When we observe a set of patterns under noise, relevant features can be extracted by means of finding the orthogonal basis that optimally (in a least-squares sense) compresses that particular set of patterns as a weighted sum of orthogonal components. The SVD has been in widespread use for many years to solve this problem.e In a high dimensional space, such as the one used to represent rich unconstrained patterns, like our PSFs, the noise effect on the obtained dominant terms is small, because, if it is white, it distributes its energy equally among all the orthogonal components. The terms above the noise level, on the other hand, may carry a large proportion of the total pattern energy. Therefore, those terms provide an approximation to the original (uncorrupted) set of patterns, when the components below the noise level are filtered out before reverting the linear transform.

This kind of linear spectral filtering can be effective for reliably estimating the dominant eigenPSFs of the PSF field, but it may not be enough to characterize the field by itself when there are strong spurious fluctuations in the estimated patterns. These fluctuations may be caused by imperfections of the algorithm and/or the model (in our case, we make some assumptions that are not strictly correct, such as that the PSFs do not overlap), or by the presence of “outliers” (image contents that fit neither the dominant pattern nor the background statistics). In that case, using prior information about the typical features of the pattern field is essential to increase robustness. In our problem, the local PSF estimates are far from being equally reliable (e.g., because of inhomogeneities in the local density of stars, different brightness, background effects, etc.). Using the prior knowledge that the PSF field is smooth helps us to discount the outliers’ effect on the estimation and, thus, to improve the results.

Estimating a smooth PSF field from a grid of local estimates

As the method of the previous section does not ensure alignment of the local estimates with respect to each other, they need to be aligned prior to being subjected to dimensionality reduction, as illustrated in Figure 2 (right).

Local PSF sub-pixel alignment

Note that the sub-pixel alignment performed here is conceptually slightly different from the intra-block alignment of patterns: we do not want to optimize a similarity measure with respect to a certain reference pattern, but rather to maximize the similarity of each pattern with respect to the others. A sensible solution is to find the sub-pixel shifts that minimize the Euclidean error between each pattern and its projection onto the first eigenvector of the set, which is recomputed at each iteration (using a hierarchical iterative approach based on cubic splines, as before). It is straightforward to prove that, following this procedure, we reach a stationary solution for the spatial shifts which maximizes (locally, at least) the first eigenvalue of the covariance matrix of the set of patterns, as it effectively minimizes the sum of squares of the components along orthogonal directions. Similarly to the intra-block alignment case, we have observed a very significant decrease in the energy of the components orthogonal to the first eigenvector (around 4–5 dB) by doing this inter-block alignment. We observe a spatial drift of the estimated local pattern across image blocks; this effect is undesirable, but it does not affect the model fitting negatively.
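
A rough sketch of this inter-block alignment is given below; we use scipy.ndimage.shift for the sub-pixel shifts and a fixed grid of candidate offsets instead of the hierarchical search, so it should be read as an illustration of the idea rather than the actual implementation:

```python
import numpy as np
from scipy.ndimage import shift as subpixel_shift

def align_local_psfs(psfs0, n_iter=5, max_shift=1.0, step=0.125):
    """Iteratively align a stack of local PSF estimates, shape (N, h, w).

    At each iteration, the first right singular vector (eigenPSF) of the
    stack is recomputed, and each PSF is shifted (cubic interpolation of
    the ORIGINAL samples) so as to minimize the energy orthogonal to it.
    """
    psfs0 = np.asarray(psfs0, dtype=float)
    N, h, w = psfs0.shape
    offsets = np.arange(-max_shift, max_shift + step / 2, step)
    shifts = np.zeros((N, 2))
    aligned = psfs0.copy()
    for _ in range(n_iter):
        X = aligned.reshape(N, -1)
        _, _, Vt = np.linalg.svd(X, full_matrices=False)
        v1 = Vt[0]                                    # current first eigenPSF
        for i in range(N):
            best_err, best_off, best_cand = np.inf, (0.0, 0.0), aligned[i]
            for dy in offsets:
                for dx in offsets:
                    cand = subpixel_shift(psfs0[i], shifts[i] + (dy, dx), order=3)
                    v = cand.ravel()
                    err = v @ v - (v @ v1) ** 2       # energy off the first eigenvector
                    if err < best_err:
                        best_err, best_off, best_cand = err, (dy, dx), cand
            shifts[i] += best_off
            aligned[i] = best_cand
    return aligned, shifts
```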

Dimensionality reduction and nonlinear filtering of the spatial weights

After the alignment, we proceed to perform a linear dimensionality reduction of the set, using the SVD. Although it is not difficult to obtain a suitable number of singular vectors/values automatically, we have chosen the number simply by visual inspection of the eigenvalue profile. We obtain a set of eigenvectors (eigenPSFs), plus the corresponding optimal spatial weights to represent the grid of extracted local patterns as a linear combination of them. Then, the “outliers” can easily be detected and suppressed by iteratively detecting the spatial weights that deviate largely from their neighbors’ values, and substituting them with the local mean of their neighbors. Once the spatial weights are regularized, we revert the orthogonal transform and obtain a spatial grid of regularized patterns. The ground level of these must be shifted back to zero and their volume normalized, in order to enforce both positivity and unit volume, as required for PSFs. We use a mode estimator to estimate the ground level of each pattern.
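
The following sketch illustrates this stage; as stand-ins for the procedure described above, it uses a median-based rule to flag outlying weights and a simple minimum-subtraction for the ground level (the text describes the local mean of the neighbors and a mode estimator), so all names and thresholds are ours:

```python
import numpy as np
from scipy.ndimage import median_filter

def regularize_psf_grid(psfs, n_components=2, n_iter=3, kappa=3.0):
    """SVD reduction of a (Gy, Gx, h, w) grid of aligned local PSFs,
    followed by outlier suppression on the per-eigenPSF spatial weights."""
    Gy, Gx, h, w = psfs.shape
    X = psfs.reshape(Gy * Gx, h * w)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    eigen_psfs = Vt[:n_components]                        # dominant eigenPSFs
    W = (U[:, :n_components] * s[:n_components]).reshape(Gy, Gx, n_components)

    for _ in range(n_iter):                               # nonlinear filtering of weights
        for k in range(n_components):
            local_med = median_filter(W[:, :, k], size=3)
            resid = W[:, :, k] - local_med
            thr = kappa * np.median(np.abs(resid)) + 1e-12
            outlier = np.abs(resid) > thr
            W[:, :, k][outlier] = local_med[outlier]      # replace by a neighborhood value

    smooth = W.reshape(Gy * Gx, n_components) @ eigen_psfs
    smooth = smooth.reshape(Gy, Gx, h, w)
    smooth -= smooth.min(axis=(2, 3), keepdims=True)      # shift ground level to zero
    smooth /= smooth.sum(axis=(2, 3), keepdims=True)      # unit volume, as a PSF
    return smooth, eigen_psfs, W
```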

PSF field interpolation to intermediate spatial locations

We may want to obtain PSF estimates for each pixel of the image, for example, if we want to use the PSF field estimate to perform deblurring on the same or similarly captured images. Given a low-dimensional description of a vector at different image locations, such as the one obtained through the SVD, it is easy to achieve this by interpolating the scalar coefficient corresponding to each basis vector, provided the vector field is smooth. However, as we enforce the PSF constraints (positivity and unit volume) after reverting the transform, the resulting set of local PSF estimates is no longer confined to a low-dimensional linear subspace. One solution is to interpolate before reintroducing the constraints, and then fix the ground level and volume at each pixel location in the image. However, a more efficient way consists of performing a second SVD on the previously estimated set of positive and volume-normalized PSFs, and retaining enough singular components for an excellent approximation (say, more than 60–80 dB). After this second dimensionality reduction, we can perform the vector interpolation very efficiently by simply interpolating the obtained weights as scalar functions, using cubic splines. This method ensures the right form for our PSF field estimate: a number of eigenPSFs, plus their corresponding smooth spatial weights (one weight for each eigenPSF–pixel pair). We may then apply the deformable filtering techniques explained in Part I to restore the observation using this estimate.
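
A hedged sketch of this final interpolation stage (second SVD plus scalar cubic-spline interpolation of each weight map), with illustrative grid-geometry parameters:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def interpolate_psf_field(psf_grid, block_centers_y, block_centers_x,
                          out_shape, energy_frac=1.0 - 1e-7):
    """Second SVD on the (Gy, Gx, h, w) grid of constrained PSFs and
    cubic-spline interpolation of each eigenPSF weight to every pixel."""
    Gy, Gx, h, w = psf_grid.shape
    X = psf_grid.reshape(Gy * Gx, h * w)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # keep enough components for a very accurate reconstruction (60-80 dB or more)
    n = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), energy_frac)) + 1
    eigen_psfs = Vt[:n]
    W = (U[:, :n] * s[:n]).reshape(Gy, Gx, n)

    H, Wd = out_shape
    yy, xx = np.arange(H), np.arange(Wd)
    weights_full = np.empty((n, H, Wd))
    for k in range(n):                      # one smooth scalar map per eigenPSF
        spl = RectBivariateSpline(block_centers_y, block_centers_x,
                                  W[:, :, k], kx=3, ky=3)
        weights_full[k] = spl(yy, xx)
    # PSF at pixel (r, c) = sum_k weights_full[k, r, c] * eigen_psfs[k]
    return eigen_psfs, weights_full
```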

Relationship to the image estimation method proposed in Part I

The dimensionality reduction approach on the (aligned) PSFs is shared by Parts I and II of this double article. In both cases, the local PSF is modeled as a linearly deformable kernel with a reduced number of dimensions. Whereas the main motivation for using deformable kernels in Part I was computational efficiency (as linear deformable filtering can be implemented as a sum of masked convolutions), our main motivation in this second part is robustness in the characterization of a PSF field of which we only have partial and noisy information. More importantly, both techniques are connected in practice by the use one would make of them for blind SV image restoration: first estimating the PSF field (Part II), then using that estimate to apply the SV image restoration described in Part I. We also believe that, by developing the PSF field estimation and the image estimation in the context of a single framework, we add value to each of the described techniques.

Experiments: results and discussion

PSF field estimation in simulated star fields

In order to obtain an objective reference to evaluate our PSF field estimation results, we have simulated both PSF fields and star field images. We degraded the latter by noise after the SV blur.

Simulating SV blurred star fields

We consider three different Poisson noise levels, corresponding to maximum values for 8 bits (0 to 255), 12 bits (0 to 4095), and 16 bits (0 to 65535) in the simulated star fields. In addition, we add zero-mean Gaussian noise of unit variance. We have implemented a star field simulation giving roughly the same (relative, as observed from Earth) magnitude statistics as real observed stars, which approximately fit the law

$\log n(m) = \alpha m + C,$
(19)

$n$ being the number of stars of a given apparent magnitude $m$. According to actual statistics of observable stars[21], the number of stars with apparent magnitude between $m$ and $m + 1$ (at least for the studied interval $0 \leq m \leq 10$) is around three times larger than the number of stars with apparent magnitude between $m - 1$ and $m$. Considering that an increase of one unit in magnitude corresponds to a decrease of the apparent brightness by a factor of 2.15, this yields $\alpha = \log(3)/\log(2.15) \approx 1.43$. For simulating apparent brightness samples, we use the standard technique of integrating and inverting the pdf (a power law in this casef) with this coefficient, then sampling from a uniform density in $[\epsilon, 1]$ and applying the obtained function to the uniform samples. In this case, the resulting function is still a power law, with exponent $\beta = -1/(\alpha - 1) \approx -2.3$. This yields an output brightness range of $[1, \epsilon^{-2.3}]$, which is normalized afterwards to the desired maximum value. We have chosen $\epsilon = 0.01$.
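
A minimal sketch of this inverse-CDF sampling of star brightness, keeping the values $\alpha \approx 1.43$, $\beta \approx -2.3$, and $\epsilon = 0.01$ from the text (the function name is ours):

```python
import numpy as np

def sample_star_brightness(n_stars, alpha=1.43, eps=0.01, max_value=65535, rng=None):
    """Draw apparent-brightness samples from the power law implied by Eq. (19).

    Inverse-CDF method: u ~ Uniform[eps, 1], b = u**beta with
    beta = -1/(alpha - 1) (approx. -2.3), then normalize to max_value.
    """
    if rng is None:
        rng = np.random.default_rng()
    beta = -1.0 / (alpha - 1.0)
    u = rng.uniform(eps, 1.0, size=n_stars)
    b = u ** beta                            # brightness in [1, eps**beta]
    return b * (max_value / eps ** beta)     # normalize to the desired maximum
```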

We have implemented two synthetic PSF fields, BLUR1 and BLUR2, which are quite similar to the ones used in Part I. BLUR1 is defined by:

$h_{\mathrm{BLUR1}}(\mathbf{x};\mathbf{x}_0) \propto \frac{1}{\left(k(x_0,y_0)^2 + x^2 + y^2\right)^{0.95}} - \min_{\mathbf{x}}\left\{\frac{1}{\left(k(x_0,y_0)^2 + x^2 + y^2\right)^{0.95}}\right\},$
(20)

with $x$ and $y$ ranging now from −9 to 9. Except for keeping a constant spatial support and the offset correction, this PSF field can approximately be interpreted as a spatial magnification of the PSF from the center (where it is sharpest), with the zoom factor being $k(x_0, y_0) = 2^{1/2}\sqrt{1 + \left((x_0)^2 + (y_0)^2\right)/(L^2/6)}$, where $L$ is the side of the pattern (assumed square). The second PSF field, BLUR2, is a Gaussian function whose width (square root of the variance along the horizontal axis) is fixed and equal to $\sigma_x = 1.6$, whereas its height changes exponentially along the vertical axis as $\sigma_y(x_0, y_0) = 1.6 \times 2^{\,y_0/(L/2)}$. Note that these two PSF fields are defined through uni-parametric PSF functions, with the parameter being a smooth function of the image coordinates. Thus, they are smooth uni-parametric PSF fields.
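
For illustration, kernels of the two synthetic fields could be generated as below; the exact forms of $k(x_0, y_0)$ and $\sigma_y(x_0, y_0)$, as well as taking $L$ to be the image side and $(x_0, y_0)$ relative to the image center, follow our reading of the formulas above and should be treated as assumptions:

```python
import numpy as np

def blur1_kernel(x0, y0, L, support=19):
    """BLUR1 PSF at image location (x0, y0), measured from the image center, cf. Eq. (20)."""
    half = support // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    k = np.sqrt(2.0) * np.sqrt(1.0 + (x0**2 + y0**2) / (L**2 / 6.0))  # assumed zoom factor
    h = 1.0 / (k**2 + x**2 + y**2) ** 0.95
    h -= h.min()                                   # offset correction: zero at the support boundary
    return h / h.sum()                             # unit volume

def blur2_kernel(x0, y0, L, support=19, sigma_x=1.6):
    """BLUR2 PSF: anisotropic Gaussian whose vertical width grows exponentially with y0."""
    half = support // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    sigma_y = 1.6 * 2.0 ** (y0 / (L / 2.0))        # assumed form of the exponential law
    h = np.exp(-0.5 * ((x / sigma_x) ** 2 + (y / sigma_y) ** 2))
    return h / h.sum()
```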

A degraded star field simulation is obtained by (1) choosing an image size (2048×2048 in our case) and a number of stars for the image ($2\times10^4$, $4\times10^4$, $1.6\times10^5$, or $6.4\times10^5$), locating each at non-integer random coordinates, and then associating to each of them a random brightness according to the power law explained above; (2) applying the synthetic PSF field (BLUR1 or BLUR2); (3) normalizing the resulting maximum to the desired value (255, 4095, or 65535, for 8, 12, or 16 bits, respectively); (4) simulating Poisson statistics on the previously noise-free image; and (5) adding white Gaussian noise (zero mean, unit variance), rounded to integer values (plus a constant, to avoid negative values).
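
Putting the pieces together, steps (1)–(5) might look like the hedged sketch below, which reuses the hypothetical helpers sketched above and, for simplicity, places each star at the nearest pixel (the actual simulation uses non-integer coordinates):

```python
import numpy as np

def simulate_star_field(size=2048, n_stars=40000, max_value=4095,
                        psf_at=blur1_kernel, support=19,
                        rng=np.random.default_rng(0)):
    """Steps (1)-(5): place stars, apply the SV PSF field, normalize,
    then add Poisson and Gaussian noise."""
    half = support // 2
    img = np.zeros((size, size))
    xs = rng.uniform(half, size - half - 1, n_stars)      # star coordinates
    ys = rng.uniform(half, size - half - 1, n_stars)
    amps = sample_star_brightness(n_stars, max_value=1.0, rng=rng)
    for x, y, a in zip(xs, ys, amps):
        h = psf_at(x - size / 2, y - size / 2, L=size, support=support)
        r, c = int(round(y)), int(round(x))               # full-pixel placement (sub-pixel shift omitted)
        img[r - half:r + half + 1, c - half:c + half + 1] += a * h
    img *= max_value / img.max()                          # normalize the maximum
    noisy = rng.poisson(img).astype(float)                # Poisson statistics
    noisy += rng.normal(0.0, 1.0, noisy.shape)            # additive Gaussian noise
    return np.rint(noisy - noisy.min())                   # integer, non-negative values
```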

Results for simulated blurred star fields

Figure 3 shows the eigenvalue profiles of the two synthetic PSF fields (BLUR1 and BLUR2), and compares them to the profiles obtained by applying our method to the simulated images (i.e., from the grid of initial local PSF estimates), for the three noise levels considered (for comparison purposes, the global energy has been normalized in all cases). It is very interesting to see: (1) the rapid decay of the ranked eigenvalues; (2) the estimated profiles fitting the theoretical curves for the first two or three eigenvalues, the rest staying effectively beneath the observation noise; (3) the influence of different Poisson count levels being quite small for the chosen range (8, 12, and 16 bits). Observing these eigenvalue profiles, it seems legitimate to use just two dimensions for the reconstruction, in both blur cases (BLUR1 and BLUR2).

Figure 3

First set of eigenvalues of the PSF fields BLUR1 and BLUR2: original and estimated for three different Poisson noise levels (density: 40,000 stars).

Figures 4 and 5 show the structures obtained by applying the SVD to the original PSF fields BLUR1 and BLUR2, respectively. It is worth noting that the weights associated with the extracted eigenPSFs for each PSF field all have the same iso-level structure (circular for BLUR1, and vertical for BLUR2), indicating a PSF field with a single degree of freedom, as expected. In addition, the weighting functions are smooth, as expected for smooth PSF fields. The basic shape of the first two eigenPSFs and their iso-level structures are quite well reproduced in the PSF field estimation from the simulated observations, as shown in Figure 6. Here, one can fully appreciate the importance of nonlinearly removing the outliers, i.e., local weight values departing from the generally spatially smooth behavior.

Figure 4

Original first BLUR1 eigenPSFs and their associated weights.

Figure 5

Original first BLUR2 eigenPSFs and their associated weights.

Figure 6

Estimated first BLUR1 and BLUR2 eigenPSFs from simulations with low Poisson noise level (maximum level of 65535, 16 bits), and their associated weights over the image.

Table 1 shows a quantitative comparison of the estimated PSF fields with the original ones. It shows signal-to-noise ratio values, in decibels, of the PSF field estimation for BLUR1 and BLUR2 with the three Poisson noise levels considered and four different star densities: low density LD (20,000 simulated stars), medium density MD (40,000 simulated stars), high density HD (160,000 stars), and ultra-high density UHD (640,000 stars). While the obtained figures are relatively modest, it is worth noting the high robustness of the method against noise and changes of the star density. We only notice a slight performance decrease when the density becomes too high, which causes a high amount of PSF overlap. Figures 7, 8, and 9 further illustrate this robustness by comparing the obtained results to samples of the original PSF fields, for different Poisson levels and star densities. They also indicate that the blur is slightly overestimated. However, more importantly, a large part of the error comes from obtaining kernels with different relative locations. This is not strictly an error in the estimate, as there is an intrinsic ambiguity for the pattern location in terms of its internal coordinates $\mathbf{x}$ in the model, but it still affects the numerical results negatively.g Disregarding this spurious shift effect, it seems that the most important source of error in these results is the blurring of the local pattern field caused by obtaining our local PSF estimates on relatively large blocks. More sophisticated choices for the shapes of the spatial regions on which to make our local estimates, instead of square blocks, should improve the classical bias–variance trade-off (bigger regions produce higher bias and lower variance, and vice versa) present in any estimation problem on non-stationary random fields, as pointed out in the conclusions.

Table 1 Accuracy of the PSF field estimation (Signal-to-noise ratio, in decibels)
Figure 7

Sampled PSFs with even indices from the lower right quadrant of the BLUR1 PSF field with medium star density (40,000 stars, pooling region of 256×256 pixels with half-block overlapping). Top left: original PSFs; top right: estimated PSFs in a 12-bit image (medium noise level). At the bottom, both linear plots show a central cross section of selected PSFs for all considered noise levels.

Figure 8

Sampled PSFs with even indices from the lower right quadrant of the BLUR2 PSF field with medium star density (40,000 stars, pooling region of 256×256 pixels with half-block overlapping). Top left: original PSFs; top right: estimated PSFs in a 12-bit image (medium noise level). At the bottom, both linear plots show a central cross section of selected PSFs for all considered noise levels.

Figure 9

Sampled PSFs with even indices from the lower right quadrant of the BLUR1 PSF field with a low level of noise and different star densities. Top left: medium density (40,000 stars) with a pooling region of 256×256 pixels; top right: high density with a smaller pooling region (128×128 pixels). At the bottom, both linear plots show a central cross section of selected PSFs for all considered star densities.

PSF field estimation in real astronomical images

We apply our algorithm to three raw images, which were taken with the historic Cassegrain 0.9-m (36-inch) telescope at the National Science Foundation’s Kitt Peak National Observatory: Crescent Nebula (Caldwell27), Swan Nebula (M17), and Ring Nebula (M57). All these images are coded in the FITS format. They are publicly available for academic and research purposes on the observatory’s educational website http://www.noao.edu/education/arbse/arpd/ia.

The apparent magnitude scale, which we apply in the previous simulations, is adapted to the sensitivity of the human visual system. For these experiments, we have used V-filtered raw images, because the formal photoelectric V peak (around 540 nm) is closest to the peak of the human eye’s (dark-adapted) detection efficiency. The images have not been preprocessed in order to show the robustness of the proposed algorithm.

The images are star fields of 2048 × 2048 pixels, and have ranges approximately corresponding to 16 bits. Figure 10 shows the eigenvalue profiles for the initial grid of estimated PSFs, which advise us to use three significant (above noise level) components for the Caldwell27 image, and only two for M17 and M57. In Figure 11 we show a 15 × 15 PSF panel summarizing the result of the PSF field estimation for Caldwell27. We observe that the PSF varies smoothly across the image, becoming more concentrated (less blurred) as we get closer to the bottom right corner. By looking at the three significant eigenPSFs (Figure 12, top left) we see how they modulate the spatial extension of the PSF, which basically follows an off-symmetric, close-to-circular pattern. There is something remarkable about the behavior of the associated weights: to a first approximation, they share the same iso-level curves. This means that, in this real example, the modeled PSF field is not only smooth, but also (approximately) uni-parametric. This surprising result is further illustrated in Figure 13, where the three computed weights have been visualized as dots in a 3D space. The result clearly shows a one-dimensional structure (a curve), reflecting a single effective degree of freedom in the PSF field.

Figure 10

First set of eigenvalues in real astronomical image cases under study: Caldwell27, M17 and M57.

Figure 11

Estimated PSF field for Caldwell27, sampled in a 15×15 grid.

Figure 12

Estimated first eigenPSFs from real astronomical image cases under study: Caldwell27, M17 and M57, and their associated weights over the image.

Figure 13

Estimated weights of the three main eigenPSFs computed for Caldwell27, composing a one dimensional structure in the space (225 dots coming from sampling in a 15×15 grid).

Interestingly enough, the other two studied images have rather different PSF fields, which translate into different eigenPSF shapes, especially for the second component (see Figure 12, top, center, and right). Nevertheless, the processed weights follow remarkably the same approximate iso-level pattern for this significant second component. Note that the lack of a clear iso-level structure in the weights associated with the first eigenPSF is not significant in those examples, as the first component has a fairly constant weight all over the image. Such a pattern appears in other images (data not shown here) from the same telescope. This pattern is not just an artifact of the estimation method: we know this because we have observed a radically different (and, by comparison with the known reference, correct) behavior in the simulations. Our provisional explanation is that this iso-level structure is related to the approximately fixed PSF field component caused by the telescope structure (which uses an off-center sensor with a parabolic mirror, in this case), whereas significant differences in the overall PSF field would be caused by the atmospheric PSF at the moment and sky location of the capture. The observed PSF field can be modeled as the result of convolving the atmospheric PSF field (approximately spatially invariant, given the small angle subtended, but variable for each image capture) with the telescope PSF field component (strongly spatially variant, in this case, but approximately constant for each image of the set). However, we do not yet have a solid explanation for the approximately uni-parametric behavior of the PSF fields.

Image blind restoration in real astronomical images

Finally, we apply the SV restoration method from Part I, using the estimated PSF fields to compensate for the blur. Because the dominant noise source in these images is not Gaussian but Poissonian, we use a variant of the L0-AbS method[22]. In all other aspects, the image estimation method and its implementation are the same as explained in Part I. The resulting images gain a lot of sharpness, which allows very close stars that appear as a single cloud in the observations to be resolved (as shown in Figure 14). The spatial “averaging” effect due to the block-wise estimation of the local PSFs may cause a slight overestimation of the blur, which also appears in the simulation results. That would explain some over-sharpening effects, such as the low-contrast halo shown in the bottom right subfigure.

Figure 14

Original (left) and restored (right) portions of Caldwell27 case study image. One detail has been enlarged to show how close astronomical objects are resolved by the algorithm. Gray-level scale in the image has been nonlinearly modified because of the large dynamic range.

Conclusions and future work

We propose an efficient and robust method for PSF field estimation on star fields. The method is based on careful detection and sub-pixel alignment of the stars according to a simple statistical local model and the ML principle, followed by a combination of dimensionality reduction and nonlinear filtering of outliers on the weights of the eigenPSFs. We have demonstrated that the proposed method is able to robustly capture the structure of the spatially variant blur of smooth PSF fields, in spite of the simplifying assumption that the local PSFs do not overlap. This is demonstrated by image and PSF field simulations, using different Poisson counts and star densities.

Beyond the introduction of the new local model, the article offers interesting results obtained from applying the method to real images (from a medium quality, Earth-based, telescope image dataset), for which the method has revealed significant information about the PSF field structure. We have also demonstrated the applicability of this technique in conjunction with the SV restoration described in our companion paper (Part I), to jointly build a blind SV restoration method assuming Poisson statistics.

We believe that the repeated-pattern local image model presented here has great potential to be extended and improved in several directions, even for tasks other than PSF field estimation (e.g., texture characterization, or other forms of intelligent combinations of image analysis, processing, and synthesis). Perhaps the most promising and natural of these possibilities is to formalize the pattern explicitly as a low-dimensional deformable kernel in a global image model, instead of making the local approximation of considering it constant on certain regions and then building a smooth field from the resulting local estimates, as we did here. It may be both very interesting and potentially useful to explore, at least for the cases where a simple and robust structure of the PSF field is detected (like the approximately uni-parametric behavior observed in the real astronomical images studied here), alternative estimation strategies for exploiting the spatial structure of this non-local redundancy. This should reduce the bias of the estimates without increasing the associated variance. Finally, another natural evolution of this type of model could come from substituting the useful, but limited, concept of linear dimensionality reduction with a much more powerful (and also more difficult to handle) nonlinear dimensionality reduction (see, e.g.,[23]), which aims at modeling the underlying manifold associated with a varying pattern, low-dimensional only in a local sense. Evidence of low-dimensional curved manifolds, like the one shown in Figure 13, is very motivating for addressing this conceptual and technical challenge for practical purposes.

Endnotes

a. To jointly attack the deformable PSF field estimation requires, even in the favorable case of star fields, a substantially more sophisticated approach. We are currently working on that, given the applicability of such an approach to a wider range of images and estimation problems.

b. If the neighborhood size is chosen appropriately, it is a safe assumption that $p(\mathbf{x})$ goes to zero at the neighborhood boundaries ($\mu_p$ is the offset to this “ground level”). Consequently, it is easy to estimate $\mu_p$ from $\mathbf{p}_0$ and revert this decomposition, even though $\mu_p$ is lost in $\mathbf{p}_0$.

c. Although the sample mean may depart significantly from the background level when there is a star in the neighborhood, we have tested that this effect, in this case, is small compared to that of the variance in the denominator.

d. The interpolation is always computed on the original full-pixel samples to avoid accumulating error.

e. As in the previous section, we are keeping the PSF patterns zero-mean and L2-normalized. We, as other authors (see, e.g.,[15]), have experienced that mean subtraction followed by L2 normalization generally helps to produce a higher spectral concentration as compared to leaving the PSFs positive and normalized in volume.

f. Although this is an improper pdf, as it is not integrable around zero, this does not prevent us from using it for this purpose.

g. We intend to remove this ambiguity in future versions of this method.

References

  1. Almeida MSC, Almeida LB: Blind and semi-blind deblurring of natural images. IEEE Trans. Image Process 2010, 19: 36-52.


  2. Gupta A, Joshi N, Lawrence Zitnick C, Cohen M, Curless B: Computer Vision–ECCV 2010. Single image deblurring using motion density functions. (2010), pp. 171–184


  3. Tai YW, Du H, Brown MS, Lin S: Correction of spatially varying image and video motion blur using a hybrid camera. IEEE Trans. Pattern Anal. Mach. Intell 2010, 32(6):1012-1028.


  4. Joshi N, Szeliski R, Kriegman DJ: IEEE Conference on Computer Vision and Pattern Recognition, 2008. CVPR 2008. PSF estimation using sharp edge prediction. (IEEE, 2008), pp. 1–8


  5. Kee E, Paris S, Chen S, Wang J: IEEE International Conference on Computational Photography (ICCP 2011). Modeling and removing spatially-varying optical blur. (IEEE, 2011), pp. 1–8

  6. Shan Q, Curless B, Kohno T: Computer Vision - ECCV 2010, vol. 6316 of Lecture Notes in Computer Science. Seeing through obscure glass, (Springer, Berlin, 2010), pp. 364–378


  7. Bernstein GM, Jarvis M: Shapes and shears, stars and smears: optimal measurements for weak lensing. Astronom. J 2002, 123(2):583. 10.1086/338085


  8. Refregier A: Shapelets—I. A method for image analysis. Month. Notices R. Astronom. Soc 2003, 338(1):35-47. 10.1046/j.1365-8711.2003.05901.x


  9. Denis L, Thiebaut E, Soulez F: 2011 18th IEEE International Conference on Image Processing (ICIP). Fast model of space-variant blurring and its application to deconvolution in astronomy. (September 2011), pp. 2817–2820


  10. Jee MJ, Ford HC, Illingworth GD, White RL, Broadhurst TJ, Coe DA, Meurer GR, Van Der Wel A, Benítez N, Blakeslee JP, et al.: Discovery of a ringlike dark matter structure in the core of the galaxy cluster Cl 0024+ 17. Astrophys. J 2007, 661: 728. 10.1086/517498


  11. Perona P: Deformable kernels for early vision. IEEE Trans. Pattern Anal. Mach. Intell 1995, 17(5):488-499. 10.1109/34.391394


  12. Golub GH, Heath M, Wahba G: Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics 1979, 21: 215-223. 10.1080/00401706.1979.10489751


  13. Lupton R, Gunn JE, Ivezic Z, Knapp GR, Kent S: Proceedings of Astronomical Data Analysis Software and Systems X, vol. 238, 12–15 November 2000. The SDSS imaging pipelines. (Astronomical Society of the Pacific, Boston, MA, USA, 2001), pp. 269–279


  14. Jee MJ, Blakeslee JP, Sirianni M, Martel AR, White RL, Ford HC: Principal component analysis of the time-and position-dependent point-spread function of the advanced camera for surveys. Publications Astronom. Soc. Pacific 2007, 119(862):1403-1419. 10.1086/524849


  15. Jee MJ, Tyson JA: Towards precision LSST weak-lensing measurement-I: impacts of atmospheric turbulence and optical aberration. arXiv:1011.1913v3 [astro-ph.IM], 2011


  16. Jarvis M, Jain B: Principal component analysis of PSF variation in weak lensing surveys. arXiv:astro-ph/0412234v2, 2004


  17. Schrabback T, Hartlap J, Joachimi B, Kilbinger M, Simon P, Benabed K, Bradac M, Eifler T, Erben T, Fassnacht CD, William High F, Hilbert S, Hildebrandt H, Hoekstra H, Kuijken K, Marshall P, Mellier Y, Morganson E, Schneider P, Semboloni E, Van Waerbeke L, Velander M: Evidence of the accelerated expansion of the universe from weak lensing tomography with COSMOS. Astronom. Astrophys 2010, 516: 1-26.


  18. Lauer TR: Deconvolution with a spatially-variant PSF. arXiv:astro-ph/0208247v1, 2002


  19. Elad M, Figueiredo MAT, Ma Y: On the role of sparse and redundant representations in image processing. IEEE Proc. (Special Issue on Applications of Sparse Representation and Compressive Sensing) 2010, 98(6):972-982.


  20. Donoho DL, Johnstone IM: Ideal spatial adaptation by wavelet shrinkage. Biometrika 1994, 81: 425-455. 10.1093/biomet/81.3.425


  21. Strous L: Mr. Sunspot’s Answer Book. National Solar Observatory/Sacramento Peak, Sunspot, NM, USA (data available through the Wikipedia article: “Apparent magnitude”, and in the web archive of the original source), 1996

  22. Gil-Rodrigo E, Portilla J, Miraut D, Suarez-Mesa R: 18th IEEE International Conference on Image Processing (ICIP). Efficient joint Poisson-Gauss restoration using multi-frame L2-relaxed-L0 analysis-based sparsity. (September 2011), pp. 1385–1388


  23. Lee JA, Verleysen M: Nonlinear Dimensionality Reduction. (Series: Information Science and Statistics), Springer, New York, 2007



Acknowledgements

The real astronomical images used to illustrate the algorithm (Caldwell27, M17, and M57) are copyrighted by the National Optical Astronomy Observatory/Association of Universities for Research in Astronomy/National Science Foundation. They have been used strictly for educational and research purposes, as the NOAO/AURA Image Library Conditions of Use state. We also thank the anonymous reviewers for their meaningful comments and constructive suggestions. David Miraut has been funded by grant CICYT TIN2010-21289-C02-01. Javier Portilla has been funded by grants CICYT TEC2009-13696 and CSIC PIE201050E021. All grants are from the Spanish Education and Science & Innovation Ministries. Johannes Ballé’s stay in Madrid has been partially funded by the CSIC.

Author information


Corresponding author

Correspondence to Javier Portilla.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Miraut, D., Ballé, J. & Portilla, J. Efficient shift-variant image restoration using deformable filtering (Part II): PSF field estimation. EURASIP J. Adv. Signal Process. 2012, 193 (2012). https://doi.org/10.1186/1687-6180-2012-193

