A Variable Splitting Augmented Lagrangian Approach to Linear Spectral Unmixing
J. Bioucas-Dias, "A variable splitting augmented Lagrangian approach to linear spectral unmixing", in First IEEE GRSS Workshop on Hyperspectral Image and Signal Processing - WHISPERS'2009, Grenoble, France, 2009. Available at http://arxiv.org/abs/0904.4635
Abstract
This paper presents a new linear hyperspectral unmixing method of the minimum volume class, termed simplex identification via split augmented Lagrangian (SISAL). Following Craig’s seminal ideas, hyperspectral linear unmixing amounts to finding the minimum volume simplex containing the hyperspectral vectors. This is a nonconvex optimization problem with convex constraints. In the proposed approach, the positivity constraints, forcing the spectral vectors to belong to the convex hull of the endmember signatures, are replaced by soft constraints. The obtained problem is solved by a sequence of augmented Lagrangian optimizations. The resulting algorithm is very fast and able to solve problems far beyond the reach of the current state-of-the-art algorithms. The effectiveness of SISAL is illustrated with simulated data.
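The splitting idea behind SISAL can be illustrated on a toy convex problem. The sketch below (plain NumPy, not the authors' MATLAB code) applies the generic variable-splitting augmented-Lagrangian (ADMM) template to min_x 0.5*||x - y||^2 + lam*||z||_1 subject to x = z; SISAL applies the same template to the much harder minimum-volume criterion, so this shows only the optimization machinery, not the unmixing itself.

```python
import numpy as np

def admm_soft(y, lam, rho=1.0, n_iter=100):
    """Variable-splitting augmented-Lagrangian (ADMM) template on a
    toy problem: min_x 0.5*||x - y||^2 + lam*||z||_1 s.t. x = z.
    Illustrative only -- SISAL applies this splitting strategy to
    the nonconvex minimum-volume simplex criterion."""
    x = y.copy()
    z = np.zeros_like(y)
    u = np.zeros_like(y)                     # scaled dual variable
    for _ in range(n_iter):
        x = (y + rho * (z - u)) / (1 + rho)  # quadratic subproblem in x
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)  # shrinkage in z
        u += x - z                           # multiplier (dual) update
    return z

out = admm_soft(np.array([3.0, 0.2, -2.0]), lam=0.5)  # converges to soft(y, lam)
```

The alternation between an easy quadratic subproblem and an easy shrinkage subproblem, glued by the multiplier update, is the "sequence of augmented Lagrangian optimizations" the abstract refers to.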
Minimum Volume Simplex Analysis: A Fast Algorithm to Unmix Hyperspectral Data
J. Li and J. Bioucas-Dias, "Minimum volume simplex analysis: a fast algorithm to unmix hyperspectral data", in IEEE International Geoscience and Remote Sensing Symposium - IGARSS'2008, Boston, USA, 2008.
Abstract
This paper presents a new method of minimum volume class for hyperspectral unmixing, termed minimum volume simplex analysis (MVSA). The underlying mixing model is linear; i.e., the mixed hyperspectral vectors are modeled by a linear mixture of the endmember signatures weighted by the correspondent abundance fractions. MVSA approaches hyperspectral unmixing by fitting a minimum volume simplex to the hyperspectral data, constraining the abundance fractions to belong to the probability simplex. The resulting optimization problem is solved by implementing a sequence of quadratically constrained subproblems. In a final step, the hard constraint on the abundance fractions is replaced with a hinge-type loss function to account for outliers and noise. We illustrate the state-of-the-art performance of the MVSA algorithm in unmixing simulated data sets. We are mainly concerned with the realistic scenario in which the pure pixel assumption (i.e., there exists at least one pure pixel per endmember) is not fulfilled. In these conditions, the MVSA yields much better performance than the pure pixel based algorithms.
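The hinge-type loss mentioned in the final step can be written in a few lines. A minimal sketch (generic NumPy, not the published MVSA code):

```python
import numpy as np

def hinge(x):
    """Hinge loss: zero for nonnegative entries and linear for
    negative ones, so slightly negative abundance estimates caused
    by noise or outliers are penalized softly instead of being
    forbidden outright by a hard constraint."""
    return np.maximum(-x, 0.0)

# abundance estimates for one pixel; the last is slightly negative
a = np.array([0.70, 0.35, -0.05])
penalty = hinge(a).sum()  # only the negative entry contributes
```

Replacing the hard constraint with this loss keeps the problem well behaved while tolerating the small violations that real, noisy data inevitably produce.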
PEARLS (phase estimation using adaptive regularization based on local smoothing)
J. Bioucas-Dias, V. Katkovnik, J. Astola, and K. Egiazarian, "Absolute phase estimation: adaptive local denoising and global unwrapping", Applied Optics, vol. 47, no. 29, pp. 5358-5369, 2008.
Abstract
The paper attacks absolute phase estimation with a two-step approach: the
first step applies an adaptive local denoising scheme to the modulo-2π noisy
phase; the second step applies a robust phase unwrapping algorithm to the
denoised modulo-2π phase obtained in the first step. The adaptive local
modulo-2π phase denoising is a new algorithm based on local polynomial
approximations. The zero-order and the first-order approximations of the phase
are calculated in sliding windows of varying size. The zero-order approximation
is used for pointwise adaptive window size selection, whereas the first-order
approximation is used to filter the phase in the obtained windows. For phase
unwrapping, we apply the recently introduced robust (in the sense of
discontinuity preserving) PUMA unwrapping algorithm [IEEE Trans. Image Process.
16, 698 (2007)] to the denoised wrapped phase. Simulations give evidence that
the proposed algorithm yields state-of-the-art performance, enabling strong
noise attenuation while preserving image details.
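For intuition, the zero-order local approximation can be sketched in a few lines. This toy 1-D version uses a fixed window, whereas the paper's contribution is precisely the pointwise adaptive window selection and the first-order fits; treat it only as an illustration of wrapped-phase averaging.

```python
import numpy as np

def lpa0_denoise(psi, h):
    """Zero-order local approximation of a modulo-2*pi phase
    (fixed-window sketch): average exp(1j*psi) over a window of
    half-width h and take the angle, which avoids wrap-around
    artifacts. PEARLS additionally uses first-order fits and picks
    h pointwise; only the zero-order step is shown here."""
    z = np.exp(1j * psi)
    out = np.empty(len(psi))
    for i in range(len(psi)):
        lo, hi = max(0, i - h), min(len(psi), i + h + 1)
        out[i] = np.angle(z[lo:hi].mean())   # circular mean of the window
    return out

# noisy wrapped phase around a constant true phase of 1 radian
rng = np.random.default_rng(0)
psi = np.angle(np.exp(1j * (1.0 + 0.5 * rng.standard_normal(200))))
den = lpa0_denoise(psi, 10)
```

Averaging the complex exponentials rather than the raw phase values is what makes the estimate insensitive to the 2π wrap.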
HySime (hyperspectral signal subspace identification by minimum error)
J. Bioucas-Dias and J. Nascimento, "Hyperspectral subspace identification", IEEE Transactions on Geoscience and Remote Sensing, vol. 46, no. 8, pp. 2435-2445, 2008.
Abstract
Signal subspace identification is a crucial first step in many hyperspectral processing algorithms such as target detection, change detection, classification, and unmixing. The identification of this subspace enables a correct dimensionality reduction, yielding gains in algorithm performance and complexity and in data storage. This paper introduces a new minimum mean square error-based approach to infer the signal subspace in hyperspectral imagery. The method, which is termed hyperspectral signal identification by minimum error, is eigen decomposition based, unsupervised, and fully automatic (i.e., it does not depend on any tuning parameters). It first estimates the signal and noise correlation matrices and then selects the subset of eigenvalues that best represents the signal subspace in the least squared error sense. State-of-the-art performance of the proposed method is illustrated by using simulated and real hyperspectral images.
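The selection rule can be illustrated with a simplified sketch. The version below assumes the noise correlation matrix is known (the paper estimates it from the data) and keeps an eigen-direction when the data power along it exceeds twice the noise power, the minimum mean squared error trade-off the paper formalizes; names and details here are illustrative, not the published HySime code.

```python
import numpy as np

def hysime_like_rank(Ry, Rn):
    """Simplified HySime-style subspace selection (sketch).
    Keeps an eigen-direction of the estimated signal correlation
    matrix when the data power along it exceeds twice the noise
    power -- the minimum mean squared error criterion."""
    _, E = np.linalg.eigh(Ry - Rn)             # eigenvectors (columns of E)
    py = np.einsum('ij,jk,ki->i', E.T, Ry, E)  # data power per direction
    pn = np.einsum('ij,jk,ki->i', E.T, Rn, E)  # noise power per direction
    return int(np.sum(py > 2 * pn))            # estimated subspace dimension

# toy data: 5 spectral bands, a rank-2 signal, white noise
rng = np.random.default_rng(0)
M = rng.standard_normal((5, 2))                # two "endmember" signatures
S = rng.standard_normal((2, 10000))            # sources
sigma = 0.05
Y = M @ S + sigma * rng.standard_normal((5, 10000))
Ry = Y @ Y.T / Y.shape[1]                      # sample correlation matrix
Rn = sigma**2 * np.eye(5)                      # noise correlation (assumed known)
k = hysime_like_rank(Ry, Rn)
```

Because the rule is a per-direction power comparison, it needs no tuning parameter, which is the property the abstract emphasizes.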
J. Bioucas-Dias and M. Figueiredo, "A new TwIST: Two-step iterative shrinkage/thresholding algorithms for image restoration", IEEE Transactions on Image Processing, December 2007.
Abstract
Iterative shrinkage/thresholding (IST) algorithms have been recently proposed
to handle a class of convex unconstrained optimization problems arising in image
restoration and other linear inverse problems. This class of problems results
from combining a linear observation model with a non-quadratic regularizer (e.g.,
total variation, or wavelet-based regularization). It happens that the
convergence rate of these IST algorithms depends heavily on the condition of the
linear observation operator, becoming very slow when this operator is
ill-conditioned.
In this paper, we introduce two-step IST (TwIST) algorithms, exhibiting much
faster convergence rate for ill-conditioned problems. For a vast class of
non-quadratic convex regularizers (including ℓp
norms, some Besov norms, and total variation), we show that the proposed TwIST
algorithm converges to a minimizer of the objective function, for a given range
of values of its parameters. The effectiveness of the new method is
experimentally confirmed on a set of image deconvolution problems.
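The two-step update is simple to state. Below is a sketch for the ℓ2-ℓ1 case on a toy problem (plain NumPy, not the authors' released TwIST package); alpha and beta are the two-step parameters, and setting alpha = beta = 1 recovers ordinary IST.

```python
import numpy as np

def soft(x, tau):
    """Soft threshold: the shrinkage step for an l1 regularizer."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def twist(y, A, lam, alpha, beta, n_iter=200):
    """Sketch of the TwIST two-step iteration for
    min_x 0.5*||y - A x||^2 + lam*||x||_1 (illustrative only).
    Each iterate combines the two previous iterates with one
    shrinkage step; alpha = beta = 1 gives plain IST."""
    x_prev = np.zeros(A.shape[1])
    x = soft(A.T @ y, lam)                    # one IST step to initialize
    for _ in range(n_iter):
        grad_step = x + A.T @ (y - A @ x)     # gradient step on the data term
        x_new = ((1 - alpha) * x_prev + (alpha - beta) * x
                 + beta * soft(grad_step, lam))
        x_prev, x = x, x_new
    return x
```

The dependence on the two previous iterates is what accelerates convergence when A is ill-conditioned; suitable alpha and beta depend on the extreme eigenvalues of A^T A (the values used in the test below were chosen for the toy diagonal operator there).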
J. Bioucas-Dias, M. Figueiredo, and J. Oliveira, "Total variation image deconvolution: A majorization-minimization approach", IEEE International Conference on Acoustics, Speech, and Signal Processing - ICASSP'2006, 2006.
Abstract
The total variation (TV) regularizer is well suited to piecewise smooth images. If we add the fact that these regularizers are convex, we have, perhaps, the reason for the resurgence of interest in TV-based approaches to inverse problems. This paper proposes a new TV-based algorithm for image deconvolution, under the assumptions of linear observations and additive white Gaussian noise. To compute the TV estimate, we propose a majorization-minimization approach, which consists in replacing a difficult optimization problem by a sequence of simpler ones, by relying on convexity arguments. The resulting algorithm has O(N) computational complexity, for finite support convolutional kernels. In a comparison with state-of-the-art methods, the proposed algorithm either outperforms or equals them, with similar computational complexity.
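The majorization-minimization step can be illustrated on 1-D TV denoising: each |d| term is majorized at the current iterate d_t by the quadratic d**2/(2*|d_t|) + |d_t|/2, so every MM step reduces to a linear system. This toy sketch solves the system densely; the paper's deconvolution algorithm achieves O(N) with different machinery.

```python
import numpy as np

def tv_denoise_mm(y, lam, n_iter=50, eps=1e-6):
    """1-D TV denoising by majorization-minimization (toy sketch,
    not the paper's O(N) deconvolution algorithm). |d| is majorized
    by d**2/(2*|d_t|) + |d_t|/2 at the current iterate, so each MM
    step solves a reweighted linear system."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)           # first-difference operator
    x = y.astype(float).copy()
    for _ in range(n_iter):
        w = 1.0 / (np.abs(D @ x) + eps)      # majorizer weights 1/|d_t|
        x = np.linalg.solve(np.eye(n) + lam * D.T @ (w[:, None] * D), y)
    return x
```

Each step minimizes an upper bound on the TV objective that touches it at the current iterate, so the objective decreases monotonically; that is the convexity argument the abstract relies on.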
J. Bioucas-Dias and G. Valadão, "Phase unwrapping via graph cuts", IEEE Transactions on Image Processing, vol. 16, no. 3, pp. 698-709, 2007.
Abstract
Phase unwrapping is the inference of absolute phase from modulo-2π phase.
This paper introduces a new energy minimization framework for phase unwrapping.
The considered objective functions are first order Markov random fields. We
provide an exact energy minimization algorithm, whenever the corresponding
clique potentials are convex, namely for the phase unwrapping classical Lp norm,
with p ≥ 1. Its complexity is KT(n, 3n), where K is the length of the absolute
phase domain measured in 2π units and T(n,m) is the complexity of a max-flow
computation in a graph with n nodes and m edges. For nonconvex clique
potentials, often used owing to their discontinuity preserving ability, we face
an NP-hard problem for which we devise an approximate solution. Both algorithms
solve integer optimization
problems, by computing a sequence of binary optimizations, each one solved by
graph cut techniques. Accordingly, we name the two algorithms PUMA, for phase
unwrapping max-flow/min-cut. A set of experimental results illustrates the
effectiveness of the proposed approach and its competitiveness in comparison
with state-of-the-art phase unwrapping algorithms.
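To make the wrapped-vs-absolute phase problem concrete, here is Itoh's classical 1-D unwrapping in a few lines. This is NOT PUMA: it simply integrates wrapped phase differences and fails whenever the true phase differences exceed π, which is exactly the discontinuity regime PUMA's energy minimization is designed to handle.

```python
import numpy as np

def unwrap_1d(psi):
    """Itoh's classical 1-D unwrapping (illustration only, not
    PUMA): wrap the phase differences into (-pi, pi] and integrate
    them. Valid only when true phase differences stay below pi."""
    d = np.diff(psi)
    d = np.mod(d + np.pi, 2 * np.pi) - np.pi   # wrapped differences
    return psi[0] + np.concatenate(([0.0], np.cumsum(d)))

phi = np.linspace(0.0, 6 * np.pi, 100)   # smooth absolute phase ramp
psi = np.mod(phi, 2 * np.pi)             # observed modulo-2*pi phase
rec = unwrap_1d(psi)                     # recovers phi (psi[0] = phi[0] here)
```

PUMA replaces this fragile local integration with a global first-order Markov random field energy, minimized exactly (convex potentials) or approximately (nonconvex ones) by a sequence of binary graph-cut problems.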
VCA Algorithm (unmix hyperspectral data)
J. Nascimento and J. Dias, "Vertex Component Analysis: A fast algorithm to unmix hyperspectral data", IEEE Transactions on Geoscience and Remote Sensing, vol. 43, no. 4, pp. 898-910, 2005.
Abstract
Given a set of mixed spectral (multispectral or hyperspectral) vectors, linear spectral mixture analysis, or linear unmixing, aims at estimating the number of reference substances, also called endmembers, their spectral signatures, and their abundance fractions. This paper presents a new method for unsupervised endmember extraction from hyperspectral data, termed vertex component analysis (VCA). The algorithm exploits two facts: i) the endmembers are the vertices of a simplex and ii) the affine transformation of a simplex is also a simplex. In a series of experiments using simulated and real data, the VCA algorithm competes with state-of-the-art methods, with a computational complexity between one and two orders of magnitude lower than the best available method.
MATLAB code for Windows, real data (Cuprite)
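The two geometric facts the algorithm exploits suggest a very small sketch: repeatedly project the data onto a direction orthogonal to the endmembers found so far and take the extreme point, which is a new simplex vertex. This shows only the core iteration, without the SNR-dependent projections and dimensionality reduction of the published VCA; all names are illustrative.

```python
import numpy as np

def vca_like(Y, p, seed=0):
    """Core vertex-component-analysis iteration (sketch): the
    endmembers are simplex vertices, so projecting the data onto a
    direction orthogonal to the endmembers found so far makes a new
    vertex the extreme point. Not the published VCA code."""
    rng = np.random.default_rng(seed)
    E = np.zeros((Y.shape[0], p))
    idx = []
    for i in range(p):
        f = rng.standard_normal(Y.shape[0])
        if i > 0:    # remove the components of f along found endmembers
            f -= E[:, :i] @ np.linalg.lstsq(E[:, :i], f, rcond=None)[0]
        j = int(np.argmax(np.abs(f @ Y)))    # extreme projection -> a vertex
        idx.append(j)
        E[:, i] = Y[:, j]
    return idx

# toy scene: 3 endmembers, one pure pixel each, plus 50 mixed pixels
M = np.array([[1.0, 0.0, 0.2],
              [0.0, 1.0, 0.2],
              [0.0, 0.0, 1.0]])
rng = np.random.default_rng(1)
A = rng.dirichlet(np.ones(3), size=50).T     # abundances on the simplex
Y = np.hstack([M, M @ A])                    # pure pixels occupy columns 0..2
found = sorted(vca_like(Y, 3))
```

Because a linear functional over a simplex attains its maximum at a vertex, each projection step lands on a pure pixel, which is why the method needs the pure pixel assumption.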
Code illustrating the limitations of ICA in unmixing hyperspectral data
J. Nascimento and J. Dias, "Does Independent Component Analysis Play a Role
in Unmixing Hyperspectral Data?", IEEE Transactions on Geoscience and Remote Sensing,
vol. 43, no. 1, pp 175-187, 2005.
Abstract
Independent Component Analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. ICA is founded on two assumptions: i) the observed spectrum vector is a linear mixture of the constituent spectra (endmember spectra) weighted by the correspondent abundance fractions (sources); ii) the sources are statistically independent. Independent Factor Analysis (IFA) extends ICA to linear mixtures of independent sources immersed in noise. Concerning hyperspectral data, the first assumption is valid whenever the multiple scattering among the distinct constituent substances (endmembers) is negligible and the surface is partitioned according to the fractional abundances. The second assumption, however, is violated, since the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process. Thus, sources cannot be statistically independent. This paper studies the impact of hyperspectral source statistical dependence on ICA and IFA performance. We conclude that the accuracy of these methods improves with increasing signature variability and signal-to-noise ratio (SNR). In any case, there are always endmembers incorrectly unmixed. We arrive at this conclusion by minimizing the mutual information of simulated and real hyperspectral mixtures. The computation of the mutual information is based on fitting mixtures of Gaussians to the observed data. A method to sort the ICA and IFA estimates in terms of the likelihood of being correctly unmixed is also proposed.
MATLAB code for Windows, real data (Cuprite)
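The central obstruction, that abundances summing to one cannot be independent, is easy to verify numerically: since s_1 + ... + s_p = 1 is constant, each row of the abundance covariance matrix sums to cov(s_i, 1) = 0, forcing negative cross-covariances whenever the variances are positive. A small check (illustrative NumPy, not the code distributed above):

```python
import numpy as np

# If s_1 + ... + s_p = 1 (a constant), then for every i
#   cov(s_i, s_1) + ... + cov(s_i, s_p) = cov(s_i, 1) = 0,
# so cross-covariances must be negative when var(s_i) > 0,
# contradicting the independence assumed by ICA.
rng = np.random.default_rng(0)
S = rng.dirichlet(np.ones(3), size=100000)   # each row sums to one
C = np.cov(S, rowvar=False)
row_sums = C.sum(axis=1)                     # ~ 0, by the identity above
cross = C[0, 1]                              # strictly negative
```

The Dirichlet distribution is used here only as a convenient way to draw points on the probability simplex; any sum-to-one sources would exhibit the same structure.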
ZπM Algorithm (absolute phase estimation, phase unwrapping)
J. Dias and J. Leitão, "The ZπM Algorithm for Interferometric Image Reconstruction in SAR/SAS", IEEE Transactions on Image Processing, vol. 11, no. 4, pp. 408-422, 2002.
Abstract
The paper presents an effective algorithm for absolute phase (not simply modulo-2π) estimation from incomplete, noisy, and modulo-2π observations in interferometric aperture radar and sonar (InSAR/InSAS). The adopted framework is also representative of other applications such as optical interferometry, magnetic resonance imaging, and diffraction tomography. The Bayesian viewpoint is adopted; the observation density is 2π-periodic and accounts for the interferometric pair decorrelation and system noise; the a priori probability of the absolute phase is modelled by a compound Gauss-Markov random field (CGMRF) tailored to piecewise smooth absolute phase images. We propose an iterative scheme for the computation of the maximum a posteriori probability (MAP) phase estimate. Each iteration embodies a discrete optimization step (Z-step), implemented by network programming techniques, and an iterated conditional modes (ICM) step (2π-step). Accordingly, the algorithm is termed ZπM, where the letter M stands for maximization. A set of experimental results, comparing the proposed algorithm with alternative approaches, illustrates the effectiveness of the proposed method.
MATLAB code for Windows, examples (demos.ppt)