Marcelo Pereyra | Welcome

Maxwell Institute for Mathematical Sciences
School of Mathematical and Computer Sciences, Heriot-Watt University

Workshop on “New mathematical methods in computational imaging”, Heriot-Watt, 29 June 2017.

Overview
Heriot-Watt University hosted a very successful workshop on "New mathematical methods in computational imaging", at the School of Mathematical and Computer Sciences on 29 June 2017.

The workshop was organised by Marcelo Pereyra with the support of Heriot-Watt University and of the London Mathematical Society.

The event brought together imaging experts from the statistics, applied analysis, optimisation, inverse problems, and signal processing communities to discuss recent developments in mathematical methodology for computational imaging. The goal was to provide an opportunity to disseminate new results and to promote synergy and cross-fertilisation of ideas.

Programme (see abstracts)
09.50 - 10.00: Welcome
10.00 - 10.45: Mike Davies (slides)
Fast Data Driven Compressed Sensing with application to Compressed Quantitative MRI
Coffee break (30 minutes)
11.15 - 12.00: Marcelo Pereyra (slides)
Efficient Bayesian computation by proximal Markov chain Monte Carlo: Langevin meets Moreau
12.00 - 13.30: Lunch & Invited Poster Session
- Abderrahim Halimi: Restoration of Depth and Intensity Images using a Graph Laplacian Regularization
- Audrey Repetti: Non-convex blind deconvolution approach for sparse image processing
- Jenovah Rodrigues: Bayesian Inverse Problems with Heterogeneous Noise
- Xiaohao Cai: High-dimensional uncertainty quantification in radio interferometric imaging
13.30 - 14.15: Joao Mota (slides)
Signal Processing with Side Information: A Geometric Approach via Sparsity
14.15 - 15.00: Yoann Altmann (slides)
Comparison of sampling strategies for 3D scene reconstruction from sparse multispectral lidar waveforms
15.00 - 16.00: Coffee break & Contributed Poster Session
- Matt Moores: Approximate Posterior Inference for the Inverse Temperature of a Hidden Potts Model
- Shengheng Liu: Image Reconstruction Algorithm for EIT Based on Sparse Bayesian Learning
- Yong Bao: Nonlinear temperature field reconstruction using acoustic tomography
16.00 - 16.45: Jean-François Giovannelli (slides)
A Hierarchical Bayesian Strategy for Unsupervised Segmentation of Textured Image

Address:
Colin Maclaurin Building, room S.01 (2nd floor)
School of Mathematical & Computer Sciences
Heriot-Watt University
Edinburgh
EH14 4AS
United Kingdom

Registration
Registration is now closed.

Titles and abstracts
Mike Davies (University of Edinburgh)
Fast Data Driven Compressed Sensing with application to Compressed Quantitative MRI
joint work with Mohammad Golbabaee, Zhouye Chen, Zaid Mahbub, Ian Marshall and Yves Wiaux.

Abstract:
We consider a class of compressed sensing problems with a data-driven signal model. We show that fast reconstruction can be achieved through an inexact iterated projected gradient algorithm, using a cover tree data structure to enable fast nearest neighbor searches. We then apply this to a novel form of MR imaging called Magnetic Resonance Fingerprinting (MRF), which enables direct estimation of the T1, T2 and proton density parameter maps for a patient through undersampled k-space sampling and BLIP, a gradient projection algorithm that enforces the MR Bloch dynamics. We will present both theoretical and numerical results showing that significant computational savings are possible through the use of inexact projections and a fast approximate nearest neighbor search.
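The reconstruction scheme described in the abstract can be sketched in a few lines. The following is a minimal, illustrative Python version: the cover tree is replaced by a brute-force nearest-neighbour search, and the dictionary, measurement matrix and step size are hypothetical toy choices rather than the MRF setup from the talk.

```python
import numpy as np

def nearest_atom(x, dictionary):
    """Project x onto a finite data-driven signal model by brute-force
    nearest-neighbour search; in the talk a cover tree makes this step
    approximate but fast."""
    d2 = np.sum((dictionary - x) ** 2, axis=1)
    return dictionary[np.argmin(d2)]

def projected_gradient_cs(y, A, dictionary, step, n_iter=50):
    """Iterated projected gradient for y = A x, with x constrained to lie
    in a data-driven model (here: the rows of `dictionary`)."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = x + step * A.T @ (y - A @ x)   # gradient step on ||y - A x||^2 / 2
        x = nearest_atom(x, dictionary)    # (inexact) model projection
    return x

# toy demo: recover a dictionary atom from undersampled random measurements
rng = np.random.default_rng(0)
dictionary = rng.standard_normal((20, 8))   # hypothetical signal model
x_true = dictionary[3]
A = rng.standard_normal((5, 8)) / np.sqrt(5)
y = A @ x_true
x_hat = projected_gradient_cs(y, A, dictionary, step=0.15)
```

By construction the iterate always lies in the signal model, so the output is exactly one of the dictionary atoms; the speed-ups discussed in the talk come from making the projection step approximate.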

Marcelo Pereyra (Heriot-Watt University, Mathematics)
Efficient Bayesian computation by proximal Markov chain Monte Carlo: Langevin meets Moreau
joint work with Alain Durmus and Eric Moulines.

Abstract:
Modern imaging methods rely strongly on Bayesian inference techniques to solve challenging imaging problems. Currently, the predominant Bayesian computation approach is convex optimisation, which scales very efficiently to high dimensional image models and delivers accurate point estimation results. However, in order to perform more complex analyses, for example image uncertainty quantification or model selection, it is necessary to use more computationally intensive Bayesian computation techniques such as Markov chain Monte Carlo methods. This paper presents a new and highly efficient Markov chain Monte Carlo methodology to perform Bayesian computation for high dimensional models that are log-concave and non-smooth, a class of models that is central in imaging sciences. The methodology is based on a regularised unadjusted Langevin algorithm that exploits tools from convex analysis, namely Moreau-Yosida envelopes and proximal operators, to construct Markov chains with favourable convergence properties. In addition to scaling efficiently to high dimensions, the method is straightforward to apply to models that are currently solved by using proximal optimisation algorithms. We provide a detailed theoretical analysis of the proposed methodology, including asymptotic and non-asymptotic convergence results with easily verifiable conditions, and explicit bounds on the convergence rates. The proposed methodology is demonstrated with image deconvolution and tomographic reconstruction experiments where we conduct a range of challenging Bayesian analyses related to uncertainty quantification, hypothesis testing, and model selection in the absence of ground truth.
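The core update of the regularised Langevin scheme is short enough to sketch. Below is an illustrative Python version for a toy Gaussian-plus-l1 target, not the paper's implementation: the smooth term f is quadratic, the non-smooth term g is the l1 norm (whose proximal operator is soft-thresholding), and the step size and envelope parameter are arbitrary small values.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def myula(grad_f, prox_g, x0, gamma, lam, n_samples, rng):
    """Moreau-Yosida regularised unadjusted Langevin algorithm, targeting
    exp(-f - g^lam) where g^lam is the Moreau envelope of g; the gradient
    of g^lam at x is (x - prox_g(x, lam)) / lam."""
    x = x0.copy()
    samples = []
    for _ in range(n_samples):
        grad = grad_f(x) + (x - prox_g(x, lam)) / lam
        x = x - gamma * grad + np.sqrt(2 * gamma) * rng.standard_normal(x.shape)
        samples.append(x.copy())
    return np.array(samples)

# toy target: f(x) = ||x - mu||^2 / 2 (smooth), g(x) = ||x||_1 (non-smooth)
rng = np.random.default_rng(1)
mu = np.array([2.0, -1.0])
chain = myula(grad_f=lambda x: x - mu,
              prox_g=soft_threshold,
              x0=np.zeros(2), gamma=0.05, lam=0.1,
              n_samples=5000, rng=rng)
post_mean = chain[1000:].mean(axis=0)
```

The only ingredients are the gradient of the smooth term and the proximal operator of the non-smooth term, which is why the method applies directly to models already solved by proximal optimisation algorithms.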

Joao Mota (Heriot-Watt University, ISSS Lab.)
Signal Processing with Side Information: A Geometric Approach via Sparsity

Abstract:
Making sense of modern datasets, in which data is often multi-modal and heterogeneous, is a challenging task that is becoming increasingly important for both academia and industry. In this work, we look at sparsity-based approaches to extract information from multi-modal data. We start with the problem of integrating prior knowledge into sparse reconstruction schemes. Prior information here means a signal similar to the signal to be reconstructed, for example, in medical imaging, a prior scan of the same patient. Our theory provides a minimal number of measurements required to reconstruct the original signal as a function of the quality of the prior information. We then develop what can be seen as a nonlinear, sparsity-based Kalman filter, where the number of measurements taken from each time sample is computed automatically. We illustrate our method with the reconstruction of a video sequence taken by a compressed sensing camera. Finally, in the last part of the talk, we describe an approach to separate the x-rays of the paintings in the door panels of the Ghent Altarpiece, a 15th century art work by Van Eyck currently under restoration. Our method uses the visual images to aid the separation process and outperforms prior state-of-the-art methods, such as morphological component analysis.

Yoann Altmann (Heriot-Watt University, ISSS Lab.)
Comparison of sampling strategies for 3D scene reconstruction from sparse multispectral lidar waveforms

Abstract:
In this talk, we will compare sampling strategies and associated unsupervised Bayesian algorithms to reconstruct scenes sensed via sparse multispectral Lidar measurements. In the presence of a target, Lidar waveforms usually consist of a peak, whose position and amplitude depend on the target distance and reflectivity, respectively. Using multiple wavelengths (e.g., multiple laser sources), it becomes possible to spectrally discriminate the main objects in the scene, in addition to extracting range profiles. We compare different sampling strategies, illustrated via experiments conducted with real multispectral Lidar data, and the results demonstrate the possibility of inferring scene content from extremely sparse photon counts under different acquisition scenarios.

Jean-François Giovannelli (University of Bordeaux)
A Hierarchical Bayesian Strategy for Unsupervised Segmentation of Textured Image
joint work with Cornelia Vacar.

Abstract:
This talk deals with the problem of segmentation of textured images. It is specifically devoted to the case of oriented textures and focuses on unsupervised solutions. The images are composed of patches of textures that belong to a set of K possible classes, each described by a Gaussian random field. The labels that define the patches are modelled by a Potts field. The method relies on a hierarchical model and a Bayesian strategy to jointly estimate the labels, the textured images and the hyperparameters, including the texture parameters. The estimators are computed by a convergent procedure from samples of the posterior obtained through an MCMC algorithm (a Gibbs sampler including Perturbation-Optimization). A first numerical evaluation is presented.
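The label-update step of such a Gibbs sampler can be sketched compactly. The Python toy below updates a Potts label field given the image; as a simplifying assumption the classes are i.i.d. Gaussians with known means, whereas the talk models full textures with Gaussian random fields and also samples the hyperparameters.

```python
import numpy as np

def gibbs_label_sweep(z, y, means, sigma, beta, rng):
    """One Gibbs sweep over the Potts label field z given the image y.
    Each label is drawn from its full conditional: Gaussian likelihood
    times the Potts prior term beta * (number of agreeing neighbours)."""
    H, W = z.shape
    K = len(means)
    for i in range(H):
        for j in range(W):
            neigh = []
            if i > 0:     neigh.append(z[i - 1, j])
            if i < H - 1: neigh.append(z[i + 1, j])
            if j > 0:     neigh.append(z[i, j - 1])
            if j < W - 1: neigh.append(z[i, j + 1])
            logp = np.array([
                -(y[i, j] - means[k]) ** 2 / (2 * sigma ** 2)
                + beta * sum(n == k for n in neigh)
                for k in range(K)])
            p = np.exp(logp - logp.max())
            z[i, j] = rng.choice(K, p=p / p.sum())
    return z

# toy image: two halves with different means, plus noise
rng = np.random.default_rng(2)
y = np.concatenate([np.zeros((8, 16)), 3 * np.ones((8, 16))], axis=0)
y += 0.5 * rng.standard_normal(y.shape)
z = rng.integers(0, 2, size=y.shape)
for _ in range(10):
    z = gibbs_label_sweep(z, y, means=[0.0, 3.0], sigma=0.5, beta=1.0, rng=rng)
```

In the full hierarchical method this sweep alternates with updates of the texture parameters and hyperparameters, the harder steps that the Perturbation-Optimization sampler addresses.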

Abderrahim Halimi (Heriot-Watt University, IPAQS Lab.)
Restoration of Depth and Intensity Images using a Graph Laplacian Regularization

Abstract:
This paper presents a new algorithm for the joint restoration of depth and intensity images constructed using a gated SPAD-array imaging system. The three-dimensional (3D) data consist of two spatial dimensions and one temporal dimension comprising photon counts (i.e., histograms). The algorithm is based on two steps: (i) construction of a graph connecting temporally and spatially similar patches, and (ii) estimation of the depth and intensity values for pixels belonging to homogeneous spatial classes. The first step is achieved by building a graph representation of the 3D data. Special attention is given to the computational complexity of the algorithm, which is reduced by considering a graph-cut method and a patch representation of the image. The second step is achieved using a Fisher scoring gradient descent algorithm while accounting for the data statistics and the Laplacian regularization term. Results on synthetic and laboratory data show the benefit of the proposed strategy, which improves the quality of the estimated depth and intensity images.
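The effect of a graph Laplacian regulariser is easy to illustrate in isolation. The sketch below uses a quadratic data term so the solution is in closed form; this is a deliberate simplification, since the talk's algorithm connects similar patches and uses Fisher scoring with photon-count statistics.

```python
import numpy as np

def graph_laplacian(weights):
    """Combinatorial graph Laplacian L = D - W of a weighted adjacency matrix."""
    return np.diag(weights.sum(axis=1)) - weights

def laplacian_denoise(y, weights, mu):
    """Solve min_x ||x - y||^2 / 2 + (mu / 2) * x^T L x,
    i.e. x = (I + mu L)^{-1} y: values connected by strong edges
    are pulled towards each other."""
    L = graph_laplacian(weights)
    return np.linalg.solve(np.eye(len(y)) + mu * L, y)

# toy signal on a chain graph, with one outlier
n = 6
W = np.zeros((n, n))
for i in range(n - 1):
    W[i, i + 1] = W[i + 1, i] = 1.0
y = np.array([1.0, 1.2, 0.9, 5.0, 1.1, 1.0])
x = laplacian_denoise(y, W, mu=2.0)
```

Because the Laplacian has zero row sums, the smoothing redistributes intensity along graph edges without changing the total, which is a useful sanity check.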

Audrey Repetti (Heriot-Watt University, ISSS Lab.)
Non-convex blind deconvolution approach for sparse image processing

Abstract:
New generations of imaging devices aim to produce high resolution and high dynamic range images. In this context, the associated high-dimensional inverse problems can become extremely challenging from an algorithmic viewpoint. Moreover, the imaging procedure can be affected by unknown calibration kernels. This leads to the need to perform joint image reconstruction and calibration, and thus to solve non-convex blind deconvolution problems. In this work, we focus on the case where the observed object is affected by smooth calibration kernels, and we leverage a block-coordinate forward-backward algorithm specifically designed to minimise non-smooth, non-convex, high-dimensional objective functions.
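The block-coordinate forward-backward structure can be sketched on a 1D circular-convolution toy problem. This is only an illustration of the alternating forward (gradient) and backward (prox) steps: the l1 prox handles the sparse image block, while a simple non-negativity projection on the kernel stands in for the smooth-kernel constraint used in the talk.

```python
import numpy as np

def cconv(a, b):
    """Circular convolution via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def ccorr(a, b):
    """Circular correlation (adjoint of circular convolution with b)."""
    return np.real(np.fft.ifft(np.fft.fft(a) * np.conj(np.fft.fft(b))))

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def blind_deconv_bcfb(y, lam, n_iter=100):
    """Block-coordinate forward-backward for
        min_{x, h >= 0}  0.5 * ||x (*) h - y||^2 + lam * ||x||_1 .
    Each block takes a gradient step on the data term (step 1/L, with L
    the block Lipschitz constant) followed by a prox step."""
    n = len(y)
    x = y.copy()
    h = np.zeros(n); h[0] = 1.0            # identity-kernel initialisation
    for _ in range(n_iter):
        Lx = max(np.max(np.abs(np.fft.fft(h)) ** 2), 1e-12)
        r = cconv(x, h) - y
        x = soft_threshold(x - ccorr(r, h) / Lx, lam / Lx)
        Lh = max(np.max(np.abs(np.fft.fft(x)) ** 2), 1e-12)
        r = cconv(x, h) - y
        h = np.maximum(h - ccorr(r, x) / Lh, 0.0)
    return x, h

# toy problem: sparse spike train blurred by a short kernel
rng = np.random.default_rng(3)
n = 64
x_true = np.zeros(n); x_true[[5, 20, 41]] = [2.0, -1.5, 1.0]
h_true = np.zeros(n); h_true[:3] = [0.5, 0.3, 0.2]
y = cconv(x_true, h_true)
x_hat, h_hat = blind_deconv_bcfb(y, lam=0.01)
```

With these step sizes each block update is guaranteed not to increase the objective, which is the key property that makes the scheme usable on non-convex problems of this kind.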

Jenovah Rodrigues (University of Edinburgh)
Bayesian Inverse Problems with Heterogeneous Noise
joint work with Natalia Bochkina.

Abstract:
We study linear, ill-posed inverse problems in separable Hilbert spaces with noisy observations. A Bayesian solution with Gaussian regularising priors will be studied, the aim being to select the prior distribution in such a way that the solution achieves the optimal rate of convergence when the unknown function belongs to a Sobolev space. Consequently, we will focus on obtaining the rate of convergence, that is, the rate of contraction of the whole posterior distribution to the aforementioned unknown function. We consider a Gaussian noise model with heterogeneous variance, which is investigated using the spectral decomposition of the operator that defines the inverse problem.
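In the spectral basis of the operator the model decouples coefficient by coefficient, and the posterior is the standard conjugate Gaussian update applied with a different noise variance per coefficient. The sketch below illustrates this; the decay rates chosen for the toy example are arbitrary stand-ins, not the regimes analysed in the talk.

```python
import numpy as np

def spectral_posterior(y, b, prior_var, noise_var):
    """Coefficient-wise Gaussian posterior for the sequence model
    y_k = b_k * theta_k + noise_k, noise_k ~ N(0, noise_var_k)
    (heterogeneous), with prior theta_k ~ N(0, prior_var_k).
    Standard conjugate update, independent per spectral coefficient."""
    post_var = 1.0 / (b ** 2 / noise_var + 1.0 / prior_var)
    post_mean = post_var * b * y / noise_var
    return post_mean, post_var

# mildly ill-posed toy: decaying singular values, heterogeneous noise
k = np.arange(1, 6)
b = k ** -1.0                    # operator singular values
prior_var = k ** -2.0            # Sobolev-type prior decay
noise_var = 0.01 * k ** 0.5      # noise variance growing with frequency
theta = np.array([1.0, -0.5, 0.3, 0.1, 0.05])
y = b * theta                    # noiseless data, for the checks below
mean, var = spectral_posterior(y, b, prior_var, noise_var)
```

The contraction-rate question in the abstract amounts to choosing the prior decay so that, summed over k, the posterior spread and bias shrink at the optimal Sobolev rate.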

Xiaohao Cai (University College London, MSSL)
High-dimensional uncertainty estimation with sparse priors for radio interferometric imaging
joint work with Jason McEwen and Marcelo Pereyra.

Abstract:
In many fields high-dimensional inverse imaging problems are encountered. For example, imaging the raw data acquired by radio interferometric telescopes involves solving an ill-posed inverse problem to recover an image of the sky from noisy and incomplete Fourier measurements. Future telescopes, such as the Square Kilometre Array (SKA), will usher in a new big-data era for radio interferometry, with data rates comparable to world-wide internet traffic today. Sparse regularisation techniques are a powerful approach for solving these problems, typically yielding excellent reconstruction fidelity (e.g. Pratley et al. 2016). Moreover, by leveraging recent developments in convex optimisation, these techniques can be scaled to extremely large data-sets (e.g. Onose et al. 2016). However, such approaches typically recover point estimators only and uncertainty information is not quantified. Standard Markov Chain Monte Carlo (MCMC) techniques that scale to high-dimensional settings cannot support the sparse (non-differentiable) priors that have been shown to be highly effective in practice. We present work adapting the proximal Metropolis adjusted Langevin algorithm (P-MALA), developed recently by Pereyra (2016a), for radio interferometric imaging with sparse priors (Cai, Pereyra & McEwen 2017a), leveraging proximity operators from convex optimisation in an MCMC framework to recover the full posterior distribution of the sky image. While such an approach provides critical uncertainty information, scaling to extremely large data-sets, such as those anticipated from the SKA, is challenging. To address this issue we develop a technique to compute approximate local Bayesian credible intervals by post-processing the point (maximum a-posteriori) estimator recovered by solving the associated sparse regularisation problem (Cai, Pereyra & McEwen 2017b), leveraging recent results by Pereyra (2016b). 
This approach inherits the computational scalability of sparse regularisation techniques, while also providing critical uncertainty information. We demonstrate these techniques on simulated observations made by radio interferometric telescopes.
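Once a sampler such as P-MALA has produced posterior samples of the sky image, pixel-wise credible intervals are simple marginal quantiles. The sketch below shows this generic post-processing step on synthetic stand-in samples; it does not reproduce the MAP-based approximation of Cai, Pereyra & McEwen (2017b).

```python
import numpy as np

def pixelwise_credible_interval(samples, alpha=0.05):
    """Pixel-wise (1 - alpha) credible intervals from posterior image
    samples of shape (n_samples, H, W); returns lower and upper maps."""
    lo = np.quantile(samples, alpha / 2, axis=0)
    hi = np.quantile(samples, 1 - alpha / 2, axis=0)
    return lo, hi

# toy stand-in: posterior samples = a fixed image plus Gaussian jitter
rng = np.random.default_rng(4)
image = np.arange(12.0).reshape(3, 4)
samples = image + 0.1 * rng.standard_normal((2000, 3, 4))
lo, hi = pixelwise_credible_interval(samples)
```

The scalability issue raised in the abstract is precisely that producing enough samples for these quantiles is expensive in very high dimensions, which motivates the MAP-based approximation.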

Shengheng Liu (University of Edinburgh)
Image Reconstruction Algorithm for Electrical Impedance Tomography Based on Sparse Bayesian Learning

Abstract:
Electrical impedance tomography (EIT) is a promising agile imaging modality that allows estimation of the electrical conductivity distribution at the interior of an object from boundary measurements, which is attractive in a broad spectrum of biomedical and industrial continuous monitoring applications. As the inverse problem of EIT image reconstruction generally suffers from severe ill-posedness, existing approaches commonly resort to regularization with various penalty terms. In our work, the concept of sparse Bayesian learning is introduced to EIT imaging, which yields improved accuracy and robustness in the image reconstruction performance by exploiting signal structures in terms of their group sparsity and intra-group correlation. The effectiveness of the proposed algorithm is confirmed by the phantom test results.
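The basic sparse Bayesian learning iteration is compact enough to sketch. The Python version below is the classic independent-coefficient EM scheme (posterior update, then per-coefficient variance update); the talk's algorithm additionally exploits group sparsity and intra-group correlation, which this sketch omits.

```python
import numpy as np

def sbl_em(A, y, noise_var, n_iter=100):
    """EM iterations for sparse Bayesian learning: each unknown x_i has
    prior N(0, gamma_i); the E-step computes the Gaussian posterior of x,
    the M-step updates gamma_i = mu_i^2 + Sigma_ii. Variances of
    irrelevant coefficients shrink towards zero, yielding sparsity."""
    n = A.shape[1]
    gamma = np.ones(n)
    for _ in range(n_iter):
        Sigma = np.linalg.inv(np.diag(1.0 / gamma) + A.T @ A / noise_var)
        mu = Sigma @ A.T @ y / noise_var
        gamma = mu ** 2 + np.diag(Sigma)
    return mu, gamma

# toy: 2-sparse signal, underdetermined system
rng = np.random.default_rng(5)
A = rng.standard_normal((10, 20))
x_true = np.zeros(20); x_true[[2, 7]] = [1.5, -2.0]
y = A @ x_true + 0.01 * rng.standard_normal(10)
mu, gamma = sbl_em(A, y, noise_var=1e-4)
```

Unlike a fixed l1 penalty, the per-coefficient variances gamma_i are learned from the data, which is the source of the robustness claimed in the abstract.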

Matt Moores (University of Warwick)
Approximate Posterior Inference for the Inverse Temperature of a Hidden Potts Model

Abstract:
There are many approaches to Bayesian computation with intractable likelihoods, including the exchange algorithm and approximate Bayesian computation (ABC). A serious drawback of these algorithms is that they do not scale well for models with a large state space. An example in image analysis is the inverse temperature parameter of the Potts model, which governs the amount of smoothing and hence has a major influence over the resulting model fit. Inference for this parameter is difficult because neither the maximum likelihood estimator nor the Metropolis-Hastings ratio is available in closed form. Instead, we introduce a parametric surrogate model, which approximates the intractable likelihood function using an integral curve. Our surrogate model incorporates known properties of the likelihood, such as heteroscedasticity and critical temperature. This Bayesian indirect likelihood (BIL) algorithm has been implemented in the R package ‘bayesImageS’, available from CRAN.
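The surrogate idea can be illustrated with a self-contained toy. In the BIL method the curves mapping the inverse temperature to the mean and spread of the Potts sufficient statistic are fitted to pilot simulations; the logistic mean curve and heteroscedastic standard deviation below are invented stand-ins for such fitted curves, used only to show how the surrogate replaces the intractable likelihood inside a Metropolis sampler.

```python
import numpy as np

# Hypothetical pre-fitted surrogate: mean and sd of the Potts sufficient
# statistic S(z) as smooth functions of the inverse temperature beta.
def surrogate_mean(beta, s_max=1000.0):
    return s_max / (1.0 + np.exp(-3.0 * (beta - 1.0)))

def surrogate_sd(beta):
    # heteroscedastic: widest near the (assumed) critical temperature
    return 20.0 + 40.0 * np.exp(-((beta - 1.0) ** 2))

def bil_metropolis(s_obs, n_iter, rng, step=0.1):
    """Metropolis sampler for beta using the surrogate Gaussian likelihood
    N(s_obs | surrogate_mean(beta), surrogate_sd(beta)^2) in place of the
    intractable Potts likelihood, with a uniform prior on (0, 2)."""
    def loglik(beta):
        m, s = surrogate_mean(beta), surrogate_sd(beta)
        return -0.5 * ((s_obs - m) / s) ** 2 - np.log(s)
    beta, trace = 1.0, []
    for _ in range(n_iter):
        prop = beta + step * rng.standard_normal()
        if 0.0 < prop < 2.0:
            if np.log(rng.uniform()) < loglik(prop) - loglik(beta):
                beta = prop
        trace.append(beta)
    return np.array(trace)

rng = np.random.default_rng(6)
trace = bil_metropolis(s_obs=surrogate_mean(1.2), n_iter=5000, rng=rng)
```

Each Metropolis step now costs two cheap curve evaluations rather than a simulation of the full Potts field, which is what makes the approach scale to large state spaces.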

Yong Bao (University of Edinburgh)
Nonlinear temperature field reconstruction using acoustic tomography

Abstract:
Acoustic tomography is considered a promising technique for temperature field monitoring, with the advantages of being non-invasive, low-cost, easy to use and offering high temporal resolution. However, in the combustion process the gradient of the temperature field can be relatively large, so the commonly used straight-ray acoustic tomography may not provide accurate quantitative temperature field estimation, due to refraction effects. Therefore, a bent-ray model and a nonlinear reconstruction algorithm are applied, which allow the sound propagation trajectories and the temperature distribution to be reconstructed iteratively from the time-of-flight (TOF) measurements. Based on a local linearity assumption, each reconstruction iteration consists of two steps: a ray-tracing step, which calculates the ray trajectories from the current temperature field estimate, and a linear reconstruction step, which uses the SIRT method to update the temperature field estimate. The feasibility and effectiveness of the developed methods are validated in simulations.
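The linear reconstruction step of this scheme can be sketched on its own. The toy below applies SIRT to a fixed straight-ray system matrix on a 2x2 grid; in the bent-ray algorithm described in the abstract, the matrix would be re-traced through the current temperature estimate at each outer iteration.

```python
import numpy as np

def sirt_update(x, A, y, relax=0.9):
    """One SIRT iteration for the linearised tomography system A x = y:
    x <- x + relax * C A^T R (y - A x), where R and C hold the inverse
    row and column sums of A (the usual SIRT normalisations)."""
    row = A.sum(axis=1)
    col = A.sum(axis=0)
    R = np.where(row > 0, 1.0 / row, 0.0)
    C = np.where(col > 0, 1.0 / col, 0.0)
    return x + relax * C * (A.T @ (R * (y - A @ x)))

# toy 2x2 pixel grid probed by 4 axis-aligned rays (entries = path lengths)
A = np.array([[1.0, 1.0, 0.0, 0.0],    # ray through top row
              [0.0, 0.0, 1.0, 1.0],    # bottom row
              [1.0, 0.0, 1.0, 0.0],    # left column
              [0.0, 1.0, 0.0, 1.0]])   # right column
x_true = np.array([1.0, 2.0, 3.0, 4.0])   # slowness values per pixel
y = A @ x_true                              # TOF data
x = np.zeros(4)
for _ in range(200):
    x = sirt_update(x, A, y)
```

In the full nonlinear algorithm, converting the recovered slowness map to temperature and re-tracing the bent rays closes the outer loop.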