Pattern-coupled sparse Bayesian learning for recovery of time-varying sparse signals

In this paper, we consider the problem of recovering time-varying sparse signals whose sparsity patterns change slowly over time. We develop a new sparse Bayesian learning method for the recovery of time-varying sparse signals. A pattern-coupled hierarchical Gaussian prior model is introduced to capture the correlation of the temporal support of time-varying sparse signals. As in the conventional sparse Bayesian learning framework, a set of hyperparameters is introduced to control the sparsity of the signal coefficients. The notable difference is that, in our model, the prior for each coefficient involves not only its own hyperparameter, but also the hyperparameters associated with the coefficients of neighboring temporal observations. In this way, the sparsity patterns of adjacent (in time) coefficients are coupled through their shared hyperparameters. Hence the prior has the potential to encourage temporally correlated sparsity patterns without imposing any pre-defined structure on the recovered signals. Simulation results are provided to illustrate the effectiveness of the proposed algorithm.


I. INTRODUCTION
Compressive sensing is a recently emerged technique for signal sampling and reconstruction, the main purpose of which is to recover sparse signals from many fewer linear measurements [1]-[3]:

y = Ax,   (1)

where A ∈ R^{m×n} is the sampling matrix with m ≪ n, and x denotes the n-dimensional sparse signal with only K nonzero coefficients. Such a problem has been extensively studied, and a variety of algorithms that provide consistent recovery performance guarantees have been proposed, e.g. [1]-[6]. In practice, sparse signals usually have additional structure that can be exploited to enhance the recovery performance. For example, the atomic decomposition of multi-band signals [7] or audio signals [8] usually results in a block-sparse structure in which the nonzero coefficients occur in clusters. In addition, a discrete wavelet transform of an image naturally yields a tree structure of the wavelet coefficients, with each wavelet coefficient serving as a "parent" for a few "children" coefficients [9]. A number of algorithms, e.g., block-OMP [10], mixed ℓ2/ℓ1 norm minimization [11], group LASSO [12], StructOMP [13], and model-based CoSaMP [14], were proposed for the recovery of block-sparse signals, and their recovery behaviors were analyzed in terms of the model-based restricted isometry property (RIP) [11], [14] and the mutual coherence [10]. These analyses suggest that exploiting the inherent structure of sparse signals improves the recovery performance considerably. Such algorithms, albeit effective, require knowledge of the block structure (such as the locations and sizes of the blocks) of sparse signals a priori. In practice, however, this prior information is often unavailable. For example, we know that images have structured sparse representations, but the exact tree structure of the coefficients is unknown to us.
To address this difficulty, a hierarchical Bayesian "spike-and-slab" prior model was introduced in [9], [15] to encourage sparseness and promote cluster patterns simultaneously. Nevertheless, for both works [9], [15], the posterior distribution cannot be derived analytically, and a Markov chain Monte Carlo (MCMC) sampling method has to be employed for Bayesian inference. In [16], [17], a graphical prior, also referred to as the "Boltzmann machine", was used to model the statistical dependencies between atoms. Specifically, the Boltzmann machine is employed as a prior on the support of a sparse representation. However, the maximum a posteriori (MAP) estimator with such a prior involves an exhaustive search over all possible sparsity patterns. To overcome the intractability of the combinatorial search, a greedy method [16] and a variational mean-field approximation method [17] were proposed to approximate the MAP estimate. Recently, a sparse Bayesian learning method was proposed in [18] to address the sparse signal recovery problem when the block structure is unknown. In [18], the components of the signal are partitioned into a number of overlapping blocks and each block is assigned a Gaussian prior. An expanded model is then used to convert the overlapping structure into a block-diagonal structure so that the conventional block sparse Bayesian learning algorithm can be readily applied.
In this paper, we develop a new Bayesian method for block-sparse signal recovery when the block-sparse patterns are entirely unknown. Similar to the conventional sparse Bayesian learning approach [19], [20], a Bayesian hierarchical Gaussian framework is employed to model the sparse prior, in which a set of hyperparameters is introduced to characterize the Gaussian prior and control the sparsity of the signal components. Conventional sparse learning approaches, however, assume independence between the elements of the sparse signal. Specifically, each individual hyperparameter is associated independently with one coefficient of the sparse signal. To model the block-sparse patterns, in this paper, we propose a coupled hierarchical Gaussian framework in which the sparsity of each coefficient is controlled not only by its own hyperparameter, but also by the hyperparameters of its immediate neighbors. Such a prior encourages clustered patterns and suppresses "isolated coefficients" whose patterns differ from those of their neighboring coefficients. An expectation-maximization (EM) algorithm is developed to learn the hyperparameters characterizing the coupled hierarchical model and to estimate the block-sparse signal. Our proposed algorithm not only admits a simple iterative procedure for Bayesian inference, but also demonstrates superiority over other existing methods for block-sparse signal recovery.
The rest of the paper is organized as follows. In Section II, we introduce a new coupled hierarchical Gaussian framework to model the sparse prior and the dependencies among the signal components. An expectation-maximization (EM) algorithm is developed in Section III to learn the hyperparameters characterizing the coupled hierarchical model and to estimate the block-sparse signal. Section IV extends the proposed Bayesian inference method to the scenario where the observation noise variance is unknown. The relation of our work to other existing works is discussed in Section V, where an iterative reweighted algorithm is also proposed for the recovery of block-sparse signals. Simulation results are provided in Section VI, followed by concluding remarks in Section VII.

II. HIERARCHICAL PRIOR MODEL
We consider the problem of recovering a block-sparse signal x ∈ R^n from noise-corrupted measurements

y = Ax + w,   (2)

where A ∈ R^{m×n} (m < n) is the measurement matrix, and w is additive multivariate Gaussian noise with zero mean and covariance matrix σ^2 I. The signal x has a block-sparse structure, but the exact block pattern, such as the location and size of each block, is unavailable to us. In the conventional sparse Bayesian learning framework, to encourage sparsity of the estimated signal, x is assigned a Gaussian prior distribution

p(x|α) = ∏_{i=1}^n p(x_i|α_i),   (3)

where p(x_i|α_i) = N(x_i|0, α_i^{-1}), and α ≜ {α_i} are nonnegative hyperparameters controlling the sparsity of the signal x. Clearly, when α_i approaches infinity, the corresponding coefficient x_i becomes zero. By placing hyperpriors over {α_i}, the hyperparameters {α_i} can be learned by maximizing their posterior probability. We see that in the above conventional hierarchical Bayesian model, each hyperparameter is associated independently with one coefficient. The prior model assumes independence among the coefficients and has no potential to encourage clustered sparse solutions.
To exploit the statistical dependencies among coefficients, we propose a new hierarchical Bayesian model in which the prior for each coefficient involves not only its own hyperparameter, but also the hyperparameters of its immediate neighbors. Specifically, a prior over x is given by

p(x|α) = ∏_{i=1}^n p(x_i|α_i, α_{i+1}, α_{i-1}),   (4)

where

p(x_i|α_i, α_{i+1}, α_{i-1}) = N(x_i|0, (α_i + βα_{i+1} + βα_{i-1})^{-1}),   (5)

and we assume α_0 = 0 and α_{n+1} = 0 for the end points x_1 and x_n; 0 ≤ β ≤ 1 is a parameter indicating the relevance between the coefficient x_i and its neighboring coefficients {x_{i+1}, x_{i-1}}. To better understand this prior model, we can rewrite (5) as

p(x_i|α_i, α_{i+1}, α_{i-1}) ∝ N(x_i|0, α_i^{-1}) N(x_i|0, (βα_{i+1})^{-1}) N(x_i|0, (βα_{i-1})^{-1}).   (6)

We see that the prior for x_i is proportional to a product of three Gaussian distributions, with the coefficient x_i associated with one of the three hyperparameters {α_i, α_{i+1}, α_{i-1}} in each distribution. When β = 0, the prior distribution (6) reduces to the prior of the conventional sparse Bayesian learning. When β > 0, the sparsity of x_i depends not only on the hyperparameter α_i, but also on the neighboring hyperparameters {α_{i+1}, α_{i-1}}. Hence it can be expected that the sparsity patterns of neighboring coefficients are related to each other. Also, such a prior does not require knowledge of the block-sparse structure of the sparse signal. It naturally has the tendency to suppress isolated nonzero coefficients and encourage structured-sparse solutions.
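To make the coupling concrete, the per-coefficient prior precision α_i + βα_{i+1} + βα_{i-1} can be evaluated numerically. The following is a small NumPy sketch (the function name is ours, not from the paper):

```python
import numpy as np

def coupled_precisions(alpha, beta):
    """Per-coefficient prior precision alpha_i + beta*alpha_{i+1} + beta*alpha_{i-1},
    with the boundary convention alpha_0 = alpha_{n+1} = 0."""
    pad = np.concatenate(([0.0], alpha, [0.0]))
    return pad[1:-1] + beta * (pad[2:] + pad[:-2])

# with beta > 0, a large hyperparameter on the middle coefficient also
# shrinks (raises the prior precision of) its two neighbours
alpha = np.array([1.0, 1.0, 1e6, 1.0, 1.0])
prec_uncoupled = coupled_precisions(alpha, beta=0.0)  # conventional SBL prior
prec_coupled = coupled_precisions(alpha, beta=1.0)    # pattern-coupled prior
```

With β = 0 the precisions reduce to the individual hyperparameters, recovering the conventional sparse Bayesian learning prior.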
Following the conventional sparse Bayesian learning framework, we use Gamma distributions as hyperpriors over the hyperparameters {α_i}, i.e.

p(α) = ∏_{i=1}^n Gamma(α_i|a, b) = ∏_{i=1}^n Γ(a)^{-1} b^a α_i^{a-1} e^{-b α_i},   (7)

where Γ(a) = ∫_0^∞ t^{a-1} e^{-t} dt is the Gamma function. The choice of the Gamma hyperprior results in a learning process which tends to switch off most of the coefficients that are deemed irrelevant, and keeps only very few relevant coefficients to explain the data. This mechanism is also called "automatic relevance determination". In the conventional sparse Bayesian framework, to make the Gamma prior non-informative, very small values, e.g. 10^{-4}, are assigned to the two parameters a and b. In this paper, however, we use a more favorable prior which sets a larger a (say, a = 1) in order to achieve the desired "pruning" effect for our proposed hierarchical Bayesian model. Clearly, the Gamma prior with a larger a encourages large values of the hyperparameters, and therefore promotes the sparseness of the solution, since the larger the hyperparameter, the smaller the variance of the corresponding coefficient.

III. PROPOSED BAYESIAN INFERENCE ALGORITHM
We now proceed to develop a sparse Bayesian learning method for block-sparse signal recovery. For ease of exposition, we assume that the noise variance σ^2 is known a priori. Extension of the Bayesian inference to the case of unknown noise variance will be discussed in the next section. Based on the above hierarchical model, the posterior distribution of x can be computed as

p(x|α, y) ∝ p(y|x) p(x|α),   (8)

where α ≜ {α_i}, p(x|α) is given by (4), and

p(y|x) = N(y|Ax, σ^2 I).   (9)

It can be readily verified that the posterior p(x|α, y) follows a Gaussian distribution with its mean and covariance given respectively by

μ̂ = σ^{-2} Φ̂ A^T y,   Φ̂ = (σ^{-2} A^T A + D)^{-1},   (10)

where D is a diagonal matrix whose ith diagonal element is given by

D_{i,i} = α_i + βα_{i+1} + βα_{i-1}.   (11)

Given a set of estimated hyperparameters {α_i}, the maximum a posteriori (MAP) estimate of x is the mean of its posterior distribution, i.e.

x̂ = μ̂.   (12)
Our problem therefore reduces to estimating the set of hyperparameters {α_i}. With hyperpriors placed over α_i, learning the hyperparameters becomes a search for their posterior mode, i.e. maximization of the posterior probability p(α|y). A strategy to maximize the posterior probability is to exploit the expectation-maximization (EM) formulation, which treats the signal x as hidden variables and maximizes the expected value of the complete log-posterior of α, i.e. E_{x|y,α}[log p(α|x)], where the operator E_{x|y,α}[·] denotes the expectation with respect to the distribution p(x|y, α). Specifically, the EM algorithm produces a sequence of estimates α^{(t)}, t = 1, 2, 3, ..., by applying two alternating steps, namely, the E-step and the M-step [21]. E-Step: Given the current estimates of the hyperparameters α^{(t)} and the observed data y, the E-step requires computing the expected value (with respect to the missing variables x) of the complete log-posterior of α, which is also referred to as the Q-function; we have

Q(α|α^{(t)}) = E_{x|y,α^{(t)}}[log p(α|x)] = E_{x|y,α^{(t)}}[log p(x|α) + log p(α)] + c,   (13)

where c is a constant independent of α. Ignoring the terms independent of α, and recalling (4), the Q-function can be re-expressed as

Q(α|α^{(t)}) = Σ_{i=1}^n { (1/2) log(α_i + βα_{i+1} + βα_{i-1}) − (1/2) E[x_i^2] (α_i + βα_{i+1} + βα_{i-1}) } + log p(α).   (14)

Since the posterior p(x|y, α^{(t)}) is a multivariate Gaussian distribution with its mean and covariance matrix given by (10), we have

E[x_i^2] = μ̂_i^2 + φ̂_{i,i},   (15)

where μ̂_i denotes the ith entry of μ̂, φ̂_{i,i} denotes the ith diagonal element of the covariance matrix Φ̂, and μ̂ and Φ̂ are computed according to (10), with α replaced by the current estimate α^{(t)}. With the specified prior (7), the Q-function can eventually be written as

Q(α|α^{(t)}) = Σ_{i=1}^n { (1/2) log(α_i + βα_{i+1} + βα_{i-1}) − (1/2)(μ̂_i^2 + φ̂_{i,i})(α_i + βα_{i+1} + βα_{i-1}) + (a − 1) log α_i − b α_i } + c.   (16)

M-Step:
In the M-step of the EM algorithm, a new estimate of α is obtained by maximizing the Q-function, i.e.

α^{(t+1)} = arg max_α Q(α|α^{(t)}).   (17)
For the conventional sparse Bayesian learning, maximization of the Q-function can be decoupled into a number of separate optimizations in which each hyperparameter α_i is updated independently. This, however, is not the case for the problem considered here. We see that the hyperparameters in the Q-function (16) are entangled with each other through the logarithm term log(α_i + βα_{i+1} + βα_{i-1}). In this case, an analytical solution to the optimization (17) is difficult to obtain. Gradient descent methods can certainly be used to search for the optimal solution. Nevertheless, such a gradient-based search method, albeit effective, does not provide any insight into the learning process. Also, gradient-based methods involve higher computational complexity as compared with an analytical update rule. To overcome the drawbacks of gradient-based methods, we consider an alternative strategy which aims at finding a simple, analytical sub-optimal solution of (17). Such an analytical sub-optimal solution can be obtained by examining the optimality condition of (17). Suppose α* is the optimal solution of (17); then the first derivative of the Q-function with respect to α equals zero at the optimal point, i.e.

∂Q(α|α^{(t)})/∂α_i |_{α = α*} = 0, ∀i.
Combining (22)-(23), the optimality condition can be rearranged into the form α_i = κ_i/(ω_i + b), where κ_i collects the bounded terms. With a = 1 and b = 10^{-4}, a sub-optimal solution to (17) can be obtained as

α̂_i^{(t+1)} = κ/(ω_i^{(t)} + b), i = 1, ..., n,   (25)

where

ω_i^{(t)} ≜ (μ̂_i^2 + φ̂_{i,i}) + β(μ̂_{i+1}^2 + φ̂_{i+1,i+1}) + β(μ̂_{i-1}^2 + φ̂_{i-1,i-1}),

for some κ within the range 1 + c_0 ≥ κ ≥ 1. We see that the solution (25) provides a simple rule for the hyperparameter update. Also, notice that the update rule (25) resembles that of the conventional sparse Bayesian learning work [19], [20], except that the parameter ω_i equals μ̂_i^2 + φ̂_{i,i} for the conventional sparse Bayesian learning method, while for our case, ω_i is a weighted summation of μ̂_j^2 + φ̂_{j,j} for j = i − 1, i, i + 1.
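The sub-optimal M-step update can be sketched as follows (a NumPy illustration under our naming; `ex2` holds the posterior second moments μ̂_i^2 + φ̂_{i,i} delivered by the E-step):

```python
import numpy as np

def update_alpha(ex2, beta=1.0, b=1e-4, kappa=1.0):
    """Sub-optimal M-step update: alpha_i = kappa/(omega_i + b), where
    omega_i = E[x_i^2] + beta*E[x_{i+1}^2] + beta*E[x_{i-1}^2]."""
    pad = np.concatenate(([0.0], ex2, [0.0]))
    omega = pad[1:-1] + beta * (pad[2:] + pad[:-2])
    return kappa / (omega + b)

# index 1 is zero but adjacent to an active coefficient, so it keeps a
# small alpha; indices 2-3 are isolated zeros and get alpha near kappa/b
alpha_new = update_alpha(np.array([1.0, 0.0, 0.0, 0.0]))
```

The update shows the coupling directly: a coefficient is pruned only when its own energy and that of both neighbours are negligible.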
The E-step and the M-step are repeated until the change in the estimated hyperparameters between two consecutive iterations falls below a prescribed tolerance value ǫ, or a maximum number of iterations is reached. Remarks: Although the above algorithm employs a sub-optimal solution (25) to update the hyperparameters in the M-step, numerical results show that the sub-optimal update rule is quite effective and presents recovery performance similar to that of a gradient-based search method. This is because the sub-optimal solution (25) provides a reasonable estimate of the optimal solution when the parameter a is set away from zero, say, a = 1. Numerical results also suggest that the proposed algorithm is insensitive to the choice of the parameter κ in (25) as long as κ is within the range [a, a + c_0] for a properly chosen a. We simply set κ = a in our following simulations.
The update rule (25) not only admits a simple analytical form which is computationally efficient, it also provides insight into the EM algorithm. The Bayesian Occam's razor which contributes to the success of the conventional sparse Bayesian learning method also works here to automatically select an appropriately simple model. To see this, note that in the E-step, when computing the posterior mean and covariance matrix, a large hyperparameter α_i tends to suppress the values of the corresponding components {μ̂_j, φ̂_{j,j}} for j = i − 1, i, i + 1 (c.f. (10)). As a result, the value of ω_i becomes small, which in turn leads to a larger hyperparameter α_i (c.f. (25)). This self-reinforcing feedback mechanism keeps decreasing most of the entries in x̂ until they reach machine precision and become zero, while leaving only a few prominent nonzero entries surviving to explain the data. Meanwhile, we see that each hyperparameter α_i not only controls the sparseness of its own coefficient x_i, but also has an impact on the sparseness of the neighboring coefficients {x_{i+1}, x_{i-1}}. Therefore the proposed EM algorithm has the tendency to suppress isolated nonzero coefficients and encourage structured-sparse solutions.
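Putting the E-step and the M-step together, the following is a minimal NumPy sketch of the whole iteration for the known-noise-variance case; the toy example and all names are ours, and the assumed settings (κ = 1, 100 iterations, noise-free measurements with a small working σ^2) are illustrative only:

```python
import numpy as np

def pc_sbl(A, y, sigma2, beta=1.0, b=1e-4, kappa=1.0, n_iter=100):
    """Sketch of the pattern-coupled SBL iteration (known noise variance):
    alternate the E-step (posterior moments under the coupled prior) with
    the sub-optimal M-step update alpha_i = kappa/(omega_i + b)."""
    m, n = A.shape
    alpha = np.ones(n)
    AtA, Aty = A.T @ A, A.T @ y
    for _ in range(n_iter):
        # E-step: D_ii = alpha_i + beta*(alpha_{i+1} + alpha_{i-1})
        pad = np.concatenate(([0.0], alpha, [0.0]))
        d = pad[1:-1] + beta * (pad[2:] + pad[:-2])
        Phi = np.linalg.inv(AtA / sigma2 + np.diag(d))
        mu = Phi @ Aty / sigma2
        # M-step: couple the second moments of each coefficient and its neighbours
        ex2 = mu ** 2 + np.diag(Phi)
        pad2 = np.concatenate(([0.0], ex2, [0.0]))
        omega = pad2[1:-1] + beta * (pad2[2:] + pad2[:-2])
        alpha = kappa / (omega + b)
    return mu

# toy block-sparse example with one contiguous block of four nonzeros
rng = np.random.default_rng(0)
n, m = 50, 25
x_true = np.zeros(n)
x_true[10:14] = np.array([1.0, -1.2, 0.9, 1.5])
A = rng.standard_normal((m, n)) / np.sqrt(m)
y = A @ x_true
x_hat = pc_sbl(A, y, sigma2=1e-6)
```

In this underdetermined setting (m = 25, n = 50) the coupled updates concentrate the surviving hyperparameters on the true block.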

IV. BAYESIAN INFERENCE: UNKNOWN NOISE VARIANCE
In the previous section, for simplicity of exposition, we assumed that the noise variance σ^2 is known a priori. This assumption, however, may not hold in practice. In this section, we discuss how to extend our previously developed Bayesian inference method to the scenario where the noise variance σ^2 is unknown.
For notational convenience, define

γ ≜ 1/σ^2.   (26)

Following the conventional sparse Bayesian learning framework [19], we place a Gamma hyperprior over γ:

p(γ) = Gamma(γ|c, d) = Γ(c)^{-1} d^c γ^{c-1} e^{-dγ},   (27)

where the parameters c and d are set to small values, e.g. c = d = 10^{-4}. As derived in the previous section, given the hyperparameters α and the noise variance σ^2, the posterior p(x|α, γ, y) follows a Gaussian distribution with its mean and covariance matrix given by (10). The MAP estimate of x is the posterior mean. Our problem therefore becomes jointly estimating the hyperparameters α and the noise variance σ^2 (or equivalently γ). Again, the EM algorithm can be used to learn these parameters via maximizing their posterior probability p(α, γ|y). The alternating EM steps are briefly discussed below.

E-Step:
In the E-step, given the current estimates of the parameters {α^{(t)}, γ^{(t)}} and the observed data y, we compute the expected value (with respect to the missing variables x) of the complete log-posterior of {α, γ}, that is, E_{x|y,α^{(t)},γ^{(t)}}[log p(α, γ|x, y)], where the operator E_{x|y,α^{(t)},γ^{(t)}}[·] denotes the expectation with respect to the distribution p(x|y, α^{(t)}, γ^{(t)}). The Q-function can be expressed as a summation of two terms:

Q(α, γ|α^{(t)}, γ^{(t)}) = Q(α|α^{(t)}, γ^{(t)}) + Q(γ|α^{(t)}, γ^{(t)}),   (28)

where the first term has exactly the same form as the Q-function (13) obtained in the previous section, except with the known noise variance σ^2 replaced by the current estimate (σ^{(t)})^2 = 1/γ^{(t)}, and the second term is a function of the variable γ.

M-Step:
We observe that in the Q-function (28), the parameters α and γ to be learned are separated from each other. This allows the estimation of α and γ to be decoupled into the following two independent problems:

α^{(t+1)} = arg max_α Q(α|α^{(t)}, γ^{(t)}),   (29)

γ^{(t+1)} = arg max_γ Q(γ|α^{(t)}, γ^{(t)}).   (30)

The first optimization problem (29) has been thoroughly studied in the previous section, where we provided a simple analytical form (25) for the hyperparameter update. We now discuss the estimation of the parameter γ. Recalling (26), we have

Q(γ|α^{(t)}, γ^{(t)}) = (m/2) log γ − (γ/2) χ + (c − 1) log γ − dγ + const.   (31)

Computing the first derivative of (31) with respect to γ and setting it equal to zero, we get

γ = (m/2 + c − 1)/(χ/2 + d),   (32)

where χ ≜ E_{x|y,α^{(t)},γ^{(t)}}[‖y − Ax‖_2^2]. Note that the posterior p(x|y, α^{(t)}, γ^{(t)}) follows a Gaussian distribution with mean μ̂ and covariance matrix Φ̂, where μ̂ and Φ̂ are computed via (10) with γ (i.e. σ^2) and α replaced by the current estimates {γ^{(t)}, α^{(t)}}. Hence χ can be computed as

χ = ‖y − Aμ̂‖_2^2 + tr(AΦ̂A^T) =(a) ‖y − Aμ̂‖_2^2 + (1/γ^{(t)}) (n − Σ_{i=1}^n ρ_i φ̂_{i,i}),   (33)

where the last equality (a) follows from

Φ̂ = (γ^{(t)} A^T A + D̂)^{-1},   (34)

in which D̂ is given by (11) with α replaced by the current estimate α^{(t)}, and

ρ_i ≜ α_i^{(t)} + βα_{i+1}^{(t)} + βα_{i-1}^{(t)},   (35)

where α_0^{(t)} and α_{n+1}^{(t)} are set to zero when computing ρ_1 and ρ_n. Substituting (33) back into (32), a new estimate of γ, i.e. the optimal solution to (30), is obtained. This update has the same form as that of the conventional sparse Bayesian learning, for which ρ_i = α_i, while ρ_i is given by (35) for our algorithm.
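As a numerical sanity check (a NumPy sketch under our naming), the two ways of evaluating χ, the direct form ‖y − Aμ̂‖^2 + tr(AΦ̂A^T) and the form using the diagonal entries ρ_i, agree:

```python
import numpy as np

def noise_update(A, y, alpha, beta, gamma_t, c=1e-4, d_par=1e-4):
    """Sketch of the M-step noise-precision update (unknown-noise case):
    chi = ||y - A mu||^2 + (n - sum_i rho_i * Phi_ii)/gamma_t, then
    gamma <- (m/2 + c - 1)/(chi/2 + d_par).  The identity used for chi
    follows from Phi = (gamma_t * A^T A + diag(rho))^{-1}."""
    m, n = A.shape
    pad = np.concatenate(([0.0], alpha, [0.0]))
    rho = pad[1:-1] + beta * (pad[2:] + pad[:-2])   # diagonal entries of D
    Phi = np.linalg.inv(gamma_t * A.T @ A + np.diag(rho))
    mu = gamma_t * Phi @ A.T @ y
    chi = np.sum((y - A @ mu) ** 2) + (n - np.sum(rho * np.diag(Phi))) / gamma_t
    gamma_new = (m / 2 + c - 1) / (chi / 2 + d_par)
    return gamma_new, chi, mu, Phi

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 8))
y = rng.standard_normal(5)
g_new, chi, mu, Phi = noise_update(A, y, np.ones(8), beta=1.0, gamma_t=2.0)
```

The check below compares `chi` against the direct expectation, confirming the trace identity used to avoid forming AΦ̂A^T explicitly.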
Each iteration of the sparse Bayesian learning algorithm with unknown noise variance thus performs the E-step, followed by the α-update (25) and the γ-update in the M-step, until convergence.

V. DISCUSSIONS

A. Related Work
Sparse Bayesian learning is a powerful approach for regression, classification, and sparse representation. It was first introduced by Tipping in his pioneering work [19], where the regression and classification problem was addressed and a sparse Bayesian learning approach was developed to automatically remove irrelevant basis vectors and retain only a few 'relevant' vectors for prediction. Such an automatic relevance determination mechanism and the resulting sparse solution not only effectively avoid the overfitting problem, but also render superior regression and classification accuracy. Later, in [20], [22], sparse Bayesian learning was introduced to solve the sparse recovery problem. In a series of experiments, sparse Bayesian learning demonstrated superior stability for sparse signal recovery and uniform superiority over other methods.
In [23], sparse Bayesian learning was generalized to solve the simultaneous (block) sparse recovery problem, in which a group of coefficients sharing the same sparsity pattern are assigned a multivariate Gaussian prior parameterized by a common hyperparameter that controls the sparsity of this group of coefficients. Specifically, we have

p(x_i|α_i) = N(x_i|0, α_i^{-1} I),   (37)

where x_i denotes the ith group of coefficients that share the same sparsity pattern, and α_i is the hyperparameter controlling the sparsity of x_i. In [24], the above model was further improved to accommodate temporally correlated sources:

p(x_i|α_i, B_i) = N(x_i|0, α_i^{-1} B_i),   (38)

in which B_i is a positive definite matrix that captures the correlation structure of x_i. We see that, in both models [23], [24], each coefficient is associated with only one sparseness-controlling hyperparameter. This explicit assignment of each coefficient to a certain hyperparameter requires knowing the exact block sparsity pattern a priori. In contrast, in our hierarchical Bayesian model, each coefficient is associated with multiple hyperparameters, and the hyperparameters are coupled to each other through their commonly connected coefficients. Such a coupled hierarchical model has the potential to encourage block-sparse patterns without imposing any stringent or pre-specified constraints on the structure of the recovered signals. This property enables the proposed algorithm to learn the block-sparse structure in an automatic manner. Recently, Zhang and Rao extended the block sparse Bayesian learning framework to address the sparse signal recovery problem when the block structure is unknown [18].
In their work [18], the signal x is partitioned into a number of overlapping blocks {x_i} with identical block sizes, and each block x_i is assigned a Gaussian prior governed by its own hyperparameter. To address the overlapping issue, the original data model is converted into an expanded model which removes the overlapping structure by adding redundant columns to the original measurement matrix A and stacking all blocks {x_i} to form an augmented vector. In this way, the prior for the new augmented vector has a block-diagonal form similar to that of the conventional block sparse Bayesian learning. Thus conventional block sparse Bayesian learning algorithms such as [24] can be applied to the expanded model. This overlapping structure provides flexibility in defining a block-sparse pattern. Hence it works well even when the block structure is unknown. A critical difference between our work and [18] is that in our method, a prior is directly placed on the signal x, while for the method proposed in [18], a rigorous formulation of the prior for x is not available; instead, a prior is assigned to the augmented new signal constructed by stacking a number of overlapping blocks {x_i}.

B. A Proposed Iterative Reweighted Algorithm
Sparse Bayesian learning algorithms have a close connection with reweighted ℓ1 and ℓ2 methods. In fact, a dual-form analysis [25] reveals that sparse Bayesian learning can be considered a non-separable reweighted strategy minimizing a non-separable penalty function. Inspired by this insight, we here propose a reweighted ℓ1 method for the recovery of block-sparse signals when the block structure of the sparse signal is unknown.
Conventional reweighted ℓ1 methods iteratively minimize the following weighted ℓ1 objective (for simplicity, we consider the noise-free case):

x^{(t)} = arg min_x Σ_{i=1}^n w_i^{(t)} |x_i|  subject to  y = Ax,

where the weighting parameters are given by

w_i^{(t)} = 1/(|x_i^{(t-1)}| + ǫ), ∀i,

and ǫ is a pre-specified positive parameter. In a series of experiments [26], the above iterative reweighted algorithm outperforms the conventional ℓ1-minimization method by a considerable margin. The appealing idea of the iterative reweighted algorithm is that the weights are updated based on the previous estimate of the solution, with a large weight assigned to a coefficient whose estimate is already small and vice versa. As a result, the value of a coefficient which is assigned a large weight tends to be smaller (until it becomes negligible) in the next estimate. This explains why iterative reweighted algorithms usually yield sparser solutions than the conventional ℓ1-minimization method.
As discussed in the previous section, the basic idea of our proposed sparse Bayesian learning method is to establish a coupling mechanism such that the sparseness of neighboring coefficients is related. With this in mind, we slightly modify the weight update rule of the reweighted ℓ1 algorithm as follows:

w_i^{(t)} = 1/(|x_i^{(t-1)}| + β|x_{i+1}^{(t-1)}| + β|x_{i-1}^{(t-1)}| + ǫ), ∀i.

We see that, unlike the conventional update rule, the weight w_i^{(t)} is not only a function of its corresponding coefficient x_i^{(t-1)}, but also dependent on the neighboring coefficients {x_{i+1}^{(t-1)}, x_{i-1}^{(t-1)}}. In this way, a coupling effect between the sparsity patterns of neighboring coefficients is established. Hence the modified reweighted ℓ1-minimization algorithm has the potential to encourage block-sparse solutions. Experiments show that the proposed modified reweighted ℓ1 method yields considerably improved results over the conventional reweighted ℓ1 method in recovering block-sparse signals. It also serves as a good reference method for comparison with the proposed Bayesian sparse learning approach.
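The coupled weight update can be sketched as follows (NumPy; the function name is ours, and the β-weighted form of the coupling is an assumption chosen to mirror the coupled prior construction):

```python
import numpy as np

def coupled_weights(x_prev, beta=1.0, eps=1e-3):
    """Modified reweighted-l1 weights: each weight depends on the previous
    estimate of |x_i| and of its two neighbours, so a zero coefficient
    adjacent to a large one is penalised less than an isolated zero."""
    mag = np.abs(np.concatenate(([0.0], x_prev, [0.0])))
    coupled = mag[1:-1] + beta * (mag[2:] + mag[:-2])
    return 1.0 / (coupled + eps)

w = coupled_weights(np.array([0.0, 0.0, 1.0, 2.0, 0.0, 0.0]))
# w[1] < w[0]: index 1 borders the nonzero block, so its weight is reduced
# and the l1 penalty is less eager to zero it out than the isolated index 0
```

This is exactly the mechanism that discourages isolated nonzeros: coefficients next to an active block receive smaller penalties than coefficients far from any activity.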

VI. SIMULATION RESULTS
We now carry out experiments to illustrate the performance of our proposed algorithm, referred to as the pattern-coupled sparse Bayesian learning (PC-SBL) algorithm, and to compare it with other existing methods. The performance of the proposed algorithm is examined using both synthetic and real data 1. The parameters a and b for our proposed algorithm are set to a = 0.5 and b = 10^{-4} throughout our experiments.

A. Synthetic Data
Let us first consider the synthetic data case. In our simulations, we generate the block-sparse signal in a manner similar to [18]. Suppose the n-dimensional sparse signal contains K nonzero coefficients which are partitioned into L blocks with random sizes and random locations. Specifically, the block sizes {B_l}_{l=1}^L can be determined as follows: we randomly generate L positive random variables {r_l}_{l=1}^L with their sum equal to one, then set B_l = ⌈K r_l⌉ for the first L − 1 blocks and B_L = K − Σ_{l=1}^{L-1} B_l for the last block, where ⌈x⌉ denotes the ceiling operator that gives the smallest integer no smaller than x. Similarly, we partition the n-dimensional vector into L super-blocks using the same set of values {r_l}_{l=1}^L, and place each of the L nonzero blocks into its super-block with a randomly generated starting position (the starting position, however, is selected such that the nonzero block does not go beyond the super-block). Also, in our experiments, the nonzero coefficients of the sparse signal x and the entries of the measurement matrix A ∈ R^{m×n} are independently drawn from a normal distribution, and then the sparse signal x and the columns of A are normalized to unit norm.
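The signal construction described above can be sketched as follows (NumPy; we add a redraw guard, an assumption of ours, so that every block is nonempty and fits inside its super-block):

```python
import numpy as np

def gen_block_sparse(n, K, L, rng):
    """Generate an n-dim unit-norm signal with K nonzeros split into L
    blocks of random sizes, each placed at a random offset inside one of
    L disjoint super-blocks defined by the same random split."""
    while True:
        r = rng.random(L)
        r /= r.sum()                                   # L positive values summing to one
        B = [int(np.ceil(K * ri)) for ri in r[:-1]]
        B.append(K - sum(B))                           # last block takes the remainder
        S = [int(np.ceil(n * ri)) for ri in r[:-1]]
        S.append(n - sum(S))                           # super-block lengths
        if min(B) > 0 and all(s >= b for s, b in zip(S, B)):
            break                                      # redraw guard (our addition)
    x = np.zeros(n)
    pos = 0
    for s, b in zip(S, B):
        off = int(rng.integers(0, s - b + 1))          # block stays inside super-block
        x[pos + off: pos + off + b] = rng.standard_normal(b)
        pos += s
    return x / np.linalg.norm(x)

x = gen_block_sparse(n=100, K=25, L=4, rng=np.random.default_rng(7))
```

The disjoint super-blocks guarantee the L nonzero blocks never overlap, so the signal always has exactly K nonzero coefficients.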
Two metrics are used to evaluate the recovery performance of the respective algorithms, namely, the normalized mean squared error (NMSE) and the success rate. The NMSE is defined as ‖x̂ − x‖_2^2 / ‖x‖_2^2, where x̂ denotes the estimate of the true signal x. The success rate is computed as the ratio of the number of successful trials to the total number of independent runs. A trial is considered successful if the NMSE is no greater than 10^{-4}. In our simulations, the success rate is used to measure the recovery performance for the noiseless case, while the NMSE is employed to measure the recovery accuracy when the measurements are corrupted by additive noise.
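The two metrics can be implemented directly (a NumPy sketch; function names are ours):

```python
import numpy as np

def nmse(x_true, x_hat):
    """Normalized mean squared error ||x_hat - x_true||^2 / ||x_true||^2."""
    return np.sum((x_hat - x_true) ** 2) / np.sum(x_true ** 2)

def success_rate(trials, tol=1e-4):
    """Fraction of (x_true, x_hat) pairs whose NMSE is no greater than tol."""
    return float(np.mean([nmse(xt, xh) <= tol for xt, xh in trials]))
```

For example, a perfect reconstruction has NMSE 0, while estimating the all-zero vector gives NMSE 1.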
We first examine the recovery performance of our proposed algorithm (PC-SBL) under different choices of β. As indicated earlier, β (0 ≤ β ≤ 1) is a parameter quantifying the dependencies among neighboring coefficients. Fig. 1 depicts the success rates vs. the ratio m/n for different choices of β, where we set n = 100, K = 25, and L = 4. Results (in Fig. 1 and the following figures) are averaged over 1000 independent runs, with the measurement matrix and the sparse signal randomly generated for each run. The performance of the conventional sparse Bayesian learning method (denoted as "SBL") [19] and the basis pursuit method (denoted as "BP") [1], [2] is also included for comparison. We see that when β = 0, our proposed algorithm performs the same as the SBL. This is expected since, in the case of β = 0, our proposed algorithm reduces to the SBL. When β > 0, however, our proposed algorithm achieves a significant performance improvement (as compared with the SBL and BP) by exploiting the underlying block-sparse structure, even without knowing the exact locations and sizes of the nonzero blocks. We also observe that our proposed algorithm is not very sensitive to the choice of β as long as β > 0: it achieves similar success rates for different positive values of β. For simplicity, we set β = 1 throughout our following experiments.
Next, we compare our proposed algorithm with some other recently developed algorithms for block-sparse signal recovery, namely, the expanded block sparse Bayesian learning method (EBSBL) [18], the Boltzmann machine-based greedy pursuit algorithm (BM-MAP-OMP) [16], and the cluster-structured MCMC algorithm (CluSS-MCMC) [15]. The modified iterative reweighted ℓ1 method (denoted as MRL1) proposed in Section V is also examined in our simulations. Note that all these algorithms operate without knowledge of the block-sparse structure. The block sparse Bayesian learning method (denoted as BSBL) developed in [18] is included as well. Although the BSBL algorithm requires knowledge of the block-sparse structure, it still provides decent performance if the presumed block size, denoted by h, is properly selected. Fig. 2 plots the success rates of the respective algorithms as a function of the ratio m/n and the sparsity level K, respectively. Simulation results show that our proposed algorithm achieves the highest success rates among all algorithms and outperforms the other methods by a considerable margin. We also notice that the modified reweighted ℓ1 method (MRL1), although not as good as the proposed PC-SBL, still delivers acceptable performance which is comparable to the BSBL and better than the BM-MAP-OMP and the CluSS-MCMC.
We now consider the noisy case where the measurements are contaminated by additive noise. The observation noise is assumed multivariate Gaussian with zero mean and covariance matrix σ^2 I. Also, in our simulations, the noise variance is assumed unknown (except for the BM-MAP-OMP). The NMSEs of the respective algorithms as a function of the ratio m/n and the sparsity level K are plotted in Fig. 3, where the white Gaussian noise is added such that the signal-to-noise ratio (SNR), defined as SNR(dB) ≜ 20 log_10(‖Ax‖_2 / ‖w‖_2), is equal to 15 dB for each run. We see that our proposed algorithm yields a lower estimation error than the other methods in the presence of additive Gaussian noise.

B. Real Data
In this subsection, we carry out experiments on real-world images. As is well known, images have sparse (or approximately sparse) representations in certain over-complete bases, such as the wavelet or discrete cosine transform (DCT) basis. Moreover, these sparse representations usually demonstrate clustered structures, with the significant coefficients tending to be located together (see Fig. 6). Therefore, images are suitable data sets for evaluating the effectiveness of block-sparse signal recovery algorithms. We consider the two standard test images 'Lena' and 'Pirate' in our simulations. In our experiments, the image is processed in a column-wise manner: we sample each column of the 128 × 128 image using a randomly generated measurement matrix A ∈ R^{m×128}, recover each column from the m measurements, and reconstruct the image from the 128 estimated columns. Figs. 4 and 5 show the original images 'Lena' and 'Pirate' and the images reconstructed by the respective algorithms, where we set m = 64 and m = 80, respectively. We see that our proposed algorithm presents the finest image quality among all methods. The result, again, demonstrates its superiority over other existing methods. The reconstruction accuracy of the respective algorithms can also be observed from the reconstructed wavelet coefficients. We provide the true wavelet coefficients of one randomly selected column from the image 'Lena', together with the wavelet coefficients reconstructed by the respective algorithms. The results are depicted in Fig. 6. It can be seen that our proposed algorithm provides reconstructed coefficients that are closest to the ground truth.

VII. CONCLUSIONS
We developed a new Bayesian method for the recovery of block-sparse signals whose block-sparse structures are entirely unknown. A pattern-coupled hierarchical Gaussian prior model was introduced to characterize both the sparseness of the coefficients and the statistical dependencies between neighboring coefficients of the signal. The prior model, similar to the conventional sparse Bayesian learning model, employs a set of hyperparameters to control the sparsity of the signal coefficients. In our framework, however, the sparsity of each coefficient depends not only on its corresponding hyperparameter, but also on the neighboring hyperparameters. Such a prior has the potential to encourage clustered patterns and suppress isolated coefficients whose patterns differ from those of their neighbors. The hyperparameters, along with the sparse signal, can be estimated by maximizing their posterior probability via the expectation-maximization (EM) algorithm. Numerical results show that our proposed algorithm achieves a significant performance improvement over the conventional sparse Bayesian learning method by exploiting the underlying block-sparse structure, even without knowing the exact locations and sizes of the nonzero blocks. It also demonstrates superiority over other existing methods and provides state-of-the-art performance for block-sparse signal recovery.