
Dempster–Shafer fusion of multisensor signals in nonstationary Markovian context

Abstract

The latest developments in Markov model theory and the corresponding computational techniques have opened new avenues for image and signal modeling. In particular, the use of the Dempster–Shafer theory of evidence within Markov models has provided keys to several challenging problems that conventional hidden Markov models cannot handle. These problems mainly concern two situations: multisensor data, where the direct use of Dempster–Shafer fusion is unworkable; and nonstationary data, owing to the mismatch between the estimated stationary model and the actual data. For each of the two situations, the Dempster–Shafer combination rule has been applied, thanks to the triplet Markov model formalism, to overcome the drawbacks of standard Bayesian models. So far, however, the two situations have not been considered at the same time. In this article, we propose an evidential Markov chain that uses the Dempster–Shafer combination rule to bring the effect of contextual information into the segmentation of multisensor nonstationary data. We also provide the corresponding Expectation–Maximization parameter estimation and maximum posterior marginal restoration procedures. To validate the proposed model, experiments are conducted on synthetic multisensor data and noisy images. The obtained segmentation results are then compared to those of conventional approaches to bring out the efficiency of the present model.

Introduction

Hidden Markov chains (HMCs) have been used to solve a wide range of inverse problems arising in many application fields. They allow one to take the contextual information within data into account. Their success is mainly due to the existence of efficient Bayesian techniques that carry out the different estimation procedures with reasonable computational complexity. Hence, HMCs have been successfully applied in signal and image processing [1–3], biosciences [4], econometrics and finance [5], ecology [6], and communications [3]. Let us also mention [7–9] as pioneering articles.

Let $X = X_{1..N}$ be an unobservable process that takes its values in a finite set of classes $\Omega = \{\omega_1, \ldots, \omega_K\}$, and let $Y = Y_{1..N}$ be an observable process that takes its values in $\mathbb{R}$ and that can be seen as a noisy version of X. The problem is then to estimate X from Y. According to the HMC formalism, the hidden process X has a Markov distribution, which is why the model is qualified as "hidden Markov". The interest of taking X Markovian lies in the possibility of embedding the dependencies existing within Y into X. The observations $Y_n$ are then assumed to be independent conditionally on X, and the contextual information is considered only through X, which provides a well-designed formalism that takes the contextual information of the data into account while keeping the model simple and the necessary estimation procedures workable. Explicitly, according to HMCs, the joint distribution of (X, Y) is given by

$$p(x, y) = p(x_1)\, p(y_1 \mid x_1) \prod_{n=2}^{N} p(x_n \mid x_{n-1})\, p(y_n \mid x_n)$$
(1)

To estimate the hidden process of interest, one may use the Bayesian maximum posterior marginal (MPM) estimator, which minimizes the ratio of erroneously assigned sites and is given by

$$\hat{x}_n = \arg\max_{\omega \in \Omega} p(x_n = \omega \mid y)$$
(2)

The posterior distributions $p(x_n = \omega \mid y)$ are computable thanks to the recursive forward probabilities $\alpha_n(x_n) = p(y_{1..n}, x_n)$ and backward probabilities $\beta_n(x_n) = p(y_{n+1..N} \mid x_n)$, which can be computed iteratively as follows

$$\begin{cases} \alpha_1(x_1) = p(x_1, y_1) \\ \alpha_n(x_n) = \sum_{\omega \in \Omega} \alpha_{n-1}(x_{n-1} = \omega)\, p(x_n \mid x_{n-1} = \omega)\, p(y_n \mid x_n) \end{cases}$$
(3)
$$\begin{cases} \beta_N(x_N) = 1 \\ \beta_n(x_n) = \sum_{\omega \in \Omega} \beta_{n+1}(x_{n+1} = \omega)\, p(x_{n+1} = \omega \mid x_n)\, p(y_{n+1} \mid x_{n+1} = \omega) \end{cases}$$
(4)

The estimator in Equation (2) can then be derived as follows

$$\hat{x}_n = \arg\max_{\omega \in \Omega} \alpha_n(x_n = \omega)\, \beta_n(x_n = \omega)$$
(5)

Notice that each $\hat{x}_n$ is estimated using the whole observation $y_{1..N}$, and herein lies the interest of HMCs: they establish a link between all the variables $Y_{1..N}$, $X_{1..N}$ in such a way that each $\hat{x}_n$ is estimated using all of $y_{1..N}$, while keeping the computation linear in the data size N. Moreover, when the model parameters are unknown, they can be estimated thanks to algorithms such as Expectation–Maximization (EM) [10] and Iterative Conditional Estimation (ICE) [11].
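To make these recursions concrete, here is a minimal Python sketch (parameter names are ours, and Gaussian noise densities are assumed) of the forward-backward computations (3)-(4) and the MPM rule (5); each $\alpha_n$ and $\beta_n$ is rescaled to prevent numerical underflow on long chains, which leaves the argmax of Equation (5) unchanged.

```python
import numpy as np
from scipy.stats import norm

def mpm_hmc(y, p0, A, means, stds):
    """MPM restoration for a stationary HMC (Eqs. 3-5).

    y:     (N,) observed signal
    p0:    (K,) initial distribution p(x_1)
    A:     (K, K) transition matrix p(x_n | x_{n-1})
    means, stds: (K,) Gaussian noise parameters of p(y_n | x_n)
    """
    N, K = len(y), len(p0)
    b = norm.pdf(y[:, None], loc=means, scale=stds)  # b[n, k] = p(y_n | x_n = k)

    alpha = np.empty((N, K))
    alpha[0] = p0 * b[0]
    alpha[0] /= alpha[0].sum()                 # rescale against underflow
    for n in range(1, N):
        alpha[n] = (alpha[n - 1] @ A) * b[n]   # Eq. (3)
        alpha[n] /= alpha[n].sum()

    beta = np.empty((N, K))
    beta[-1] = 1.0
    for n in range(N - 2, -1, -1):
        beta[n] = A @ (beta[n + 1] * b[n + 1])  # Eq. (4)
        beta[n] /= beta[n].sum()

    post = alpha * beta                        # ∝ p(x_n = k | y)
    post /= post.sum(axis=1, keepdims=True)
    return post.argmax(axis=1)                 # Eq. (5)
```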

However, HMCs may become unworkable when the data to be modeled come from several heterogeneous sensors. Indeed, the conventional approaches involving Dempster–Shafer fusion (DS fusion) do not support Markov models, since such a fusion destroys Markovianity. Furthermore, standard HMCs have been shown to be inefficient when applied to nonstationary data in the unsupervised context. Consider, for instance, the situation where the distributions $p(x_{n+1} = \omega' \mid x_n = \omega)$ depend on n.

Example 1: Let us consider the problem of segmenting a satellite or airborne optical image into two classes $\Omega = \{\omega_1, \omega_2\}$, where ω1 is "forest" and ω2 is "water". Let $(1, \ldots, N)$ be the set of pixels of one line of such an image. The problem is then to estimate the class-map $x_{1..N}$ given the observed image line $y_{1..N}$. The link between a pixel observation and its corresponding class is given by the likelihood $p(y_n \mid x_n = \omega)$, and the prior knowledge is modeled by a transition distribution $p(x_{n+1} = \omega \mid x_n = \omega')$. MPM estimation of $x_{1..N}$ is then workable according to Equation (5). Let us now assume that $p(x_{n+1} = \omega \mid x_n = \omega')$ depends on n. The use of standard HMCs for unsupervised segmentation in such a situation (nonstationary hidden process) provides poor results [12], due to the mismatch between the estimated stationary model and the data.

The Dempster–Shafer theory of evidence [12–23] overcomes these drawbacks, thanks to the rich triplet Markov chain (TMC) formalism. In fact, the computation of the posterior distribution $p(x \mid y)$, crucial for Bayesian restoration, can be seen as the DS fusion of the prior knowledge given by $p(x) = p(x_1) \prod_{n=2}^{N} p(x_n \mid x_{n-1})$ with the observation knowledge given by $q(x) \propto p(y \mid x) = \prod_{n=1}^{N} p(y_n \mid x_n)$. The result of such a fusion being linked with a TMC, the estimation algorithms remain workable.

Let us now suppose that we deal with more than one sensor, and that there are some clouds in the image provided by one of the sensors. The possible presence of clouds can be modeled by a probability measure on $\mathcal{P}(\Omega) = \{\emptyset, \{\omega_1\}, \{\omega_2\}, \Omega\}$, which is a mass function [24].

The theory of evidence can then be utilized in the following situations where the use of conventional HMCs poses difficulties:

  1. When the prior distribution p(x) is not known with precision (for instance, when $p(x_n \mid x_{n-1})$ depends on n), it can be replaced by a belief function obtained from p(x) that models the uncertainty, or lack of accurate knowledge, of p(x). This belief function can then be merged with q(x) defined above via DS fusion. The result of this fusion is a Bayesian probability on $\Omega^N$ that can be seen as a generalization of the posterior probability. Even if the latter is not necessarily Markovian, it is a marginal of a Markov chain and, thus, Bayesian restoration remains workable [14].

  2. When the prior distribution is known with precision, but one of the sensors is very noisy and its probabilistic noise densities are unreliable; or when both the prior and noise distributions are exactly known, but there is an extra class in the data provided by one of the sensors.

Let us mention some previous studies that tackled the problem of using the theory of evidence in the Markovian context. The authors in [12] use evidential priors to deal with a strongly nonstationary prior distribution. The mass function then extends the Bayesian priors, and MPM restoration remains feasible thanks to the TMC formalism. The resulting model is called the evidential hidden Markov chain (EHMC). In [24], DS fusion is achieved in the hidden Markov field context to merge images from several sensors, some of which may be unreliable. The aim of this article is to extend these previous applications of the theory of evidence in the Markovian context to situations where p(x) is nonstationary and one of the sensors is unreliable at the same time. Hence, the use of the DS rule serves two purposes: on the one hand, nonstationary data are modeled through an evidential model that accounts for the uncertainty of the data distribution via evidential priors; on the other hand, the sensors' data are fused in the Markovian context to improve the segmentation accuracy.

The remainder of the article is organized as follows: The following section summarizes the pairwise and triplet Markov chains formalisms. Section “Markov models and Dempster–Shafer theory of evidence” deals with the new trends in using the Dempster–Shafer theory of evidence in Markovian context. In Section “Multisensor nonstationary hidden Markov chains”, we define the multisensor nonstationary HMC model and provide its corresponding MPM restoration and parameters estimation procedures. Experimental results are presented and discussed in Section “Experimental results”. Finally, concluding remarks and some possible future improvements end the article.

Pairwise and triplet Markov chains

In this section, we briefly describe the pairwise Markov chains (PMCs) and the TMCs, which are more general than the conventional HMCs defined in the previous section. In fact, PMCs in which the hidden process is not Markovian exist; PMCs are therefore strictly more general than HMCs. Similarly, TMCs form a family that is strictly more general than PMCs, since TMCs that are not PMCs exist and have been used to deal with numerous situations that neither HMCs nor PMCs can support [25].

PMCs

Let Z = (X,Y). Z is said to be a PMC if Z is itself Markovian. Therefore, Z is a PMC if and only if its joint distribution is given by

$$p(z) = p(z_1) \prod_{n=2}^{N} p(z_n \mid z_{n-1})$$
(6)

An HMC defined by (1) can then be seen as a particular PMC in which $p(z_n \mid z_{n-1}) = p(x_n \mid x_{n-1})\, p(y_n \mid x_n)$, while in a more general PMC, such a probability is given by $p(z_n \mid z_{n-1}) = p(x_n \mid x_{n-1}, y_{n-1})\, p(y_n \mid x_{n-1}, y_{n-1}, x_n)$. This shows the greater generality of PMCs over HMCs at the local level. At the global level, on the other hand, the noise distribution $p(y \mid x)$ is of Markovian form in a PMC, whereas it is given by the simple formula $p(y \mid x) = \prod_{n=1}^{N} p(y_n \mid x_n)$ in a conventional HMC. The posterior marginals $p(x_n \mid y)$, needed for MPM restoration, are computable as in the HMC model, thanks to the same forward functions $\alpha_n(x_n) = p(y_1, \ldots, y_n, x_n)$ and the extended backward functions $\beta_n(x_n) = p(y_{n+1}, \ldots, y_N \mid x_n, y_n)$, which can be evaluated recursively as follows

$$\begin{cases} \alpha_1(x_1) = p(x_1, y_1) \\ \alpha_n(x_n) = \sum_{\omega \in \Omega} \alpha_{n-1}(x_{n-1} = \omega)\, p(z_n \mid x_{n-1} = \omega, y_{n-1}) \end{cases}$$
(7)
$$\begin{cases} \beta_N(x_N) = 1 \\ \beta_n(x_n) = \sum_{\omega \in \Omega} \beta_{n+1}(x_{n+1} = \omega)\, p(x_{n+1} = \omega, y_{n+1} \mid z_n) \end{cases}$$
(8)

Besides, when the model parameters are unknown, they can be estimated via adapted variants of the same algorithms used for HMCs. For further details, the reader may refer to [26] where detailed related theoretical developments and experiments are provided.

TMCs

Z is referred to as a TMC if there exists a third process $U = U_{1..N}$, where each $U_n$ takes its values in a finite set $\Lambda = \{\lambda_1, \ldots, \lambda_M\}$, such that the triplet T = (X, Y, U) is a Markov chain. Let V = (U, X); T = (V, Y) is then a PMC. This makes the computation of the distributions $p(x_n \mid y)$, required to perform MPM restoration, achievable even when Z is not Markovian. This shows the greater generality of TMCs over PMCs, which are themselves more general than conventional HMCs [25].

The underlying process U may be used in all the situations where Z is a marginal of a Markov chain. For instance, U has been used to model the switches of the hidden process X [25], which constitutes, in some manner, a way to deal with the nonstationary aspect of X discussed in the previous section. The resulting model is called a "switching hidden Markov chain". Similarly, U has been used to handle switches of the noise distributions in [27] and the semi-Markovianity of the hidden process in [28]. Another significant use of U arises within the Dempster–Shafer theory of evidence, to permit its use in the Markovian context, as described in the following section.

Markov models and Dempster–Shafer theory of evidence

In this section, we briefly present the so-called theory of evidence, introduced by Dempster in the 1960s and reformulated by Shafer in the 1970s [13]. Let $\Omega = \{\omega_1, \ldots, \omega_K\}$ be a frame of discernment containing K exclusive hypotheses, and let us consider the set of all subsets of this frame, $\mathcal{P}(\Omega) = \{\emptyset, \{\omega_1\}, \ldots, \Omega\}$. To understand the basics of the theory of evidence and establish a link with the aim of the present framework, let us return to the satellite or airborne optical image segmentation problem of Example 1. The frame of discernment then corresponds to the set of hidden classes $\Omega = \{\omega_1, \omega_2\}$, where ω1 and ω2 correspond to "forest" and "water", respectively. The exclusive hypotheses model the fact that a pixel of the image belongs either to the class "forest" or to the class "water". By considering compound hypotheses, the Dempster–Shafer theory of evidence offers an elegant formalism to model uncertainty, lack of precision, or even missing information about the pixel classes through the so-called mass function. A mass function m is a function from $\mathcal{P}(\Omega)$ to $\mathbb{R}^+$ that fulfills the following conditions

$$\begin{cases} m(\emptyset) = 0 \\ \sum_{A \in \mathcal{P}(\Omega)} m(A) = 1 \end{cases}$$
(9)

Notice that when the mass function m vanishes outside the singletons, it becomes a probability, also called a "Bayesian" or "probabilistic" mass, in contrast to an "evidential" mass in the vocabulary of the theory of evidence. Hence, the mass function can be considered a generalization of the probability measure. This generalization is the key notion that will be used to extend the conventional Bayesian models.

In the satellite segmentation problem, let us consider a pixelwise classification. The prior knowledge of the classes ω1 and ω2 may then be modeled by a Bayesian mass m1 defined on Ω by $m_1(x_n = \omega_k) = p(x_n = \omega_k)$. On the other hand, the observation knowledge can be modeled through a Bayesian mass m2 defined on Ω by $m_2(x_n = \omega_k) \propto p(y_n \mid x_n = \omega_k)$. The DS fusion then gives $m(x_n) = (m_1 \oplus m_2)(x_n) = p(x_n \mid y_n)$. Suppose now that there are some clouds in the image provided by the sensor [24, 29]. We then have three observable classes: "forest", "water", and "clouds". The concept of evidential mass may be introduced here to model the fact that we cannot see through clouds. Explicitly, we may use a mass function m2 defined on $\{\{\omega_1\}, \{\omega_2\}, \Omega\}$ by $m_2(\{\omega_1\}) \propto p(y_n \mid \omega_1)$, $m_2(\{\omega_2\}) \propto p(y_n \mid \omega_2)$, and $m_2(\Omega) \propto p(y_n \mid \Omega)$. Similarly, the DS fusion of the two masses $m(x_n) = (m_1 \oplus m_2)(x_n)$ gives a Bayesian mass that generalizes the posterior probability $p(x_n \mid y_n)$, where the DS combination rule of a set of mass functions $m_{1..R}$ is given by the following formula

$$m(A) = (m_1 \oplus \cdots \oplus m_R)(A) \propto \sum_{B_1 \cap \cdots \cap B_R = A \neq \emptyset}\; \prod_{i=1}^{R} m_i(B_i)$$
(10)

Let us bring up the following intuitive result: when one of the mass functions is Bayesian (probabilistic), the DS fusion result given by Equation (10) is also Bayesian [14].
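As an illustration of Equation (10), the sketch below combines mass functions represented as dictionaries mapping frozensets of class labels to masses (this representation is our own choice, not the authors'); running it on a Bayesian m1 and an evidential m2 lets one check the property above, since only singleton focal sets survive the intersections.

```python
from itertools import product

def ds_combine(*masses):
    """Dempster's rule (Eq. 10): m(A) ∝ Σ_{∩ B_i = A ≠ ∅} Π_i m_i(B_i)."""
    fused = {}
    for focal_sets in product(*(m.items() for m in masses)):
        inter = frozenset.intersection(*(B for B, _ in focal_sets))
        if inter:                                # discard conflicting combinations
            w = 1.0
            for _, v in focal_sets:
                w *= v
            fused[inter] = fused.get(inter, 0.0) + w
    total = sum(fused.values())                  # normalize by 1 - conflict
    return {A: v / total for A, v in fused.items()}

# Bayesian prior on {forest, water} fused with an evidential observation
# mass that puts weight on the whole frame Ω (clouds); values illustrative:
m1 = {frozenset({'forest'}): 0.7, frozenset({'water'}): 0.3}
m2 = {frozenset({'forest'}): 0.2, frozenset({'water'}): 0.3,
      frozenset({'forest', 'water'}): 0.5}
print(ds_combine(m1, m2))   # Bayesian result: forest ≈ 0.671, water ≈ 0.329
```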

Let us come back to the satellite or airborne optical image segmentation and now assume that the prior distribution of X is of Markovian form. The prior knowledge is then given by a Bayesian mass m1 defined on $\Omega^N$ as follows: $m_1(x) \propto p(x_1)\, p(x_2 \mid x_1) \cdots p(x_N \mid x_{N-1})$. On the other hand, the observation knowledge is modeled through a Bayesian mass m2 defined on $\Omega^N$ as follows: $m_2(x) \propto p(y_1 \mid x_1) \cdots p(y_N \mid x_N)$. The interesting result is that the DS fusion of the two masses is the posterior probability $p(x \mid y)$:

$$m(x) = (m_1 \oplus m_2)(x) = \frac{p(x)\, p(y \mid x)}{\sum_{x' \in \Omega^N} p(x')\, p(y \mid x')} = p(x \mid y)$$
(11)

The next step is then to take advantage of both the Markov theory and the theory of evidence by generalizing the Bayesian masses to evidential ones and exploiting the result presented above. However, when at least one of the masses involved in the DS fusion is evidential, the result of the fusion may no longer be a Markov chain, and thus Bayesian restoration is not directly applicable. The recent TMCs surmount this difficulty through the introduction of the third underlying process U, as stated in the previous section. In fact, it has been shown that the DS fusion of the mass functions defined above is linked with a TMC, so the calculation of the posterior distributions, necessary to achieve the different estimation procedures, remains possible.

As mentioned before, this result was used in [12] to consider evidential priors in order to take the nonstationary aspect of p(x) into account. The authors thus define an "evidential HMC model" that extends the standard HMCs to the monosensor nonstationary case. On the other hand, the authors in [24] define a multisensor hidden Markov field that can solve the problem of image segmentation in the presence of clouds discussed in this section. The aim of this article is to consider the problem where both situations occur simultaneously. Hence, the Dempster–Shafer theory of evidence is used, on the one hand, to model the lack of precision in the prior distribution and, on the other hand, to consider more than one sensor, one of which may be unreliable, like the sensor producing the cloudy image.

Multisensor nonstationary hidden Markov chains

In this section, we describe our new model, called the multisensor nonstationary hidden Markov chain (MN-HMC), and we give its corresponding MPM restoration and EM parameter estimation procedures.

Model definition

Let $X = X_{1..N}$ be a hidden process that takes its values in a finite set of classes $\Omega = \{\omega_1, \ldots, \omega_K\}$ and that is to be estimated from a family of observable processes $Y = Y^{1..R}$ provided by R independent sensors $S^{1..R}$, where $Y^r = Y^r_{1..N}$ and where each $Y^r_n$ takes its values in $\mathbb{R}$. Let us now assume that the realization of X is governed by a nonstationary Markov chain, in the sense that the distributions $p(x_{n+1} = \omega \mid x_n = \omega')$ are not known with precision (or depend on n). Let us also suppose that at least one of the sensors $S^{1..R}$ is Bayesian. Without loss of generality, let $S^1$ be such a sensor. The MPM Bayesian restoration using the whole observable process $Y = Y^{1..R}$ is then workable, and the number of elementary operations required for its evaluation is linear in the data size N (the proof can be found in [14]). The model Z = (X, Y) is then called an MN-HMC. The MPM restoration of the hidden process of an MN-HMC can be achieved thanks to the DS fusion of the different mass functions involved in the model:

$$m(x) = (m_0 \oplus \cdots \oplus m_R)(x) = p(x \mid y)$$
(12)

Although the result of this fusion is not necessarily a Markov chain, it is a marginal of a TMC [14]; hence, the posterior marginal distributions $p(x_n \mid y)$ are computable.

Let us now describe how the data are modeled within this framework.

First, the nonstationary Markov chain governing the hidden process X is replaced by a stationary evidential Markov chain m0. Let $U^0 = U^0_{1..N}$ be a hidden process that takes its values in $\mathcal{P}(\Omega) = \{\emptyset, \{\omega_1\}, \ldots, \Omega\}$; m0 is then defined on $[\mathcal{P}(\Omega)]^N$.

Second, the sensor $S^1$ being Bayesian, its mass m1 is defined on $\Omega^N$ as follows: $m_1(x) \propto p(y^1_1 \mid x_1)\, p(y^1_2 \mid x_2) \cdots p(y^1_N \mid x_N)$.

For each sensor, we derive the corresponding observation mass. For example, let us suppose that sensor $S^2$ is only sensitive to class ω1. The corresponding evidential mass is then defined on $[\Lambda^2]^N$ with $\Lambda^2 = \{\{\omega_1\}, \{\omega_2, \ldots, \omega_K\}\}$. For this purpose, we consider an underlying process $U^2 = U^2_{1..N}$ that takes its values in $\Lambda^2$.

Finally, for some sensors, we may consider more than one mass function. Let us consider the sensor of Example 1 and assume that there are some clouds in the image it provides. The possible presence of clouds can then be modeled by a probability measure on $\mathcal{P}(\Omega) = \{\emptyset, \{\omega_1\}, \{\omega_2\}, \Omega\}$, which is a mass function. More explicitly, we consider an underlying process $U^3 = U^3_{1..N}$ that takes its values in $\{\{\omega_1\}, \{\omega_2\}, \Omega\}$. Furthermore, we can consider an additional mass function to model the prior knowledge of cloud presence regardless of the sensor observation. This shows again the greater generality of evidential models over Bayesian ones.

Let $M = \{m_0, \ldots, m_S\}$ be the set of masses that model the data under consideration. On the one hand, we know that the DS fusion of all these mass functions is linked with a TMC; on the other hand, the result of this fusion is the posterior probability $p(x \mid y)$. Hence, the posterior distributions $p(x_n \mid y)$ necessary to achieve the MPM restoration are computable.

Unsupervised segmentation of MN-HMCs

Let us consider the following image segmentation problem that extends the one given in Example 1.

Example 2: Let us consider the problem of segmenting a satellite or airborne optical image into two classes $\Omega = \{\omega_1, \omega_2\}$, where ω1 corresponds to "forest" and ω2 to "water". Let $S^1$ and $S^2$ be two sensors, where $S^1$ is a RADAR sensor and $S^2$ an optical one. Let $(1, \ldots, N)$ be the set of sites of one line of the ground truth image X. The problem is then to estimate the class-map $x_{1..N}$ given the observations $y^1_{1..N}$ and $y^2_{1..N}$ provided by $S^1$ and $S^2$, respectively.

Sensor 1: The digital image observations $y^1_{1..N}$ are related to the hidden classes through the noise probability density functions (pdfs) $p(y^1_n \mid x_n = \omega_1)$ and $p(y^1_n \mid x_n = \omega_2)$.

Sensor 2: Let us assume that there are some clouds in the image $y^2_{1..N}$ provided by $S^2$. We then have three possibilities: "forest", "water", and "clouds". The observation at each pixel is related to its class through the noise pdfs $p(y^2_n \mid x_n = \omega_1)$, $p(y^2_n \mid x_n = \omega_2)$, and $p(y^2_n \mid x_n = \text{"clouds"})$, respectively.

The problem then consists in estimating the class-map $x_{1..N}$ using both sensors' images in the Markovian context, in such a way that the nonstationary aspect of p(x) is taken into account. For this purpose, we use the proposed model. First, we have to gather all the pieces of information that will later be fused to achieve the MPM restoration.

Data modeling

First, the nonstationary Markov chain governing X is replaced by a stationary evidential Markov chain m0. Let $U^1 = U^1_{1..N}$ be a hidden process that takes its values in $\mathcal{P}(\Omega) = \{\emptyset, \{\omega_1\}, \{\omega_2\}, \Omega\}$ and that models the lack of knowledge of the priors of X; m0 is then defined on $[\mathcal{P}(\Omega)]^N$ by $m_0(u^1) = m_0(u^1_1) \prod_{n=2}^{N} m_0(u^1_n \mid u^1_{n-1})$.

The first sensor being Bayesian, the observation knowledge may be modeled through a probabilistic mass function m1 defined by $m_1(x) = \prod_{n=1}^{N} m_1(x_n) \propto \prod_{n=1}^{N} p(y^1_n \mid x_n)$. Accordingly, the MPM restoration could be carried out based on this sensor alone.

The possible presence of clouds may be modeled by a probability measure on $\mathcal{P}(\Omega) = \{\emptyset, \{\omega_1\}, \{\omega_2\}, \Omega\}$, which is a mass function given by $m_2(u^2) \propto \prod_{n=1}^{N} m_2(u^2_n)$, where $u^2 \in [\mathcal{P}(\Omega)]^N$ and $m_2(\{\omega_1\}) \propto p(y^2_n \mid u^2_n = \{\omega_1\})$, $m_2(\{\omega_2\}) \propto p(y^2_n \mid u^2_n = \{\omega_2\})$, and $m_2(\Omega) \propto p(y^2_n \mid u^2_n = \Omega)$. For this particular sensor, we may use another evidential Markov mass to model the contextual information corresponding to the presence of clouds: in fact, pixels neighboring cloud pixels are more likely to belong to the cloud class than other pixels are. Let m3 be such a mass, defined on $[\mathcal{P}(\Omega)]^N$ by $m_3(u^2) = m_3(u^2_1) \prod_{n=2}^{N} m_3(u^2_n \mid u^2_{n-1})$.
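For concreteness, the two evidential Markov masses can be written as Python dictionaries over the non-empty subsets of Ω (this encoding is ours, and the numerical values are purely illustrative; in practice these masses are estimated, as described below). The same SUBSETS encoding is reused in the sketches that follow.

```python
S1, S2, FULL = frozenset({0}), frozenset({1}), frozenset({0, 1})
SUBSETS = [S1, S2, FULL]      # P(Ω) \ {∅} for Ω = {ω1, ω2}, coded 0 and 1

# stationary evidential transition mass m0 replacing the nonstationary priors;
# some weight is deliberately kept on the compound hypothesis Ω
m0 = {A: {S1: 0.45, S2: 0.45, FULL: 0.10} for A in SUBSETS}

# contextual cloud mass m3: a site tends to stay in the same subset as its neighbor
m3 = {A: {B: (0.90 if B == A else 0.05) for B in SUBSETS} for A in SUBSETS}
```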

MPM restoration of hidden data

We have defined a family of four masses m0, m1, m2, and m3 that represents all the knowledge we have about the data. Let T = (X, U, Y) = (V, Y) be a TMC, where $X = X_{1..N}$ with each $x_n \in \Omega$, $U = (U^1, U^2)$ with each $u_n \in [\mathcal{P}(\Omega)]^2$, and $Y = (Y^1, Y^2)$ with each $y_n \in \mathbb{R}^2$. The distribution of T is then given by

$$p(t_1, \ldots, t_N) \propto q_1(t_1, t_2)\, q_2(t_2, t_3) \cdots q_{N-1}(t_{N-1}, t_N)$$
(13)

where

$$\begin{cases} q_1(t_1, t_2) = \mathbf{1}_{x_1 \in u^1_1 \cap u^2_1}\; \mathbf{1}_{x_2 \in u^1_2 \cap u^2_2}\; m_0(u^1_1)\, m_0(u^1_2 \mid u^1_1)\, m_1(x_1)\, m_1(x_2)\, m_2(u^2_1)\, m_2(u^2_2)\, m_3(u^2_1)\, m_3(u^2_2 \mid u^2_1) \\ q_n(t_n, t_{n+1}) = \mathbf{1}_{x_{n+1} \in u^1_{n+1} \cap u^2_{n+1}}\; m_0(u^1_{n+1} \mid u^1_n)\, m_1(x_{n+1})\, m_2(u^2_{n+1})\, m_3(u^2_{n+1} \mid u^2_n) \end{cases}$$
(14)

Then, the DS fusion result $m = (m_0 \oplus m_1 \oplus m_2 \oplus m_3)$ is the posterior distribution $p(x \mid y)$ defined by the joint distribution p(x, y), which is itself a marginal distribution of the TMC defined above. Hence, the posterior marginal distributions $p(x_n, u_n \mid y)$ are computable, and so are the probabilities of interest $p(x_n \mid y)$.

Finally, to achieve the MPM restoration of the hidden process, we can either use the theorem giving a general definition of a Markov chain [12] or the well-known forward $\alpha_n(v_n) = p(y_1, \ldots, y_n, v_n)$ and backward $\beta_n(v_n) = p(y_{n+1}, \ldots, y_N \mid v_n)$ recursive functions adapted to the multisensor nonstationary context. In this article, we adopt the latter option. For the use of the Markov chain theorem, the reader may refer to [12].

The forward and backward functions can be calculated in the following iterative ways:

$$\begin{cases} \alpha_1(v_1) = p(v_1, y_1) \\ \alpha_n(v_n) = \sum_{v_{n-1}} \alpha_{n-1}(v_{n-1})\, p(v_n \mid v_{n-1})\, m_1(x_n)\, m_2(u^2_n) \end{cases}$$
(15)
$$\begin{cases} \beta_N(v_N) = 1 \\ \beta_n(v_n) = \sum_{v_{n+1}} \beta_{n+1}(v_{n+1})\, p(v_{n+1} \mid v_n)\, m_1(x_{n+1})\, m_2(u^2_{n+1}) \end{cases}$$
(16)

where $p(v_{n+1} \mid v_n) \propto \mathbf{1}_{x_n \in u^1_n \cap u^2_n}\; \mathbf{1}_{x_{n+1} \in u^1_{n+1} \cap u^2_{n+1}}\; m_0(u^1_{n+1} \mid u^1_n)\, m_3(u^2_{n+1} \mid u^2_n)$ and $p(v_1, y_1) \propto \mathbf{1}_{x_1 \in u^1_1 \cap u^2_1}\; m_0(u^1_1)\, m_1(x_1)\, m_2(u^2_1)\, m_3(u^2_1)$.

The posterior marginal distributions $p(v_n \mid y)$ and $p(x_n \mid y)$ can then be computed as follows

$$p(v_n \mid y) \propto \alpha_n(v_n)\, \beta_n(v_n)$$
(17)
$$p(x_n \mid y) = \sum_{u_n} p(v_n = (x_n, u_n) \mid y)$$
(18)

On the other hand, the posterior transition and marginal distributions necessary for the parameter estimation can be calculated according to

$$\psi(v_n, v_{n+1}) = p(v_n, v_{n+1} \mid y) \propto \alpha_n(v_n)\, p(v_{n+1} \mid v_n)\, m_1(x_{n+1})\, m_2(u^2_{n+1})\, \beta_{n+1}(v_{n+1})$$
(19)
$$\xi(v_n) = p(v_n \mid y) = \sum_{v_{n-1}} \psi(v_{n-1}, v_n)$$
(20)
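The recursions (15)-(20) can be sketched as follows for the two-class case of Example 2, reusing the SUBSETS encoding and the masses m0 and m3 introduced above. The triplet states v = (x, u1, u2) are enumerated under the constraint x ∈ u1 ∩ u2 imposed by the indicator of Equation (14); Gaussian noise and, for simplicity, uniform initial masses are assumed, which is our own choice.

```python
import numpy as np
from scipy.stats import norm

# triplet states v = (x, u1, u2); the indicator in Eq. (14) imposes x ∈ u1 ∩ u2
STATES = [(x, u1, u2) for x in (0, 1)
          for u1 in SUBSETS for u2 in SUBSETS
          if x in u1 and x in u2]

def tmc_posteriors(y1, y2, m0, m3, mu1, sig1, mu2, sig2):
    """Normalized forward-backward over v, Eqs. (15)-(20).

    mu1[x], sig1[x]: Gaussian parameters of the Bayesian mass m1 (sensor 1)
    mu2[A], sig2[A]: Gaussian parameters of the evidential mass m2 (sensor 2)
    """
    N, M = len(y1), len(STATES)
    # pointwise emission weights m1(x_n) * m2(u2_n)
    em = np.array([[norm.pdf(y1[n], mu1[x], sig1[x]) *
                    norm.pdf(y2[n], mu2[u2], sig2[u2])
                    for (x, u1, u2) in STATES] for n in range(N)])
    # transition weights m0(u1'|u1) * m3(u2'|u2), Eq. (14)
    T = np.array([[m0[u1][w1] * m3[u2][w2] for (_, w1, w2) in STATES]
                  for (_, u1, u2) in STATES])

    alpha = np.empty((N, M)); beta = np.empty((N, M))
    alpha[0] = em[0] / em[0].sum()          # uniform initial masses assumed here
    for n in range(1, N):
        alpha[n] = (alpha[n - 1] @ T) * em[n]
        alpha[n] /= alpha[n].sum()          # rescaling against underflow
    beta[-1] = 1.0
    for n in range(N - 2, -1, -1):
        beta[n] = T @ (beta[n + 1] * em[n + 1])
        beta[n] /= beta[n].sum()

    post_v = alpha * beta                   # Eq. (17), after renormalization
    post_v /= post_v.sum(axis=1, keepdims=True)
    post_x = np.zeros((N, 2))               # Eq. (18): marginalize u out
    for j, (x, _, _) in enumerate(STATES):
        post_x[:, x] += post_v[:, j]
    # Eq. (19): pairwise posteriors psi(v_n, v_{n+1}), normalized per site
    psi = alpha[:-1, :, None] * T[None] * (em[1:] * beta[1:])[:, None, :]
    psi /= psi.sum(axis=(1, 2), keepdims=True)
    return post_v, post_x, psi
```

The MPM estimate of Equation (2) is then simply post_x.argmax(axis=1).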

Model parameters estimation

To estimate the model parameters, we may use the well-known EM algorithm, its stochastic version SEM, or the ICE algorithm. Let us mention that all of these have been used in the triplet Markov model context [25, 28]. Let us also mention [30], where a brief comparative study of the EM and ICE algorithms is conducted in the hidden, pairwise, and triplet Markov model contexts. As we deal with a particular TMC model, we only need to adapt each of them to the situation addressed in this article.

In this article, we propose to adapt the EM algorithm to the MN-HMC case. For this purpose, let us consider the TMC T = (X, U, Y) = (V, Y) defined above. For the sake of simplicity, we consider the Gaussian case, where the noise pdfs $p(y^1_n \mid x_n)$ and $p(y^2_n \mid u^2_n)$ are of Gaussian form. According to Example 2 (where Ω = {ω1, ω2}), we have to estimate the following parameters: the evidential mass $m_{ij} = m_0(u^1_n = \lambda_i, u^1_{n+1} = \lambda_j)$ defined on $[\mathcal{P}(\Omega)]^2$; the K = 2 means $\mu^1_{1..K}$ and standard deviations $\sigma^1_{1..K}$ of the Gaussian pdfs governing m1; the K + 1 = 3 means $\mu^2_{1..K+1}$ and standard deviations $\sigma^2_{1..K+1}$ of the Gaussian densities governing m2; and the evidential transition mass $c_{ij} = m_3(u^2_{n+1} = \lambda_j \mid u^2_n = \lambda_i)$ defined on $[\mathcal{P}(\Omega)]^2$. The parameter estimation process is accomplished iteratively as follows:

  1. Consider an initial set of parameters $\Theta^0 = (m_{ij}^0, c_{ij}^0, (\mu^1_{1..K}, \sigma^1_{1..K}, \mu^2_{1..K+1}, \sigma^2_{1..K+1})^0)$.

  2. For each iteration q, compute $\Theta^{q+1}$ from $\Theta^q$ and Y in two steps:

    a. Step E: compute $\alpha_n^q(v_n)$ and $\beta_n^q(v_n)$, then derive $\psi^q(v_n, v_{n+1})$ and $\xi^q(v_n)$.

    b. Step M: compute $\Theta^{q+1}$ as follows.

$$(\mu^1_k)^{q+1} = \frac{\sum_{n=1}^{N} \sum_{u_n} \xi^q(x_n = \omega_k, u_n)\, y^1_n}{\sum_{n=1}^{N} \sum_{u_n} \xi^q(x_n = \omega_k, u_n)}$$
    (21)
$$(\mu^2_k)^{q+1} = \frac{\sum_{n=1}^{N} \sum_{v_n / u^2_n = \lambda_k} \xi^q(v_n)\, y^2_n}{\sum_{n=1}^{N} \sum_{v_n / u^2_n = \lambda_k} \xi^q(v_n)}$$
(22)
$$\left[(\sigma^1_k)^{q+1}\right]^2 = \frac{\sum_{n=1}^{N} \sum_{u_n} \xi^q(x_n = \omega_k, u_n)\, \left[y^1_n - (\mu^1_k)^{q+1}\right]^2}{\sum_{n=1}^{N} \sum_{u_n} \xi^q(x_n = \omega_k, u_n)}$$
(23)
$$\left[(\sigma^2_k)^{q+1}\right]^2 = \frac{\sum_{n=1}^{N} \sum_{v_n / u^2_n = \lambda_k} \xi^q(v_n)\, \left[y^2_n - (\mu^2_k)^{q+1}\right]^2}{\sum_{n=1}^{N} \sum_{v_n / u^2_n = \lambda_k} \xi^q(v_n)}$$
(24)
$$m_{ij}^{q+1} = \frac{1}{(N-1)\, \#i\, \#j} \sum_{n=1}^{N-1} \sum_{v_n, v_{n+1} / u^1_n = \lambda_i,\, u^1_{n+1} = \lambda_j} \psi^q(v_n, v_{n+1})$$
(25)
$$c_{ij}^{q+1} = \frac{\sum_{n=1}^{N-1} \sum_{v_n, v_{n+1} / u^2_n = \lambda_i,\, u^2_{n+1} = \lambda_j} \psi^q(v_n, v_{n+1})}{\sum_{n=1}^{N-1} \sum_{v_n / u^2_n = \lambda_i} \xi^q(v_n)}$$
(26)

where #i denotes the cardinality of the subset $\lambda_i$.
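Given the E-step quantities $\psi^q$ and $\xi^q$ of Equations (19)-(20), the re-estimation formulas (21)-(26) translate into the sketch below (same hypothetical STATES and SUBSETS encoding as in the previous sketch; the organization of the sums over states is ours).

```python
import numpy as np

def em_m_step(y1, y2, psi, xi):
    """One M-step of the adapted EM (Eqs. 21-26), given the E-step
    posteriors psi (N-1, M, M) and xi (N, M) over the STATES encoding."""
    def gaussian_update(y, select):
        # weighted empirical mean and standard deviation, Eqs. (21)-(24)
        w = sum(xi[:, j] for j, v in enumerate(STATES) if select(v))
        mu = (w * y).sum() / w.sum()
        return mu, np.sqrt((w * (y - mu) ** 2).sum() / w.sum())

    mu1, sig1 = zip(*[gaussian_update(y1, lambda v, k=k: v[0] == k)
                      for k in (0, 1)])
    mu2, sig2 = {}, {}
    for lam in SUBSETS:
        mu2[lam], sig2[lam] = gaussian_update(y2, lambda v, l=lam: v[2] == l)

    # evidential joint mass m_ij (Eq. 25) and transition mass c_ij (Eq. 26)
    N = len(y1)
    m0 = {a: {b: 0.0 for b in SUBSETS} for a in SUBSETS}
    c = {a: {b: 0.0 for b in SUBSETS} for a in SUBSETS}
    for i, (_, a1, a2) in enumerate(STATES):
        for j, (_, b1, b2) in enumerate(STATES):
            s = psi[:, i, j].sum()
            m0[a1][b1] += s
            c[a2][b2] += s
    for a in SUBSETS:
        for b in SUBSETS:
            m0[a][b] /= (N - 1) * len(a) * len(b)     # Eq. (25), #i = len(a)
        row = sum(xi[:-1, j] for j, v in enumerate(STATES)
                  if v[2] == a).sum()                 # denominator of Eq. (26)
        for b in SUBSETS:
            c[a][b] /= row
    return mu1, sig1, mu2, sig2, m0, c
```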

Experimental results

This section is devoted to the application of the MN-HMC, described above, to the segmentation of multisensor nonstationary signals.

For this purpose, let us consider the following situation:

Let $S^1$ and $S^2$ be two sensors providing two different observable signals $y^1 = y^1_{1..N}$ and $y^2 = y^2_{1..N}$ that can be seen, in some manner, as noisy versions of a ground truth $x = x_{1..N}$, with the following difficulties: X, which is hidden and is to be estimated in some way from $Y = (Y^1, Y^2)$, is a realization of an unknown Markov chain that may be strongly nonstationary. Moreover, the signal $Y^2$ presents an extra class, which may correspond to clouds in a SPOT image or even to missing observations at some signal sites. This class then represents the ignorance attached to the fact that we cannot decide whether such a site belongs to any of the classes. Let B be the process governing the presence of this extra class. In this study, this process is assumed to be Markovian.

Thereafter, we consider two series of experiments: in the first, we deal with synthetic multisensor nonstationary data, whereas in the second, we consider two nonstationary class images that we noise in a suitable manner to fit the multisensor nonstationary context. To assess the efficiency of the proposed model, MPM restoration is also carried out according to some conventional models.

Unsupervised segmentation of synthetic multisensor nonstationary data

In this experiment, we deal with sampled multisensor nonstationary HMCs. Let T = (X, U^2, Y) be such a model with Ω = {ω1, ω2}, N = 5000, and the following matrices

$$M_1 = \begin{pmatrix} 0.98 & 0.02 \\ 0.02 & 0.98 \end{pmatrix}, \quad M_2 = \begin{pmatrix} 0.6 & 0.4 \\ 0.4 & 0.6 \end{pmatrix}, \quad \text{and} \quad J = \begin{pmatrix} 0.98 & 0.02 \\ 0.01 & 0.99 \end{pmatrix}.$$

The hidden process $X = X_{1..N}$ is nonstationary in the following way: given the two matrices M1 and M2 and a value of $s = 1, 2, \ldots$, split X into blocks $X^i = (X_{(i-1)s+1}, X_{(i-1)s+2}, \ldots, X_{is})$. The realization of X fulfills the following:

▪ The distribution of $X_1$ is (0.5, 0.5).

▪ M1 is the transition matrix within the odd-numbered blocks $X^1, X^3, \ldots$

▪ M2 is the transition matrix within the even-numbered blocks $X^2, X^4, \ldots$

A realization of X is then sampled. On the other hand, the realization of B, modeling the presence of the extra class in the second sensor's signal, is sampled as a Markov chain with transition matrix J. Accordingly, the corresponding realization of $U^2$ can be derived as follows

$$\begin{cases} u^2_n = \Omega & \text{if } b_n = 1 \\ u^2_n = x_n & \text{otherwise} \end{cases}$$
(27)

Given the realizations of X and $U^2$, the observed signals are then sampled in the following manner

Sensor 1: $y^1 = y^1_{1..N}$ is sampled according to $p(y^1_1 \mid x_1)\, p(y^1_2 \mid x_2) \cdots p(y^1_N \mid x_N)$, where $p(y^1_n \mid x_n = \omega_1)$ is Gaussian with mean 0 and standard deviation 1, and $p(y^1_n \mid x_n = \omega_2)$ is Gaussian with mean 1 and standard deviation 1.

Sensor 2: $y^2 = y^2_{1..N}$ is sampled according to $p(y^2_1 \mid u^2_1)\, p(y^2_2 \mid u^2_2) \cdots p(y^2_N \mid u^2_N)$, where $p(y^2_n \mid u^2_n = \{\omega_1\})$ is Gaussian with mean 0 and standard deviation 1, $p(y^2_n \mid u^2_n = \{\omega_2\})$ is Gaussian with mean 1 and standard deviation 1, and $p(y^2_n \mid u^2_n = \{\omega_1, \omega_2\})$ is Gaussian with mean 2 and standard deviation 1.
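The sampling procedure just described may be reproduced as follows (a sketch; the block-boundary convention for alternating M1 and M2 and the initial value of B are our own choices).

```python
import numpy as np

rng = np.random.default_rng(0)
N, s = 5000, 500                              # s controls the nonstationarity strength
M1 = np.array([[0.98, 0.02], [0.02, 0.98]])
M2 = np.array([[0.60, 0.40], [0.40, 0.60]])
J  = np.array([[0.98, 0.02], [0.01, 0.99]])   # rows/cols: (extra class, ordinary site)

# nonstationary hidden chain X: blocks of s sites alternate between M1 and M2
x = np.empty(N, dtype=int)
x[0] = rng.choice(2)                          # p(x_1) = (0.5, 0.5)
for n in range(1, N):
    M = M1 if (n // s) % 2 == 0 else M2       # block parity; boundary convention is ours
    x[n] = rng.choice(2, p=M[x[n - 1]])

# Markov process B flagging the extra class of sensor 2; b[n] == 0 plays the
# role of b_n = 1 in Eq. (27) (state '1' = extra class present)
b = np.empty(N, dtype=int)
b[0] = 1                                      # start on an ordinary site (our choice)
for n in range(1, N):
    b[n] = rng.choice(2, p=J[b[n - 1]])

u2 = np.where(b == 0, 2, x)                   # Eq. (27): code {ω1}=0, {ω2}=1, Ω=2
y1 = rng.normal(loc=x.astype(float), scale=1.0)    # sensor 1: N(0,1) and N(1,1)
y2 = rng.normal(loc=u2.astype(float), scale=1.0)   # sensor 2: N(0,1), N(1,1), N(2,1)
```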

As these experiments aim at assessing the proposed model against conventional ones, MPM restoration of the hidden process of interest is also carried out according to the following family of approaches (in increasing order of complexity):

▪ The segmentation is accomplished based on the first sensor's signal alone, using K-Means, the standard HMC, and the evidential HMC.

▪ Then, we achieve MPM segmentation based on both sensors' signals, using the multisensor stationary HMC (MS-HMC), analogous to the one proposed in the Markov field context [24], and the MN-HMC formalism proposed in this article.

One hundred experiments are carried out for each value of s. The obtained segmentation results are summarized in Table 1. From the obtained segmentation error ratios, we can make the following remarks:

▪ The segmentation error ratios obtained by the application of K-Means and HMCs do not depend on the value of s.

▪ The higher the value of s, the stronger the data nonstationarity. Since we deal with unsupervised segmentation, the evidential HMC outperforms the conventional HMC. This is due to the fact that the evidential HMC takes the nonstationary aspect of the data into account. Similarly, the multisensor nonstationary HMC outperforms the multisensor stationary HMC for high values of s.

▪ Both multisensor models outperform the standard HMC. This difference is due to the fact that they utilize more data than the conventional HMC does. This also shows that the EM procedure yields better parameter estimates in the unsupervised context when more data are available.

▪ The importance of considering data nonstationarity versus the importance of the amount of data used to achieve the segmentation can be evaluated by comparing the error ratios of the EHMC with those of the MS-HMC. When the data are strongly nonstationary (high values of s), the EHMC provides better results than the MS-HMC, since the latter does not take nonstationarity into account. On the other hand, for low values of s, the MS-HMC yields better results than the EHMC.

▪ Overall, the proposed MN-HMC outperforms the previous models. In fact, the proposed model utilizes more than one observed signal while taking the nonstationary aspect of the hidden process into account. It benefits, on the one hand, from the advantages of contextual information through the use of Markov theory and, on the other hand, from the benefits of the theory of evidence, which permits considering uncertainty in the hidden class priors and data fusion at the same time.

Table 1 Error ratios (%) of unsupervised segmentation of synthetic multisensor nonstationary data

Unsupervised segmentation of multisensor noisy nonstationary images

In this experiment, we apply the proposed model to multisensor noisy nonstationary images. To make our chain model applicable to images, these are converted from and to 1D signals using the Hilbert-Peano scan [31].
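The Hilbert-Peano scan itself is not detailed in the article; one standard construction of such a space-filling ordering, the classical index-to-coordinates conversion for the Hilbert curve, is sketched below for a $2^k \times 2^k$ image.

```python
import numpy as np

def hilbert_xy(order, d):
    """Coordinates of the d-th point on a Hilbert curve of side n = 2**order."""
    n = 1 << order
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                      # rotate the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

def image_to_chain(img):
    """Flatten a 2^k x 2^k image into a 1D signal along the Hilbert-Peano scan."""
    order = int(np.log2(img.shape[0]))
    idx = [hilbert_xy(order, d) for d in range(img.size)]
    return np.array([img[i, j] for i, j in idx]), idx

def chain_to_image(chain, idx, side):
    """Inverse mapping: reshape the restored 1D signal back into an image."""
    img = np.empty((side, side), dtype=chain.dtype)
    for v, (i, j) in zip(chain, idx):
        img[i, j] = v
    return img
```

For a 128 × 128 image such as the ones used below, order = 7; the restored 1D signal is mapped back to an image with chain_to_image.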

For this set of experiments, we consider two nonstationary class images: the "Nazca bird" image (Figure 1a) and the "squares" image (Figure 2a).

Figure 1

Unsupervised segmentation of multisensor noisy image 1. (a) Original class image X = x. (b) First sensor observed image Y1 = y1. (c) Second sensor observed image Y2 = y2. (d) Image restoration using K-Means, error ratio τK-Means = 39.4%. (e) Image MPM restoration using HMCs, error ratio τHMC = 16.5%. (f) Image MPM restoration using evidential HMCs, error ratio τEHMC = 11.1%. (g) MPM estimation of the auxiliary process U1 according to evidential HMCs. (h) Image MPM restoration using multisensor stationary HMCs, error ratio τMS-HMC = 12%. (i) Estimation of the underlying U2 process according to multisensor stationary HMCs, error ratio τMS-HMC = 0.1%. (j) Image MPM restoration using multisensor nonstationary HMCs, error ratio τMN-HMC = 7.9%. (k) MPM estimation of the auxiliary process U1 according to multisensor nonstationary HMCs. (l) Estimation of the underlying U2 process according to multisensor nonstationary HMCs, error ratio τMN-HMC = 0.1%.

Figure 2

Unsupervised segmentation of multisensor noisy image 2. (a) Original class image X = x. (b) First sensor observed image Y1 = y1. (c) Second sensor observed image Y2 = y2. (d) Image restoration using K-Means, error ratio τK-Means = 30.9%. (e) Image MPM restoration using HMCs, error ratio τHMC = 16.4%. (f) Image MPM restoration using evidential HMCs, error ratio τEHMC = 15.9%. (g) MPM estimation of the auxiliary process U1 according to evidential HMCs. (h) Image MPM restoration using multisensor stationary HMCs, error ratio τMS-HMC = 9.1%. (i) Estimation of the underlying U2 process according to multisensor stationary HMCs, error ratio τMS-HMC ≈ 0%. (j) Image MPM restoration using multisensor nonstationary HMCs, error ratio τMN-HMC = 7.6%. (k) MPM estimation of the auxiliary process U1 according to multisensor nonstationary HMCs. (l) Estimation of the underlying U2 process according to multisensor nonstationary HMCs, error ratio τMN-HMC ≈ 0%.

Let us consider, for instance, the "Nazca bird" nonstationary image, a 128 × 128 class-image with K = 2 classes that serves as the ground truth. We then noise the image in two different manners to obtain two observed images that can later be fused using the proposed MN-HMC. Hence, we have Ω = {ω1, ω2} and N = 16384.

For the first observed image, $y^1 = y^1_{1..N}$ is sampled according to $p(y^1_1 \mid x_1)\, p(y^1_2 \mid x_2) \cdots p(y^1_N \mid x_N)$, where $p(y^1_n \mid x_n = \omega_1)$ is Gaussian with mean 0 and standard deviation 1, and $p(y^1_n \mid x_n = \omega_2)$ is Gaussian with mean 1 and standard deviation 1.

For the second observed image, let us assume that some of the image pixels are corrupted (or even missing). We then have three classes: ω1, ω2, and an extra class for which we cannot decide whether a given pixel belongs to either of the two classes. Let B be the process that governs the presence of this third class (which we call "corrupted"). In this experiment, we assume this process to be Markovian. Its realization was sampled according to the following transition matrix defined on the set {1, 2}, where '1' corresponds to "corrupted" and '2' corresponds to "not corrupted".

$$J = \begin{pmatrix} 0.998 & 0.002 \\ 0.001 & 0.999 \end{pmatrix}.$$

Accordingly, the corresponding realization of $U^2$ can be derived as follows

$$\begin{cases} u^2_n = \Omega & \text{if } b_n = 1 \\ u^2_n = x_n & \text{otherwise} \end{cases}$$
(28)

$y^2 = y^2_{1..N}$ is then sampled according to $p(y^2_1 \mid u^2_1)\, p(y^2_2 \mid u^2_2) \cdots p(y^2_N \mid u^2_N)$, where $p(y^2_n \mid u^2_n = \{\omega_1\})$ is Gaussian with mean 0 and standard deviation 1, $p(y^2_n \mid u^2_n = \{\omega_2\})$ is Gaussian with mean 2 and standard deviation 1, and $p(y^2_n \mid u^2_n = \{\omega_1, \omega_2\})$ is Gaussian with mean 4 and standard deviation 1.

Notice that the contrast between the two classes ω1 and ω2 is higher in the second image. However, the unreliability of the corresponding sensor (presence of a third class) makes the direct application of conventional hidden Markov models unworkable. The same happens when the observation is missing at some pixels (which can be seen as a particular case of the present one). This challenging difficulty can be surmounted thanks to the evidential model proposed in [24] and the one proposed in this framework.

The MPM restoration of the class-image is then achieved using the following approaches:

▪ The segmentation is carried out based on the first sensor's signal only, using K-Means, the standard HMC, and the EHMC.

▪ Then, we achieve MPM segmentation using both the MS-HMC and the MN-HMC.

The MPM segmentation results are shown in Figure 1.

When applying the K-Means clustering algorithm, the only information used to restore the data is the direct observations, and no prior information about the classes is considered. Consequently, this model is too sensitive to noise, and the segmentation error ratio is relatively high (τK-Means = 39.4%).

In conventional HMCs, the neighborhood of each site is taken into account. However, the nonstationary aspect of the hidden data makes the restoration results poor (τHMC = 16.5%). Indeed, as can be seen in the original class-image, the two classes are not distributed in the same manner along the image: there are some regions with many details (the wings and tail of the bird), whereas the image background is characterized by only one class (white). This particular aspect of the class-image has misled the segmentation through HMCs, since all their corresponding estimation procedures consider the hidden process X stationary.

The application of the EHMC permits overcoming the difficulty discussed above (τEHMC = 11.1%) through the introduction of a mass function that generalizes the Bayesian prior probabilities and takes into account the uncertainty attached to the prior distribution of X, due to the heterogeneous distribution of the two classes along the signal. However, it would be interesting to make use of the second image, where the contrast between the two classes is higher, even if some pixels are hidden by an extra class.

Conversely, the MS-HMC exploits all the observed data and thus provides better results than the conventional HMC. Nevertheless, it does not take the nonstationary aspect of the data into account and therefore provides results comparable to the evidential HMC (τMS-HMC = 12%).

Finally, the MN-HMC yields the best result among all the considered models (τMN-HMC = 7.9%). This is due to the fact that this model takes advantage of two observed images while taking the nonstationarity of the data into account. The evidential HMC can then be seen as a particular case of the MN-HMC where only one sensor is available, whereas the MS-HMC can be considered a particular case of the MN-HMC where the data to be modeled are actually stationary.

In the image corresponding to the estimation of the process U1 (Figure 1k), the region in white corresponds to the subset Ω, where the confusion between the two classes is high. In fact, this region of the image (which corresponds to the wings and tail of the bird) is characterized by many details. Let us mention that for such a region, K-Means may provide comparable, and maybe even better, segmentation results than the standard HMC and the MS-HMC. This is due to the fact that the regularization in both the HMC and the MS-HMC misleads the classification process in this region when p(x) is considered independent of n. The interest of using the EHMC lies in weakening the prior knowledge about the hidden classes in such regions, so as to rely instead on observation knowledge. In the same region of interest (the wings and tail of the bird), our MN-HMC model also provides the best result, because it uses more information than the other models do (two sensors rather than one) while taking the nonstationarity of the data into account.

The EM-estimated parameters according to all the Markov models are also provided in Table 2. The true Gaussian noise pdf parameters being known ($\mu^1_1 = 0$, $\mu^1_2 = 1$, $\mu^2_1 = 0$, $\mu^2_2 = 2$, $\mu^2_{1,2} = 4$, and all $\sigma$'s equal to 1), we can check that the EM-estimated parameters according to the proposed MN-HMC are the closest to the true ones. Let us focus on the EM-estimated parameters according to the different Markov models considered:

▪ According to the standard HMC, the data provided by the first sensor are considered stationary. Hence, the HMC regularization misleads the parameter estimation process. The same happens in the MS-HMC context: even if both sensors' images are used, there is a mismatch between the data and the EM-estimated stationary model, which leads to an unsuitable estimated parameter set.

▪ The evidential HMC model takes the nonstationary aspect of the data into account but only considers the image provided by the first sensor. Therefore, the estimated parameters are close to the true ones, but the MPM segmentation is based on only one image, and hence the segmentation results are relatively limited.

▪ The parameters estimated according to the proposed model are the closest to the genuine ones. Besides, we can measure the difference between the parameters estimated according to the two multisensor models: since the MS-HMC is a particular MN-HMC whose transition mass vanishes outside the singletons {ω1} and {ω2}, the relatively high probability 0.2540 attributed to m({ω1, ω2}, {ω1, ω2}) by the proposed MN-HMC can be seen as a measure of the inadequacy of the multisensor stationary HMC.

Table 2 EM-estimated parameters of the “Nazca bird” noisy multisensor image according to different Markov models

The segmentation results for the "squares" nonstationary image confirm the previous comments, with the following slight difference: the segmentation error ratio of the evidential HMC model (τEHMC = 15.9%) is significantly higher than that of the multisensor stationary HMC (τMS-HMC = 9.1%). This is due to the fact that the "squares" image is only moderately nonstationary. In such cases, the effect of the amount of data used to achieve the MPM segmentation outweighs the effect of taking nonstationarity into account. The same concluding remark was made in the synthetic experiments for low values of s. Let us mention that, for such data, the gain in segmentation accuracy obtained by using the proposed model is also limited (τMN-HMC = 7.6% against τMS-HMC = 9.1%).

Conclusions

In this article, we proposed a new approach to model multisensor nonstationary signals in the Markovian context. The proposed model allows one to benefit simultaneously from both the Markov theory and the theory of evidence. Accordingly, the Dempster–Shafer combination rule was used for two purposes: to take the nonstationary aspect of the hidden data of interest into account, and to fuse the different sensors' signals in the Markovian context to boost the segmentation accuracy. The experimental results demonstrated the interest of such modeling with respect to conventional approaches. As a future improvement, we may investigate the use of evidential pairwise Markov models to consider more complex model structures. A further generalization of the present approach may also consist in adapting the proposed formalism to Markov tree models in order to handle multiresolution images [32].

References

  1. Raphael C: Automatic segmentation of acoustic musical signals using hidden Markov models. IEEE Trans Pattern Anal Mach Intell 1999, 21(4):360-370. doi:10.1109/34.761266

  2. Chen M, Kundu A, Zhou J: Off-line handwritten word recognition using a hidden Markov model type stochastic network. IEEE Trans Pattern Anal Mach Intell 1994, 16(5):481-496. doi:10.1109/34.291449

  3. Cappé O, Moulines E, Ryden T: Inference in Hidden Markov Models. Springer, New York; 2005.

  4. Koski T: Hidden Markov Models for Bioinformatics. Kluwer Academic Publishers, Dordrecht; 2001.

  5. Thomas LC, Allen DE, Morkel-Kingsbury N: A hidden Markov chain model for the term structure of bond credit risk spreads. Int Rev Financ Anal 2002, 11(3):311-329. doi:10.1016/S1057-5219(02)00078-9

  6. Le Ber F, Benoît M, Schott C, Mari J-F, Mignolet C: Studying crop sequences with CarrotAge, a HMM-based data mining software. Ecol Model 2006, 191(1):170-185. doi:10.1016/j.ecolmodel.2005.08.031

  7. Baum LE, Petrie T, Soules G, Weiss N: A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. Ann Math Stat 1970, 41:164-171. doi:10.1214/aoms/1177697196

  8. Forney GD: The Viterbi algorithm. Proc IEEE 1973, 61(3):268-277.

  9. Rabiner LR: A tutorial on hidden Markov models and selected applications in speech recognition. Proc IEEE 1989, 77(2):257-286. doi:10.1109/5.18626

  10. McLachlan G, Krishnan T: The EM Algorithm and Extensions. Wiley, New York; 1997.

  11. Delmas J-P: An equivalence of the EM and ICE algorithm for exponential family. IEEE Trans Signal Process 1997, 45(10):2613-2615. doi:10.1109/78.640732

  12. Lanchantin P, Pieczynski W: Unsupervised restoration of hidden nonstationary Markov chains using evidential priors. IEEE Trans Signal Process 2005, 53(8):3091-3098.

  13. Shafer G: A Mathematical Theory of Evidence. Princeton University Press, Princeton; 1976.

  14. Pieczynski W: Multisensor triplet Markov chains and theory of evidence. Int J Approx Reason 2007, 45(1):1-16. doi:10.1016/j.ijar.2006.05.001

  15. Pieczynski W, Benboudjema D: Multisensor triplet Markov fields and theory of evidence. Image Vis Comput 2006, 24(1):61-69. doi:10.1016/j.imavis.2005.09.012

  16. Bloch I: Some aspects of Dempster–Shafer evidence theory for classification of multi-modality medical images taking partial volume effect into account. Pattern Recognit Lett 1996, 17(8):905-919. doi:10.1016/0167-8655(96)00039-6

  17. Cobb BR, Shenoy PP: A Comparison of Methods for Transforming Belief Function Models to Probability Models, vol. 2711. Springer-Verlag, Berlin; 2003.

  18. Denoeux T: Analysis of evidence-theoretic decision rules for pattern classification. Pattern Recognit 1997, 30(7):1095-1107. doi:10.1016/S0031-3203(96)00137-9

  19. Fouque L, Appriou A, Pieczynski W: An evidential Markovian model for data fusion and unsupervised image classification. In: 3rd International Conference on Information Fusion (FUSION), Paris, France; 2000:25-31.

  20. Janez F, Appriou A: Theory of evidence and non-exhaustive frames of discernment: plausibilities correction methods. Int J Approx Reason 1998, 18(1):1-19. doi:10.1016/S0888-613X(97)10001-9

  21. Smets P: Belief functions: the disjunctive rule of combination and the generalized Bayesian theorem. Int J Approx Reason 1993, 9(1):1-35. doi:10.1016/0888-613X(93)90005-X

  22. Sudano JJ: Equivalence between belief theories and naive Bayesian fusion for systems with independent evidential data. In: 6th International Conference on Information Fusion (FUSION), Cairns, Australia; 2003:1357-1364.

  23. Yager RR, Kacprzyk J, Fedrizzi M: Advances in the Dempster–Shafer Theory of Evidence. Wiley, New York; 1994.

  24. Bendjebbour A, Delignon Y, Fouque L, Samson V, Pieczynski W: Multisensor image segmentation using Dempster–Shafer fusion in Markov fields context. IEEE Trans Geosci Remote Sens 2001, 39(8):1789-1798. doi:10.1109/36.942557

  25. Lanchantin P, Lapuyade-Lahorgue J, Pieczynski W: Unsupervised segmentation of randomly switching data hidden with non-Gaussian correlated noise. Signal Process 2011, 91(2):163-175. doi:10.1016/j.sigpro.2010.05.033

  26. Derrode S, Pieczynski W: Signal and image segmentation using pairwise Markov chains. IEEE Trans Signal Process 2004, 52(9):2477-2489. doi:10.1109/TSP.2004.832015

  27. Boudaren MY, Pieczynski W, Monfrini E: Unsupervised segmentation of non-stationary data hidden with non-stationary noise. In: IEEE International Workshop on Systems, Signal Processing and their Applications (WoSSPA), Tipasa, Algeria; 2011:255-258.

  28. Lapuyade-Lahorgue J, Pieczynski W: Unsupervised segmentation of hidden semi-Markov non-stationary chains. Signal Process 2012, 92(1):29-42. doi:10.1016/j.sigpro.2011.06.001

  29. Le Hégarat-Mascle S, Bloch I, Vidal-Madjar D: Introduction of neighborhood information in evidence theory and application to data fusion of radar and optical images with partial cloud cover. Pattern Recognit 1998, 31(11):1811-1823. doi:10.1016/S0031-3203(98)00051-X

  30. Pieczynski W: EM and ICE in hidden and triplet Markov models. In: Stochastic Modeling Techniques and Data Analysis International Conference (SMTDA), Chania, Greece; 2010:1-9.

  31. Fjortoft R, Delignon Y, Pieczynski W, Sigelle M, Tupin F: Unsupervised classification of radar images using hidden Markov chains and hidden Markov random fields. IEEE Trans Geosci Remote Sens 2003, 41(3):675-686. doi:10.1109/TGRS.2003.809940

  32. Willsky AS: Multiresolution Markov models for signal and image processing. Proc IEEE 2002, 90(8):1396-1458. doi:10.1109/JPROC.2002.800717


Author information


Corresponding author

Correspondence to Mohamed El Yazid Boudaren.

Additional information

Competing interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.


About this article

Cite this article

Boudaren, M.E.Y., Monfrini, E., Pieczynski, W. et al. Dempster–Shafer fusion of multisensor signals in nonstationary Markovian context. EURASIP J. Adv. Signal Process. 2012, 134 (2012). https://doi.org/10.1186/1687-6180-2012-134

