Article

Negation of Belief Function Based on the Total Uncertainty Measure

School of Computer and Information Science, Southwest University, No. 2 Tiansheng Road, BeiBei District, Chongqing 400715, China
* Author to whom correspondence should be addressed.
Entropy 2019, 21(1), 73; https://doi.org/10.3390/e21010073
Submission received: 9 November 2018 / Revised: 5 January 2019 / Accepted: 11 January 2019 / Published: 15 January 2019
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

The negation of probability provides a new way of looking at information representation. However, the negation of basic probability assignment (BPA) is still an open issue. To address this issue, a novel negation method of basic probability assignment based on the total uncertainty measure is proposed in this paper. The uncertainty of non-singleton elements in the power set is taken into account. Compared with the negation method of a probability distribution, the proposed negation method of BPA differs in that the BPA of a given element is reassigned to the other elements in the power set, where the weight of reassignment is proportional to the cardinality of the intersection of each remaining element with the complement of the given element. Notably, the proposed negation method of BPA reduces to the negation of a probability distribution as the BPA reduces to a classical probability distribution. Furthermore, it is proved mathematically that our proposed negation method of BPA is indeed based on the maximum uncertainty.

1. Introduction

Information representation has been a crucial issue since the emergence of artificial intelligence systems [1,2]. Human beings are able to recognize and transform the world by obtaining and recognizing information in nature and society to distinguish different things [3]. However, representing uncertain information from different information sources is still an open issue [4,5]. A large body of literature and a number of approaches have been proposed to settle this issue, such as Dempster-Shafer theory (DST) [6,7,8,9,10], fuzzy set theory [11,12,13,14,15], entropy [16,17,18,19,20], D-number theory [21,22,23,24] and Z-numbers [25].
In some particular circumstances, it is easier to say what something is not than to say what it is, since more information may be needed to describe what it is while less information may be needed to describe what it is not. For example, it is sometimes difficult to prove directly by mathematical means whether a theorem is correct; however, a single counterexample can easily prove a statement wrong. A more intuitive example: it is often easier to obtain the probability of a complex event by subtracting from one the probability of a simple event that is exactly the complement of the complex event, rather than calculating the probability of the complex event directly. Therefore, in this paper we try to approach the issue from the opposite side, that is, to find out what the negation of a piece of information is [26].
Increasing attention has been paid to negation since it was raised by Zadeh. It is of great significance to study negation, since it enables us to obtain information from the opposite side and also to represent information through the opposite side [27]. Furthermore, the measure of fuzziness proposed by Yager suggests that fuzziness can be related to the lack of distinction between a set and its negation [28]. That is, the less distinct $A$ and $\bar{A}$ are, the fuzzier $A$ is. Moreover, the negation method is also promising in multi-criteria decision making (MCDM). For example, one of the most widely used methods is TOPSIS (Technique for Order Preference by Similarity to an Ideal Solution), which provides an ideal solution (IS) and a negative ideal solution (NIS); the best alternative is as close to the ideal solution and as far from the negative ideal solution as possible. As a result, taking the negation side is meaningful in MCDM. In addition, the negation of BPA can also be applied in measuring the uncertainty of BPA [29]. Thus, obtaining the negation is of great significance. Recently, a negation method of a probability distribution based on the maximum entropy has been proposed and studied [27], and some properties of Yager's negation have been investigated [30]. The negation of a probability distribution can be seen as a reallocation process of probability values. In this paper, we extend the negation method of a probability distribution in classical probability theory to a basic probability assignment in D-S theory, which provides a novel view of the expression of uncertainty and the unknown in D-S theory. A novel negation method of basic probability assignment (BPA) in Dempster-Shafer theory is proposed in matrix form, with the BPA itself written as a vector, analogous to the way a probability distribution can be represented as a vector. To study the evolution of the BPA vector in the repeated negation process, the total uncertainty measure $H_b(m)$ proposed by Pal et al. [31,32] is adopted in this paper to measure the uncertainty of a basic probability assignment (BPA). Properties of the proposed negation method are presented and proved. Compared with the negation method of a probability distribution, the proposed negation method of BPA differs in that the BPA of a given element is reassigned to the other elements in the power set, where the weight of reassignment is proportional to the cardinality of the intersection of each remaining element with the complement of the given element. Notably, the proposed negation method of BPA reduces to the negation of a probability distribution as the BPA reduces to a classical probability distribution.
The rest of this paper is structured as follows: some basic knowledge associated with D-S theory and uncertainty measurement is presented in Section 2. In Section 3, the negation method for BPA is proposed, some numerical examples are presented, and some properties are discussed and proved. Finally, findings are summarized in Section 4.

2. Preliminaries

2.1. Dempster-Shafer Theory

Dempster-Shafer theory (DST) was first proposed by Dempster [33] in 1967 and refined by his student Shafer [34] in 1976. It has been widely applied to decision making [35,36,37,38,39,40,41,42], information fusion [43], fault diagnosis [44,45], uncertain reasoning [46,47,48], multi-sensor data fusion [49,50,51], aggregation [52,53], medical diagnosis [54], conflict management [55] and other fields [56,57,58,59], owing to its ability to express uncertainty.
Let $\Theta$ be a set of mutually exclusive and exhaustive hypotheses called the frame of discernment (FOD), which has $N$ elements and is denoted as:
$\Theta = \{H_1, H_2, H_3, \ldots, H_N\}$
The power set of $\Theta$, which consists of all subsets of $\Theta$ and contains $2^N$ elements, is denoted as [33,34]:
$2^\Theta = \{\emptyset, \{H_1\}, \{H_2\}, \ldots, \{H_N\}, \{H_1, H_2\}, \ldots, \theta\}$
where $\emptyset$ denotes the empty set and $\theta$ denotes the whole set. A crucial concept in D-S theory is the basic probability assignment (BPA): the mass of belief in an element of $\Theta$ is analogous to a probability, but differs in that the unit mass is distributed among the $2^N$ elements of $2^\Theta$ instead of the $N$ elements of $\Theta$, which means the mass of belief is assigned not only to singletons but also to composite hypotheses. The mass function is a mapping from $2^\Theta$ to $[0,1]$ representing how strongly the evidence supports a hypothesis, defined as [33,34]:
$m: 2^\Theta \to [0, 1]$
$m(\emptyset) = 0$
$\sum_{A \in 2^\Theta} m(A) = 1$
A is called a focal element of the mass function m if $A \in 2^\Theta$ and $m(A) \neq 0$. A basic probability assignment reduces to a probability distribution when all focal elements are singletons.
According to the basic probability assignment (BPA), the plausibility function $PL_m(A)$ and the belief function $Bel_m(A)$ are defined as:
$PL_m(A) = \sum_{B \in 2^\Theta,\ B \cap A \neq \emptyset} m(B)$
$Bel_m(A) = \sum_{B \in 2^\Theta,\ B \subseteq A} m(B)$
The plausibility function $PL_m(A)$ measures the potential belief in A, i.e., the total belief that does not contradict A, while the belief function $Bel_m(A)$ measures the total belief committed to A.
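To make these definitions concrete, the following minimal Python sketch (ours, not part of the original paper) represents a BPA as a dictionary mapping focal elements (frozensets) to masses and evaluates the two sums above; the function and variable names are our own, and the masses are only illustrative (they reappear as Example 3 in Section 3.3).

```python
def plausibility(bpa, A):
    """PL_m(A): total mass of all focal elements that intersect A."""
    return sum(mass for B, mass in bpa.items() if B & A)

def belief(bpa, A):
    """Bel_m(A): total mass of all non-empty focal elements contained in A."""
    return sum(mass for B, mass in bpa.items() if B and B <= A)

# Illustrative BPA on the frame {a, b, c}.
m = {frozenset('a'): 0.1, frozenset('b'): 0.15, frozenset('ac'): 0.3,
     frozenset('bc'): 0.25, frozenset('abc'): 0.2}
A = frozenset('a')
print(round(plausibility(m, A), 4), round(belief(m, A), 4))   # 0.6 0.1
```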

2.2. Uncertainty Measurements of Basic Probability Assignment (BPA)

Measuring uncertainty has been a key problem in information science [60,61,62]. The concept of entropy, which originated in physics, has long served as a measure of uncertainty and disorder [63]. Generally, a system with higher uncertainty has greater entropy and also contains more information [64,65]. Shannon entropy is widely adopted to measure the uncertainty of a probability distribution [66]; it is defined as [67]:
$H(P) = -\sum_{i=1}^{n} P_i \log_b P_i$
where n is the total number of events in an experiment and $P_i$ is the probability that the $i$th event happens, with $\sum_{i=1}^{n} P_i = 1$. Generally, 2 is adopted as the base of the logarithm, and the unit of entropy is the bit. Shannon entropy reaches its maximum when the unit probability is assigned equally to the events, which is also the point of maximum uncertainty.
However, the measurement of uncertainty is still an open issue in D-S theory. Heterogeneous definitions and requirements of uncertainty measures [68,69,70,71,72,73] have been proposed to measure the uncertainty in D-S theory.
The total uncertainty measure $H_b(m)$ proposed by Pal et al. [31,32] is adopted in this paper to measure the uncertainty of a basic probability assignment (BPA); it is defined as [32]:
$H_b(m) = \sum_{A \in 2^\Theta} m(A) \log_2 \frac{|A|}{m(A)}$
where $|A|$ denotes the cardinality of A, i.e., the number of elements in A. $H_b(m)$ has many advantages, such as consistency with D-S theory semantics, monotonicity, probability consistency and additivity [70]. The total uncertainty measure has a unique maximum, attained when $m(A) \propto |A|$ is satisfied [32], where $\propto$ denotes 'is proportional to'. It is noted that the total uncertainty measure reduces to Shannon entropy when all focal elements in D-S theory reduce to singletons of classical probability theory.
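As a quick numerical check of the total uncertainty measure, the sketch below (again ours) implements $H_b(m)$ on the dictionary representation introduced above; for the illustrative BPA (the one used later in Example 3) it returns approximately 3.0952, matching the first row of Table 1.

```python
import math

def total_uncertainty(bpa):
    """Pal et al.'s total uncertainty: H_b(m) = sum_A m(A) * log2(|A| / m(A))."""
    return sum(mass * math.log2(len(A) / mass) for A, mass in bpa.items() if mass > 0)

# Same illustrative BPA as above.
m = {frozenset('a'): 0.1, frozenset('b'): 0.15, frozenset('ac'): 0.3,
     frozenset('bc'): 0.25, frozenset('abc'): 0.2}
print(round(total_uncertainty(m), 4))   # 3.0952
```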
Another definition of the entropy of belief functions is given as [70]:
$H_{rp}(m) = \sum_{\theta \in \Theta} Pl\_P_m(\theta) \log \left(\frac{1}{Pl\_P_m(\theta)}\right) + \sum_{A \in 2^\Theta;\ A \neq \emptyset} m(A) \log(|A|)$
where
$Pl\_P_m(\theta) = K^{-1} Pl_m(\theta), \qquad K = \sum_{\theta \in \Theta} Pl_m(\theta)$
Deng entropy is defined as [69]:
$H_d(m) = \sum_{A \in 2^\Theta} m(A) \log\left(\frac{2^{|A|} - 1}{m(A)}\right)$
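For later comparison (both measures reappear in Table 2 of Section 3.4), here is a hedged sketch of $H_{rp}(m)$ and $H_d(m)$. It assumes base-2 logarithms, re-declares a small plausibility helper so the snippet stands alone, and uses our own function names; with the BPA discussed in Section 3.4 it returns approximately 2.5511 and 4.1331, agreeing with the first column of Table 2.

```python
import math

def plausibility(bpa, A):
    return sum(mass for B, mass in bpa.items() if B & A)

def jirousek_shenoy_entropy(bpa, frame):
    """H_rp(m): entropy of the normalized singleton plausibilities plus the
    non-specificity term sum_A m(A) * log2(|A|) (base-2 logs assumed)."""
    pl = {x: plausibility(bpa, frozenset([x])) for x in frame}
    K = sum(pl.values())
    pl_p = {x: v / K for x, v in pl.items()}     # Pl_P_m(theta)
    conflict = -sum(p * math.log2(p) for p in pl_p.values() if p > 0)
    nonspecificity = sum(mass * math.log2(len(A)) for A, mass in bpa.items() if mass > 0)
    return conflict + nonspecificity

def deng_entropy(bpa):
    """H_d(m) = sum_A m(A) * log2((2^|A| - 1) / m(A))."""
    return sum(mass * math.log2((2 ** len(A) - 1) / mass)
               for A, mass in bpa.items() if mass > 0)

# BPA used in Section 3.4 / Table 2; expected values are about 2.5511 and 4.1331.
m = {frozenset('a'): 0.05, frozenset('b'): 0.05, frozenset('c'): 0.05,
     frozenset('ab'): 0.2, frozenset('ac'): 0.2, frozenset('bc'): 0.25,
     frozenset('abc'): 0.2}
print(round(jirousek_shenoy_entropy(m, 'abc'), 4), round(deng_entropy(m), 4))
```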

2.3. Negation of Probability Distribution

Information contained in the negation is hardly considered in information representation. To solve this problem, Yager proposed a negation method for a probability distribution, which is concerned with the information representation contained in the negation of a probability distribution [27]. Consider a probability distribution
$P = (p_1, p_2, p_3, \ldots, p_n)$
defined on the set $X = \{x_1, x_2, x_3, \ldots, x_n\}$, where $0 \leq p_i \leq 1$ and $\sum_{i=1}^{n} p_i = 1$. The negation of the probability distribution is defined as [27]:
$\bar{P} = (\bar{p}_1, \bar{p}_2, \bar{p}_3, \ldots, \bar{p}_n)$
where
$\bar{p}_i = \frac{1 - p_i}{n - 1}$
According to Equation (10), each negation operation can be regarded as a process that reassigns the probability value $p_i$ equally among the $n - 1$ other hypotheses. Namely [27]:
$\bar{p}_i = \frac{\sum_{j=1,\ j \neq i}^{n} p_j}{n - 1}$
The negation operation can also be interpreted from a different point of view if we observe that [27]:
$\bar{p}_i = \frac{1 - p_i}{\sum_{i=1}^{n} (1 - p_i)}$
That is, $\bar{p}_i$ is obtained by normalizing the complement of $p_i$ so that the resulting values sum to 1. Furthermore, the repeated negation process can be modeled as a difference equation [27]:
$(n - 1)\, p_i(k + 1) + p_i(k) = 1$
The solution of this difference equation approaches $1/n$ as $k$ increases, which means the unit probability is eventually allocated equally to each element of X. Looking back at the definition of Shannon entropy, it is not hard to see that the maximum value of Shannon entropy is attained exactly for this uniform distribution, i.e., the maximum uncertainty of the system is attained. Moreover, it is proved that the Shannon entropy increases with each iteration of the negation process.
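The behavior just described can be reproduced with the following sketch of Yager's negation (our illustration; the starting distribution is arbitrary and chosen only for demonstration): the values approach $1/n$ and the Shannon entropy grows toward $\log_2 n$ as the negation is repeated.

```python
import math

def negate_probability(p):
    """Yager's negation: each probability becomes (1 - p_i) / (n - 1)."""
    n = len(p)
    return [(1 - pi) / (n - 1) for pi in p]

def shannon_entropy(p):
    return -sum(pi * math.log2(pi) for pi in p if pi > 0)

# Repeated negation of an arbitrary distribution (n = 3): the values approach
# the uniform distribution (1/3, 1/3, 1/3) and the entropy grows toward log2(3).
p = [0.7, 0.2, 0.1]
for k in range(8):
    print(k, [round(x, 4) for x in p], round(shannon_entropy(p), 4))
    p = negate_probability(p)
```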
According to the analysis of the negation of a probability distribution, three important properties are summarized as follows:
  • The repeated negation process converges to a certain probability distribution.
  • The maximum value of uncertainty of the system is attained exactly at the convergent probability distribution.
  • The entropy increases monotonically until the maximum value of uncertainty is attained.
We apply these three properties of negation to D-S theory and define a negation method of BPA in the following section.

3. Negation of BPA

3.1. Definition of Negation

D-S theory has been widely used for expressing information [74] and in other fields [51,60,75], since it can deal with uncertainty and the unknown under weaker conditions than Bayesian probability theory. In this paper, a novel negation method of BPA is proposed.
Consider a frame of discernment $\Theta$ containing $N$ elements; the power set of $\Theta$, containing $2^N$ elements, is denoted as:
$2^\Theta = \{H_1, H_2, H_3, \ldots, H_{2^N}\}$
where $H_1$ denotes $\emptyset$ and $H_{2^N}$ denotes $\theta$. Let m be a BPA, represented in vector form:
$\mathbf{m} = [m_1, m_2, m_3, \ldots, m_{2^N}]^T$
assuming that
$m_i = m(H_i)$
where
$\sum_{i=1}^{2^N} m_i = 1, \qquad m_i \in [0, 1]$
Similarly, the vector form of the negation of the BPA is defined as:
$\bar{\mathbf{m}} = [\bar{m}_1, \bar{m}_2, \bar{m}_3, \ldots, \bar{m}_{2^N}]^T$
Given a BPA vector m , the negation of the BPA is defined as:
$\bar{\mathbf{m}} = E\mathbf{m}$
where E is the negation matrix defined as:
$$E = \begin{pmatrix} e_{1,1} & e_{1,2} & \cdots & e_{1,2^N} \\ e_{2,1} & e_{2,2} & \cdots & e_{2,2^N} \\ \vdots & \vdots & \ddots & \vdots \\ e_{2^N,1} & e_{2^N,2} & \cdots & e_{2^N,2^N} \end{pmatrix}$$
When $j \neq 2^N$ and $j \neq 1$:
$$e_{i,j} = \begin{cases} 0 & i = j \\ \dfrac{|H_i \cap \bar{H}_j|}{\sum_{H_k \neq \theta,\ H_k \neq \emptyset,\ H_k \in 2^\Theta} |H_k \cap \bar{H}_j|} & i \neq j \end{cases}$$
where $\bar{H}_j = \Theta - H_j$ and $|H_i|$ denotes the cardinality of $H_i$.
When $j = 2^N$, i.e., $H_j = \theta$:
$$e_{i,j} = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}$$
When $j = 1$, i.e., $H_j = \emptyset$:
$$e_{i,j} = 0$$
When $i = 2^N$:
$$e_{i,j} = \begin{cases} 1 & i = j \\ 0 & i \neq j \end{cases}$$

3.2. Steps of Constructing the Negation

In classical probability theory, the negation of a probability distribution P is obtained by allocating each probability $p_i$ equally among the $n - 1$ other elements [27]. Similarly, for each $H_i$ in $2^\Theta$, the negation of the BPA is constructed by reassigning its mass $m_i$ to those elements of $2^\Theta$ whose intersection with the complement of $H_i$ is not the empty set. Specifically, the BPA of $H_i$ ($H_i \in 2^\Theta$) is reassigned to the other elements in the power set, excluding $H_i$ itself. Furthermore, the negation of BPA is distinct from the negation of a probability distribution in that the reassignment weight is proportional to the cardinality of the intersection of each remaining element with the complement of the element being negated. For example, if the frame of discernment is $\{a, b, c\}$, then $\{a\}$ allocates twice as much BPA to $\{b, c\}$ as to $\{a, c\}$, since the cardinality of the intersection of $\{b, c\}$ (the complement of $\{a\}$) with $\{b, c\}$ is 2, while the cardinality of the intersection of $\{b, c\}$ with $\{a, c\}$ is 1. Therefore, we are concerned not only with the belief degree of the focal elements but also with the cardinality of the intersection, which affects the negation of BPA. Thus, the procedure for obtaining an element $e_{i,j}$ of the negation matrix can be described in three steps:
Step 1: Obtain the complement $\bar{A}_j$ by
$$\bar{A}_j = \Theta - A_j$$
since $\Theta - A_j$ represents the complementary elements of $A_j$, where $\Theta$ denotes the frame of discernment (FOD).
Step 2: For each $A_i$, calculate the cardinality of the intersection of $A_i$ and $\bar{A}_j$, which serves as the reallocation weight $c_i$ of the negation process for $A_j$, and compute the sum $\sigma$ of these cardinalities from $i = 2$ to $i = 2^N - 1$ (i.e., excluding the empty set and the whole set). Namely:
$$\sigma = \sum_{i=2}^{2^N - 1} c_i$$
Step 3: Normalize these weights of the negation process for $A_j$ to guarantee that their sum is one:
$$e_{i,j} = \frac{c_i}{\sigma}$$
Consequently, the general formula for the elements of the negation matrix is
$$e_{i,j} = \frac{|A_i \cap \bar{A}_j|}{\sum_{A_k \subset \Theta,\ A_k \in 2^\Theta} |A_k \cap \bar{A}_j|}$$
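As a computational illustration of these three steps, the sketch below (ours; it fixes the ordering of $2^\Theta$ as the empty set first, then subsets by increasing size, then the whole set, which is the layout used in the examples of Section 3.3) builds the negation matrix and performs one negation step. The function names are our own.

```python
from itertools import combinations

def power_set(frame):
    """All subsets of the frame: empty set first, then by size, whole set last."""
    elems = sorted(frame)
    return [frozenset(c) for r in range(len(elems) + 1)
            for c in combinations(elems, r)]

def negation_matrix(frame):
    """Build the negation matrix E: column j spreads the mass of H_j over the
    other proper non-empty subsets H_i with weight |H_i & complement(H_j)|,
    normalized column-wise; the empty-set row/column stay zero and the whole
    set keeps its own mass."""
    subsets = power_set(frame)
    theta = frozenset(frame)
    n = len(subsets)
    E = [[0.0] * n for _ in range(n)]
    E[-1][-1] = 1.0
    for j, Hj in enumerate(subsets):
        if Hj in (frozenset(), theta):
            continue
        comp = theta - Hj
        weights = [len(Hi & comp) if Hi not in (frozenset(), theta) else 0
                   for Hi in subsets]
        sigma = sum(weights)
        for i, w in enumerate(weights):
            E[i][j] = w / sigma
    return subsets, E

def negate_bpa(E, m):
    """One negation step: the matrix-vector product m_bar = E m."""
    return [sum(E[i][j] * m[j] for j in range(len(m))) for i in range(len(m))]

# For the frame {a, b} the matrix reduces to a swap of m(a) and m(b)
# (compare Example 2 in Section 3.3).
subsets, E = negation_matrix('ab')
print(subsets)   # order: empty set, {a}, {b}, {a, b}
print(E)         # the swap matrix: rows for {a} and {b} exchange their masses
```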
It is noted that $\bar{\mathbf{m}}$ is a BPA, since each focal element allocates its BPA to other focal elements in the power set and the BPA of the whole set is retained, which gives:
(1)
$\sum_{i=1}^{2^N} \bar{m}_i = 1$
(2)
$\bar{m}_i \in [0, 1]$
According to the definition of the negation of BPA, the negation of BPA is essentially a reassignment of the BPA in a particular manner. It should be noted that the ith column of the negation matrix represents the weights with which $m_i$ is allocated to the other elements, while the jth row tells us the weights with which the components of the given BPA vector are gathered into $\bar{m}_j$.
First of all, the negation of the two special elements of the power set ($\emptyset$ and $\theta$, i.e., the empty set and the whole set, respectively) is discussed.
We assume that the frame of discernment is exhaustive (the closed-world assumption proposed by Yager [76]), which means the information sources are reliable. According to the closed-world assumption, the BPA of the empty set ($\emptyset$) is always 0, no matter how many times the negation process is applied. Thus, to make sure the BPA of the empty set is always 0, the elements in the first column and the first row of the negation matrix are all defined as 0, which means the other focal elements cannot allocate their BPA to the empty set when the negation process is applied.
In D-S theory, it should be noted that the BPA of the whole set ($\theta$) denotes the belief associated with total uncertainty: one has no idea where to allocate the BPA assigned to the whole set. Compared with the whole set, the $2^N - 2$ other elements (excluding the empty set) are relatively certain: they definitely specify which elements are not in the hypothesis, which means their complements are not the empty set $\emptyset$. Since the whole set represents total uncertainty and does not know where its BPA should go, it likewise does not know where to allocate its BPA when the negation process is constructed. Thus, we define the last column of the negation matrix to be 0 except for its last element, so that the BPA of the whole set cannot be allocated to the $2^N - 1$ other focal elements when the negation process is applied. Furthermore, according to the closed-world assumption, which means the frame of discernment is complete and exhaustive, the complement of each focal element other than the whole set is relatively certain, and so is the negation of these focal elements, which therefore cannot be totally uncertain ($\theta$). Thus, the BPA of the $2^N - 1$ focal elements (excluding the whole set $\theta$) cannot be allocated to the whole set when the negation process is applied. In this case, we likewise define the last row of the negation matrix to be 0 except for its last element, to guarantee that the $2^N - 1$ other elements are unable to allocate their BPA to the whole set. Therefore, the whole set is unable to allocate its BPA to any other focal element, while no other focal element can allocate its BPA to the whole set; consequently, the BPA of the whole set remains constant when the negation process is applied.

3.3. Numerical Examples of the Negation Process

Example 1.
Assume that the frame of discernment has only one element, $\Theta = \{a\}$; then of course $m(a) = 1$. According to the definition of the uncertainty measure above, the total uncertainty is calculated as:
$H_b(m) = 0$
and the negation of the BPA is calculated as:
$\bar{\mathbf{m}} = E\mathbf{m}$
To be more specific:
$$\begin{pmatrix} \bar{m}(\emptyset) \\ \bar{m}(a) \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} m(\emptyset) \\ m(a) \end{pmatrix}$$
It follows from Equation (36) that:
$\bar{m}(a) = m(a)$
Furthermore, the total uncertainty of $\bar{m}$ is calculated as:
$H_b(\bar{m}) = 0$
Since $N = 1$, the singleton $\{a\}$ is itself the whole set $\Theta$. Thus, the BPA remains constant after the negation process. In this case, no matter how many times the negation process is applied, the BPA remains unchanged, and so does the total uncertainty.
Example 2.
A special case arises for $N = 2$. Assume a frame of discernment $\Theta = \{a, b\}$ with a BPA vector
$$\mathbf{m} = \begin{pmatrix} m(\emptyset) \\ m(a) \\ m(b) \\ m(ab) \end{pmatrix} = \begin{pmatrix} 0 \\ 0.2 \\ 0.3 \\ 0.5 \end{pmatrix}$$
According to Equation (7), the total uncertainty of the original BPA is:
$H_b(m) = 1.9855$
It follows from Equations (20)–(24) that the negation matrix is obtained as:
$$E = \begin{pmatrix} 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}$$
Since the BPA of the whole set ($\{a, b\}$) remains unchanged, the mass $m(a)$ is reassigned to $\bar{m}(b)$ and the mass $m(b)$ is reassigned to $\bar{m}(a)$, which means
$\bar{m}(a) = m(b)$
$\bar{m}(b) = m(a)$
so that
$$\bar{\mathbf{m}} = \begin{pmatrix} 0 \\ 0.3 \\ 0.2 \\ 0.5 \end{pmatrix}$$
The total uncertainty of $\bar{m}$ is
$H_b(\bar{m}) = 1.9855$
Clearly, for $N = 2$, the uncertainty of the system remains unchanged no matter how many times the negation process is applied. This property is consistent with the order-reversal property of the negation of a probability distribution proposed by Yager, and for the special case $N = 2$ the negation of BPA coincides with the negation of a probability distribution [27].
Example 3.
For a more general case, assume a frame of discernment consisting of three elements, $\Theta = \{a, b, c\}$, with a BPA vector
$$\mathbf{m} = \begin{pmatrix} m(\emptyset) \\ m(a) \\ m(b) \\ m(c) \\ m(ab) \\ m(ac) \\ m(bc) \\ m(abc) \end{pmatrix} = \begin{pmatrix} 0 \\ 0.1 \\ 0.15 \\ 0 \\ 0 \\ 0.3 \\ 0.25 \\ 0.2 \end{pmatrix}$$
The total uncertainty measure of $\mathbf{m}$ is
$$H_b(m) = 3.0952$$
According to the definition of the proposed negation method, the negation matrix is derived as:
$$E = \begin{pmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & \frac{1}{6} & \frac{1}{6} & 0 & 0 & \frac{1}{3} & 0 \\
0 & \frac{1}{6} & 0 & \frac{1}{6} & 0 & \frac{1}{3} & 0 & 0 \\
0 & \frac{1}{6} & \frac{1}{6} & 0 & \frac{1}{3} & 0 & 0 & 0 \\
0 & \frac{1}{6} & \frac{1}{6} & \frac{1}{3} & 0 & \frac{1}{3} & \frac{1}{3} & 0 \\
0 & \frac{1}{6} & \frac{1}{3} & \frac{1}{6} & \frac{1}{3} & 0 & \frac{1}{3} & 0 \\
0 & \frac{1}{3} & \frac{1}{6} & \frac{1}{6} & \frac{1}{3} & \frac{1}{3} & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}$$
The negation of $\mathbf{m}$ is calculated as:
$$\bar{\mathbf{m}} = E\mathbf{m} = \begin{pmatrix} \bar{m}(\emptyset) \\ \bar{m}(a) \\ \bar{m}(b) \\ \bar{m}(c) \\ \bar{m}(ab) \\ \bar{m}(ac) \\ \bar{m}(bc) \\ \bar{m}(abc) \end{pmatrix} = \begin{pmatrix} 0 \\ 0.1083 \\ 0.1167 \\ 0.0417 \\ 0.2250 \\ 0.1500 \\ 0.1583 \\ 0.2000 \end{pmatrix}$$
The total uncertainty measure of $\bar{m}$ is:
$$H_b(\bar{m}) = 3.5305$$
Repeating the negation process once more, $\bar{\bar{\mathbf{m}}}$ is obtained as:
$$\bar{\bar{\mathbf{m}}} = \begin{pmatrix} \bar{\bar{m}}(\emptyset) \\ \bar{\bar{m}}(a) \\ \bar{\bar{m}}(b) \\ \bar{\bar{m}}(c) \\ \bar{\bar{m}}(ab) \\ \bar{\bar{m}}(ac) \\ \bar{\bar{m}}(bc) \\ \bar{\bar{m}}(abc) \end{pmatrix} = \begin{pmatrix} 0 \\ 0.0792 \\ 0.0750 \\ 0.1125 \\ 0.1542 \\ 0.1917 \\ 0.1875 \\ 0.2000 \end{pmatrix}$$
The total uncertainty measure of $\bar{\bar{m}}$ is
$$H_b(\bar{\bar{m}}) = 3.5649$$
It is noted that the BPA of $m(a)$ is reassigned to $\{b\}$, $\{c\}$, $\{ab\}$, $\{ac\}$ and $\{bc\}$ in the proportion 1:1:1:1:2. However, the BPA of the whole set remains unchanged after the negation process is applied. Figure 1 illustrates the reallocation weights of $m(a)$ for $N = 3$.
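Example 3 can be reproduced with the helpers sketched in Section 3.2 (our negation_matrix and negate_bpa, which are assumed to be available here), plus a vector version of $H_b$; the printed numbers agree with $\bar{\mathbf{m}}$, $H_b(m)$ and $H_b(\bar{m})$ above up to rounding.

```python
import math

# Assumes negation_matrix and negate_bpa from the sketch in Section 3.2.
def total_uncertainty_vec(subsets, m):
    """H_b computed from the vector representation of a BPA."""
    return sum(mi * math.log2(len(A) / mi) for A, mi in zip(subsets, m) if mi > 0)

subsets, E = negation_matrix('abc')
# Vector order: empty, a, b, c, ab, ac, bc, abc -- the same layout as above.
m = [0, 0.1, 0.15, 0, 0, 0.3, 0.25, 0.2]
m_bar = negate_bpa(E, m)
print([round(x, 4) for x in m_bar])                     # [0.0, 0.1083, 0.1167, 0.0417, 0.225, 0.15, 0.1583, 0.2]
print(round(total_uncertainty_vec(subsets, m), 4))      # 3.0952
print(round(total_uncertainty_vec(subsets, m_bar), 4))  # 3.5305
```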

3.4. Discussion

Recall the general case of Example 3. Applying 15 successive negation processes to the BPA vector in Example 3 yields the results shown in Table 1.
It is noted that the BPA of the focal elements gradually converges to the proportion $m(a) : m(b) : m(c) : m(ab) : m(ac) : m(bc) = 1:1:1:2:2:2$, and that the total uncertainty increases monotonically as the negation process is iterated until it attains 3.5749, which is the maximal value of the total uncertainty for $N = 3$ given that $m(abc)$ stays fixed at 0.2. To be more intuitive, the evolution of the BPA over the iterations of the negation process is illustrated in Figure 2.
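The iteration of Table 1 can be replayed with the same assumed helpers (negation_matrix, negate_bpa and total_uncertainty_vec from the sketches above); the masses settle near 0.0889 and 0.1778 and the total uncertainty near 3.5749, as in the last rows of the table.

```python
# Repeated negation of the Example 3 BPA (assumes negation_matrix, negate_bpa
# and total_uncertainty_vec from the earlier sketches).
subsets, E = negation_matrix('abc')
m = [0, 0.1, 0.15, 0, 0, 0.3, 0.25, 0.2]
for k in range(16):
    print(k, [round(x, 4) for x in m], round(total_uncertainty_vec(subsets, m), 4))
    m = negate_bpa(E, m)
```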
Recall that $\bar{\bar{m}} \neq m$ (except for $N = 1$ and $N = 2$). According to Figure 2 and Table 1, this phenomenon could result from the fact that the total uncertainty measure increases after each negation process, which means that the uncertainty of the system increases after each negation process. To be more specific, the uncertainty of the BPA $m(a) = 0.05$, $m(b) = 0.05$, $m(c) = 0.05$, $m(ab) = 0.2$, $m(ac) = 0.2$, $m(bc) = 0.25$, $m(abc) = 0.2$, measured by the two other uncertainty measures, is compared in Table 2. Table 2 shows that the uncertainty increases monotonically only for the total uncertainty measure, while the uncertainty measured by the two other measures fluctuates back and forth. The change of uncertainty measured by $H_{rp}(m)$ and $H_d(m)$ is shown in Figure 3 and Figure 4, respectively. The uncertainty does not increase monotonically when measured by $H_{rp}(m)$ and $H_d(m)$, which is incompatible with our proposed negation method based on the maximum uncertainty. Therefore, the total uncertainty measure is the one mainly discussed in this section.
Since we are trying to extend the negation of a probability distribution to a belief function, we argue that some particular properties of the negation of a mass function should be compatible and consistent with the negation of a probability distribution proposed by Yager [27]. According to Yager's idea that 'the reason that one selects maximum entropy alternatives is that it picks the allowable alternative which brings with it the least unsupported information', it is proved that the entropy increases after each negation operation [27]. Therefore, the uncertainty should increase monotonically over the iterations of the negation process in order to be compatible and consistent with the negation of a probability distribution.
According to Equation (7), the total uncertainty of $\bar{m}$ can be measured as:
$$H_b(\bar{m}) = \sum_{A \in 2^\Theta} \bar{m}(A) \log_2 \frac{|A|}{\bar{m}(A)}$$
Thus, the increase in the total uncertainty obtained by the negation process is the difference between the two uncertainties:
$$H_b(\bar{m}) - H_b(m)$$
Since the empty set $\emptyset$ and the whole set $\theta$ have no effect on this difference, it can be simplified as:
$$H_b(\bar{m}) - H_b(m) = \sum_{A \in 2^\Theta,\ A \neq \emptyset,\ A \neq \Theta} \bar{m}(A) \log_2 \frac{|A|}{\bar{m}(A)} - \sum_{A \in 2^\Theta,\ A \neq \emptyset,\ A \neq \Theta} m(A) \log_2 \frac{|A|}{m(A)}$$
To avoid redundancy, the empty set $\emptyset$ and the whole set $\theta$ will not be considered in the calculation of the two entropies $H_b(m)$ and $H_b(\bar{m})$ when proving $H_b(\bar{m}) \geq H_b(m)$.
Consider a frame of discernment $\Theta = \{a, b, c\}$; then, according to the negation matrix in Equation (43), each element of $\bar{m}$ can be expressed in terms of the elements of $\mathbf{m}$ as shown in Table 3.
Thus, the total uncertainty $H_b(\bar{m})$ can be written as:
$$\log_2\left( \left(\frac{1}{\bar{m}(a)}\right)^{\bar{m}(a)} \cdot \left(\frac{1}{\bar{m}(b)}\right)^{\bar{m}(b)} \cdot \left(\frac{1}{\bar{m}(c)}\right)^{\bar{m}(c)} \cdot \left(\frac{2}{\bar{m}(ab)}\right)^{\frac{1}{2}\bar{m}(ab)} \cdot \left(\frac{2}{\bar{m}(ab)}\right)^{\frac{1}{2}\bar{m}(ab)} \cdot \left(\frac{2}{\bar{m}(ac)}\right)^{\frac{1}{2}\bar{m}(ac)} \cdot \left(\frac{2}{\bar{m}(ac)}\right)^{\frac{1}{2}\bar{m}(ac)} \cdot \left(\frac{2}{\bar{m}(bc)}\right)^{\frac{1}{2}\bar{m}(bc)} \cdot \left(\frac{2}{\bar{m}(bc)}\right)^{\frac{1}{2}\bar{m}(bc)} \right)$$
According to the fact that the geometric mean is always greater than or equal to the harmonic mean, we have:
$$H_b(\bar{m}) \geq \log_2\left(\left(\frac{1}{m(a)}\right)^{m(a)}\right)^{9}$$
with equality when $m(a) : m(b) : m(c) : m(ab) : m(ac) : m(bc) = 1:1:1:2:2:2$, which means:
$$\min H_b(\bar{m}) = \log_2\left(\left(\frac{1}{m(a)}\right)^{m(a)}\right)^{9}$$
On the other hand, $H_b(m)$ can also be written as:
$$\log_2\left( \left(\frac{1}{m(a)}\right)^{m(a)} \cdot \left(\frac{1}{m(b)}\right)^{m(b)} \cdot \left(\frac{1}{m(c)}\right)^{m(c)} \cdot \left(\frac{2}{m(ab)}\right)^{\frac{1}{2}m(ab)} \cdot \left(\frac{2}{m(ab)}\right)^{\frac{1}{2}m(ab)} \cdot \left(\frac{2}{m(ac)}\right)^{\frac{1}{2}m(ac)} \cdot \left(\frac{2}{m(ac)}\right)^{\frac{1}{2}m(ac)} \cdot \left(\frac{2}{m(bc)}\right)^{\frac{1}{2}m(bc)} \cdot \left(\frac{2}{m(bc)}\right)^{\frac{1}{2}m(bc)} \right)$$
According to the fact that the geometric mean is always less than or equal to the arithmetic mean, we have:
$$H_b(m) \leq \sum_{i=1}^{9} \frac{1}{9}\log_2\left(\left(\frac{1}{m(a)}\right)^{m(a)}\right)^{9} = \log_2\left(\left(\frac{1}{m(a)}\right)^{m(a)}\right)^{9}$$
with equality when $m(a) : m(b) : m(c) : m(ab) : m(ac) : m(bc) = 1:1:1:2:2:2$, which means:
$$\max H_b(m) = \log_2\left(\left(\frac{1}{m(a)}\right)^{m(a)}\right)^{9}$$
Consequently, the minimum value of $H_b(\bar{m})$ is $\log_2\left(\left(\frac{1}{m(a)}\right)^{m(a)}\right)^{9}$, which is exactly the maximum value of $H_b(m)$. Hence $H_b(\bar{m}) - H_b(m) \geq 0$, and thus the following theorem is proved:
$$H_b(\bar{m}) \geq H_b(m)$$
The next proof shows that the repeated negation process applied to a basic probability assignment (BPA) not only increases the total uncertainty monotonically, but also converges to the maximum value of the total uncertainty.
Proof. 
Consider the example above with a frame of discernment $\Theta = \{a, b, c\}$; the BPA vector is denoted as:
$$\mathbf{m} = \begin{pmatrix} m(\emptyset) \\ m(a) \\ m(b) \\ m(c) \\ m(ab) \\ m(ac) \\ m(bc) \\ m(abc) \end{pmatrix}$$
According to the proposed negation method, the negation of the BPA can be calculated as
$$\bar{\mathbf{m}} = E\mathbf{m}$$
which can be rewritten as
$$\mathbf{m}^{(n+1)} = E\mathbf{m}^{(n)}$$
where
$$\mathbf{m}^{(0)} = \mathbf{m}, \qquad \mathbf{m}^{(1)} = E\mathbf{m}^{(0)} = \bar{\mathbf{m}}$$
Thus, $\mathbf{m}^{(2)}$ can be denoted as:
$$\mathbf{m}^{(2)} = E\mathbf{m}^{(1)} = E^2\mathbf{m}$$
Similarly, the BPA after the repeated negation process is obtained as:
$$\lim_{n \to +\infty} \mathbf{m}^{(n)} = \lim_{n \to +\infty} E^n \mathbf{m}$$
where $E$ is the negation matrix in Equation (43), and $\lim_{n \to +\infty} E^n$ is obtained as:
$$\lim_{n \to +\infty} E^n = \begin{pmatrix}
0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
0 & \frac{1}{9} & \frac{1}{9} & \frac{1}{9} & \frac{1}{9} & \frac{1}{9} & \frac{1}{9} & 0 \\
0 & \frac{1}{9} & \frac{1}{9} & \frac{1}{9} & \frac{1}{9} & \frac{1}{9} & \frac{1}{9} & 0 \\
0 & \frac{1}{9} & \frac{1}{9} & \frac{1}{9} & \frac{1}{9} & \frac{1}{9} & \frac{1}{9} & 0 \\
0 & \frac{2}{9} & \frac{2}{9} & \frac{2}{9} & \frac{2}{9} & \frac{2}{9} & \frac{2}{9} & 0 \\
0 & \frac{2}{9} & \frac{2}{9} & \frac{2}{9} & \frac{2}{9} & \frac{2}{9} & \frac{2}{9} & 0 \\
0 & \frac{2}{9} & \frac{2}{9} & \frac{2}{9} & \frac{2}{9} & \frac{2}{9} & \frac{2}{9} & 0 \\
0 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{pmatrix}$$
Thus, we obtain the BPA after the repeated negation process:
$$\lim_{n \to +\infty} \mathbf{m}^{(n)} = \begin{pmatrix} m(\emptyset)^{(n)} \\ m(a)^{(n)} \\ m(b)^{(n)} \\ m(c)^{(n)} \\ m(ab)^{(n)} \\ m(ac)^{(n)} \\ m(bc)^{(n)} \\ m(\theta)^{(n)} \end{pmatrix} = \begin{pmatrix} 0 \\ \frac{1}{9}(1 - m(\theta)) \\ \frac{1}{9}(1 - m(\theta)) \\ \frac{1}{9}(1 - m(\theta)) \\ \frac{2}{9}(1 - m(\theta)) \\ \frac{2}{9}(1 - m(\theta)) \\ \frac{2}{9}(1 - m(\theta)) \\ m(\theta) \end{pmatrix}$$
It is noted that $m(a)^{(n)} : m(b)^{(n)} : m(c)^{(n)} : m(ab)^{(n)} : m(ac)^{(n)} : m(bc)^{(n)} = 1:1:1:2:2:2$ as $n \to +\infty$, which exactly attains the maximum value of $H_b(m)$ for $N = 3$ (with $m(\theta)$ held fixed). The evolution of the total uncertainty is illustrated in Figure 5.
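This limit can also be checked numerically; the sketch below (ours, using numpy and the negation_matrix helper assumed from Section 3.2) raises E to a moderate power and shows the interior entries already indistinguishable from 1/9 and 2/9.

```python
import numpy as np

# Assumes negation_matrix from the sketch in Section 3.2.
subsets, E = negation_matrix('abc')
E_inf = np.linalg.matrix_power(np.array(E), 50)   # E^50 is already very close to the limit
print(np.round(E_inf, 4))
# Every interior column equals (0, 1/9, 1/9, 1/9, 2/9, 2/9, 2/9, 0), i.e. about
# (0, 0.1111, 0.1111, 0.1111, 0.2222, 0.2222, 0.2222, 0), while the empty-set
# row/column stay zero and the bottom-right entry stays 1.
```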
According to the negation of probability proposed by Yager [27], the process of repeated negation can be modeled as a difference equation whose solution, for $n > 2$, approaches $1/n$ as the number of iterations increases; this corresponds to a maximum-entropy allocation of the probabilities. In other words, the negation of a probability distribution converges, under repeated negation, to the unique distribution in which each probability value equals $1/n$, and this converged distribution attains the most uncertain allocation of the probabilities. Since our negation method is based on maximal uncertainty, the BPA obtained after the repeated negation process should likewise converge to the maximal uncertainty of the system. As shown above, the maximum of the total uncertainty measure is obtained when $m(A)$ is proportional to $|A|$, which is consistent with the unique converged BPA reached after the repeated negation process.
Consequently, the total uncertainty indeed increases monotonically until its maximum value is attained as the negation process is iterated, which means the proposed negation method is based on the maximum uncertainty.
Compared with the existing negation method of BPA [29], two points are discussed as follows:
  • The existing work presents the negation of a mass function in the same way as the negation of a probability distribution proposed by Yager [27]: the mass is reallocated equally to the other focal elements, and the non-singleton elements of the power set are ignored. However, we believe that the uncertainty of non-singleton elements should be taken into account and that the negation of BPA should be extended to the power set. Thus, the proposed negation of a mass function reallocates the corresponding BPA in a weighted manner over the power set.
  • The existing negation of a mass function is not based on maximal uncertainty (entropy). Our work refines this point by characterizing the negation of a mass function through the total uncertainty measure and by proposing a negation method of a mass function that is mathematically based on the maximum total uncertainty, which is consistent with the negation of a probability distribution based on the maximum entropy proposed by Yager [27].

4. Conclusions

In this paper, a novel negation method is proposed to obtain the negation of a basic probability assignment in Dempster-Shafer theory. The proposed negation method is implemented in matrix form to show the reassignment of the BPA intuitively. The proposed negation of BPA reassigns the BPA of a given element according to the cardinality of the intersection of each remaining element in the power set with the complement of the given element. Particular assumptions have been made for two special elements of the power set, the empty set $\emptyset$ and the whole set $\theta$, to guarantee that the proposed negation method fits our intuition. The closed-world assumption is postulated in this paper to make sure that the frame of discernment is complete and exhaustive, which keeps the BPA of the empty set fixed at 0 no matter how many times the negation process is applied. The BPA of the whole set is assumed to remain unchanged after each negation process, since little is known about where the whole set should allocate its BPA, and the whole set does not reallocate its BPA in the negation. Numerical examples are used to illustrate how the proposed negation method works for N = 1, 2 and 3. Meanwhile, the proposed negation method reduces to the negation of a probability distribution when all elements in the power set are singletons. The total uncertainty measure is used to measure uncertainty in this paper because the proposed negation method acts in a manner that increases it. It is found that the proposed negation method converges to a certain BPA after the repeated negation process, which exactly attains the maximum value of the total uncertainty measure. This shows that our proposed negation method is indeed based on the maximum uncertainty. Therefore, not only does this paper extend the negation of probability to BPA, but it also preserves the properties of the negation of probability.

Author Contributions

F.X. and K.X. proposed the original idea and designed the research. K.X. wrote the manuscript.

Funding

This research was funded by the Chongqing Overseas Scholars Innovation Program (No. cx2018077).

Acknowledgments

The authors greatly appreciate the reviewers' suggestions and the editor's encouragement.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yager, R.R. On the completion of qualitative possibility measures. IEEE Trans. Fuzzy Syst. 1993, 1, 184–194. [Google Scholar] [CrossRef]
  2. Appel, O.; Chiclana, F.; Carter, J.; Fujita, H. Successes and challenges in developing a hybrid approach to sentiment analysis. Appl. Intell. 2018, 48, 1176–1188. [Google Scholar] [CrossRef]
  3. Chhipi-Shrestha, G.; Hewage, K.; Sadiq, R. Selecting sustainability indicators for small to medium sized urban water systems using fuzzy-ELECTRE. Water Environ. Res. 2017, 89, 238–249. [Google Scholar] [CrossRef] [PubMed]
  4. Jog, V.; Loh, P.-L. Analysis of centrality in sublinear preferential attachment trees via the Crump-Mode-Jagers branching process. IEEE Trans. Netw. Sci. Eng. 2017, 4, 1–12. [Google Scholar] [CrossRef]
  5. Xiao, F. A novel multi-criteria decision making method for assessing health-care waste treatment technologies based on D numbers. Eng. Appl. Artif. Intell. 2018, 71, 216–225. [Google Scholar] [CrossRef]
  6. Zhang, W.; Deng, Y. Combining conflicting evidence using the DEMATEL method. Soft Comput. 2018, 22, 1–10. [Google Scholar] [CrossRef]
  7. Fei, L.; Deng, Y. A new divergence measure for basic probability assignment and its applications in extremely uncertain environments. Int. J. Intell. Syst. 2018, 33. [Google Scholar] [CrossRef]
  8. Aminravan, F.; Sadiq, R.; Hoorfar, M.; Rodriguez, M.J.; Francisque, A.; Najjaran, H. Evidential reasoning using extended fuzzy dempster–shafer theory for handling various facets of information deficiency. Int. J. Intell. Syst. 2011, 26, 731–758. [Google Scholar] [CrossRef]
  9. Chhipi-Shrestha, G.; Mori, J.; Hewage, K.; Sadiq, R. Clostridium difficile infection incidence prediction in hospitals (CDIIPH): A predictive model based on decision tree and fuzzy techniques. Stoch. Environ. Res. Risk Assess. 2017, 31, 417–430. [Google Scholar] [CrossRef]
  10. Talukdar, S.; Bhaban, S.; Melbourne, J.; Salapaka, M. Analysis of heat dissipation and reliability in information erasure: A gaussian mixture approach. Entropy 2018, 20, 749. [Google Scholar] [CrossRef]
  11. Yager, R.R. On viewing fuzzy measures as fuzzy subsets. IEEE Trans. Fuzzy Syst. 2016, 24, 811–818. [Google Scholar] [CrossRef]
  12. Garmendia, L.; Yager, R.R.; Trillas, E.; Salvador, A. Measures of specificity of fuzzy sets under t-indistinguishabilities. IEEE Trans. Fuzzy Syst. 2006, 14, 568–572. [Google Scholar] [CrossRef]
  13. Jiang, W.; Yang, Y.; Luo, Y.; Qin, X. Determining basic probability assignment based on the improved similarity measures of generalized fuzzy numbers. Int. J. Comput. Commun. Control 2015, 10, 333–347. [Google Scholar] [CrossRef]
  14. Körner, R.; Näther, W. On the specificity of evidences. Fuzzy Sets Syst. 1995, 71, 183–196. [Google Scholar] [CrossRef]
  15. Dubois, D.; Prade, H. A note on measures of specificity for fuzzy sets. Int. J. Gen. Syst. 1985, 10, 279–283. [Google Scholar] [CrossRef]
  16. Yager, R.R. On the entropy of fuzzy measures. IEEE Trans. Fuzzy Syst. 2000, 8, 453–461. [Google Scholar] [CrossRef]
  17. Jog, V.; Anantharam, V. Intrinsic entropies of log-concave distributions. IEEE Trans. Inf. Theory 2018, 64, 93–108. [Google Scholar] [CrossRef]
  18. Deng, W.; Deng, Y. Entropic methodology for entanglement measures. Phys. A Stat. Mech. Appl. 2018, 512, 693–697. [Google Scholar] [CrossRef]
  19. Marsiglietti, A.; Melbourne, J. A Rényi entropy power inequality for log-concave vectors and parameters in [0, 1]. In Proceedings of the 2018 IEEE International Symposium on Information Theory, Vail, CO, USA, 17–22 June 2018; pp. 1964–1968. [Google Scholar]
  20. Bian, T.; Deng, Y. Identifying influential nodes in complex networks: A node information dimension approach. Chaos 2018, 28, 043109. [Google Scholar] [CrossRef]
  21. Wang, N.; Liu, X.; Wei, D. A Modified D Numbers’ Integration for Multiple Attributes Decision Making. Int. J. Fuzzy Syst. 2018, 20, 104–115. [Google Scholar] [CrossRef]
  22. Mo, H.; Deng, Y. A new MADA methodology based on D numbers. Int. J. Fuzzy Syst. 2018, 20, 2458–2469. [Google Scholar] [CrossRef]
  23. Guan, X.; Liu, H.; Yi, X.; Zhao, J. The Improved Combination Rule of D Numbers and Its Application in Radiation Source Identification. Math. Probl. Eng. 2018, 2018, 6025680. [Google Scholar] [CrossRef]
  24. Xiao, F. An intelligent complex event processing with D numbers under fuzzy environment. Math. Probl. Eng. 2016, 2016. [Google Scholar] [CrossRef]
  25. Kang, B.; Deng, Y.; Hewage, K.; Sadiq, R. A method of measuring uncertainty for Z-number. IEEE Trans. Fuzzy Syst. 2018, 2018. [Google Scholar] [CrossRef]
  26. Popescu-Bodorin, N.; Balas, V.E. Fuzzy Membership, Possibility, Probability and Negation in Biometrics. Acta Polytech. Hung. 2014, 11, 79–100. [Google Scholar]
  27. Yager, R.R. On the Maximum Entropy Negation of a Probability Distribution. IEEE Trans. Fuzzy Syst. 2015, 23, 1899–1902. [Google Scholar] [CrossRef]
  28. Yager, R.R. On the measure of fuzziness and negation. Part I: Membership in the unit interval. Int. J. Gen. Syst. 1979, 5, 221–229. [Google Scholar] [CrossRef]
  29. Yin, L.; Deng, X.; Deng, Y. The negation of a basic probability assignment. IEEE Trans. Fuzzy Syst. 2018, 27, 135–143. [Google Scholar] [CrossRef]
  30. Srivastava, A.; Maheshwari, S. Some New Properties of Negation of a Probability Distribution. Int. J. Intell. Syst. 2018, 33, 1133–1145. [Google Scholar] [CrossRef]
  31. Pal, N.R.; Bezdek, J.C.; Hemasinha, R. Uncertainty measures for evidential reasoning i: A review. Int. J. Approx. Reason. 1992, 7, 165–183. [Google Scholar] [CrossRef]
  32. Pal, N.R.; Bezdek, J.C.; Hemasinha, R. Uncertainty measures for evidential reasoning ii: A new measure of total uncertainty. Int. J. Approx. Reason. 1993, 8, 1–16. [Google Scholar] [CrossRef]
  33. Dempster, A.P. Upper and lower probabilities induced by a multivalued mapping. In Classic Works of the Dempster-Shafer Theory of Belief Functions; Yager, R.R., Liu, L., Eds.; Springer: Basel, Switzerland, 2008; pp. 57–72. [Google Scholar]
  34. Shafer, G. A Mathematical Theory of Evidence; Princeton University Press: Princeton, NJ, USA, 1976. [Google Scholar]
  35. Yager, R.R.; Alajlan, N. Decision making with ordinal payoffs under dempster–shafer type uncertainty. Int. J. Intell. Syst. 2013, 28, 1039–1053. [Google Scholar] [CrossRef]
  36. Sun, L.; Liu, Y.; Zhang, B.; Shang, Y.; Yuan, H.; Ma, Z. An Integrated Decision-Making Model for Transformer Condition Assessment Using Game Theory and Modified Evidence Combination Extended by D Numbers. Energies 2016, 9, 697. [Google Scholar] [CrossRef]
  37. Chen, L.; Deng, X. A Modified Method for Evaluating Sustainable Transport Solutions Based on AHP and Dempster Shafer Evidence Theory. Appl. Sci. 2018, 8, 563. [Google Scholar] [CrossRef]
  38. Yager, R.R. Pythagorean Membership Grades in Multicriteria Decision Making. IEEE Trans. Fuzzy Syst. 2014, 22, 958–965. [Google Scholar] [CrossRef]
  39. Yager, R.R.; Alajlan, N. Sugeno integral with possibilistic inputs with application to multi-criteria decision making. Int. J. Intell. Syst. 2016, 31, 813–826. [Google Scholar] [CrossRef]
  40. He, Z.; Jiang, W. An evidential dynamical model to predict the interference effect of categorization on decision making. Knowl. Based Syst. 2018, 150, 139–149. [Google Scholar] [CrossRef]
  41. Chen, L.; Deng, Y. A new failure mode and effects analysis model using Dempster-Shafer evidence theory and grey relational projection method. Eng. Appl. Artif. Intell. 2018, 76, 13–20. [Google Scholar] [CrossRef]
  42. Smets, P. Decision making in the TBM: The necessity of the pignistic transformation. Int. J. Approx. Reason. 2005, 38, 133–148. [Google Scholar] [CrossRef]
  43. Li, Y.; Deng, Y. Generalized ordered propositions fusion based on belief entropy. Int. J. Comput. Commun. Control 2018, 13, 792–807. [Google Scholar] [CrossRef]
  44. Xiao, F. A novel evidence theory and fuzzy preference approach-based multi-sensor data fusion technique for fault diagnosis. Sensors 2017, 17, 2504. [Google Scholar] [CrossRef] [PubMed]
  45. Zhang, H.; Deng, Y. Engine fault diagnosis based on sensor data fusion considering information quality and evidence theory. Adv. Mech. Eng. 2018, 10. [Google Scholar] [CrossRef]
  46. Yin, L.; Deng, Y. Toward uncertainty of weighted networks: An entropy-based model. Phys. A Stat. Mech. Appl. 2018, 508, 176–186. [Google Scholar] [CrossRef]
  47. Yang, J.; Xu, D. On the evidential reasoning algorithm for multiple attribute decision analysis under uncertainty. IEEE Trans. Syst. Man Cybern. A Syst. Hum. 2002, 32, 289–304. [Google Scholar] [CrossRef]
  48. Han, Y.; Deng, Y. A novel matrix game with payoffs of Maxitive Belief Structure. Int. J. Intell. Syst. 2018, 2018. [Google Scholar] [CrossRef]
  49. Xiao, F.; Bowen, Q. A weighted combination method for conflicting evidence in multi-sensor data fusion. Sensors 2018, 18, 1487. [Google Scholar] [CrossRef] [PubMed]
  50. Seiti, H.; Hafezalkotob, A. Developing pessimistic-optimistic risk-based methods for multi-sensor fusion: An interval-valued evidence theory approach. Appl. Soft Comput. 2018, 72, 609–623. [Google Scholar] [CrossRef]
  51. Xiao, F. Multi-sensor data fusion based on the belief divergence measure of evidences and the belief entropy. Inf. Fusion 2019, 46, 23–32. [Google Scholar] [CrossRef]
  52. Merigó, J.M.; Casanovas, M. Induced aggregation operators in decision making with the dempster-shafer belief structure. Int. J. Intell. Syst. 2009, 24, 934–954. [Google Scholar] [CrossRef]
  53. Yager, R.R.; Alajlan, N. Probabilistically weighted owa aggregation. IEEE Trans. Fuzzy Syst. 2014, 22, 46–56. [Google Scholar] [CrossRef]
  54. Xiao, F. A hybrid fuzzy soft sets decision making method in medical diagnosis. IEEE Access 2018, 6, 25300–25312. [Google Scholar] [CrossRef]
  55. Wang, Y.; Deng, Y. Base belief function: An efficient method of conflict management. J. Ambient Intell. Hum. Comput. 2018, 3, 149–162. [Google Scholar] [CrossRef]
  56. Yager, R.R. Arithmetic and other operations on dempster-shafer structures. Int. J. Man Mach. Stud. 1986, 25, 357–366. [Google Scholar] [CrossRef]
  57. Zhou, X.; Hu, Y.; Deng, Y.; Chan, F.T.; Ishizaka, A. A dematel-based completion method for incomplete pairwise comparison matrix in ahp. Ann. Oper. Res. 2018, 271, 1045–1066. [Google Scholar] [CrossRef]
  58. Li, M.; Zhang, Q.; Deng, Y. Evidential identification of influential nodes in network of networks. Chaos Solitons Fractals 2018, 117, 283–296. [Google Scholar] [CrossRef]
  59. Li, Z.; Chen, L. A novel evidential FMEA method by integrating fuzzy belief structure and grey relational projection method. Eng. Appl. Artif. Intell. 2019, 77, 136–147. [Google Scholar] [CrossRef]
  60. Jiang, W.; Hu, W. An improved soft likelihood function for Dempster-Shafer belief structures. Int. J. Intell. Syst. 2018, 33, 1264–1282. [Google Scholar] [CrossRef]
  61. Deng, X.; Jiang, W.; Wang, Z. Zero-sum polymatrix games with link uncertainty: A Dempster-Shafer theory solution. Appl. Math. Comput. 2019, 340, 101–112. [Google Scholar] [CrossRef]
  62. Fei, L.; Deng, Y. Identifying influential nodes in complex networks based on the inverse-square law. Phys. A Stat. Mech. Appl. 2018, 512, 1044–1059. [Google Scholar] [CrossRef]
  63. Cao, Z.; Lin, C.-T. Inherent fuzzy entropy for the improvement of EEG complexity evaluation. IEEE Trans. Fuzzy Syst. 2018, 26, 1032–1035. [Google Scholar] [CrossRef]
  64. Xiao, F. An improved method for combining conflicting evidences based on the similarity measure and belief function entropy. Int. J. Fuzzy Syst. 2018, 20, 1256–1266. [Google Scholar] [CrossRef]
  65. Cao, Z.; Prasad, M.; Lin, C.-T. Estimation of SSVEP-based EEG complexity using inherent fuzzy entropy. In Proceedings of the 2017 IEEE International Conference on Fuzzy Systems, Naples, Italy, 9–12 July 2017; pp. 1–5. [Google Scholar]
  66. Kang, B.; Deng, Y.; Hewage, K.; Sadiq, R. Generating Z-number based on OWA weights using maximum entropy. Int. J. Intell. Syst. 2018, 33, 1745–1755. [Google Scholar] [CrossRef]
  67. Shannon, C.E. A mathematical theory of communication. Bell Syst. Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  68. Pan, L.; Deng, Y. A New Belief Entropy to Measure Uncertainty of Basic Probability Assignments Based on Belief Function and Plausibility Function. Entropy 2018, 20, 842. [Google Scholar] [CrossRef]
  69. Deng, Y. Deng entropy. Chaos Solitons Fractals 2016, 91, 549–553. [Google Scholar] [CrossRef]
  70. Jiroušek, R.J.; Shenoy, P.P. A new definition of entropy of belief functions in the dempster–shafer theory. Int. J. Approx. Reason. 2018, 92, 49–65. [Google Scholar] [CrossRef]
  71. Klir, G.J.; Lewis, H.W. Remarks on “measuring ambiguity in the evidence theory”. IEEE Trans. Syst. Man Cybern. Part A Syst. Hum. 2008, 38, 995–999. [Google Scholar] [CrossRef]
  72. Yager, R.R. Entropy and specificity in a mathematical theory of evidence. Int. J. Gen. Syst. 1983, 9, 249–260. [Google Scholar] [CrossRef]
  73. Klir, G.J.; Ramer, A. Uncertainty in the dempster-shafer theory: A critical re-examination. Int. J. Gen. Syst. 1990, 18, 155–166. [Google Scholar] [CrossRef]
  74. Han, Y.; Deng, Y. An enhanced fuzzy evidential DEMATEL method with its application to identify critical success factors. Soft Comput. 2018, 22, 5073–5090. [Google Scholar] [CrossRef]
  75. Deng, X.; Jiang, W. Dependence assessment in human reliability analysis using an evidential network approach extended by belief rules and uncertainty measures. Ann. Nucl. Energy 2018, 117, 183–193. [Google Scholar] [CrossRef]
  76. Yager, R.R. On the dempster-shafer framework and new combination rules. Inf. Sci. 1987, 41, 93–137. [Google Scholar] [CrossRef]
Figure 1. Reallocation weight of m ( a ) .
Figure 2. Evolution of basic probability assignment (BPA) as iteration of negation process increases.
Figure 3. Uncertainty measured by H r p ( m ) as iteration of negation process increases.
Figure 4. Uncertainty measured by H d ( m ) as iteration of negation process increases.
Figure 5. Evolution of total uncertainty as the iteration of negation process increases.
Table 1. BPA value for each element and the total uncertainty corresponding to each negation process.
Frequency of Iterations   m(a)     m(b)     m(c)     m(ab)    m(ac)    m(bc)    m(abc)   Total Uncertainty
0                         0.1000   0.1500   0.0000   0.0000   0.3000   0.2500   0.2000   3.0952
1                         0.1083   0.1167   0.0417   0.2250   0.1500   0.1583   0.2000   3.5305
2                         0.0792   0.0750   0.1125   0.1542   0.1917   0.1875   0.2000   3.5647
3                         0.0937   0.0958   0.0771   0.1896   0.1708   0.1729   0.2000   3.5723
4                         0.0865   0.0854   0.0948   0.1719   0.1812   0.1802   0.2000   3.5742
5                         0.0901   0.0906   0.0859   0.1807   0.1760   0.1766   0.2000   3.5747
6                         0.0883   0.0880   0.0904   0.1763   0.1786   0.1784   0.2000   3.5748
7                         0.0892   0.0893   0.0882   0.1785   0.1773   0.1775   0.2000   3.5749
8                         0.0887   0.0887   0.0893   0.1774   0.1780   0.1779   0.2000   3.5749
9                         0.0890   0.0890   0.0887   0.1780   0.1777   0.1777   0.2000   3.5749
10                        0.0889   0.0888   0.0890   0.1777   0.1778   0.1778   0.2000   3.5749
11                        0.0889   0.0889   0.0888   0.1778   0.1778   0.1778   0.2000   3.5749
12                        0.0889   0.0889   0.0889   0.1778   0.1778   0.1778   0.2000   3.5749
13                        0.0889   0.0889   0.0889   0.1778   0.1778   0.1778   0.2000   3.5749
14                        0.0889   0.0889   0.0889   0.1778   0.1778   0.1778   0.2000   3.5749
15                        0.0889   0.0889   0.0889   0.1778   0.1778   0.1778   0.2000   3.5749
Table 2. Value of uncertainty measured by different measures.
Uncertainty Measure   0        1        2        3        4        5        6        7
H_b(m)                3.5084   3.5726   3.5743   3.5747   3.5748   3.5749   3.5749   3.5749
H_rp(m)               2.5511   2.4349   2.4352   2.4352   2.4353   2.4353   2.4353   2.4353
H_d(m)                4.1331   4.1291   4.1308   4.1312   4.1312   4.1313   4.1313   4.1313
Table 3. Distribution of BPA after negation.
$\bar{m}(a) = \frac{1}{6}\left(m(b) + m(c) + 2m(bc)\right)$
$\bar{m}(b) = \frac{1}{6}\left(m(a) + m(c) + 2m(ac)\right)$
$\bar{m}(c) = \frac{1}{6}\left(m(a) + m(b) + 2m(ab)\right)$
$\bar{m}(ab) = \frac{1}{6}\left[m(a) + m(b) + 2\left(m(c) + m(ac) + m(bc)\right)\right]$
$\bar{m}(ac) = \frac{1}{6}\left[m(a) + m(c) + 2\left(m(b) + m(ab) + m(bc)\right)\right]$
$\bar{m}(bc) = \frac{1}{6}\left[m(b) + m(c) + 2\left(m(a) + m(ab) + m(ac)\right)\right]$
